RESET with IBM
This project is currently under development.
Post-Selected Fidelity over Iterations
Protocol: Repeated mid-circuit measurement (all-zero post-selection)
Objects and assumptions
- Initial (post-RESET) ground probability: \(\alpha = P(\text{state}=0 \ \text{before any measurement})\).
- We condition on seeing “0” at each mid-circuit measurement (discard as soon as a “1” appears).
- Let \(p_{s}^{(0,s')}\) be the joint probability (given pre-state \(s\in\{0,1\}\)) that the outcome is \(o=0\) and the post-measurement state is \(s'\in\{0,1\}\).
- Define \(A_s := P(o=0\mid s)=p_{s}^{(0,0)}+p_{s}^{(0,1)}\).
- Define the zero-conditioned transition matrix (columns = old state, rows = new state):
\[ Q_{s'\!,\,s} \;=\; P(\text{new}=s'\mid o=0,\,\text{old}=s) \;=\; \frac{p_{s}^{(0,s')}}{A_s}, \qquad Q=\begin{pmatrix} a & b\\[2pt] 1-a & 1-b \end{pmatrix}, \]
with
\[ a = P(\text{new}=0\mid o=0,\text{old}=0),\quad b = P(\text{new}=0\mid o=0,\text{old}=1). \]
- Note: \(Q\) is column-stochastic (each column sums to 1), so if \(v^{(0)}=(\alpha,1-\alpha)^T\) is normalized, then \(v^{(M)}:=Q^M v^{(0)}\) is normalized for all \(M\).
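As a numeric sketch, \(Q\) can be assembled from the joint probabilities and checked for column-stochasticity; all calibration numbers below are invented for illustration, not hardware data:

```python
import numpy as np

# Hypothetical calibration values for the joint probabilities p_s^{(0,s')}:
# p[s] = (p_s^{(0,0)}, p_s^{(0,1)}) = P(outcome 0 AND new state s' | old state s).
p = {0: (0.97, 0.01), 1: (0.06, 0.04)}

A = np.array([sum(p[0]), sum(p[1])])     # A_s = P(o=0 | s)

# Zero-conditioned transition matrix, Q[s', s] = p_s^{(0,s')} / A_s
# (columns = old state, rows = new state):
Q = np.array([[p[0][0], p[1][0]],
              [p[0][1], p[1][1]]]) / A   # dividing by A normalizes each column

assert np.allclose(Q.sum(axis=0), 1.0)   # column-stochastic

# Normalized state vectors stay normalized under Q:
alpha = 0.98
v0 = np.array([alpha, 1.0 - alpha])
vM = np.linalg.matrix_power(Q, 5) @ v0
assert np.isclose(vM.sum(), 1.0)
```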
Acceptance probability and post-selected fidelity
- Let \(A=(A_0,A_1)^T\). The acceptance probability (the probability of seeing \(M\) zeros in a row) is
\[ P_M \;=\; \prod_{j=0}^{M-1} \bigl( (Q^j v^{(0)})\cdot A \bigr). \]
(Each factor is the probability of seeing the next “0,” given all zeros so far.)
- The post-selected fidelity after \(M\) zeros is the ground component of \(v^{(M)}\):
\[ F_M \;=\; \bigl[\,Q^M v^{(0)}\,\bigr]_0. \]
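Both quantities can be computed by iterating the zero-conditioned map. The helper name `postselect_stats` and all parameter values below are assumptions for illustration:

```python
import numpy as np

def postselect_stats(Q, A, alpha, M):
    """Return (P_M, F_M) for the all-zero post-selection protocol.

    Q: 2x2 column-stochastic zero-conditioned transition matrix.
    A: (A_0, A_1), per-state probabilities of reading "0".
    """
    v = np.array([alpha, 1.0 - alpha])   # v^{(0)}
    P = 1.0
    for _ in range(M):
        P *= v @ A        # probability the next outcome is "0", given zeros so far
        v = Q @ v         # zero-conditioned state update (stays normalized)
    return P, v[0]        # P_M, and F_M = ground component of v^{(M)}

# Illustrative numbers, not hardware data:
a, b, alpha = 0.99, 0.60, 0.98
Q = np.array([[a, b], [1.0 - a, 1.0 - b]])
A = np.array([0.98, 0.10])
P3, F3 = postselect_stats(Q, A, alpha, M=3)
```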
Closed form in the 2×2 case
With \(\lambda := a-b\) and \(\pi_0 := \frac{b}{1-a+b}\), the first component \(F_M\) obeys the linear recurrence
\[ F_{M+1} \;=\; a\,F_M + b\,(1-F_M) \;=\; \lambda\,F_M + b, \]
with solution
\[ F_M \;=\; \pi_0 + \lambda^M\,(\alpha-\pi_0). \]
- \(\pi_0\) is the steady-state ground population of the zero-conditioned instrument: \(Q\,(\pi_0,1-\pi_0)^T=(\pi_0,1-\pi_0)^T\).
- Change relative to the reset quality:
\[ \boxed{\,F_M-\alpha \;=\; \bigl(1-\lambda^M\bigr)\,(\pi_0-\alpha)\,}. \]
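A quick numerical cross-check of the closed form against direct matrix iteration (parameter values are illustrative):

```python
import numpy as np

# Cross-check the closed form F_M = pi0 + lam**M * (alpha - pi0)
# against direct matrix iteration.
a, b, alpha = 0.99, 0.60, 0.95
lam, pi0 = a - b, b / (1.0 - a + b)

Q = np.array([[a, b], [1.0 - a, 1.0 - b]])
v0 = np.array([alpha, 1.0 - alpha])

for M in range(8):
    F_matrix = (np.linalg.matrix_power(Q, M) @ v0)[0]
    F_closed = pi0 + lam**M * (alpha - pi0)
    assert np.isclose(F_matrix, F_closed)

# The boxed identity F_M - alpha = (1 - lam**M) * (pi0 - alpha) follows:
M = 5
F_M = pi0 + lam**M * (alpha - pi0)
assert np.isclose(F_M - alpha, (1.0 - lam**M) * (pi0 - alpha))
```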
Physical meaning of parameters
- \(a\) (QND-ness on 0): probability a true \(|0\rangle\) stays \(|0\rangle\) given you observed 0 (low excitation back-action on the “0” branch).
- \(b\) (healing on 1): probability a true \(|1\rangle\) ends in \(|0\rangle\) given you observed 0 (decay or “helpful” back-action on the “0” branch).
- \(\pi_0 = \frac{b}{1-a+b}\) grows with either knob:
\[ \frac{\partial \pi_0}{\partial a}=\frac{b}{(1-a+b)^2}>0,\qquad \frac{\partial \pi_0}{\partial b}=\frac{1-a}{(1-a+b)^2}>0. \]
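Both partial derivatives can be spot-checked by finite differences; the \(a, b\) values below are arbitrary illustrations:

```python
# Finite-difference check that pi0 = b / (1 - a + b) increases in both a and b.
def pi0(a, b):
    return b / (1.0 - a + b)

a, b, eps = 0.90, 0.30, 1e-6
d_da = (pi0(a + eps, b) - pi0(a, b)) / eps   # should match b / (1 - a + b)**2
d_db = (pi0(a, b + eps) - pi0(a, b)) / eps   # should match (1 - a) / (1 - a + b)**2

assert d_da > 0 and d_db > 0
assert abs(d_da - b / (1.0 - a + b) ** 2) < 1e-3
assert abs(d_db - (1.0 - a) / (1.0 - a + b) ** 2) < 1e-3
```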
When does repetition help?
- Direction of change: the sign of \(F_M-\alpha\) is set by \(\pi_0-\alpha\).
- If \(\pi_0>\alpha\): repetition improves fidelity, pulling \(F_M\) upward.
- If \(\pi_0<\alpha\): repetition degrades fidelity, pulling \(F_M\) downward.
- Rate and monotonicity: controlled by \(\lambda=a-b\).
- \(0\le \lambda < 1\) \(\Rightarrow\) monotone convergence to \(\pi_0\).
- \(-1<\lambda<0\) \(\Rightarrow\) alternating “wiggles” (overshoot/undershoot) with amplitude \(|\lambda|^M\).
- For a valid column-stochastic \(Q\), \(|\lambda|\le 1\), with \(|\lambda|=1\) only in the degenerate cases \((a,b)=(1,0)\) or \((0,1)\); otherwise \(F_M\to\pi_0\).
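The two regimes can be seen directly from the closed form. The second parameter set below is deliberately exaggerated (\(a\) well below \(b\)) to make the alternation visible; both sets are illustrative:

```python
import numpy as np

def F_seq(a, b, alpha, M_max):
    """Closed-form fidelity sequence F_M = pi0 + lam**M * (alpha - pi0)."""
    lam, pi0 = a - b, b / (1.0 - a + b)
    return np.array([pi0 + lam**M * (alpha - pi0) for M in range(M_max + 1)])

# lam = 0.45 > 0: monotone approach to pi0 (here alpha < pi0, so F_M climbs).
mono = F_seq(a=0.95, b=0.50, alpha=0.90, M_max=6)
assert np.all(np.diff(mono) > 0)

# lam = -0.5 < 0: alternating overshoot/undershoot around pi0.
wig = F_seq(a=0.30, b=0.80, alpha=0.90, M_max=6)
dev = wig - 0.80 / (1.0 - 0.30 + 0.80)   # F_M - pi0
assert np.all(dev[:-1] * dev[1:] < 0)    # deviation flips sign each step
```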
Acceptance probability asymptotics (yield–purity trade-off)
- Each per-step factor tends to the steady value \(\pi\cdot A\) with \(\pi=(\pi_0,1-\pi_0)^T\), so for large \(M\):
\[ P_M \;\approx\; C\,\bigl( \pi\cdot A \bigr)^{M} \quad (\text{for some prefactor }C>0). \]
- Trade-off: \(F_M\) approaches \(\pi_0\) while \(P_M\) decays roughly geometrically; choose \(M\) to meet a fidelity target at acceptable yield.
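The geometric decay can be checked by tracking the per-step acceptance factor, which settles to \(\pi\cdot A\) (illustrative numbers again):

```python
import numpy as np

# For large M the per-step acceptance factor flattens to pi . A,
# so P_M decays roughly geometrically.
a, b, alpha = 0.99, 0.60, 0.98
Q = np.array([[a, b], [1.0 - a, 1.0 - b]])
A = np.array([0.98, 0.10])

pi0 = b / (1.0 - a + b)
rate = np.array([pi0, 1.0 - pi0]) @ A    # steady per-step acceptance

v = np.array([alpha, 1.0 - alpha])
P, ratios = 1.0, []
for _ in range(20):
    step = v @ A                         # this step's acceptance factor
    ratios.append(step)
    P *= step
    v = Q @ v

assert abs(ratios[-1] - rate) < 1e-6     # per-step factor converges to pi . A
```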
Connecting to mis-read / back-action parameters
From the correlated joint model:
- Increase \(a\) by reducing hidden excitation of true \(|0\rangle\) on the zero branch (better QND).
- Increase \(b\) by increasing decay to \(|0\rangle\) for true \(|1\rangle\) on the zero branch (more effective “reset” when you see 0).
- Independent-error limit (no back-action, only mis-reads \(\epsilon_0,\epsilon_1\)) reproduces the familiar forms:
\[ P_M \approx \alpha(1-\epsilon_0)^M + (1-\alpha)\epsilon_1^M,\qquad F_M \approx \frac{\alpha(1-\epsilon_0)^M}{\alpha(1-\epsilon_0)^M+(1-\alpha)\epsilon_1^M}. \]
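A minimal sketch of this limit, assuming illustrative values for \(\epsilon_0,\epsilon_1,\alpha\):

```python
# Independent-error limit: the state never changes; outcome "0" is read with
# probability 1 - eps0 from |0> and eps1 from |1>. Values are illustrative.
eps0, eps1, alpha, M = 0.02, 0.15, 0.95, 4

w0 = alpha * (1.0 - eps0) ** M      # prepared |0>, read "0" M times
w1 = (1.0 - alpha) * eps1 ** M      # prepared |1>, mis-read "0" M times
P_M = w0 + w1                       # acceptance probability
F_M = w0 / (w0 + w1)                # post-selected fidelity

assert 0.0 < P_M < 1.0
assert F_M > alpha                  # post-selection sharpens toward |0>
```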
Protocol checklist
- Estimate \(p_s^{(0,s')}\) from calibration sequences (prepare \(s\), measure twice, record outcome & post-state). Compute \(A_s\), then \(a,b\).
- Compute \(\pi_0=\frac{b}{1-a+b}\) (measurement-conditioned quality) and compare to \(\alpha\) (reset quality).
- Decide monotonicity via \(\lambda=a-b\): \(\lambda\ge0 \Rightarrow\) monotone; \(\lambda<0 \Rightarrow\) small wiggles.
- Choose \(M\) to meet a target \(F_M\) with acceptable \(P_M\) (e.g., maximize \(P_M\cdot F_M\) or hit \(F_M\ge F_\star\)).
- Tune knobs: push \(a\uparrow\) (QND on 0) and \(b\uparrow\) (healing on 1) to raise \(\pi_0\); raising \(b\) also shrinks \(\lambda=a-b\), speeding convergence.
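The first two checklist items can be sketched end-to-end from calibration counts. The `counts[s][(o, s_post)]` bookkeeping format and all shot numbers are assumptions for illustration, not an IBM API:

```python
# Hypothetical calibration bookkeeping: counts[s][(o, s_post)] = shots in which
# state s was prepared, the first measurement read o, and a second measurement
# read the post-state s_post. All numbers are invented.
counts = {
    0: {(0, 0): 9600, (0, 1): 100, (1, 0): 150, (1, 1): 150},
    1: {(0, 0): 500, (0, 1): 400, (1, 0): 600, (1, 1): 8500},
}

shots = {s: sum(c.values()) for s, c in counts.items()}
# p[s][sp] estimates p_s^{(0,s')}: joint prob of outcome 0 and post-state s'.
p = {s: {sp: counts[s][(0, sp)] / shots[s] for sp in (0, 1)} for s in (0, 1)}

A0 = p[0][0] + p[0][1]      # P(o=0 | prepared 0)
A1 = p[1][0] + p[1][1]      # P(o=0 | prepared 1)
a = p[0][0] / A0            # QND-ness on 0
b = p[1][0] / A1            # healing on 1

lam = a - b                 # sets convergence rate and monotonicity
pi0 = b / (1.0 - a + b)     # measurement-conditioned quality; compare to alpha
```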