Uncertainty relations: An operational approach to the error-disturbance tradeoff
Joseph M. Renes^1, Volkher B. Scholz^1,2, and Stefan Huber^1,3
^1 Institute for Theoretical Physics, ETH Zürich, Switzerland
^2 Department of Physics, Ghent University, Belgium
^3 Department of Mathematics, Technische Universität München, Germany
Accepted in Quantum, July 10, 2017

The notions of error and disturbance appearing in quantum uncertainty relations are often quantified by the discrepancy of a physical quantity from its ideal value. However, these real and ideal values are not the outcomes of simultaneous measurements, and comparing the values of unmeasured observables is not necessarily meaningful according to quantum theory. To overcome these conceptual difficulties, we take a different approach and define error and disturbance in an operational manner. In particular, we formulate both in terms of the probability that one can successfully distinguish the actual measurement device from the relevant hypothetical ideal by any experimental test whatsoever. This definition itself does not rely on the formalism of quantum theory, avoiding many of the conceptual difficulties of usual definitions. We then derive new Heisenberg-type uncertainty relations for both joint measurability and the error-disturbance tradeoff for arbitrary observables of finite-dimensional systems, as well as for the case of position and momentum. Our relations may be directly applied in information processing settings, for example to infer that devices which can faithfully transmit information regarding one observable do not leak any information about conjugate observables to the environment. We also show that Englert's wave-particle duality relation [Phys. Rev. Lett. 77, 2154 (1996)] can be viewed as an error-disturbance uncertainty relation.
1 Introduction
It is no overstatement to say that the uncertainty principle is a cornerstone of our understanding of quantum mechanics, clearly marking the departure of quantum physics from the world of classical physics. Heisenberg's original formulation in 1927 mentions two facets to the principle. The first restricts the joint measurability of observables, stating that noncommuting observables such as position and momentum can only be simultaneously determined with a characteristic amount of indeterminacy [1, p. 172] (see [2, p. 62] for an English translation). The second describes an error-disturbance tradeoff, noting that the more precise a measurement of one observable is made, the greater the disturbance to noncommuting observables [1, p. 175] ([2, p. 64]). The two are of course closely related, and Heisenberg argues for the former on the basis of the latter. Neither version can be taken merely as a limitation on measurement of otherwise well-defined values of position and momentum, but rather as questioning the sense in which values of two noncommuting observables can even be said to simultaneously exist. Unlike classical mechanics, in the framework of quantum mechanics we cannot necessarily regard unmeasured quantities as physically meaningful.
More formal statements were constructed only much later, due to the lack of a precise mathematical description of the measurement process in quantum mechanics. Here we must be careful to distinguish statements addressing Heisenberg's original notions of uncertainty from those, like the standard Kennard-Robertson uncertainty relation [3, 4], which address the impossibility of finding a quantum state with well-defined values for noncommuting observables. Entropic uncertainty relations [5, 6] are also an example of this latter class; see [7] for a review. Joint measurability has a longer history, going back at least to the seminal work of Arthurs and Kelly [8] and continuing in [9–27]. Quantitative error-disturbance relations have only been formulated relatively recently, going back at least to Braginsky and Khalili [28, Chap. 5] and continuing in [20, 29–35].
Beyond technical difficulties in formulating uncertainty relations, there is a perhaps more difficult conceptual hurdle in that the intended consequences of the uncertainty principle seem to preclude their own straightforward formalization. To find a relation between, say, the error of a position measurement and its disturbance to momentum in a given experimental setup like the gamma ray microscope would seem to require comparing the actual values of position and momentum with their supposed ideal values. However, according to the uncertainty principle itself, we should be wary of simultaneously ascribing well-defined values to the actual and ideal position and momentum, since they do not correspond to commuting observables. Thus, it is not immediately clear how to formulate either meaningful measures of error and disturbance, for instance as mean-square deviations between real and ideal values, or a meaningful relation between them.¹ This question is the subject of much ongoing debate [25, 30, 36–39].
¹ Uncertainty relations like the Kennard-Robertson bound or entropic relations do not face this issue as they do not attempt to compare actual and ideal values of the observables.
Without drawing any conclusions as to the ultimate success or failure of this program, in this paper we propose a completely different approach which we hope sheds new light on these conceptual difficulties. Here, we define error and disturbance in an operational manner and ask for uncertainty relations that are statements about the properties of measurement devices, not of fixed experimental setups or of physical quantities themselves. More specifically, we define error and disturbance in terms of the distinguishing probability, the probability that the actual behavior of the measurement apparatus can be distinguished from the relevant ideal behavior in any single experiment whatsoever. To characterize measurement error, for example, we imagine a black box containing either the actual device or the ideal device. By controlling the input and observing the output we can make an informed guess as to which is the case. We then attribute a large measurement error to the measurement apparatus if it is easy to tell the difference, so that there is a high probability of correctly guessing, and a low error if not; of course we pick the optimal input states and output measurements for this purpose. In this way we do not need to attribute a particular ideal value of the observable to be measured, we do not need to compare actual and ideal values themselves (nor do we necessarily even care what the possible values are), and instead we focus squarely on the properties of the device itself. Intuitively, we might expect that calibration provides the strictest test, i.e. inputting states with a known value of the observable in question. But in fact this is not the case, as entanglement at the input can increase the distinguishability of two measurements. The merit of this approach is that the notion of distinguishability itself does not rely on any concepts or formalism of quantum theory, which helps avoid conceptual difficulties in formalizing the uncertainty principle.
Defining the disturbance an apparatus causes to an observable is more delicate, as an observable itself does not have a directly operational meaning (as opposed to the measurement of an observable). But we can consider the disturbance made either to an ideal measurement of the observable or to ideal preparation of states with well-defined values of the observable. In all cases, the error and disturbance measures we consider are directly linked to a well-studied norm on quantum channels known as the completely bounded norm or diamond norm. We can then ask for bounds on the error and disturbance quantities for two given observables that every measurement apparatus must satisfy. In particular, we are interested in bounds depending only on the chosen observables and not the particular device. Any such relation is a statement about measurement devices themselves and is not specific to the particular experimental setup in which they are used. Nor are such relations statements about the values or behavior of physical quantities themselves. In this sense, we seek statements of the uncertainty principle akin to Kelvin's form of the second law of thermodynamics as a constraint on thermal machines, and not like Clausius's or Planck's form involving the behavior of physical quantities (heat and entropy, respectively). By appealing to a fundamental constraint on quantum dynamics, the continuity (in the completely bounded norm) of the Stinespring dilation [40, 41], we find error-disturbance uncertainty relations for arbitrary observables in finite dimensions, as well as for position and momentum. Furthermore, we show how the relation for measurement error and measurement disturbance can be transformed into a joint-measurability uncertainty relation. Interestingly, we also find that Englert's wave-particle duality relation [42] can be viewed as an error-disturbance relation.
The case of position and momentum illustrates the stark difference between the kind of uncertainty statements we can make in our approach and those based on the notion of comparing real and ideal values. Take joint measurability, where we would like to formalize the notion that no device can accurately measure both position and momentum. In the latter approach one would first try to quantify the amount of position or momentum error made by a device as the discrepancy to the true value, and then show that they cannot both be small. The errors would be in units of position or momentum, respectively, and the hoped-for uncertainty relation would pertain to these values. Here, in contrast, we focus on the performance of the actual device relative to fixed ideal devices, in this case idealized separate measurements of position or momentum. Importantly, we need not think of the ideal measurement as having infinite precision. Instead, we can pick any desired precision and ask if the behavior of the actual device is essentially the same as this precision-limited ideal. Now the position and momentum errors do not have units of these quantities (they are unitless and always lie between zero and one), but instead depend on the desired precision. Our uncertainty relation then implies that both errors cannot be small if we demand high precision in both position and momentum. In particular, when the product of the scales of the two precisions is small compared to Planck's constant, the errors will be bounded away from zero (see Theorem 3 for a precise statement). It is certainly easier to have a small error in this sense when the demanded precision is low, and this accords nicely with the fact that sufficiently-inaccurate joint measurement is possible. Indeed, we find no bound on the errors for low precision.
Uncertainty relations have also proven useful in quantum information processing, for instance in establishing simple proofs of the security of quantum key distribution [6, 7, 43–45]. Here we show that the error-disturbance relation implies that quantum channels which can faithfully transmit information regarding one observable do not leak any information whatsoever about conjugate observables to the environment. This statement cannot be derived from entropic relations, as it holds for all channel inputs. It can be used to construct leakage-resilient classical computers from fault-tolerant quantum computers [46], for instance.
The remainder of the paper is structured as follows. In the next section we give the mathematical background necessary to state our results, and describe how the general notion of distinguishability is related to the completely bounded norm (cb norm) in this setting. In Section 3 we define our error and disturbance measures precisely. Section 4 presents the error-disturbance tradeoff relations for finite dimensions, and details how joint measurability relations can be obtained from them. Section 5 considers the error-disturbance tradeoff relations for position and momentum. Two applications of the tradeoffs are given in Section 6: a formal statement of the information disturbance tradeoff for information about noncommuting observables and the connection between error-disturbance tradeoffs and Englert's wave-particle duality relations. In Section 7 we compare our results to previous approaches in more detail, and finally we finish with open questions in Section 8.
2 Mathematical setup
2.1 Distinguishability
The notion of the distinguishing probability is independent of the mathematical framework needed to describe quantum systems, so we give it first. Consider an apparatus $\mathcal{E}$ which in some way transforms an input $A$ into an output $B$. To describe how different $\mathcal{E}$ is from another such apparatus $\mathcal{E}'$, we can imagine the following scenario. Suppose that we randomly place either $\mathcal{E}$ or $\mathcal{E}'$ into a black box such that we no longer have any access to the inner workings of the device, only its inputs and outputs. Now our task is to guess which device is actually in the box by performing a single experiment, feeding in any desired input and observing the output in any manner of our choosing. In particular, the inputs and measurements can and should depend on $\mathcal{E}$ and $\mathcal{E}'$. The probability of making a correct guess, call it $p_{\rm dist}(\mathcal{E},\mathcal{E}')$, ranges from $\tfrac12$ to 1, since we can always just make a random guess without doing any experiment on the box at all. Therefore it is more convenient to work with the distinguishability measure
\[
\delta(\mathcal{E},\mathcal{E}') := 2p_{\rm dist}(\mathcal{E},\mathcal{E}') - 1, \tag{1}
\]
which ranges from zero (completely indistinguishable) to one (completely distinguishable). Later on we will show this quantity takes a specific mathematical form in quantum mechanics. But note that the definition implies that the distinguishability is monotonic under concatenation of a channel $\mathcal{F}$ with both $\mathcal{E}$ and $\mathcal{E}'$, since this just restricts the possible tests. That is, both $\delta(\mathcal{E}\mathcal{F},\mathcal{E}'\mathcal{F}) \le \delta(\mathcal{E},\mathcal{E}')$ and $\delta(\mathcal{F}\mathcal{E},\mathcal{F}\mathcal{E}') \le \delta(\mathcal{E},\mathcal{E}')$ hold for all channels $\mathcal{F}$ whose inputs and outputs are such that the channel concatenation is sensible. Here and in the remainder of the paper, we denote concatenation of channels by juxtaposition, while juxtaposition of operators denotes multiplication as usual.
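To make the definition concrete, here is a minimal numerical sketch (with made-up numbers, not taken from the paper) for the simplest case of two purely classical devices, where no entanglement is needed and the optimal single-shot test feeds in the best single input, so that the distinguishability reduces to a total-variation distance.

```python
import numpy as np

# Two hypothetical classical devices, given as column-stochastic matrices E[y, x] = Pr(output y | input x).
E1 = np.array([[0.9, 0.2],
               [0.1, 0.8]])
E2 = np.array([[0.7, 0.4],
               [0.3, 0.6]])

# For classical channels the best single-shot test feeds the most revealing input x and
# guesses via a likelihood-ratio test, giving
#   delta = max_x (1/2) * sum_y |E1[y, x] - E2[y, x]|.
delta = max(0.5 * np.abs(E1[:, x] - E2[:, x]).sum() for x in range(E1.shape[1]))
p_dist = 0.5 * (1 + delta)   # inverting Eq. (1)

print(f"delta = {delta:.2f}, p_dist = {p_dist:.2f}")
```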
2.2 Systems, algebras, channels, and measurements
In the finite-dimensional case we will be interested in two arbitrary nondegenerate observables denoted $X$ and $Z$. Only the eigenvectors of the observables will be relevant; call them $|\varphi_x\rangle$ and $|\theta_z\rangle$, respectively. In infinite dimensions we will confine our analysis to position $Q$ and momentum $P$, taking $\hbar = 1$. The analog of $Q$ and $P$ in finite dimensions are canonically conjugate observables $X$ and $Z$ for which $|\varphi_x\rangle = \frac{1}{\sqrt d}\sum_z \omega^{xz}|\theta_z\rangle$, where $d$ is the dimension and $\omega$ is a primitive $d$th root of unity.
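As a quick numerical illustration (not part of the paper), the conjugate basis is just the discrete Fourier transform of the $Z$ eigenbasis, and all overlaps $|\langle\varphi_x|\theta_z\rangle|^2$ come out to $1/d$:

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)          # primitive d-th root of unity

theta = np.eye(d, dtype=complex)        # Z eigenbasis: columns are |theta_z>
# X eigenbasis: |phi_x> = (1/sqrt(d)) sum_z omega^{xz} |theta_z>, stored as columns.
phi = np.array([[omega**(x * z) for x in range(d)] for z in range(d)]) / np.sqrt(d)

overlaps = np.abs(phi.conj().T @ theta)**2          # entry (x, z) = |<phi_x|theta_z>|^2
assert np.allclose(overlaps, 1.0 / d)               # maximal complementarity
assert np.allclose(phi.conj().T @ phi, np.eye(d))   # the |phi_x> form an orthonormal basis
print("all overlaps equal 1/d =", 1 / d)
```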
It will be more convenient for our purposes to adopt the algebraic framework and use the Heisenberg picture, though we shall occasionally employ the Schrödinger picture. In the Heisenberg picture we describe systems chiefly by the algebra of observables on them and describe transformations of systems by quantum channels, completely positive and unital maps from the algebra of observables of the output to the observables of the input [10, 47–50]. This allows us to treat classical and quantum systems on an equal footing within the same framework. When the input or output system is quantum mechanical, the observables are the bounded operators $\mathcal{B}(\mathcal{H})$ from the Hilbert space $\mathcal{H}$ associated with the system to itself. Classical systems, such as the results of measurement or inputs to a state preparation device, take values in a set, call it $Y$.
For arbitrary input and output algebras $\mathcal{A}_A$ and $\mathcal{A}_B$, quantum channels are precisely those maps $\mathcal{E}$ which are unital, $\mathcal{E}(\mathbf{1}_B) = \mathbf{1}_A$, and completely positive, meaning that not only does $\mathcal{E}$ map positive elements of $\mathcal{A}_B$ to positive elements of $\mathcal{A}_A$, it also maps positive elements of $\mathcal{A}_B \otimes \mathcal{B}(\mathbb{C}^n)$ to positive elements of $\mathcal{A}_A \otimes \mathcal{B}(\mathbb{C}^n)$ for all integers $n$. This requirement is necessary to ensure that channels act properly on entangled systems.
Figure 1: A general quantum apparatus $\mathcal{E}$. The apparatus measures a quantum system $A$, giving the output $Y$. In so doing, $\mathcal{E}$ also transforms the input $A$ into the output system $B$. Here the wavy lines denote quantum systems, the dashed lines classical systems. Formally, the apparatus is described by a quantum instrument.
A general measurement apparatus has both classical and quantum outputs, corresponding to the measurement result and the post-measurement quantum system. Channels describing such devices are called quantum instruments; we will call the channel describing just the measurement outcome a measurement. In finite dimensions any measurement can be seen as part of a quantum instrument, but not so for idealized position or momentum measurements, as shown in Theorem 3.3 of [10] (see page 57). Technically, we may anticipate the result since the post-measurement state of such a device would presumably be a delta function located at the value of the measurement, which is not an element of $L^2(Q)$. This need not bother us, though, since it is not operationally meaningful to consider a position measurement instrument of infinite precision. And indeed there is no mathematical obstacle to describing finite-precision position measurement by quantum instruments, as shown in Theorem 6.1 (page 67) of [10]. For any bounded function $\alpha \in L^2(Q)$ we can define the instrument $\mathcal{E}_\alpha : L^\infty(Q) \otimes \mathcal{B}(\mathcal{H}) \to \mathcal{B}(\mathcal{H})$ by
\[
\mathcal{E}_\alpha(f \otimes a) = \int dq\, f(q)\, A_{q;\alpha}^*\, a\, A_{q;\alpha}, \tag{2}
\]
where $A_{q;\alpha}\psi(q') = \alpha(q - q')\psi(q')$ for all $\psi \in L^2(Q)$. The classical output of the instrument is essentially the ideal value convolved with the function $\alpha$. Thus, setting the width of $\alpha$ sets the precision limit of the instrument.
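The statement that the outcome distribution is the ideal one blurred by $\alpha$ can be checked directly: the probability density of outcome $q$ is $\|A_{q;\alpha}\psi\|^2 = \int dq'\,|\alpha(q-q')|^2|\psi(q')|^2$. The sketch below (discretized line, illustrative parameters, not from the paper) computes it as a convolution.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# A hypothetical wave packet centered at q = 1, numerically normalized.
psi = np.exp(-(x - 1.0)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# alpha: square root of a normalized Gaussian (width 0.5), setting the precision limit.
alpha = (2.0 / np.pi)**0.25 * np.exp(-x**2)

# Outcome density p(q) = int dq' |alpha(q - q')|^2 |psi(q')|^2: the ideal position
# distribution |psi|^2 convolved with |alpha|^2.
p = np.convolve(np.abs(alpha)**2, np.abs(psi)**2, mode="same") * dx

print("total probability:", round(np.sum(p) * dx, 4))      # ~1
print("mean outcome:     ", round(np.sum(x * p) * dx, 4))  # ~1.0, the packet's mean position
```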
2.3 Distinguishability as a channel norm
The distinguishability measure is actually a norm on quantum channels, equal (apart from a factor of one half) to the so-called norm of complete boundedness, the cb norm [51–53]. The cb norm is defined as an extension of the operator norm, similar to the extension of positivity above, as
\[
\|T\|_{\rm cb} := \sup_{n\in\mathbb{N}} \|\mathbf{1}_n \otimes T\|_\infty, \tag{3}
\]
where $\|T\|_\infty$ is the operator norm. Then
\[
\delta(\mathcal{E}_1,\mathcal{E}_2) = \tfrac12 \|\mathcal{E}_1 - \mathcal{E}_2\|_{\rm cb}. \tag{4}
\]
In the Schrödinger picture we instead extend the trace norm $\|\cdot\|_1$, and the result is usually called the diamond norm [51, 53]. In either case, the extension serves to account for entangled inputs in the experiment to test whether $\mathcal{E}_1$ or $\mathcal{E}_2$ is the actual channel. In fact, entanglement is helpful even when the channels describe projective measurements, as shown by an example given in Appendix A. This expression for the cb or diamond norm is not closed-form, as it requires an optimization. However, in finite dimensions the cb norm can be cast as a convex optimization, specifically as a semidefinite program [54, 55], which makes numerical computation tractable. Further details are given in Appendix B.
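Short of solving the full semidefinite program, a simple lower bound on $\delta$ follows from one particular test: feed half of a maximally entangled state into the box, which amounts to comparing the Choi matrices of the two channels in trace norm. The sketch below (a toy qubit example with an assumed tilt angle, not the example of Appendix A) illustrates this.

```python
import numpy as np

def choi(kraus, d):
    """Choi matrix J = sum_{ij} |i><j| (x) E(|i><j|) of the channel with the given Kraus operators."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            J += np.kron(Eij, sum(K @ Eij @ K.conj().T for K in kraus))
    return J

d = 2
# Ideal Z measurement as a quantum-to-classical channel: Kraus operators |z><z|.
K_ideal = [np.outer(np.eye(d)[z], np.eye(d)[z]) for z in range(d)]

# A projective measurement in a basis tilted by angle t (hypothetical device).
t = 0.2
U = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
K_tilt = [np.outer(np.eye(d)[z], U[:, z].conj()) for z in range(d)]

Delta = choi(K_ideal, d) - choi(K_tilt, d)
# Maximally entangled input gives delta >= ||J1 - J2||_1 / (2d).  This is only a lower
# bound; the cb norm optimizes over all inputs and is computed exactly by an SDP (Appendix B).
lower = np.sum(np.abs(np.linalg.eigvalsh(Delta))) / (2 * d)
print(f"lower bound on delta: {lower:.3f}")
```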
2.4 The Stinespring representation and its continuity
According to the Stinespring representation theorem [52, 56], any channel $\mathcal{E}$ mapping an algebra $\mathcal{A}$ to $\mathcal{B}(\mathcal{H})$ can be expressed in terms of an isometry $V : \mathcal{H} \to \mathcal{K}$ to some Hilbert space $\mathcal{K}$ and a representation $\pi$ of $\mathcal{A}$ in $\mathcal{B}(\mathcal{K})$ such that, for all $a \in \mathcal{A}$,
\[
\mathcal{E}(a) = V^* \pi(a) V. \tag{5}
\]
The isometry in the Stinespring representation is usually called the dilation of the channel, and $\mathcal{K}$ the dilation space. In finite-dimensional settings, calling the input $A$ and the output $B$, one usually considers maps taking $\mathcal{A} = \mathcal{B}(\mathcal{H}_B)$ to $\mathcal{B}(\mathcal{H}_A)$. Then one can choose $\mathcal{K} = \mathcal{H}_B \otimes \mathcal{H}_E$, where $\mathcal{H}_E$ is a suitably large Hilbert space associated to the "environment" of the transformation ($\mathcal{H}_E$ can always be chosen to have dimension $\dim(\mathcal{H}_A)\dim(\mathcal{H}_B)$). The representation $\pi$ is just $\pi(a) = a \otimes \mathbf{1}_E$. Using the isometry $V$, we can also construct a channel from $\mathcal{B}(\mathcal{H}_E)$ to $\mathcal{B}(\mathcal{H}_A)$ in the same manner; this is known as the complement $\mathcal{E}^\sharp$ of $\mathcal{E}$.
The advantage of the general form of the Stinespring representation is that we can easily describe measurements, possibly continuous-valued, as well. For the case of finite outcomes, consider the ideal projective measurement $\mathcal{Q}_X$ of the observable $X$. Choosing a basis $\{|b_x\rangle\}$ of $L^2(X)$ and defining $\pi(\delta_x) = |b_x\rangle\langle b_x|$ for $\delta_x$ the function taking the value 1 at $x$ and zero elsewhere, the canonical dilation isometry $W_X : \mathcal{H} \to L^2(X) \otimes \mathcal{H}$ is given by
\[
W_X = \sum_x |b_x\rangle \otimes |\varphi_x\rangle\langle\varphi_x|. \tag{6}
\]
Note that this isometry defines a quantum instrument, since it can describe both the measurement outcome and the post-measurement quantum system. If we want to describe just the measurement result, we could simply use $W_X = \sum_x |b_x\rangle\langle\varphi_x|$ with the same $\pi$. More generally, a POVM with elements $\Lambda_x$ has the isometry $W_X = \sum_x |b_x\rangle \otimes \sqrt{\Lambda_x}$.
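A short numerical check (qubit example, not from the paper) that (6) indeed defines an isometry and reproduces the Born statistics of the $X$ measurement:

```python
import numpy as np

d = 2
b = np.eye(d)                                     # pointer basis |b_x>
phi = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # X eigenvectors as columns

# W_X = sum_x |b_x> (x) |phi_x><phi_x|, Eq. (6).
W = sum(np.kron(b[:, [x]], np.outer(phi[:, x], phi[:, x].conj())) for x in range(d))
assert np.allclose(W.conj().T @ W, np.eye(d))     # isometry: W* W = 1

# The induced POVM elements W*(|b_x><b_x| (x) 1)W reduce to the projectors |phi_x><phi_x|.
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # an arbitrary test state
for x in range(d):
    E_x = W.conj().T @ np.kron(np.outer(b[:, x], b[:, x]), np.eye(d)) @ W
    assert np.isclose(np.trace(rho @ E_x).real,
                      np.trace(rho @ np.outer(phi[:, x], phi[:, x].conj())).real)
print("W_X is an isometry and reproduces the ideal X-measurement statistics.")
```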
For finite-precision measurements of position or momentum, the form of the quantum instrument in (2) immediately gives a Stinespring dilation $W_Q : \mathcal{H} \to \mathcal{K}$ with $\mathcal{K} = L^2(Q) \otimes \mathcal{H}$ whose action is defined by
\[
(W_Q\psi)(q,q') = \alpha(q - q')\psi(q'), \tag{7}
\]
and where $\pi$ is just pointwise multiplication on the $L^\infty(Q)$ factor, i.e. for $f \in L^\infty(Q)$ and $a \in \mathcal{B}(\mathcal{H})$, $[\pi(f \otimes a)(\xi \otimes \psi)](q,q') = f(q)\xi(q)\cdot(a\psi)(q')$ for all $\xi \in L^2(Q)$ and $\psi \in \mathcal{H}$.
A slight change to the isometry in (6) gives the dilation of the device which prepares the state $|\varphi_x\rangle$ for classical input $x$. Formally the device is described by the map $\mathcal{P} : \mathcal{B}(\mathcal{H}) \to L^\infty(X)$ for which $\mathcal{P}(\Lambda) = \sum_x |b_x\rangle\langle b_x|\, \langle\varphi_x|\Lambda|\varphi_x\rangle$. Now consider $W_X' : L^2(X) \to \mathcal{H} \otimes L^2(X)$ given by
\[
W_X' = \sum_x |\varphi_x\rangle \otimes |b_x\rangle\langle b_x|. \tag{8}
\]
Choosing $\pi(\Lambda) = \Lambda \otimes \mathbf{1}_X$, we have $\mathcal{P}(\Lambda) = W_X'^* \pi(\Lambda) W_X'$.
The Stinespring representation is not unique [41]. Given two representations $(\pi_1,V_1,\mathcal{K}_1)$ and $(\pi_2,V_2,\mathcal{K}_2)$ of the same channel $\mathcal{E}$, there exists a partial isometry $U : \mathcal{K}_1 \to \mathcal{K}_2$ such that $UV_1 = V_2$, $U^*V_2 = V_1$, and $U\pi_1(a) = \pi_2(a)U$ for all $a \in \mathcal{A}$. For the representations $\pi$ as usually employed in the finite-dimensional case, this last condition implies that $U$ is a partial isometry from one environment to the other, for $U(a \otimes \mathbf{1}_E) = (a \otimes \mathbf{1}_{E'})U$ can only hold for all $a$ if $U$ acts trivially on $B$. For channels describing measurements, finite or continuous, the last condition implies that any such $U$ is a conditional partial isometry, dependent on the outcome of the measurement result. Thus, for any set of isometries $U_x : \mathcal{H} \to \mathcal{H}_R$, the map $\sum_x |b_x\rangle \otimes U_x|\varphi_x\rangle\langle\varphi_x|$ is a valid dilation of $\mathcal{Q}_X$, just as is $W_X$ in (6). Similarly, $(W_Q'\psi)(q,q') = \alpha(q - q')[U_q\psi](q')$ is a valid dilation of $\mathcal{E}_\alpha$ in (2).
The main technical ingredient required for our results is the continuity of the Stinespring representation in the cb norm [40, 41]. That is, channels which are nearly indistinguishable have Stinespring dilations which are close, and vice versa. For completely positive and unital maps $\mathcal{E}_1$ and $\mathcal{E}_2$, [40, 41] show that
\[
\tfrac12\|\mathcal{E}_1 - \mathcal{E}_2\|_{\rm cb} \le \inf_{\pi_i,V_i} \|V_1 - V_2\|_\infty \le \sqrt{\|\mathcal{E}_1 - \mathcal{E}_2\|_{\rm cb}}, \tag{9}
\]
where the infimum is taken over all Stinespring representations $(\pi_i,V_i,\mathcal{K}_i)$ of $\mathcal{E}_i$.
2.5 Sequential and joint measurements
Using the Stinespring representation we can easily show that, in principle, any joint measurement can always be decomposed into a sequential measurement.

Lemma 1. Suppose that $\mathcal{E} : L^\infty(X) \otimes L^\infty(Z) \to \mathcal{B}(\mathcal{H})$ is a channel describing a joint measurement. Then there exist an instrument $\mathcal{A}$ whose measurement output reproduces the $X$ output of $\mathcal{E}$ and a measurement $\mathcal{M}$ of $Z$, conditional on the value of $X$, such that $\mathcal{E} = \mathcal{A}\mathcal{M}$.

Proof. Define $\mathcal{M}_0 : L^\infty(X) \to \mathcal{B}(\mathcal{H})$ to be just the $X$ output of $\mathcal{E}$, i.e. $\mathcal{M}_0(f) = \mathcal{E}(f \otimes \mathbf{1})$. Now suppose that $V : \mathcal{H} \to L^2(X) \otimes L^2(Z) \otimes \mathcal{H}''$ is a Stinespring representation of $\mathcal{E}$ and $V_X : \mathcal{H} \to L^2(X) \otimes \mathcal{H}'$ is a representation of $\mathcal{M}_0$, both with the standard representation $\pi$ of $L^\infty$ into $L^2$. By construction, $V$ is also a dilation of $\mathcal{M}_0$, and therefore there exists a partial isometry $U_X$ such that $V = U_X V_X$. More specifically, conditional on the value $X = x$, each $U_x$ sends $\mathcal{H}'$ to $L^2(Z) \otimes \mathcal{H}''$. Thus, setting $\mathcal{A}(f \otimes a) = V_X^*(\pi(f) \otimes a)V_X$ and $\mathcal{M}_x(f) = U_x^*(\pi(f) \otimes \mathbf{1})U_x$, we have $\mathcal{E} = \mathcal{A}\mathcal{M}$.
3 Definitions of error and disturbance
3.1 Measurement error
To characterize the error $\varepsilon_X$ an apparatus $\mathcal{E}$ makes relative to an ideal measurement $\mathcal{Q}_X$ of an observable $X$, we can simply use the distinguishability of the two channels, taking only the classical output of $\mathcal{E}$. Suppose that the apparatus is described by the channel $\mathcal{E} : \mathcal{B}(\mathcal{H}_B) \otimes L^\infty(X) \to \mathcal{B}(\mathcal{H}_A)$ and the ideal measurement by the channel $\mathcal{Q}_X : L^\infty(X) \to \mathcal{B}(\mathcal{H}_A)$. To ignore the output system $B$, we make use of the partial trace map $\mathcal{T}_B : L^\infty(X) \to \mathcal{B}(\mathcal{H}_B) \otimes L^\infty(X)$ given by $\mathcal{T}_B(f) = \mathbf{1}_B \otimes f$. Then a sensible notion of error is given by $\varepsilon_X(\mathcal{E}) = \delta(\mathcal{Q}_X, \mathcal{E}\mathcal{T}_B)$. If it is easy to tell the ideal measurement apart from the actual device, then the error is large; if it is difficult, then the error is small.

As a general definition, though, this quantity is deficient in two respects. First, we could imagine an apparatus which performs an ideal $\mathcal{Q}_X$ measurement, but simply mislabels the outputs. This leads to $\varepsilon_X(\mathcal{E}) = 1$, even though the ideal measurement is actually performed. Second, we might wish to consider the case that the classical output set of the apparatus is not equal to $X$ itself. For instance, perhaps $\mathcal{E}$ delivers much more output than is expected from $\mathcal{Q}_X$. In this case we also formally have $\varepsilon_X(\mathcal{E}) = 1$, since we can just examine the output to distinguish the two devices.
We can remedy both of these issues by describing the apparatus by the channel $\mathcal{E} : \mathcal{B}(\mathcal{H}_B) \otimes L^\infty(Y) \to \mathcal{B}(\mathcal{H}_A)$ and just including a further classical postprocessing operation $\mathcal{R} : L^\infty(X) \to L^\infty(Y)$ in the distinguishability step. Since we are free to choose the best such map, we define
\[
\varepsilon_X(\mathcal{E}) := \inf_{\mathcal{R}} \delta(\mathcal{Q}_X, \mathcal{E}\mathcal{R}\mathcal{T}_B). \tag{10}
\]
The setup of the definition is depicted in Figure 2.
Figure 2: Measurement error. The error made by the apparatus $\mathcal{E}$ in measuring $X$ is defined by how distinguishable the actual device is from the ideal measurement $\mathcal{Q}_X$ in any experiment whatsoever, after suitably processing the classical output $Y$ of $\mathcal{E}$ with the map $\mathcal{R}$. To enable a fair comparison, we ignore the quantum output of the apparatus, indicated in the diagram by graying out $B$. If the actual and ideal devices are difficult to tell apart, the error is small.
3.2 Measurement disturbance
Defining the disturbance an apparatus $\mathcal{E}$ causes to an observable, say $Z$, is more delicate, as an observable itself does not have a directly operational meaning. But there are two straightforward ways to proceed: we can either associate the observable with measurement or with state preparation. In the former, we compare how well we can mimic the ideal measurement $\mathcal{Q}_Z$ of the observable after employing the apparatus $\mathcal{E}$, quantifying this using measurement error as before. Additionally, we should allow the use of recovery operations in which we attempt to "restore" the input state as well as possible, possibly conditional on the output of the measurement. Formally, let $\mathcal{Q}_Z : L^\infty(Z) \to \mathcal{B}(\mathcal{H}_A)$ be the ideal $Z$ measurement and $\mathcal{R}$ be a recovery map $\mathcal{R} : \mathcal{B}(\mathcal{H}_A) \to \mathcal{B}(\mathcal{H}_B) \otimes L^\infty(X)$ which acts on the output of $\mathcal{E}$ conditional on the value of the classical output $X$ (which it then promptly forgets). As depicted in Figure 3, the measurement disturbance is then the measurement error after using the best recovery map:
\[
\nu_Z(\mathcal{E}) := \inf_{\mathcal{R}} \delta(\mathcal{Q}_Z, \mathcal{E}\mathcal{R}\mathcal{Q}_Z). \tag{11}
\]
Figure 3: Measurement disturbance. To define the disturbance imparted by an apparatus $\mathcal{E}$ to the measurement of an observable $Z$, consider performing the ideal $\mathcal{Q}_Z$ measurement on the output $B$ of $\mathcal{E}$. First, however, it may be advantageous to "correct" or "recover" the original input $A$ by some operation $\mathcal{R}$. In general, $\mathcal{R}$ may depend on the output $X$ of $\mathcal{E}$. The distinguishability between the resulting combined operation and just performing $\mathcal{Q}_Z$ on the original input defines the measurement disturbance.
3.3 Preparation disturbance
For state preparation, consider a device with classical input and quantum output that prepares the eigenstates of $Z$. We can model this by a channel $\mathcal{P}_Z$, which in the Schrödinger picture produces $|\theta_z\rangle$ upon receiving the input $z$. Now we compare the action of $\mathcal{P}_Z$ to the action of $\mathcal{P}_Z$ followed by $\mathcal{E}$, again employing a recovery operation. Formally, let $\mathcal{P}_Z : \mathcal{B}(\mathcal{H}_A) \to L^\infty(Z)$ be the ideal $Z$ preparation device and consider recovery operations $\mathcal{R}$ of the form $\mathcal{R} : \mathcal{B}(\mathcal{H}_A) \to \mathcal{B}(\mathcal{H}_B) \otimes L^\infty(X)$. Then the preparation disturbance is defined as
\[
\eta_Z(\mathcal{E}) := \inf_{\mathcal{R}} \delta(\mathcal{P}_Z, \mathcal{P}_Z\mathcal{E}\mathcal{R}\mathcal{T}_Y). \tag{12}
\]
Figure 4: Preparation disturbance. The ideal preparation device $\mathcal{P}_Z$ takes a classical input $Z$ and creates the corresponding $Z$ eigenstate. As with measurement disturbance, the preparation disturbance is related to the distinguishability of the ideal preparation device $\mathcal{P}_Z$ and $\mathcal{P}_Z$ followed by the apparatus $\mathcal{E}$ in question and the best possible recovery operation $\mathcal{R}$.
All of the measures defined so far are "figures of merit", in the sense that we compare the actual device to the ideal, perfect functionality. In the case of state preparation we can also define a disturbance measure as a "figure of demerit", by comparing the actual functionality not to the best-case behavior but to the worst. To this end, consider a state preparation device $\mathcal{C}$ which just ignores the classical input and always prepares the same fixed output state. These are constant (output) channels, and clearly $\mathcal{E}$ disturbs the state preparation $\mathcal{P}_Z$ considerably if $\mathcal{P}_Z\mathcal{E}$ has effectively a constant output. Based on this intuition, we can then make the following formal definition:
\[
\hat\eta_Z(\mathcal{E}) := \frac{d-1}{d} - \inf_{\mathcal{C}:\,\text{const.}} \delta(\mathcal{C}, \mathcal{P}_Z\mathcal{E}). \tag{13}
\]
The disturbance is small according to this measure if it is easy to distinguish the action of $\mathcal{P}_Z\mathcal{E}$ from having a constant output, and large otherwise. To see that $\hat\eta_Z$ is positive, use the Schrödinger picture and let the output of $\mathcal{C}_*$ be the state $\sigma$ for all inputs. Then note that $\inf_{\mathcal{C}} \delta(\mathcal{C},\mathcal{P}_Z\mathcal{E}) = \min_\sigma \max_z \delta(\sigma, \mathcal{E}_*(\theta_z))$, where the latter $\delta$ is the trace distance. Choosing $\sigma = \frac1d\sum_z \mathcal{E}_*(\theta_z)$ and using joint convexity of the trace distance, we have $\inf_{\mathcal{C}} \delta(\mathcal{C},\mathcal{P}_Z\mathcal{E}) \le \frac{d-1}{d}$.
Figure 5: Figure of "demerit" version of preparation disturbance. Another approach to defining preparation disturbance is to consider distinguishability from a non-ideal device instead of an ideal device. The apparatus $\mathcal{E}$ imparts a large disturbance to the preparation $\mathcal{P}_Z$ if the output of the combination $\mathcal{P}_Z\mathcal{E}$ is essentially independent of the input. Thus we consider the distinguishability of $\mathcal{P}_Z\mathcal{E}$ and a constant preparation $\mathcal{C}$ which outputs a fixed state regardless of the input $Z$.
For finite-dimensional systems, all the measures of error and disturbance can be expressed as semidefinite programs, as detailed in Appendix B. As an example, we compute these measures for the simple case of a non-ideal $X$ measurement on a qubit; we will meet this example later in assessing the tightness of the uncertainty relations and their connection to wave-particle duality relations in the Mach-Zehnder interferometer. Consider the ideal measurement isometry (6), and suppose that the basis states $|b_x\rangle$ are replaced by two pure states $|\gamma_x\rangle$ which have an overlap $\langle\gamma_0|\gamma_1\rangle = \sin\theta$. Without loss of generality, we can take $|\gamma_x\rangle = \cos\tfrac\theta2|b_x\rangle + \sin\tfrac\theta2|b_{x+1}\rangle$. The optimal measurement $\mathcal{Q}$ for distinguishing these two states is just projective measurement in the $|b_x\rangle$ basis, so let us consider the channel $\mathcal{E}_{\rm MZ} = \mathcal{W}\mathcal{Q}$, the modified isometry followed by this measurement. Then, as detailed in Appendix B, for $Z$ canonically conjugate to $X$ we find
\[
\varepsilon_X(\mathcal{E}_{\rm MZ}) = \tfrac12(1 - \cos\theta) \quad\text{and} \tag{14}
\]
\[
\nu_Z(\mathcal{E}_{\rm MZ}) = \eta_Z(\mathcal{E}_{\rm MZ}) = \hat\eta_Z(\mathcal{E}_{\rm MZ}) = \tfrac12(1 - \sin\theta). \tag{15}
\]
In all of the figures of merit, the optimal recovery map $\mathcal{R}$ is to do nothing, while in $\hat\eta_Z$ the optimal channel $\mathcal{C}$ outputs the average of the two outputs of $\mathcal{P}_Z\mathcal{E}$.
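For the classical side of this example, a small sketch (not from the paper) is instructive: the POVM induced on the input by $\mathcal{E}_{\rm MZ}$ is a noisy $X$ measurement with flip probability $\sin^2(\theta/2)$, and already the simple test of feeding an $X$ eigenstate separates it from $\mathcal{Q}_X$ by exactly the value in (14).

```python
import numpy as np

theta = 0.6                                     # hypothetical pointer overlap angle
d = 2
b = np.eye(d)                                   # pointer basis |b_y>
phi = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # X eigenbasis (columns)

# Pointer states |gamma_x> = cos(theta/2)|b_x> + sin(theta/2)|b_{x+1}>, overlap sin(theta).
gamma = np.array([[np.cos(theta / 2), np.sin(theta / 2)],
                  [np.sin(theta / 2), np.cos(theta / 2)]])

# Reading the pointer in the |b_y> basis induces the POVM
#   M_y = sum_x |<b_y|gamma_x>|^2 |phi_x><phi_x|.
M = [sum((b[:, y] @ gamma[:, x])**2 * np.outer(phi[:, x], phi[:, x]) for x in range(d))
     for y in range(d)]

# Feed |phi_0>: total-variation distance between the outcome distribution and the ideal (1, 0).
p = np.array([phi[:, 0] @ M[y] @ phi[:, 0] for y in range(d)])
tv = 0.5 * np.abs(p - np.array([1.0, 0.0])).sum()
print(round(tv, 6), round(0.5 * (1 - np.cos(theta)), 6))   # both equal sin^2(theta/2)
```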
4 Uncertainty relations in finite dimensions
4.1 Complementarity measures
Before turning to the uncertainty relations, we first present several measures of complementarity, defined using the above notions of disturbance, that will appear therein. For instance, we can measure the complementarity of two observables just by using the measurement disturbance $\nu$. Specifically, treating $\mathcal{Q}_X$ as the actual measurement and $\mathcal{Q}_Z$ as the ideal measurement, we define $c_M(X,Z) := \nu_Z(\mathcal{Q}_X)$. This quantity is equivalent to $\varepsilon_Z(\mathcal{Q}_X)$ since any recovery map $\mathcal{R}_{X\to Z}$ in $\varepsilon_Z$ can be used to define $\mathcal{R}'_{X\to A}$ in $\nu_Z$ by $\mathcal{R}' = \mathcal{R}\mathcal{P}_Z$. Similarly, we could treat one observable as defining the ideal state preparation device and the other as the measurement apparatus, which leads to $c_P(X,Z) := \eta_Z(\mathcal{Q}_X)$. Here we could also use the "figure of demerit" and define $\hat c_P(X,Z) := \hat\eta_Z(\mathcal{Q}_X)$.
Though the three complementarity measures are conceptually straightforward, it is also desirable to have closed-form expressions, particularly for the bounds in the uncertainty relations. To this end, we derive lower bounds as follows. First, consider $c_M$ and choose as inputs $Z$ basis states. This gives, for random choice of input,
\[
\begin{aligned}
c_M(X,Z) &\ge \inf_{\mathcal{R}} \delta(\mathcal{P}_Z\mathcal{Q}_Z, \mathcal{P}_Z\mathcal{Q}_X\mathcal{R}) && \text{(16a)}\\
&\ge 1 - \max_R \frac1d \sum_{xz} |\langle\varphi_x|\theta_z\rangle|^2 R_{zx} && \text{(16b)}\\
&\ge 1 - \max_R \frac1d \sum_x \max_z |\langle\varphi_x|\theta_z\rangle|^2 \sum_{z'} R_{z'x} && \text{(16c)}\\
&= 1 - \frac1d \sum_x \max_z |\langle\varphi_x|\theta_z\rangle|^2, && \text{(16d)}
\end{aligned}
\]
where the maximization is over stochastic matrices $R$, and we use the fact that $\sum_z R_{zx} = 1$ for all $x$. For $c_P$ we proceed similarly; absorbing the final $\mathcal{Q}_Z$ into the recovery to obtain a map $\mathcal{R}_{X\to Z}$, we have
\[
\begin{aligned}
c_P(X,Z) &\ge \inf_{\mathcal{R}_{X\to A}} \delta(\mathcal{P}_Z\mathcal{Q}_Z, \mathcal{P}_Z\mathcal{Q}_X\mathcal{R}\mathcal{Q}_Z) && \text{(17a)}\\
&= \inf_{\mathcal{R}_{X\to Z}} \delta(\mathcal{P}_Z\mathcal{Q}_Z, \mathcal{P}_Z\mathcal{Q}_X\mathcal{R}) && \text{(17b)}\\
&\ge 1 - \frac1d \sum_x \max_z |\langle\varphi_x|\theta_z\rangle|^2. && \text{(17c)}
\end{aligned}
\]
For $\hat c_P(X,Z)$ we have
\[
\begin{aligned}
\hat c_P(X,Z) &= \frac{d-1}{d} - \inf_{\mathcal{C}:\,\text{const.}} \delta(\mathcal{C}, \mathcal{P}_Z\mathcal{Q}_X) && \text{(18a)}\\
&= \frac{d-1}{d} - \min_P \max_z \delta(P, \mathcal{Q}_X^*(\theta_z)) && \text{(18b)}\\
&\ge \frac{d-1}{d} - \max_z \frac12 \sum_x \left| \tfrac1d - |\langle\varphi_x|\theta_z\rangle|^2 \right|, && \text{(18c)}
\end{aligned}
\]
where the bound comes from choosing $P$ to be the uniform distribution. We could also choose $P(x) = |\langle\varphi_x|\theta_{z_0}\rangle|^2$ for some $z_0$ to obtain the bound $\hat c_P(X,Z) \ge \frac{d-1}{d} - \min_{z_0}\max_z \frac12\sum_x |\mathrm{Tr}[\varphi_x(\theta_z - \theta_{z_0})]|$. However, from numerical investigation of random bases, it appears that this bound is rarely better than the previous one.
Let us comment on the properties of the complementarity measures and their bounds in (16d), (17c), and (18c). Both expressions in the bounds are, properly, functions only of the two orthonormal bases involved, depending only on the set of overlaps. In particular, both are invariant under relabelling the bases. Uncertainty relations formulated in terms of conditional entropy typically only involve the largest overlap or largest two overlaps [7, 57], but the bounds derived here are yet more sensitive to the structure of the overlaps. Interestingly, the quantity in (16d) appears in the information exclusion relation of [57], where the sum of mutual informations different systems can have about the observables $X$ and $Z$ is bounded by $\log_2 d\sum_x \max_z |\langle\varphi_x|\theta_z\rangle|^2$.
The complementarity measures themselves all take the same value in two extreme cases: zero in the trivial case of identical bases, and $(d-1)/d$ in the case that the two bases are conjugate, meaning $|\langle\varphi_x|\theta_z\rangle|^2 = 1/d$ for all $x,z$. In between, however, the separation between the two can be quite large. Consider two observables that share two eigenvectors while the remainder are conjugate. The bounds (16d) and (17c) imply that $c_M$ and $c_P$ are both greater than $(d-3)/d$. The bound on $\hat c_P$ from (18c) is zero, though a better choice of constant channel can easily be found in this case. In dimensions $d = 3k+2$, fix the constant channel to output the distribution $P$ with probability 1/3 of being either of the last two outputs, $1/3k$ for any $k$ of the remainder, and zero otherwise. Then we have $\hat c_P \ge \frac{d-1}{d} - \max_z \delta(P, \mathcal{Q}_X^*\mathcal{P}_Z^*(z))$. It is easy to show the optimal value is 2/3, so that $\hat c_P \ge (d-3)/3d$. Hence, in the limit of large $d$, the gap between the two measures can be at least 2/3. This example also shows that the gap between the complementarity measures and the bounds can be large, though we will not investigate this further here.
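The closed-form bounds are straightforward to evaluate; the sketch below (not from the paper) computes the right-hand sides of (16d)/(17c) and (18c) for a conjugate pair of bases, where both equal $(d-1)/d$, and for a randomly chosen pair.

```python
import numpy as np

def bound_overlap(B1, B2):
    """Right-hand side of (16d)/(17c): 1 - (1/d) sum_x max_z |<phi_x|theta_z>|^2."""
    d = B1.shape[0]
    ov = np.abs(B1.conj().T @ B2)**2            # ov[x, z] = |<phi_x|theta_z>|^2
    return 1 - ov.max(axis=1).sum() / d

def bound_demerit(B1, B2):
    """Right-hand side of (18c): (d-1)/d - max_z (1/2) sum_x |1/d - |<phi_x|theta_z>|^2|."""
    d = B1.shape[0]
    ov = np.abs(B1.conj().T @ B2)**2
    return (d - 1) / d - 0.5 * np.abs(1 / d - ov).sum(axis=0).max()

d = 4
Z = np.eye(d, dtype=complex)
F = np.array([[np.exp(2j * np.pi * x * z / d) for x in range(d)] for z in range(d)]) / np.sqrt(d)
print("conjugate bases:", bound_overlap(F, Z), bound_demerit(F, Z))   # both 0.75 = (d-1)/d

rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
R, _ = np.linalg.qr(A)                          # a random orthonormal basis as columns of R
print("random bases:   ", bound_overlap(R, Z), bound_demerit(R, Z))
```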
4.2 Results
We finally have all the pieces necessary to formally state our uncertainty relations. The first relates measurement error and measurement disturbance:

Theorem 1. For any two observables $X$ and $Z$ and any quantum instrument $\mathcal{E}$,
\[
\sqrt{2\varepsilon_X(\mathcal{E})} + \nu_Z(\mathcal{E}) \ge c_M(X,Z) \quad\text{and} \tag{19}
\]
\[
\varepsilon_X(\mathcal{E}) + \sqrt{2\nu_Z(\mathcal{E})} \ge c_M(Z,X). \tag{20}
\]
Due to Lemma 1, any joint measurement of two observables can be decomposed into a sequential measurement, which implies that these bounds hold for joint measurement devices as well. Indeed, we will make use of that lemma to derive (20) from (19) in the proof below. Of course we can replace the $c_M$ quantities with closed-form expressions using the bound in (16d). Figure 6 shows the bound for the case of conjugate observables of a qubit, for which $c_M(X,Z) = c_M(Z,X) = \tfrac12$. It also shows the particular relation between error and measurement disturbance achieved by the apparatus $\mathcal{E}_{\rm MZ}$ mentioned at the end of §3, from which we can conclude that the bound is tight in the region of vanishing error or vanishing disturbance.
Figure 6: Error versus disturbance bounds for conjugate qubit observables. Theorem 1 restricts the possible combinations of measurement error $\varepsilon_X$ and measurement disturbance $\nu_Z$ to the dark gray region bounded by the solid line. Theorem 2 additionally includes the light gray region for the preparation disturbances $\eta_Z$ and $\hat\eta_Z$. Also shown are the error and disturbance values achieved by $\mathcal{E}_{\rm MZ}$ from §3.
Theorem 2. For any two observables $X$ and $Z$ and any quantum instrument $\mathcal{E}$,
\[
\sqrt{2\varepsilon_X(\mathcal{E})} + \eta_Z(\mathcal{E}) \ge c_P(X,Z) \quad\text{and} \tag{21}
\]
\[
\sqrt{2\varepsilon_X(\mathcal{E})} + \hat\eta_Z(\mathcal{E}) \ge \hat c_P(X,Z). \tag{22}
\]
Returning to Figure 6 but replacing the vertical axis with $\eta_Z$ or $\hat\eta_Z$, we now have only the upper branch of the bound, which continues to the horizontal axis as the dotted line. Here we can only conclude that the bounds are tight in the region of vanishing error.
4.3 Proofs
The proofs of all three uncertainty relations are just judicious applications of the triangle inequality, and the particular bound comes from the setting in which $\mathcal{P}_Z$ meets $\mathcal{Q}_X$. We shall make use of the fact that an instrument which has a small error relative to $\mathcal{Q}_X$ is close to one which actually employs the instrument associated with $\mathcal{Q}_X$. This is encapsulated in the following

Lemma 2. For any apparatus $\mathcal{E}_{A\to YB}$ there exists a channel $\mathcal{F}_{XA\to YB}$ such that $\delta(\mathcal{E}, \mathcal{Q}_X'\mathcal{F}) \le \sqrt{2\varepsilon_X(\mathcal{E})}$, where $\mathcal{Q}_X'$ is a quantum instrument associated with the measurement $\mathcal{Q}_X$. Furthermore, if $\mathcal{Q}_X$ is a projective measurement, then there exists a state preparation $\mathcal{P}_{X\to YB}$ such that $\delta(\mathcal{E}, \mathcal{Q}_X\mathcal{P}) \le \sqrt{2\varepsilon_X(\mathcal{E})}$.
Proof. Let $V : \mathcal{H}_A \to \mathcal{H}_B \otimes \mathcal{H}_E \otimes L^2(Y)$ and $W_X : \mathcal{H}_A \to L^2(X) \otimes \mathcal{H}_A$ be respective dilations of $\mathcal{E}$ and $\mathcal{Q}_X$. Using the dilation $W_X$ we can define the instrument $\mathcal{Q}_X'$ as
\[
\mathcal{Q}_X' : L^\infty(X) \otimes \mathcal{B}(\mathcal{H}_A) \to \mathcal{B}(\mathcal{H}_A), \qquad g \otimes A \mapsto W_X^*(\pi(g) \otimes A)W_X. \tag{23}
\]
Suppose $\mathcal{R}_{Y\to X}$ is the optimal map in the definition of $\varepsilon_X(\mathcal{E})$, and let $\mathcal{R}'_{Y\to XY}$ be the extension of $\mathcal{R}$ which keeps the input $Y$; it has a dilation $V' : L^2(Y) \to L^2(Y) \otimes L^2(X)$. By Stinespring continuity, in finite dimensions there exists a conditional isometry $U_X : L^2(X) \otimes \mathcal{H}_A \to L^2(X) \otimes L^2(Y) \otimes \mathcal{H}_B \otimes \mathcal{H}_E$ such that
\[
\|V'V - U_X W_X\|_\infty \le \sqrt{2\varepsilon_X(\mathcal{E})}. \tag{24}
\]
Now consider the map
\[
\mathcal{E}' : L^\infty(Y) \otimes \mathcal{B}(\mathcal{H}_B) \to \mathcal{B}(\mathcal{H}_A), \qquad f \otimes A \mapsto W_X^* U_X^* (\mathbf{1}_X \otimes \pi(f) \otimes A \otimes \mathbf{1}_E) U_X W_X. \tag{25}
\]
By the other bound in Stinespring continuity we thus have $\delta(\mathcal{E},\mathcal{E}') \le \sqrt{2\varepsilon_X(\mathcal{E})}$. Furthermore, as described in §2.4, $U_X$ is a conditional isometry, i.e. a collection of isometries $U_x : \mathcal{H}_A \to L^2(Y) \otimes \mathcal{H}_B \otimes \mathcal{H}_E$ for each measurement outcome $x$. Note that we may regard elements of $L^\infty(X) \otimes \mathcal{B}(\mathcal{H})$ as sequences $(A_x)_{x\in X}$ with $A_x \in \mathcal{B}(\mathcal{H})$ for all $x \in X$ such that $\operatorname{ess\,sup}_x \|A_x\|_\infty < \infty$. Therefore we may define
\[
\mathcal{F} : L^\infty(Y) \otimes \mathcal{B}(\mathcal{H}_B) \to L^\infty(X) \otimes \mathcal{B}(\mathcal{H}_A), \qquad f \otimes A \mapsto \big(U_x^*(\pi(f) \otimes A \otimes \mathbf{1}_E)U_x\big)_{x\in X}, \tag{26}
\]
so that $\mathcal{E}' = \mathcal{Q}_X'\mathcal{F}$. This completes the proof of the first statement.

If $\mathcal{Q}_X$ is a projective measurement, then the output $B$ of $\mathcal{Q}_X'$ can just as well be prepared from the $X$ output. Describing this with the map $\mathcal{P}'_{X\to XA}$, which prepares states in $A$ given the value of $X$ and retains $X$ at the output, we have $\mathcal{Q}_X' = \mathcal{Q}_X\mathcal{P}'$. Setting $\mathcal{P} = \mathcal{P}'\mathcal{F}$ completes the proof of the second statement.
Now, to prove (19), start with the triangle inequality and monotonicity. Suppose $\mathcal{P}_{X\to YB}$ is the state preparation map from Lemma 2. Then, for any $\mathcal{R}_{YB\to A}$,
\[
\begin{aligned}
\delta(\mathcal{Q}_Z, \mathcal{Q}_X\mathcal{P}\mathcal{R}\mathcal{Q}_Z) &\le \delta(\mathcal{Q}_Z, \mathcal{E}\mathcal{R}\mathcal{Q}_Z) + \delta(\mathcal{E}\mathcal{R}\mathcal{Q}_Z, \mathcal{Q}_X\mathcal{P}\mathcal{R}\mathcal{Q}_Z) && \text{(27a)}\\
&\le \delta(\mathcal{Q}_Z, \mathcal{E}\mathcal{R}\mathcal{Q}_Z) + \delta(\mathcal{E}, \mathcal{Q}_X\mathcal{P}) && \text{(27b)}\\
&= \delta(\mathcal{Q}_Z, \mathcal{E}\mathcal{R}\mathcal{Q}_Z) + \sqrt{2\varepsilon_X(\mathcal{E})}. && \text{(27c)}
\end{aligned}
\]
Observe that $\mathcal{P}\mathcal{R}\mathcal{Q}_Z$ is just a map $\mathcal{R}'_{X\to Z}$. Taking the infimum over $\mathcal{R}$ we then have
\[
\begin{aligned}
\sqrt{2\varepsilon_X(\mathcal{E})} + \nu_Z(\mathcal{E}) &\ge \inf_{\mathcal{R}} \delta(\mathcal{Q}_Z, \mathcal{Q}_X\mathcal{P}\mathcal{R}\mathcal{Q}_Z) && \text{(28a)}\\
&\ge \inf_{\mathcal{R}} \delta(\mathcal{Q}_Z, \mathcal{Q}_X\mathcal{R}). && \text{(28b)}
\end{aligned}
\]
To show (20), let $\mathcal{R}_{YB\to A}$ and $\mathcal{R}'_{Y\to X}$ be the optimal maps in $\nu_Z(\mathcal{E})$ and $\varepsilon_X(\mathcal{E})$, respectively. Now apply Lemma 1 to $\mathcal{M} = \mathcal{E}\mathcal{R}'\mathcal{R}\mathcal{Q}_Z$ and suppose that $\mathcal{E}'_{A\to ZB}$ is the resulting instrument and $\mathcal{M}_{ZB\to X}$ is the conditional measurement. By the above argument, $\sqrt{2\varepsilon_Z(\mathcal{E}')} + \nu_X(\mathcal{E}') \ge \inf_{\mathcal{R}} \delta(\mathcal{Q}_X, \mathcal{Q}_Z\mathcal{R})$. But $\varepsilon_Z(\mathcal{E}') \le \delta(\mathcal{Q}_Z, \mathcal{E}'\mathcal{T}_B) = \nu_Z(\mathcal{E})$ and $\nu_X(\mathcal{E}') \le \delta(\mathcal{Q}_X, \mathcal{E}'\mathcal{M}) = \varepsilon_X(\mathcal{E})$, where in the latter we use the fact that we could always reprepare an $X$ eigenstate and then let $\mathcal{Q}_X$ measure it. Therefore the desired bound holds.
To establish (21), we proceed just as above to obtain
\[
\delta(\mathcal{P}_Z, \mathcal{P}_Z\mathcal{Q}_X\mathcal{P}\mathcal{R}) \le \delta(\mathcal{P}_Z, \mathcal{P}_Z\mathcal{E}\mathcal{R}) + \sqrt{2\varepsilon_X(\mathcal{E})}. \tag{29}
\]
Now $\mathcal{P}_{X\to YB}\mathcal{R}_{YB\to A}$ is a preparation map $\mathcal{P}_{X\to A}$, and taking the infimum over $\mathcal{R}$ gives
\[
\begin{aligned}
\sqrt{2\varepsilon_X(\mathcal{E})} + \eta_Z(\mathcal{E}) &\ge \inf_{\mathcal{R}} \delta(\mathcal{P}_Z, \mathcal{P}_Z\mathcal{Q}_X\mathcal{P}\mathcal{R}) && \text{(30a)}\\
&\ge \inf_{\mathcal{P}} \delta(\mathcal{P}_Z, \mathcal{P}_Z\mathcal{Q}_X\mathcal{P}). && \text{(30b)}
\end{aligned}
\]
Finally, (22). Since the $\hat\eta_Z$ disturbance measure is defined "backwards", we start the triangle inequality with the distinguishability quantity related to disturbance, rather than the eventual constant of the bound. For any channel $\mathcal{C}_{Z\to X}$ and $\mathcal{P}_{X\to YB}$ from Lemma 2, just as before we have
\[
\begin{aligned}
\delta(\mathcal{C}\mathcal{P}, \mathcal{P}_Z\mathcal{E}) &\le \delta(\mathcal{C}\mathcal{P}, \mathcal{P}_Z\mathcal{Q}_X\mathcal{P}) + \delta(\mathcal{P}_Z\mathcal{Q}_X\mathcal{P}, \mathcal{P}_Z\mathcal{E}) && \text{(31a)}\\
&\le \delta(\mathcal{C}, \mathcal{P}_Z\mathcal{Q}_X) + \sqrt{2\varepsilon_X(\mathcal{E})}. && \text{(31b)}
\end{aligned}
\]
Now we take the infimum over constant channels $\mathcal{C}_{Z\to X}$. Note that
\[
\inf_{\mathcal{C}_{Z\to YB}} \delta(\mathcal{C}, \mathcal{P}_Z\mathcal{E}) \le \inf_{\mathcal{C}_{Z\to X}} \delta(\mathcal{C}\mathcal{P}, \mathcal{P}_Z\mathcal{E}). \tag{32}
\]
Therefore, we have
\[
\sqrt{2\varepsilon_X(\mathcal{E})} + \hat\eta_Z(\mathcal{E}) \ge \frac{d-1}{d} - \inf_{\mathcal{C}} \delta(\mathcal{C}, \mathcal{P}_Z\mathcal{Q}_X). \tag{33}
\]
This last proof also applies to a more general definition of disturbance which does not use $\mathcal{P}_Z$ at the input, but instead the channel $\mathcal{Q}_Z^\natural$, which can be thought of as the result of performing an ideal $Z$ measurement but forgetting the result. More formally, letting $\mathcal{Q}_Z^\natural = \mathcal{W}_Z\mathcal{T}_Z$ with $\mathcal{W}_Z : a \mapsto W_Z^* a W_Z$, we can define
\[
\tilde\eta_Z(\mathcal{E}) = \frac{d-1}{d} - \inf_{\mathcal{C}} \delta(\mathcal{C}, \mathcal{Q}_Z^\natural\mathcal{E}). \tag{34}
\]
Though perhaps less conceptually appealing, this is a more general notion of disturbance, since now we can potentially use entanglement at the input to increase the distinguishability of $\mathcal{Q}_Z^\natural\mathcal{E}$ from any constant channel. However, due to the form of $\mathcal{Q}_Z^\natural$, entanglement will not help. Applied to any bipartite state, the map $\mathcal{Q}_Z^\natural$ produces a state of the form $\sum_z p_z |\theta_z\rangle\langle\theta_z| \otimes \sigma_z$ for some probability distribution $p_z$ and set of normalized states $\sigma_z$, and therefore the input to $\mathcal{E}$ itself is again an output of $\mathcal{P}_Z$. Since classical correlation with ancillary systems is already covered in $\hat\eta_Z(\mathcal{E})$, it follows that $\tilde\eta_Z(\mathcal{E}) = \hat\eta_Z(\mathcal{E})$.
5 Position & momentum
5.1 Gaussian precision-limited measurement and preparation
Now we turn to the infinite-dimensional case of position and momentum measurements. Let us focus on Gaussian limits on precision, where the convolution function $\alpha$ described in §2.2 is the square root of a normalized Gaussian of width $\sigma$, and for convenience define
\[
g_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^2}{2\sigma^2}}. \tag{35}
\]
One advantage of the Gaussian choice is that the Stinespring dilation of the ideal $\sigma$-limited measurement device is just a canonical transformation. Thus, measurement of position $Q$ just amounts to adding this value to an ancillary system which is prepared in a zero-mean Gaussian state with position standard deviation $\sigma_Q$, and similarly for momentum. The same interpretation is available for precision-limited state preparation. To prepare a momentum state of width $\sigma_P$, we begin with a system in a zero-mean Gaussian state with momentum standard deviation $\sigma_P$ and simply shift the momentum by the desired amount.

Given the ideal devices, the definitions of error and disturbance are those of §3, as in the finite-dimensional case, with the slight change that the first term of $\hat\eta$ is now 1. To reduce clutter, we do not indicate $\sigma_Q$ and $\sigma_P$ specifically in the error and disturbance functions themselves.
Since our error and disturbance measures are based on possible state preparations and measurements used to best distinguish the two devices, in principle one ought to consider precision limits in the distinguishability quantity $\delta$ as well. However, we will not follow this approach here, and instead allow tests of arbitrary precision in order to preserve the link between distinguishability and the cb norm. This leads to bounds that are perhaps overly pessimistic, but which nevertheless limit the possible performance of any device.
5.2 Results
As discussed previously, the disturbance measure of demerit $\hat\eta$ cannot be expected to lead to uncertainty relations for position and momentum observables, as any non-constant channel can be perfectly differentiated from a constant one by inputting states of arbitrarily high momentum. We thus focus on the disturbance measures of merit.

Theorem 3. Set $c = 2\sigma_Q\sigma_P$ for any precision values $\sigma_Q, \sigma_P > 0$. Then for any quantum instrument $\mathcal{E}$,
\[
\left.
\begin{aligned}
&\sqrt{2\varepsilon_Q(\mathcal{E})} + \nu_P(\mathcal{E})\\
&\varepsilon_Q(\mathcal{E}) + \sqrt{2\nu_P(\mathcal{E})}
\end{aligned}
\right\}
\ge \frac{1 - c^2}{(1 + c^{2/3} + c^{4/3})^{3/2}} \quad\text{and} \tag{36}
\]
\[
\sqrt{2\varepsilon_Q(\mathcal{E})} + \eta_P(\mathcal{E}) \ge \frac{(1 + c^2)^{1/2}}{\big((1 + c^2) + c^{2/3}(1 + c^2)^{2/3} + c^{4/3}(1 + c^2)^{1/3}\big)^{3/2}}. \tag{37}
\]
Before proceeding to the proofs, let us comment on the properties of the two bounds. As can be seen in Figure 7, the bounds take essentially the same values for $\sigma_Q\sigma_P \ll \tfrac12$, and indeed both evaluate to unity at $\sigma_Q\sigma_P = 0$. This is the region of combined position and momentum precision far smaller than the natural scale set by $\hbar$, and the limit of infinite precision accords with the finite-dimensional bounds for conjugate observables. For larger values of $\sigma_Q\sigma_P$ the two bounds differ markedly: the measurement-disturbance bound (36) becomes trivial at $\sigma_Q\sigma_P = \tfrac12$, while the preparation-disturbance bound (37) remains nontrivial for all precision values.

The distinction between these two cases is a result of allowing arbitrarily precise measurements in the distinguishability measure. It can be understood by the following heuristic argument. Consider an experiment in which a momentum state of width $\sigma_P^{\rm in}$ is subjected to a position measurement of resolution $\sigma_Q$ and then a momentum measurement of resolution $\sigma_P^{\rm out}$. From the uncertainty principle, we expect the position measurement to change the momentum by an amount $\sim 1/\sigma_Q$. Thus, to reliably detect the change in momentum, $\sigma_P^{\rm out}$ must fulfill the condition $\sigma_P^{\rm out} \ll \sigma_P^{\rm in} + 1/\sigma_Q$. The Heisenberg limit in the measurement disturbance scenario is $\sigma_P^{\rm out} = 2/\sigma_Q$, meaning this condition cannot be met no matter how small we choose $\sigma_P^{\rm in}$. This is consistent with there being no nontrivial bound in (36) in this region. On the other hand, for preparation disturbance the Heisenberg limit is $\sigma_P^{\rm in} = 2/\sigma_Q$, so detecting the change in momentum simply requires $\sigma_P^{\rm out} \ll 1/\sigma_Q$. A more satisfying approach would be to include the precision limitation in the distinguishability measure to restore the symmetry of the two scenarios, but this requires significant changes to the proof and is left for future work.
Figure 7: Uncertainty bounds appearing in Theorem 3 in terms of the combined precision $\sigma_Q\sigma_P$. The solid line corresponds to the bound involving measurement disturbance, (36), the dashed line to the bound involving preparation disturbance, (37).
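For reference, the two bounds are simple closed-form functions of $c = 2\sigma_Q\sigma_P$ and can be tabulated directly; the sketch below (values only, no plotting) reproduces the qualitative behavior of Figure 7.

```python
import numpy as np

def bound_meas(c):
    """Right-hand side of (36); values <= 0 mean the bound is trivial."""
    return (1 - c**2) / (1 + c**(2 / 3) + c**(4 / 3))**1.5

def bound_prep(c):
    """Right-hand side of (37); positive for every c."""
    u = 1 + c**2
    return np.sqrt(u) / (u + c**(2 / 3) * u**(2 / 3) + c**(4 / 3) * u**(1 / 3))**1.5

for sigma_product in [0.0, 0.1, 0.25, 0.5, 0.75]:   # sigma_Q * sigma_P
    c = 2 * sigma_product
    print(f"sigma_Q*sigma_P = {sigma_product:4.2f}: "
          f"(36) -> {bound_meas(c):6.3f}, (37) -> {bound_prep(c):6.3f}")
```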
5.3 Proofs
The proof of Theorem 3 is broadly similar to the finite-dimensional case. We would again like to begin with $\mathcal{F}_{QA\to YB}$ from Lemma 2 such that $\delta(\mathcal{E}, \mathcal{Q}_Q'\mathcal{F}) \le \sqrt{2\varepsilon_Q(\mathcal{E})}$. However, the argument does not quite go through, as in infinite dimensions we cannot immediately ensure that the infimum in Stinespring continuity is attained. Nonetheless, we can consider a sequence of maps $(\mathcal{F}_n)_{n\in\mathbb{N}}$ such that the desired distinguishability bound holds in the limit $n\to\infty$.
To show (36), we follow the steps in (27). Now, though, consider the map $\mathcal{F}_n'$ which just appends $Q$ to the output of $\mathcal{F}_n$, and define $\mathcal{N} = \mathcal{Q}_Q'\mathcal{F}_n'\mathcal{R}\mathcal{Q}_P$, where $\mathcal{Q}_Q'$ is the instrument associated with the position measurement $\mathcal{Q}_Q$. Then we have
\[
\begin{aligned}
\delta(\mathcal{Q}_P, \mathcal{N}\mathcal{T}_Q) &\le \delta(\mathcal{Q}_P, \mathcal{E}\mathcal{R}\mathcal{Q}_P) + \delta(\mathcal{E}\mathcal{R}\mathcal{Q}_P, \mathcal{N}\mathcal{T}_Q) && \text{(38a)}\\
&\le \delta(\mathcal{Q}_P, \mathcal{E}\mathcal{R}\mathcal{Q}_P) + \delta(\mathcal{E}, \mathcal{Q}_Q'\mathcal{F}_n). && \text{(38b)}
\end{aligned}
\]
Taking the limit $n\to\infty$ and the infimum over recovery maps $\mathcal{R}$ produces $\sqrt{2\varepsilon_Q(\mathcal{E})} + \nu_P(\mathcal{E})$ on the right-hand side. We can bound the left-hand side by testing with pure unentangled inputs:
\[
\delta(\mathcal{Q}_P, \mathcal{N}\mathcal{T}_Q) \ge \sup_{\psi, f} \big\langle \psi, \big(\mathcal{Q}_P(f) - [\mathcal{N}\mathcal{T}_Q](f)\big)\psi \big\rangle. \tag{39}
\]
Now we want to show that, since $\mathcal{Q}_P$ is covariant with respect to phase space translations, without loss of generality we can take $\mathcal{N}$ to be covariant as well. Consider the translated versions of both $\mathcal{Q}_P$ and $\mathcal{N}\mathcal{T}_Q$, obtained by shifting their inputs and outputs correspondingly by some amount $z = (q,p)$. For the states $\psi$ this shift is implemented by the Weyl-Heisenberg operators $V_z$, while for the tests $f$ only the value of $p$ is relevant. Replacing $\mathcal{N}$ by its average over these translations can only decrease the right-hand side of (39), so it suffices to bound the latter for covariant $\mathcal{N}$; we refer to the work of Werner [22] for further details. Since $\mathcal{T}_Q$ just ignores the $Q$ output of the measurement $\mathcal{N}$, we may thus proceed by assuming that $\mathcal{N}$ is a covariant measurement. Any covariant $\mathcal{N}$ has the form
\[
\mathcal{N}(f) = \int_{\mathbb{R}^2} \frac{dz}{2\pi}\, f(z)\, V_z m V_z^*, \tag{40}
\]
for some positive operator $m$ such that $\mathrm{Tr}[m] = 1$. Due to the definition of $\mathcal{N}$, the position measurement result is precisely that obtained from $\mathcal{Q}_Q$. By the covariant form of $\mathcal{N}$, this implies that the position width of $m$ is just $\sigma_Q$ (or rather that of the parity version of $m$, see [22]). Suppose the momentum distribution has standard deviation $\hat\sigma_P$; then $\sigma_Q\hat\sigma_P \ge 1/2$ follows from the Kennard uncertainty relation [3].
Now we can evaluate the lower bound term by term. Let us choose a Gaussian state in the momentum representation and a Gaussian test function: $\psi = g_{\sigma_\psi}^{1/2}$ and $f = \sqrt{2\pi}\,\sigma_f\, g_{\sigma_f}$. Then the first term is a straightforward Gaussian integral, since the precision-limited measurement just amounts to the ideal measurement convolved with $g_{\sigma_P}$:
\[
\begin{aligned}
\langle\psi, \mathcal{Q}_P(f)\psi\rangle &= \int_{\mathbb{R}^2} dp'\,dp\; g_{\sigma_\psi}(p')\, g_{\sigma_P}(p' - p)\, f(p) && \text{(41a)}\\
&= \frac{\sigma_f}{\sqrt{\sigma_f^2 + \sigma_P^2 + \sigma_\psi^2}}. && \text{(41b)}
\end{aligned}
\]
The second term is the same, just with $\hat\sigma_P$ instead of $\sigma_P$, so we have
\[
\delta(\mathcal{Q}_P, \mathcal{N}\mathcal{T}_Q) \ge \frac{\sigma_f}{\sqrt{\sigma_f^2 + \sigma_P^2 + \sigma_\psi^2}} - \frac{\sigma_f}{\sqrt{\sigma_f^2 + \hat\sigma_P^2 + \sigma_\psi^2}}. \tag{42}
\]
The tightest possible bound comes from the smallest $\hat\sigma_P$, which is $1/2\sigma_Q$, and the bound is clearly trivial if $\sigma_Q\sigma_P \ge 1/2$. If this is not the case, we can optimize our choice of $\sigma_f$. To simplify the calculation, assume that $\sigma_\psi$ is small compared to $\sigma_f$ (so that we are testing with a very narrow momentum state). Then, with $c = 2\sigma_Q\sigma_P$, the optimal $\sigma_f$ is given by
\[
\sigma_f^2 = \frac{\sigma_P^2}{c^{2/3}(1 + c^{2/3})}. \tag{43}
\]
Using this in (42) gives (36).
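As a consistency check (a short calculation added here, under the stated assumption $\sigma_\psi \to 0$ and with $\hat\sigma_P = 1/2\sigma_Q = \sigma_P/c$), substituting (43) into (42) and writing $t := \sigma_P^2/\sigma_f^2 = c^{2/3} + c^{4/3}$ gives
\[
\frac{\sigma_f}{\sqrt{\sigma_f^2+\sigma_P^2}} - \frac{\sigma_f}{\sqrt{\sigma_f^2+\hat\sigma_P^2}}
= \frac{1}{\sqrt{1+t}} - \frac{1}{\sqrt{1+t/c^2}}
= \frac{1-c^{2/3}}{\sqrt{1+c^{2/3}+c^{4/3}}}
= \frac{1-c^2}{(1+c^{2/3}+c^{4/3})^{3/2}},
\]
where the second step uses $1 + t/c^2 = (1+c^{2/3}+c^{4/3})/c^{4/3}$ and the last step multiplies numerator and denominator by $1+c^{2/3}+c^{4/3}$, recovering the right-hand side of (36).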
For preparation disturbance, proceed as before to obtain
\[
\begin{aligned}
\delta(\mathcal{P}_P, \mathcal{P}_P\mathcal{Q}_Q'\mathcal{F}_n'\mathcal{R}\mathcal{T}_Q) &\le \delta(\mathcal{P}_P, \mathcal{P}_P\mathcal{E}\mathcal{R}) + \delta(\mathcal{P}_P\mathcal{E}\mathcal{R}, \mathcal{P}_P\mathcal{Q}_Q'\mathcal{F}_n'\mathcal{R}\mathcal{T}_Q) && \text{(44a)}\\
&\le \delta(\mathcal{P}_P, \mathcal{P}_P\mathcal{E}\mathcal{R}) + \delta(\mathcal{E}, \mathcal{Q}_Q'\mathcal{F}_n). && \text{(44b)}
\end{aligned}
\]
Now the limit $n\to\infty$ and the infimum over recovery maps $\mathcal{R}$ produce $\sqrt{2\varepsilon_Q(\mathcal{E})} + \eta_P(\mathcal{E})$ on the right-hand side. A lower bound on the quantity on the left-hand side can be obtained by using $\mathcal{P}_P$ to prepare a $\sigma_P$-limited input state and making a $\sigma_m$-limited momentum measurement $\bar{\mathcal{Q}}_P$ on the output, so that, for $\mathcal{N}$ as before,
\[
\delta(\mathcal{P}_P, \mathcal{P}_P\mathcal{Q}_Q'\mathcal{F}_n'\mathcal{R}\mathcal{T}_Q) \ge \sup_{\psi:\,\text{Gaussian};\, f} \big\langle \psi, \big(\bar{\mathcal{Q}}_P(f) - [\mathcal{N}\mathcal{T}_Q](f)\big)\psi \big\rangle. \tag{45}
\]
The only difference to (39) is that the supremum is restricted to Gaussian states of width $\sigma_P$. The covariance argument nonetheless goes through as before, and we can proceed to evaluate the lower bound as above. This yields
\[
\delta(\mathcal{P}_P, \mathcal{P}_P\mathcal{Q}_Q'\mathcal{F}_n'\mathcal{R}\mathcal{T}_Q) \ge \frac{\sigma_f}{\sqrt{\sigma_f^2 + \sigma_m^2 + \sigma_P^2}} - \frac{\sigma_f}{\sqrt{\sigma_f^2 + \frac{1}{4\sigma_Q^2} + \sigma_P^2}}. \tag{46}
\]
We may as well consider $\sigma_m \to 0$ so as to increase the first term. The optimal $\sigma_f$ is then given by the optimizer in (43) with $c$ replaced by $c/\sqrt{1+c^2}$; inserting this choice into (46) yields (37).
6 Applications
6.1 No information about Z without disturbance to X
A useful tool in the construction of quantum information processing protocols is the link between reliable transmission of $X$ eigenstates through a channel $\mathcal{N}$ and $Z$ eigenstates through its complement $\mathcal{N}^\sharp$, particularly when the observables $X$ and $Z$ are maximally complementary, i.e. $|\langle\varphi_x|\theta_z\rangle|^2 = \tfrac1d$ for all $x,z$. Due to the uncertainty principle, we expect that a channel cannot reliably transmit the two bases to different outputs, since this would provide a means to simultaneously measure $X$ and $Z$. This link has been used by Shor and Preskill to prove the security of quantum key distribution [58] and by Devetak to determine the quantum channel capacity [59]. Entropic state-preparation uncertainty relations from [6, 44] can be used to understand both results, as shown in [60, 61].

However, the above approach has the serious drawback that it can only be used in cases where the specific $X$-basis transmission over $\mathcal{N}$ and $Z$-basis transmission over $\mathcal{N}^\sharp$ are in some sense compatible and not counterfactual; because the argument relies on a state-dependent uncertainty principle, both scenarios must be compatible with the same quantum state. Fortunately, this can be done for both QKD security and quantum capacity, because at issue is whether $X$-basis ($Z$-basis) transmission is reliable (unreliable) on average when the states are selected uniformly at random. Choosing states from either basis at random is compatible with a random measurement in either basis of half of a maximally entangled state, and so both the $X$ and $Z$ basis scenarios are indeed compatible. The same restriction to choosing input states uniformly appears in the recent result of [33], as it also ultimately relies on a state-preparation uncertainty relation.
Using Theorem 2 we can extend the method above to counterfactual uses of arbitrary channels $\mathcal{N}$, in the following sense: if acting with the channel $\mathcal{N}$ does not substantially affect the possibility of performing an $X$ measurement, then $Z$-basis inputs to $\mathcal{N}^\sharp$ result in an essentially constant output. More concretely, we have

Corollary 1. Given a channel $\mathcal{N}$ and complementary channel $\mathcal{N}^\sharp$, suppose that there exists a measurement $\Lambda_X$ such that $\delta(\mathcal{Q}_X, \mathcal{N}\Lambda_X) \le \varepsilon$. Then there exists a constant channel $\mathcal{C}$ such that
\[
\delta(\mathcal{Q}_Z^\natural\mathcal{N}^\sharp, \mathcal{C}) \le \sqrt{2\varepsilon} + \frac{d-1}{d} - \hat c_P(X,Z). \tag{47}
\]
For maximally complementary $X$ and $Z$, $\delta(\mathcal{Q}_Z^\natural\mathcal{N}^\sharp, \mathcal{C}) \le \sqrt{2\varepsilon}$.

Proof. Let $V$ be the Stinespring dilation of $\mathcal{N}$ such that $\mathcal{N}^\sharp$ is the complementary channel, and define the apparatus $\mathcal{E}$ by applying $V$ and then the measurement $\Lambda_X$ to the $B$ output, keeping the environment system as the quantum output. For $\mathcal{C}$ the optimal choice in the definition of $\hat\eta_Z(\mathcal{E})$, (22), (34), and $\tilde\eta_Z = \hat\eta_Z$ imply $\delta(\mathcal{Q}_Z^\natural\mathcal{E}, \mathcal{C}) \le \sqrt{2\varepsilon} + \frac{d-1}{d} - \hat c_P(X,Z)$. Since $\mathcal{N}^\sharp$ is obtained from $\mathcal{E}$ by ignoring the $\Lambda_X$ measurement result, $\delta(\mathcal{Q}_Z^\natural\mathcal{N}^\sharp, \mathcal{C}) \le \delta(\mathcal{Q}_Z^\natural\mathcal{E}, \mathcal{C})$.
This formulation is important because in more general cryptographic and communication scenarios we are interested in the worst-case behavior of the protocol, not the average case under some particular probability distribution. For instance, in [46] the goal is to construct a classical computer resilient to leakage of $Z$-basis information by establishing that reliable $X$-basis measurement is possible despite the interference of the eavesdropper. However, such an $X$ measurement is entirely counterfactual and cannot be reconciled with the actual $Z$-basis usage, as the $Z$-basis states will be chosen deterministically in the classical computer.
It is important to point out that, unfortunately, calibration testing is in general completely insufficient to establish a small value of $\delta(\mathcal{Q}_X, \mathcal{N}\Lambda_X)$. More specifically, the following example shows that there is no dimension-independent bound connecting $\inf_{\Lambda_X}\delta(\mathcal{Q}_X, \mathcal{N}\Lambda_X)$ to the worst-case probability of incorrectly identifying an $X$ eigenstate input to $\mathcal{N}$, for arbitrary $\mathcal{N}$. Let the quantities $p_{yz}$ be given by $p_{y,0} = 2/d$ for $y = 0,\dots,d/2-1$ and zero otherwise, $p_{y,1} = 2/d$ for $y = d/2,\dots,d-1$ and zero otherwise, and $p_{y,z} = 1/d$ for all $y$ when $z \ge 2$, where we assume $d$ is even, and then define the isometry $V : \mathcal{H}_A \to \mathcal{H}_B \otimes \mathcal{H}_C \otimes \mathcal{H}_D$ as the map taking $|z\rangle_A$ to $\sum_y \sqrt{p_{yz}}\,|y\rangle_B|z\rangle_C|y\rangle_D$. Finally, let $\mathcal{N} : \mathcal{B}(\mathcal{H}_B) \otimes \mathcal{B}(\mathcal{H}_C) \to \mathcal{B}(\mathcal{H}_A)$ be the channel obtained by ignoring $D$, i.e. in the Schrödinger picture $\mathcal{N}_*(\varrho) = \mathrm{Tr}_D[V\varrho V^*]$. Now consider inputs in the $X$ basis, with $X$ canonically conjugate to $Z$. As shown in Appendix C, the probability of correctly determining any particular $X$ input is the same for all values, and is equal to $\frac{1}{d^2}\sum_y\big(\sum_z \sqrt{p_{y,z}}\big)^2 = (d + \sqrt{2} - 2)^2/d^2$. The worst-case $X$ error probability therefore tends to zero like $1/d$ as $d\to\infty$. On the other hand, $Z$-basis inputs 0 and 1 to the complementary channel $\mathcal{N}^\sharp$ result in completely disjoint output states due to the form of $p_{yz}$. Thus, if we consider a test which inputs one of these two at random and checks for agreement at the output, we find $\inf_{\mathcal{C}}\delta(\mathcal{Q}_Z^\natural\mathcal{N}^\sharp, \mathcal{C}) \ge \tfrac12$.