In probability theory and statistics, complex random variables are a generalization of real-valued random variables to complex numbers, i.e. the possible values a complex random variable may take are complex numbers.[1] Complex random variables can always be considered as pairs of real random variables: their real and imaginary parts. Therefore, the distribution of one complex random variable may be interpreted as the joint distribution of two real random variables.
Some concepts of real random variables have a straightforward generalization to complex random variables—e.g., the definition of the mean of a complex random variable. Other concepts are unique to complex random variables.
Applications of complex random variables are found in digital signal processing,[2] quadrature amplitude modulation and information theory.
A complex random variable $Z$ on the probability space $(\Omega ,{\mathcal {F}},P)$ is a function $Z\colon \Omega \rightarrow \mathbb {C}$ such that both its real part $\Re {(Z)}$ and its imaginary part $\Im {(Z)}$ are real random variables on $(\Omega ,{\mathcal {F}},P)$.
Consider a random variable that may take only the three complex values $1+i$, $1-i$, $2$ with probabilities as specified in the table below. This is a simple example of a complex random variable.

Probability $P(z)$   Value $z$
1/4                  $1+i$
1/4                  $1-i$
1/2                  $2$
The expectation of this random variable may be simply calculated:

$\operatorname {E} [Z]={\frac {1}{4}}(1+i)+{\frac {1}{4}}(1-i)+{\frac {1}{2}}\cdot 2={\frac {3}{2}}.$
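This calculation can be reproduced directly with Python's built-in complex numbers (a quick sketch; the `pmf` dict simply encodes the table above):

```python
# Expectation of the three-valued example: E[Z] = sum of z * P(z).
pmf = {1 + 1j: 0.25, 1 - 1j: 0.25, 2 + 0j: 0.5}

expectation = sum(z * p for z, p in pmf.items())
print(expectation)  # (1.5+0j)
```

The imaginary contributions of $1+i$ and $1-i$ cancel, leaving the purely real value $3/2$.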
Another example of a complex random variable is the uniform distribution over the filled unit circle, i.e. the set $\{z\in \mathbb {C} \mid |z|\leq 1\}$. This random variable is an example of a complex random variable for which the probability density function is defined. The density function is shown as the yellow disk and dark blue base in the following figure.
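A minimal sketch of how one might draw from this distribution with only the standard library: sample the phase uniformly and take the radius as the square root of a uniform variate, so that area (not radius) is uniform on the disk.

```python
import cmath
import math
import random

def sample_unit_disk(rng: random.Random) -> complex:
    """Draw Z uniformly from {z : |z| <= 1}.

    The radius is sqrt(U) rather than U so that probability is
    proportional to area; the phase is uniform on [-pi, pi].
    """
    r = math.sqrt(rng.random())
    phi = rng.uniform(-math.pi, math.pi)
    return cmath.rect(r, phi)

rng = random.Random(0)
samples = [sample_unit_disk(rng) for _ in range(10_000)]
assert all(abs(z) <= 1 for z in samples)
```

The `sample_unit_disk` helper is hypothetical, but the square-root trick is the standard inverse-CDF argument for the radius.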
Complex normal distribution
Complex Gaussian random variables are often encountered in applications. They are a straightforward generalization of real Gaussian random variables. The following plot shows an example of the distribution of such a variable.
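One common construction (a sketch, assuming the standard circularly symmetric case) builds such a variable from two independent real Gaussians, each carrying half of the total variance:

```python
import random

def sample_complex_gaussian(rng: random.Random, sigma2: float = 1.0) -> complex:
    # Split the total variance sigma2 equally between independent
    # real and imaginary parts, each N(0, sigma2/2).
    s = (sigma2 / 2) ** 0.5
    return complex(rng.gauss(0, s), rng.gauss(0, s))

rng = random.Random(1)
zs = [sample_complex_gaussian(rng) for _ in range(20_000)]
# With zero mean, E[|Z|^2] estimates the variance, which should be near 1.
var = sum(abs(z) ** 2 for z in zs) / len(zs)
```

The function name is illustrative; the point is only the equal split of variance between the two real components.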
Cumulative distribution function
The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form $P(Z\leq 1+3i)$ make no sense. However, expressions of the form $P(\Re {(Z)}\leq 1,\Im {(Z)}\leq 3)$ make sense. Therefore, we define the cumulative distribution $F_{Z}:\mathbb {C} \to [0,1]$ of a complex random variable via the joint distribution of its real and imaginary parts:

$F_{Z}(z)=F_{\Re {(Z)},\Im {(Z)}}(\Re {(z)},\Im {(z)})=P(\Re {(Z)}\leq \Re {(z)},\Im {(Z)}\leq \Im {(z)})$ (Eq.1)
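Eq.1 also suggests a direct empirical estimator: count the fraction of draws whose real and imaginary parts both lie at or below the threshold. A sketch using the discrete example above (the `ecdf` helper is hypothetical):

```python
def ecdf(samples, z):
    """Empirical version of Eq.1: the fraction of draws with
    Re(W) <= Re(z) and Im(W) <= Im(z)."""
    hits = sum(1 for w in samples if w.real <= z.real and w.imag <= z.imag)
    return hits / len(samples)

# Encode the three-valued example exactly by repetition (25%, 25%, 50%).
samples = [1 + 1j] * 25 + [1 - 1j] * 25 + [2 + 0j] * 50

# Only the value 1-i satisfies Re <= 1 and Im <= 0:
print(ecdf(samples, 1 + 0j))  # 0.25
```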
Probability density function
The probability density function of a complex random variable is defined as $f_{Z}(z)=f_{\Re {(Z)},\Im {(Z)}}(\Re {(z)},\Im {(z)})$, i.e. the value of the density function at a point $z\in \mathbb {C}$ is defined to be equal to the value of the joint density of the real and imaginary parts of the random variable evaluated at the point $(\Re {(z)},\Im {(z)})$.

An equivalent definition is given by $f_{Z}(z)={\frac {\partial ^{2}}{\partial x\partial y}}P(\Re {(Z)}\leq x,\Im {(Z)}\leq y)$ where $x=\Re {(z)}$ and $y=\Im {(z)}$.

As in the real case, the density function may not exist.
The expectation of a complex random variable is defined based on the definition of the expectation of a real random variable:[3]: p. 112

$\operatorname {E} [Z]=\operatorname {E} [\Re {(Z)}]+i\operatorname {E} [\Im {(Z)}]$ (Eq.2)

Note that the expectation of a complex random variable does not exist if $\operatorname {E} [\Re {(Z)}]$ or $\operatorname {E} [\Im {(Z)}]$ does not exist.
If the complex random variable $Z$ has a probability density function $f_{Z}(z)$, then the expectation is given by $\operatorname {E} [Z]=\iint _{\mathbb {C} }z\cdot f_{Z}(z)\,dx\,dy$.
If the complex random variable $Z$ has a probability mass function $p_{Z}(z)$, then the expectation is given by $\operatorname {E} [Z]=\sum _{z}z\cdot p_{Z}(z)$, where the sum runs over the support of $Z$.
Properties

Whenever the expectation of a complex random variable exists, taking the expectation commutes with complex conjugation:

${\overline {\operatorname {E} [Z]}}=\operatorname {E} [{\overline {Z}}].$
The expected value operator $\operatorname {E} [\cdot ]$ is linear in the sense that $\operatorname {E} [aZ+bW]=a\operatorname {E} [Z]+b\operatorname {E} [W]$ for any complex coefficients $a,b$, even if $Z$ and $W$ are not independent.
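Both properties can be checked mechanically on the discrete example above (a sketch; `expect` is a hypothetical helper implementing Eq.2 for a pmf, and exact `==` comparisons work here only because all values are binary-representable):

```python
pmf_z = {1 + 1j: 0.25, 1 - 1j: 0.25, 2 + 0j: 0.5}

def expect(pmf):
    """Expectation of a discrete complex random variable."""
    return sum(z * p for z, p in pmf.items())

# Conjugation commutes with expectation:
ez = expect(pmf_z)
ez_conj = expect({z.conjugate(): p for z, p in pmf_z.items()})
assert ez.conjugate() == ez_conj

# Linearity with a complex coefficient a: E[aZ] = a E[Z].
a = 2 - 3j
assert expect({a * z: p for z, p in pmf_z.items()}) == a * ez
```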
Variance and pseudo-variance
The variance is defined in terms of absolute squares as:[3]: p. 117

$\operatorname {K} _{ZZ}=\operatorname {Var} [Z]=\operatorname {E} \left[\left|Z-\operatorname {E} [Z]\right|^{2}\right]=\operatorname {E} [|Z|^{2}]-\left|\operatorname {E} [Z]\right|^{2}$ (Eq.3)
Properties
The variance is always a nonnegative real number. It is equal to the sum of the variances of the real and imaginary parts of the complex random variable:

$\operatorname {Var} [Z]=\operatorname {Var} [\Re {(Z)}]+\operatorname {Var} [\Im {(Z)}].$
The variance of a linear combination of complex random variables may be calculated using the following formula:

$\operatorname {Var} \left[\sum _{k=1}^{N}a_{k}Z_{k}\right]=\sum _{i=1}^{N}\sum _{j=1}^{N}a_{i}{\overline {a_{j}}}\operatorname {Cov} [Z_{i},Z_{j}].$
The pseudo-variance is a special case of the pseudo-covariance and is defined in terms of ordinary complex squares:

$\operatorname {J} _{ZZ}=\operatorname {E} [(Z-\operatorname {E} [Z])^{2}]=\operatorname {E} [Z^{2}]-(\operatorname {E} [Z])^{2}$ (Eq.4)

Unlike the variance of $Z$, which is always real and nonnegative, the pseudo-variance of $Z$ is in general complex.
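The contrast between Eq.3 and Eq.4 is easy to see on the three-valued example from the table (a sketch; the intermediate values are worked out for that specific pmf):

```python
# Variance (Eq.3, absolute squares) and pseudo-variance (Eq.4,
# ordinary squares) of the three-valued example.
pmf = {1 + 1j: 0.25, 1 - 1j: 0.25, 2 + 0j: 0.5}
mean = sum(z * p for z, p in pmf.items())  # 1.5

var = sum(abs(z - mean) ** 2 * p for z, p in pmf.items())   # real, 0.75
pvar = sum((z - mean) ** 2 * p for z, p in pmf.items())     # complex in general

# Check the property Var[Z] = Var[Re(Z)] + Var[Im(Z)]:
var_re = sum((z.real - mean.real) ** 2 * p for z, p in pmf.items())
var_im = sum((z.imag - mean.imag) ** 2 * p for z, p in pmf.items())
assert abs(var - (var_re + var_im)) < 1e-12
```

Here the pseudo-variance works out to $-1/4$, a value that could not arise for the always-nonnegative variance.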
Covariance matrix of real and imaginary parts
For a general complex random variable, the pair $(\Re {(Z)},\Im {(Z)})$ has a covariance matrix of the form:

${\begin{bmatrix}\operatorname {Var} [\Re {(Z)}]&\operatorname {Cov} [\Im {(Z)},\Re {(Z)}]\\\operatorname {Cov} [\Re {(Z)},\Im {(Z)}]&\operatorname {Var} [\Im {(Z)}]\end{bmatrix}}$
The matrix is symmetric, so $\operatorname {Cov} [\Re {(Z)},\Im {(Z)}]=\operatorname {Cov} [\Im {(Z)},\Re {(Z)}]$.
Its elements equal:

${\begin{aligned}&\operatorname {Var} [\Re {(Z)}]={\tfrac {1}{2}}\operatorname {Re} (\operatorname {K} _{ZZ}+\operatorname {J} _{ZZ})\\&\operatorname {Var} [\Im {(Z)}]={\tfrac {1}{2}}\operatorname {Re} (\operatorname {K} _{ZZ}-\operatorname {J} _{ZZ})\\&\operatorname {Cov} [\Re {(Z)},\Im {(Z)}]={\tfrac {1}{2}}\operatorname {Im} (\operatorname {J} _{ZZ})\end{aligned}}$
Conversely:

${\begin{aligned}&\operatorname {K} _{ZZ}=\operatorname {Var} [\Re {(Z)}]+\operatorname {Var} [\Im {(Z)}]\\&\operatorname {J} _{ZZ}=\operatorname {Var} [\Re {(Z)}]-\operatorname {Var} [\Im {(Z)}]+i2\operatorname {Cov} [\Re {(Z)},\Im {(Z)}]\end{aligned}}$
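These conversion formulas round-trip, which is easy to exercise numerically. The sketch below uses $K_{ZZ}=0.75$ and $J_{ZZ}=-0.25$, the values obtained for the three-valued example earlier; the helper name is illustrative:

```python
def real_imag_stats(K: float, J: complex):
    """Recover Var[Re Z], Var[Im Z] and Cov[Re Z, Im Z] from the
    variance K_ZZ and pseudo-variance J_ZZ."""
    var_re = 0.5 * (K + J.real)
    var_im = 0.5 * (K - J.real)
    cov = 0.5 * J.imag
    return var_re, var_im, cov

K, J = 0.75, -0.25 + 0j
var_re, var_im, cov = real_imag_stats(K, J)

# Going back through the "Conversely" relations recovers K and J:
assert (var_re + var_im, complex(var_re - var_im, 2 * cov)) == (K, J)
```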
Covariance and pseudo-covariance
The covariance between two complex random variables $Z,W$ is defined as[3]: p. 119

$\operatorname {K} _{ZW}=\operatorname {Cov} [Z,W]=\operatorname {E} [(Z-\operatorname {E} [Z]){\overline {(W-\operatorname {E} [W])}}]=\operatorname {E} [Z{\overline {W}}]-\operatorname {E} [Z]\operatorname {E} [{\overline {W}}]$ (Eq.5)

Notice the complex conjugation of the second factor in the definition.
In contrast to real random variables, we also define a pseudo-covariance (also called complementary variance):

$\operatorname {J} _{ZW}=\operatorname {Cov} [Z,{\overline {W}}]=\operatorname {E} [(Z-\operatorname {E} [Z])(W-\operatorname {E} [W])]=\operatorname {E} [ZW]-\operatorname {E} [Z]\operatorname {E} [W]$ (Eq.6)

The second-order statistics are fully characterized by the covariance and the pseudo-covariance.
Properties

The covariance has the following properties:

- $\operatorname {Cov} [Z,W]={\overline {\operatorname {Cov} [W,Z]}}$ (conjugate symmetry)
- $\operatorname {Cov} [\alpha Z,W]=\alpha \operatorname {Cov} [Z,W]$ (sesquilinearity)
- $\operatorname {Cov} [Z,\alpha W]={\overline {\alpha }}\operatorname {Cov} [Z,W]$
- $\operatorname {Cov} [Z_{1}+Z_{2},W]=\operatorname {Cov} [Z_{1},W]+\operatorname {Cov} [Z_{2},W]$
- $\operatorname {Cov} [Z,W_{1}+W_{2}]=\operatorname {Cov} [Z,W_{1}]+\operatorname {Cov} [Z,W_{2}]$
- $\operatorname {Cov} [Z,Z]=\operatorname {Var} [Z]$
Uncorrelatedness: two complex random variables $Z$ and $W$ are called uncorrelated if $\operatorname {K} _{ZW}=\operatorname {J} _{ZW}=0$ (see also: uncorrelatedness (probability theory)).

Orthogonality: two complex random variables $Z$ and $W$ are called orthogonal if $\operatorname {E} [Z{\overline {W}}]=0$.
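Eq.5 and Eq.6 can be estimated from paired samples. A sketch using a hypothetical `cov_est` helper with plain $1/N$ normalisation; taking $W={\overline {Z}}$ makes the two quantities swap roles, which gives a simple consistency check:

```python
def cov_est(zs, ws):
    """Sample analogues of Eq.5 (covariance) and Eq.6 (pseudo-covariance)."""
    n = len(zs)
    mz, mw = sum(zs) / n, sum(ws) / n
    K = sum((z - mz) * (w - mw).conjugate() for z, w in zip(zs, ws)) / n
    J = sum((z - mz) * (w - mw) for z, w in zip(zs, ws)) / n
    return K, J

# With W = conj(Z): K becomes the pseudo-variance of Z, while J becomes
# the ordinary variance of Z, hence real and nonnegative.
zs = [1 + 1j, 1 - 1j, 2 + 0j, -1 + 2j]
K, J = cov_est(zs, [z.conjugate() for z in zs])
assert abs(J.imag) < 1e-12 and J.real >= 0
```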
Circular symmetry

Circular symmetry of complex random variables is a common assumption used in the field of wireless communication. A typical example of a circularly symmetric complex random variable is the complex Gaussian random variable with zero mean and zero pseudo-covariance.
A complex random variable $Z$ is circularly symmetric if, for any deterministic $\phi \in [-\pi ,\pi ]$, the distribution of $e^{\mathrm {i} \phi }Z$ equals the distribution of $Z$.
Properties

By definition, a circularly symmetric complex random variable has

$\operatorname {E} [Z]=\operatorname {E} [e^{\mathrm {i} \phi }Z]=e^{\mathrm {i} \phi }\operatorname {E} [Z]$

for any $\phi$. Thus the expectation of a circularly symmetric complex random variable can only be either zero or undefined.
Additionally,

$\operatorname {E} [ZZ]=\operatorname {E} [e^{\mathrm {i} \phi }Ze^{\mathrm {i} \phi }Z]=e^{2\mathrm {i} \phi }\operatorname {E} [ZZ]$

for any $\phi$. Thus the pseudo-variance of a circularly symmetric complex random variable can only be zero.
If $Z$ and $e^{\mathrm {i} \phi }Z$ have the same distribution, the phase of $Z$ must be uniformly distributed over $[-\pi ,\pi ]$ and independent of the amplitude of $Z$.[4]
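Both consequences can be checked empirically. The sketch below uses the uniform distribution on the unit disk, which is circularly symmetric, and verifies that the sample mean and the sample $\operatorname {E} [ZZ]$ are both near zero, before and after a fixed rotation:

```python
import cmath
import math
import random

rng = random.Random(2)
# Uniform on the unit disk: radius sqrt(U), phase uniform on [-pi, pi].
zs = [cmath.rect(math.sqrt(rng.random()), rng.uniform(-math.pi, math.pi))
      for _ in range(20_000)]

mean = sum(zs) / len(zs)                     # should be close to 0
pseudo = sum(z * z for z in zs) / len(zs)    # E[ZZ], should be close to 0

# A deterministic rotation leaves the (near-zero) mean near zero as well:
phi = 0.7
rotated_mean = sum(cmath.exp(1j * phi) * z for z in zs) / len(zs)
```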
Proper complex random variables
The concept of proper random variables is unique to complex random variables and has no counterpart for real random variables.
A complex random variable $Z$ is called proper if the following three conditions are all satisfied:

- $\operatorname {E} [Z]=0$
- $\operatorname {Var} [Z]<\infty$
- $\operatorname {E} [Z^{2}]=0$
This definition is equivalent to the following conditions, i.e. a complex random variable is proper if, and only if:

- $\operatorname {E} [Z]=0$
- $\operatorname {E} [\Re {(Z)}^{2}]=\operatorname {E} [\Im {(Z)}^{2}]\neq \infty$
- $\operatorname {E} [\Re {(Z)}\Im {(Z)}]=0$
Theorem — Every circularly symmetric complex random variable with finite variance is proper.
For a proper complex random variable, the covariance matrix of the pair $(\Re {(Z)},\Im {(Z)})$ has the following simple form:

${\begin{bmatrix}{\frac {1}{2}}\operatorname {Var} [Z]&0\\0&{\frac {1}{2}}\operatorname {Var} [Z]\end{bmatrix}}$
That is:

${\begin{aligned}&\operatorname {Var} [\Re {(Z)}]=\operatorname {Var} [\Im {(Z)}]={\tfrac {1}{2}}\operatorname {Var} [Z]\\&\operatorname {Cov} [\Re {(Z)},\Im {(Z)}]=0\end{aligned}}$
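The three defining conditions can be tested on samples. A sketch with a hypothetical `is_proper` check (tolerances on the sample estimates are an assumption, not part of the definition): a zero-mean complex Gaussian with equal, independent real and imaginary parts passes, while a variable whose variance sits entirely in its real part fails, because its pseudo-variance is far from zero.

```python
import math
import random

def is_proper(zs, tol=0.05):
    """Sample check of E[Z] = 0, Var[Z] < inf and E[Z^2] = 0,
    each up to the tolerance `tol`."""
    n = len(zs)
    mean = sum(zs) / n
    var = sum(abs(z) ** 2 for z in zs) / n - abs(mean) ** 2
    pseudo = sum(z * z for z in zs) / n - mean ** 2
    return abs(mean) < tol and math.isfinite(var) and abs(pseudo) < tol

rng = random.Random(3)
s = 0.5 ** 0.5
proper = [complex(rng.gauss(0, s), rng.gauss(0, s)) for _ in range(20_000)]
improper = [complex(rng.gauss(0, 1), 0.0) for _ in range(20_000)]  # variance only in Re

assert is_proper(proper) and not is_proper(improper)
```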
Cauchy-Schwarz inequality
The Cauchy-Schwarz inequality for complex random variables, which can be derived using the triangle inequality and Hölder's inequality, is

$\left|\operatorname {E} \left[Z{\overline {W}}\right]\right|^{2}\leq \left|\operatorname {E} \left[\left|Z{\overline {W}}\right|\right]\right|^{2}\leq \operatorname {E} \left[|Z|^{2}\right]\operatorname {E} \left[|W|^{2}\right].$
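The chain of inequalities holds for any finite-variance pair; a quick numerical check on small, arbitrarily chosen discrete samples (equal weights, so expectations are plain averages):

```python
# Verify |E[Z conj(W)]|^2 <= |E[|Z conj(W)|]|^2 <= E[|Z|^2] E[|W|^2]
zs = [1 + 1j, 1 - 1j, 2 + 0j, 2 + 0j]
ws = [2 - 1j, 0 + 1j, 1 + 1j, 3 + 0j]
n = len(zs)

lhs = abs(sum(z * w.conjugate() for z, w in zip(zs, ws)) / n) ** 2
mid = (sum(abs(z * w.conjugate()) for z, w in zip(zs, ws)) / n) ** 2
rhs = (sum(abs(z) ** 2 for z in zs) / n) * (sum(abs(w) ** 2 for w in ws) / n)
assert lhs <= mid <= rhs
```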
Characteristic function
The characteristic function of a complex random variable is a function $\mathbb {C} \to \mathbb {C}$ defined by

$\varphi _{Z}(\omega )=\operatorname {E} \left[e^{i\Re {({\overline {\omega }}Z)}}\right]=\operatorname {E} \left[e^{i(\Re {(\omega )}\Re {(Z)}+\Im {(\omega )}\Im {(Z)})}\right].$
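For a discrete variable, this expectation is a finite sum. A sketch (the `char_fn` helper is hypothetical) using the three-valued example from earlier, together with the universal sanity check $\varphi _{Z}(0)=1$:

```python
import cmath

def char_fn(pmf, omega):
    """phi_Z(omega) = E[exp(i * Re(conj(omega) * Z))] for a discrete Z."""
    return sum(cmath.exp(1j * (omega.conjugate() * z).real) * p
               for z, p in pmf.items())

pmf = {1 + 1j: 0.25, 1 - 1j: 0.25, 2 + 0j: 0.5}
print(char_fn(pmf, 0j))  # (1+0j), since phi_Z(0) = E[e^0] = 1
```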
^ Eriksson, Jan; Ollila, Esa; Koivunen, Visa (2009). "Statistics for complex random variables revisited". 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. Taipei, Taiwan: Institute of Electrical and Electronics Engineers. pp. 3565–3568. doi:10.1109/ICASSP.2009.4960396.
^ Lapidoth, A. (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 9780521193955.
^ a b c Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
^ Schreier, Peter J.; Scharf, Louis L. (2011). Statistical Signal Processing of Complex-Valued Data. Cambridge University Press. ISBN 9780511815911.