Measure of relative information in probability theory
In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable $Y$ given that the value of another random variable $X$ is known. Here, information is measured in shannons, nats, or hartleys. The entropy of $Y$ conditioned on $X$ is written as $H(Y\mid X)$.
Definition

The conditional entropy of $Y$ given $X$ is defined as

$$H(Y\mid X) = -\sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\,\log\frac{p(x,y)}{p(x)} \qquad \text{(Eq.1)}$$

where $\mathcal{X}$ and $\mathcal{Y}$ denote the support sets of $X$ and $Y$.

Note: Here, the convention is that the expression $0\log 0$ should be treated as being equal to zero. This is because $\lim_{\theta\to 0^{+}} \theta\log\theta = 0$.[1]
Intuitively, notice that by definition of expected value and of conditional probability, $H(Y\mid X)$ can be written as $H(Y\mid X) = \mathbb{E}[f(X,Y)]$, where $f$ is defined as $f(x,y) := -\log\!\left(\frac{p(x,y)}{p(x)}\right) = -\log p(y\mid x)$. One can think of $f$ as associating each pair $(x,y)$ with a quantity measuring the information content of $(Y=y)$ given $(X=x)$. This quantity is directly related to the amount of information needed to describe the event $(Y=y)$ given $(X=x)$. Hence by computing the expected value of $f$ over all pairs of values $(x,y)\in\mathcal{X}\times\mathcal{Y}$, the conditional entropy $H(Y\mid X)$ measures how much information, on average, the variable $X$ encodes about $Y$.
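To make Eq.1 concrete, here is a minimal Python sketch that evaluates the sum directly over a joint probability mass table. The 2×2 joint pmf `p_xy` is a made-up example for illustration, not a distribution from the text, and the helper name `conditional_entropy` is likewise just a label chosen here.

```python
import numpy as np

def conditional_entropy(p_xy, base=2.0):
    """H(Y|X) = -sum_{x,y} p(x,y) * log( p(x,y) / p(x) ), as in Eq.1.

    p_xy: 2-D array with rows indexed by x and columns by y; entries sum to 1.
    Pairs with p(x,y) = 0 are skipped, following the 0*log 0 = 0 convention.
    """
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)          # marginal p(x)
    mask = p_xy > 0                                 # avoid log of zero
    ratio = p_xy[mask] / np.broadcast_to(p_x, p_xy.shape)[mask]
    return -(p_xy[mask] * np.log(ratio)).sum() / np.log(base)

# Hypothetical joint pmf of (X, Y) over a 2x2 alphabet.
p_xy = [[0.25, 0.25],
        [0.40, 0.10]]
print(conditional_entropy(p_xy))   # H(Y|X) in bits
```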
Motivation

Let $H(Y\mid X=x)$ be the entropy of the discrete random variable $Y$ conditioned on the discrete random variable $X$ taking a certain value $x$. Denote the support sets of $X$ and $Y$ by $\mathcal{X}$ and $\mathcal{Y}$. Let $Y$ have probability mass function $p_Y(y)$. The unconditional entropy of $Y$ is calculated as $H(Y) := \mathbb{E}[\operatorname{I}(Y)]$, i.e.

$$H(Y) = \sum_{y\in\mathcal{Y}} \Pr(Y=y)\,\operatorname{I}(y) = -\sum_{y\in\mathcal{Y}} p_Y(y)\,\log_2 p_Y(y),$$

where $\operatorname{I}(y_i)$ is the information content of the outcome of $Y$ taking the value $y_i$. The entropy of $Y$ conditioned on $X$ taking the value $x$ is defined analogously by conditional expectation:

$$H(Y\mid X=x) = -\sum_{y\in\mathcal{Y}} \Pr(Y=y\mid X=x)\,\log_2 \Pr(Y=y\mid X=x).$$

Note that $H(Y\mid X)$ is the result of averaging $H(Y\mid X=x)$ over all possible values $x$ that $X$ may take. Also, if the above sum is taken over a sample $y_1,\dots,y_n$, the expected value $E_X[H(y_1,\dots,y_n\mid X=x)]$ is known in some domains as equivocation.[2]
Given discrete random variables $X$ with image $\mathcal{X}$ and $Y$ with image $\mathcal{Y}$, the conditional entropy of $Y$ given $X$ is defined as the weighted sum of $H(Y\mid X=x)$ for each possible value of $x$, using $p(x)$ as the weights:[3]: 15

$$
\begin{aligned}
H(Y\mid X)\ &\equiv \sum_{x\in\mathcal{X}} p(x)\,H(Y\mid X=x) \\
&= -\sum_{x\in\mathcal{X}} p(x) \sum_{y\in\mathcal{Y}} p(y\mid x)\,\log_2 p(y\mid x) \\
&= -\sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\,\log_2 p(y\mid x) \\
&= -\sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)}.
\end{aligned}
$$
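The weighted-sum form and the joint-pmf form of Eq.1 give the same number, which the following self-contained sketch checks numerically. The joint pmf is again a hypothetical example chosen only for illustration.

```python
import numpy as np

# Hypothetical joint pmf of (X, Y); rows are x values, columns are y values.
p_xy = np.array([[0.25, 0.25],
                 [0.40, 0.10]])

p_x = p_xy.sum(axis=1)                      # marginal p(x)
p_y_given_x = p_xy / p_x[:, None]           # conditional p(y|x), row-wise

def entropy(p, base=2.0):
    """Shannon entropy of a pmf, with the 0*log 0 = 0 convention."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(base)

# Specific conditional entropies H(Y|X=x), then their p(x)-weighted sum.
h_y_given_each_x = np.array([entropy(row) for row in p_y_given_x])
h_y_given_x = (p_x * h_y_given_each_x).sum()

print(h_y_given_each_x)   # H(Y|X=x) for each x
print(h_y_given_x)        # matches the joint-pmf form of Eq.1
```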
Properties

Conditional entropy equals zero

$H(Y\mid X) = 0$ if and only if the value of $Y$ is completely determined by the value of $X$.

Conditional entropy of independent random variables

Conversely, $H(Y\mid X) = H(Y)$ if and only if $Y$ and $X$ are independent random variables.
Chain rule

Assume that the combined system determined by two random variables $X$ and $Y$ has joint entropy $H(X,Y)$, that is, we need $H(X,Y)$ bits of information on average to describe its exact state. Now if we first learn the value of $X$, we have gained $H(X)$ bits of information. Once $X$ is known, we only need $H(X,Y)-H(X)$ bits to describe the state of the whole system. This quantity is exactly $H(Y\mid X)$, which gives the chain rule of conditional entropy:

$$H(Y\mid X) = H(X,Y) - H(X).$$ [3]: 17

The chain rule follows from the above definition of conditional entropy:

$$
\begin{aligned}
H(Y\mid X) &= \sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\,\log\!\left(\frac{p(x)}{p(x,y)}\right) \\
&= \sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\bigl(\log p(x) - \log p(x,y)\bigr) \\
&= -\sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\,\log p(x,y) + \sum_{x\in\mathcal{X},\, y\in\mathcal{Y}} p(x,y)\,\log p(x) \\
&= H(X,Y) + \sum_{x\in\mathcal{X}} p(x)\,\log p(x) \\
&= H(X,Y) - H(X).
\end{aligned}
$$

In general, a chain rule for multiple random variables holds:

$$H(X_1, X_2, \dots, X_n) = \sum_{i=1}^{n} H(X_i \mid X_1, \dots, X_{i-1})$$ [3]: 22

It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
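A quick numerical check of the chain rule is sketched below: the conditional entropy computed directly as a weighted sum is compared with $H(X,Y)-H(X)$. The joint pmf values are hypothetical.

```python
import numpy as np

def entropy(p, base=2.0):
    """Shannon entropy of a pmf given as an array of any shape (0*log 0 = 0)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(base)

# Hypothetical joint pmf of (X, Y).
p_xy = np.array([[0.125, 0.375],
                 [0.300, 0.200]])
p_x = p_xy.sum(axis=1)

h_xy = entropy(p_xy)                 # joint entropy H(X,Y)
h_x = entropy(p_x)                   # marginal entropy H(X)
p_y_given_x = p_xy / p_x[:, None]
h_y_given_x = (p_x * [entropy(row) for row in p_y_given_x]).sum()

# Chain rule: H(Y|X) = H(X,Y) - H(X)
assert np.isclose(h_y_given_x, h_xy - h_x)
```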
Bayes' rule

Bayes' rule for conditional entropy states

$$H(Y\mid X) = H(X\mid Y) - H(X) + H(Y).$$

Proof. $H(Y\mid X) = H(X,Y) - H(X)$ and $H(X\mid Y) = H(Y,X) - H(Y)$. Symmetry entails $H(X,Y) = H(Y,X)$. Subtracting the two equations implies Bayes' rule.

If $Y$ is conditionally independent of $Z$ given $X$ we have:

$$H(Y\mid X,Z) = H(Y\mid X).$$
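The identity can be sanity-checked numerically by computing both conditional entropies as weighted sums over a small joint pmf; the table below is a made-up example used only for this check.

```python
import numpy as np

def entropy(p, base=2.0):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(base)

# Hypothetical joint pmf of (X, Y); rows are x, columns are y.
p_xy = np.array([[0.10, 0.30],
                 [0.45, 0.15]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

# Direct weighted-sum computations of the two conditional entropies.
h_y_given_x = sum(p_x[i] * entropy(p_xy[i, :] / p_x[i]) for i in range(len(p_x)))
h_x_given_y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j]) for j in range(len(p_y)))

# Bayes' rule for conditional entropy: H(Y|X) = H(X|Y) - H(X) + H(Y)
assert np.isclose(h_y_given_x, h_x_given_y - entropy(p_x) + entropy(p_y))
```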
Other properties

For any $X$ and $Y$:

$$
\begin{aligned}
H(Y\mid X) &\le H(Y) \\
H(X,Y) &= H(X\mid Y) + H(Y\mid X) + \operatorname{I}(X;Y) \\
H(X,Y) &= H(X) + H(Y) - \operatorname{I}(X;Y) \\
\operatorname{I}(X;Y) &\le H(X)
\end{aligned}
$$

where $\operatorname{I}(X;Y)$ is the mutual information between $X$ and $Y$.

For independent $X$ and $Y$:

$$H(Y\mid X) = H(Y) \quad \text{and} \quad H(X\mid Y) = H(X).$$

Although the specific conditional entropy $H(Y\mid X=x)$ can be either less or greater than $H(Y)$ for a given random variate $x$ of $X$, $H(Y\mid X)$ can never exceed $H(Y)$.
Conditional differential entropy

Definition

The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called conditional differential (or continuous) entropy. Let $X$ and $Y$ be continuous random variables with a joint probability density function $f(x,y)$. The differential conditional entropy $h(X\mid Y)$ is defined as[3]: 249

$$h(X\mid Y) = -\int_{\mathcal{X},\mathcal{Y}} f(x,y)\,\log f(x\mid y)\,dx\,dy \qquad \text{(Eq.2)}$$
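For jointly Gaussian variables Eq.2 has a closed form, since $X$ given $Y=y$ is again Gaussian with variance $\sigma_X^2(1-\rho^2)$. The sketch below evaluates that closed form (in nats) for made-up parameter values; the function name and numbers are illustrative only.

```python
import numpy as np

def gaussian_cond_diff_entropy(sigma_x, rho):
    """h(X|Y) in nats for a bivariate normal with correlation rho.

    X | Y=y is normal with variance sigma_x^2 * (1 - rho^2), so Eq.2
    evaluates to 0.5 * log(2*pi*e * sigma_x^2 * (1 - rho^2)).
    """
    cond_var = sigma_x**2 * (1.0 - rho**2)
    return 0.5 * np.log(2 * np.pi * np.e * cond_var)

# Unlike the discrete case, the value can be negative once the
# conditional variance is small enough.
print(gaussian_cond_diff_entropy(sigma_x=1.0, rho=0.5))    # positive
print(gaussian_cond_diff_entropy(sigma_x=1.0, rho=0.999))  # negative
```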
Properties

In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative.

As in the discrete case there is a chain rule for differential entropy:

$$h(Y\mid X) = h(X,Y) - h(X)$$ [3]: 253

Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.

Joint differential entropy is also used in the definition of the mutual information between continuous random variables:

$$\operatorname{I}(X,Y) = h(X) - h(X\mid Y) = h(Y) - h(Y\mid X),$$

and $h(X\mid Y) \le h(X)$ with equality if and only if $X$ and $Y$ are independent.[3]: 253
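As a small illustration of the mutual-information relation, for a bivariate Gaussian with unit variances and correlation $\rho$ the difference $h(X)-h(X\mid Y)$ reduces to the familiar expression $-\tfrac{1}{2}\log(1-\rho^2)$. The parameter value below is arbitrary.

```python
import numpy as np

# Jointly Gaussian (X, Y), unit variances, correlation rho (hypothetical value).
# h(X) = 0.5*log(2*pi*e) and h(X|Y) = 0.5*log(2*pi*e*(1 - rho**2)) in nats,
# so I(X;Y) = h(X) - h(X|Y) = -0.5*log(1 - rho**2).
rho = 0.8
h_x = 0.5 * np.log(2 * np.pi * np.e)
h_x_given_y = 0.5 * np.log(2 * np.pi * np.e * (1 - rho**2))
mutual_info = h_x - h_x_given_y

print(mutual_info, -0.5 * np.log(1 - rho**2))   # identical values, in nats
```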
Relation to estimator error

The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable $X$, observation $Y$ and estimator $\widehat{X}$ the following holds:[3]: 255

$$\mathbb{E}\left[\bigl(X - \widehat{X}(Y)\bigr)^2\right] \ge \frac{1}{2\pi e}\,e^{2 h(X\mid Y)}$$

This is related to the uncertainty principle from quantum mechanics.
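In the Gaussian case the bound is met with equality by the conditional-mean estimator, which the following Monte Carlo sketch illustrates. The channel model, noise level, and sample size are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian channel: X ~ N(0, 1), Y = X + noise with noise ~ N(0, 0.25).
sigma_x2, sigma_n2 = 1.0, 0.25
n = 200_000
x = rng.normal(0.0, np.sqrt(sigma_x2), n)
y = x + rng.normal(0.0, np.sqrt(sigma_n2), n)

# For this model the MMSE estimator is linear: x_hat = E[X|Y] = y * sigma_x2 / (sigma_x2 + sigma_n2).
x_hat = y * sigma_x2 / (sigma_x2 + sigma_n2)
mse = np.mean((x - x_hat) ** 2)

# h(X|Y) in nats for jointly Gaussian variables: 0.5 * log(2*pi*e * Var(X|Y)).
cond_var = sigma_x2 * sigma_n2 / (sigma_x2 + sigma_n2)
h_x_given_y = 0.5 * np.log(2 * np.pi * np.e * cond_var)
bound = np.exp(2 * h_x_given_y) / (2 * np.pi * np.e)

print(mse, bound)              # near-equal: the bound is tight in the Gaussian case
assert mse >= bound * 0.98     # small slack for Monte Carlo noise
```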
Generalization to quantum theory
In quantum information theory, the conditional entropy is generalized to the conditional quantum entropy. The latter can take negative values, unlike its classical counterpart.
See also
- Entropy (information theory)
- Mutual information
- Conditional quantum entropy
- Variation of information
- Entropy power inequality
- Likelihood function
References
- ^ "David MacKay: Information Theory, Pattern Recognition and Neural Networks: The Book". www.inference.org.uk . Retrieved 2019-10-25 .
- ^ Hellman, M.; Raviv, J. (1970). "Probability of error, equivocation, and the Chernoff bound". IEEE Transactions on Information Theory. 16 (4): 368–372. doi:10.1109/TIT.1970.1054466.
- ^ a b c d e f g T. Cover; J. Thomas (1991). Elements of Information Theory . ISBN0-471-06259-6.