6.02 Tutorial Problems: Noise & Bit Errors


Problem .

Suppose the bit detection sample at the receiver is V + noise volts when the sample corresponds to a transmitted '1', and 0.0 + noise volts when the sample corresponds to a transmitted '0', where noise is a zero-mean Normal (Gaussian) random variable with standard deviation σNOISE.

  1. If the transmitter is equally likely to send '0's or '1's, and V/2 volts is used as the threshold for deciding whether the received bit is a '0' or a '1', give an expression for the bit-error rate (BER) in terms of the zero-mean unit standard deviation Normal cumulative distribution function, Φ, and σNOISE.
    Here's a plot of the PDF for the received signal where the red-shaded areas correspond to the probabilities of receiving a bit in error.

    so the bit-error rate is given by

    BER = 0.5*Φ[(V/2 - V)/σNOISE] + 0.5*(1 - Φ[(V/2)/σNOISE])
        = Φ[(-V/2)/σNOISE]

    where we've used the fact that Φ[-x] = 1 - Φ[x], i.e., that the unit-normal Gaussian is symmetric about its zero mean.

  2. Suppose the transmitter is equally likely to send zeros or ones and uses zero volt samples to represent a '0' and one volt samples to represent a '1'. If the receiver uses 0.5 volts as the threshold for deciding bit value, for what value of σNOISE is the probability of a bit error approximately equal to 1/5? Note that Φ(0.85) ≈ 4/5.
    From part (A),
    BER = Φ[-0.5/σNOISE] = 1 - Φ[0.5/σNOISE]
    
    If we want BER = 0.2 then
    BER = 1/5 = 1 - Φ[0.5/σNOISE]
    
    which implies
    Φ[0.5/σNOISE] = 4/5
    
    Using the conveniently supplied fact that Φ(0.85) ≈ 4/5, we can solve for σNOISE
    0.5/σNOISE = 0.85   =>   σNOISE = 0.5/.85 = .588
    

  3. Will your answer for σNOISE in part (B) change if the threshold used by the receiver is shifted to 0.6 volts? Do not try to determine σNOISE, but justify your answer.
    If we move Vth higher to 0.6V, we'll decrease prob(rcv1|xmit0) and increase prob(rcv0|xmit1). Given the shape of the Gaussian PDF, the decrease will be noticeably smaller than the increase, so we'd expect the BER to increase for a given σNOISE. Thus, to keep BER = 1/5, we'd need to decrease our estimate of σNOISE.

  4. Will your answer for σNOISE in part (B) change if the transmitter is twice as likely to send ones as zeros, but the receiver still uses a threshold of 0.5 volts? Do not try to determine σNOISE, but justify your answer.
    If we change the probabilities of transmission but keep the same digitization threshold, the various parts of the BER equation in (A) are weighted differently (to reflect the different transmission probabilities), but the total BER remains unchanged:

    BER = (0.667)Φ[(V/2 - V)/σNOISE] + (0.333)Φ[(-V/2)/σNOISE]
        = Φ[(-V/2)/σNOISE]
    
    So the derivation of part (B) is the same and the answer for σNOISE is unchanged. Note that when the transmission probabilities are unequal, the choice of the digitization threshold to minimize BER would no longer be 0.5V (it would move lower), but that's not what this question was asking.
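The arithmetic in parts 2 and 4 is easy to check numerically. Using the identity Φ(x) = (1 + erf(x/√2))/2, here is a short Python sketch (ours, not part of the original solution):

```python
from math import erf, sqrt

def Phi(x):
    """Cumulative distribution function of the unit normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Part 2: sigma = 0.5/0.85 should give a BER close to 1/5.
sigma = 0.5 / 0.85
ber = Phi(-0.5 / sigma)
print(ber)    # ≈ 0.198

# Part 4: both conditional error probabilities equal Phi(-0.5/sigma),
# so any weighting with alpha + beta = 1 leaves the total BER unchanged.
p_err_given_0 = 1.0 - Phi(0.5 / sigma)      # prob(rcv 1 | xmit 0)
p_err_given_1 = Phi((0.5 - 1.0) / sigma)    # prob(rcv 0 | xmit 1)
for alpha in (0.5, 1/3):                    # alpha = prob(xmit 0)
    print(alpha * p_err_given_0 + (1 - alpha) * p_err_given_1)
```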


Problem .

Ben Bitdiddle is doing a 6.02 lab on understanding the effect of noise on data reception, and is confused about the following questions. Please help him by answering them.

In these questions, assume that:

  1. The sender sends 0 Volts for a "0" bit and 1 Volt for a "1" bit
  2. P_ij = Probability that a bit transmitted as "i" was received as a "j" bit (for all four combinations of i and j, 00, 01, 10, 11)
  3. alpha = Probability that the sender sent bit 0
  4. beta = Probability that the sender sent bit 1
  5. and, obviously, alpha + beta = 1

The channel has non-zero random noise, but unless stated otherwise, assume that the noise has 0 mean and that it is a Gaussian with finite variance. The noise affects the received samples in an additive manner, as in the labs you've done.

  1. Which of these properties does the bit error rate of this channel depend on?
    1. The voltage levels used by the transmitter to send "0" and "1"
    2. The variance of the noise distribution
    3. The voltage threshold used to determine if a sample is a "0" or a "1"
    4. The number of samples per bit used by the sender and receiver
    In general the bit error rate is a function of both noise and inter-symbol interference.

    The error probability is a function of Φ[(vth - (signal_level + noise_mean))/noise_sigma], weighted as appropriate by alpha or beta. So the bit error rate clearly depends on the signal levels, the mean and variance of the noise, and the digitization threshold.

    The number of samples per bit doesn't enter directly into the bit error calculation, but more samples per bit gives each transition more time to reach its final value, reducing inter-symbol interference. This means that the eye will be more open. In the presence of noise, a wider eye means a lower bit error rate.

  2. Suppose Ben picks a voltage threshold that minimizes the bit error rate. For each choice below, determine whether it's true or false.
    1. P_01 + P_10 is minimized for all alpha and beta
    2. alpha * P_01 + beta * P_10 is minimized
    3. P_01 = P_10 for all alpha and beta
    4. if alpha > beta then P_10 > P_01
    5. The voltage threshold that minimizes BER depends on the noise variance if alpha = beta
    (b) is the definition of bit error rate, so that's clearly minimized. Thus (a) is only minimized if alpha = beta.

    The magnitude of the BER is, of course, a function of the noise variance, but for a given noise variance, if alpha = beta, the minimum BER is achieved by setting the digitization threshold at 0.5V. So (e) is false.

    As we saw in PSet #4, when alpha ≠ beta, the BER is minimized when the digitization threshold moves away from the more probable signal level. Suppose alpha > beta. The digitization threshold would increase, so P_01 would get smaller and P_10 larger. So (c) is not true and (d) is true.

  3. Suppose alpha = beta. If the noise variance doubles, what happens to the bit error rate?
    When alpha = beta = 0.5, the minimum BER is achieved when the digitization threshold is half-way between the signaling levels, i.e., 0.5V. Using Φ(x), the cumulative distribution function for the unit normal PDF, we can write the following formula for the BER:

    BER = 0.5*(1 - Φ[.5/σ]) + 0.5*Φ[-.5/σ]
        = Φ[-.5/σ]

    Doubling the noise variance is the same as multiplying σ by sqrt(2), so the resulting BER would be

    BERnew = Φ[-.5/(sqrt(2)*σ)]

    The change in the bit error rate is given by BERnew - BER.
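To see the size of the change for a concrete σ (0.25 is our assumed example value, not given in the problem):

```python
from math import erf, sqrt

def Phi(x):
    # CDF of the zero-mean, unit-variance normal
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

sigma = 0.25                                   # assumed example value
ber_old = Phi(-0.5 / sigma)                    # Phi(-2) ≈ 0.023
ber_new = Phi(-0.5 / (sqrt(2.0) * sigma))      # Phi(-sqrt(2)) ≈ 0.079
print(ber_old, ber_new)
```

Doubling the variance more than triples the BER at this operating point, which is why noise growth is so damaging at low error rates.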


Problem .

Messages are transmitted along a noisy channel using the following protocol: a "0" bit is transmitted as -0.5 Volt and a "1" bit as 0.5 Volt. The PDF of the total noise added by the channel, H, is shown below.

  1. Compute H(0), the maximum value of H.
    The area under the PDF is 1, and the PDF is a triangle whose base has width 1 + 0.5 = 1.5, so (0.5)(1.5)H(0) = 1, from which we get H(0) = 4/3.

  2. It is known that a "0" bit is 3 times as likely to be transmitted as a "1" bit. The PDF of the message signal, M, is shown below. Fill in the values P and Q.

    We know that Q=3P and that P+Q=1, so Q=0.75 and P=.25.

  3. If the digitization threshold voltage is 0V, what is the bit error rate?
    The plot below shows the PDF of the received voltage in magenta. For a threshold voltage of 0, only one kind of error is possible: a transmitted "0" received as a "1". This error probability equals the area of the triangle formed by the dotted black line and the blue line: (0.5)(0.5)(0.5) = 0.125.

  4. What digitization threshold voltage would minimize the bit error rate?
    We'll minimize the bit error rate if the threshold voltage is chosen at the voltage where the red and blue lines intersect. Looking at the plot from the previous answer, let the ideal threshold be x and the value of the PDF at the intersection point be y. Then y/x = 2/3 and y/(0.5 - x) = 1; setting 2x/3 = 0.5 - x gives x = 0.3V.
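These answers can be sanity-checked by simulation. The sketch below assumes, consistent with the arithmetic above, that H is triangular on [-0.5V, +1V] with its peak at 0V; that shape is our reconstruction, since the figure isn't reproduced here:

```python
import random

random.seed(6)
N = 200_000
err_at_0 = err_at_03 = 0
for _ in range(N):
    bit = 0 if random.random() < 0.75 else 1        # '0' is 3x as likely
    noise = random.triangular(-0.5, 1.0, 0.0)       # assumed shape of H
    v = (-0.5 if bit == 0 else 0.5) + noise
    if (v > 0.0) != (bit == 1):                     # threshold at 0V
        err_at_0 += 1
    if (v > 0.3) != (bit == 1):                     # threshold at 0.3V
        err_at_03 += 1
print(err_at_0 / N, err_at_03 / N)   # ≈ 0.125 and a smaller value
```

The 0V threshold reproduces the 0.125 bit error rate from part 3, and moving the threshold to 0.3V lowers it, as part 4 predicts.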


Problem .

Consider a transmitter that encodes pairs of bits using four voltage values. Specifically:

    bit pair    transmit voltage
       00       0
       01       Vhigh/3
       10       2*Vhigh/3
       11       Vhigh

For this problem we will assume a wire that only adds noise. That is,

y[n] = x[n] + noise[n]

where y[n] is the received sample, x[n] the transmitted sample whose value is one of the above four voltages, and noise[n] is a random variable.

Please assume all bit patterns are equally likely to be transmitted.

Suppose the probability density function for noise[n] is a constant, K, from -0.05 volts to 0.05 volts and zero elsewhere.

  1. What is the value of K?
    noise[n] has a rectangular PDF of height K:
         +---------------------+  height K
         |                     |
         |                     |
    -----+----------|----------+-------
        -.05        0         +.05
    
    We know the area under the PDF curve must be 1 and since the area is given by (.1)(K) then K = 10.
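As a quick numeric restatement (nothing beyond the algebra above):

```python
# The uniform PDF must integrate to 1 over its width of 0.1 volts,
# so the height K is the reciprocal of the width.
width = 0.05 - (-0.05)
K = 1.0 / width
print(K)   # 10.0
```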

Suppose now Vhigh = 1.0 volts and the probability density function for noise[n] is a zero-mean Normal with standard deviation σ.

  1. If σ = 0.001, what is the approximate probability that 1/3 < y[n] < 2/3? You should be able to give a numerical answer.
    The shaded areas in the figure below correspond to the probability we're trying to calculate:

    Each of the humps is a (very narrow) Gaussian distribution, so each shaded region captures exactly half of its hump's conditional probability. Now we just need to weight those halves by the probabilities that Vxmit = 1/3V and Vxmit = 2/3V:

    prob = (1/2)prob(Vxmit = 1/3) + (1/2)prob(Vxmit = 2/3) = (1/2)(1/4) + (1/2)(1/4) = 1/4

  2. If σ = 0.1, is the probability that a transmitted 01 (nominally 1/3 volts) will be incorrectly received the same as the probability that a transmitted 11 (nominally 1.0 volts) will be incorrectly received? Explain your answer.
    As you can see in the following figure, the probability of 01 being incorrectly received involves two "tails", while the probability of 11 being incorrectly received involves only one "tail". So the probabilities are NOT THE SAME.
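A short Monte Carlo check of part 1, assuming the four voltage levels 0, 1/3, 2/3 and 1.0 volts are equally likely (the sketch is ours):

```python
import random

random.seed(1)
levels = [0.0, 1/3, 2/3, 1.0]      # the four transmit voltages
sigma = 0.001
N = 100_000
# Count received samples that land strictly between 1/3 and 2/3 volts.
hits = sum(1/3 < random.choice(levels) + random.gauss(0.0, sigma) < 2/3
           for _ in range(N))
print(hits / N)   # ≈ 0.25
```

With σ this small, only the humps centered at 1/3 and 2/3 contribute, each donating half of its 1/4 probability, matching the 1/4 answer above.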


Problem .

Consider the figure below, which shows the step response for a particular transmission channel along with the eye diagrams for the channel response when transmitting 4 samples/bit and 3 samples/bit.

  1. If the transmitter uses 3 samples/bit, under what conditions will it be possible to reliably (i.e., correctly) receive any sequence of transmitted bits?
    Looking at the 3 samples/bit eye diagram, we can see that the widest part of the eye extends from 0.2V to 0.8V. If we set the digitization threshold at 0.5V, we can reliably receive both 0 and 1 bits as long as the magnitude of the noise is less than 0.3V.

Suppose now that there is additive noise on this channel so that sometimes a transmitted bit is misidentified at the receiver. Let's investigate how the rate of bit errors is affected by changes in noise probability density functions and number of samples per bit. In answering the questions below, please assume that the receiver uses the optimal detection sample for each bit (corresponding to the "center" of the eye) and uses a detection threshold of 0.5V.

  1. If we send 4 samples/bit down the noisy channel, the received voltage will be 1.0 + noise when receiving a transmitted '1' bit, and 0.0 + noise volts when receiving a transmitted '0' bit. If the noise is zero-mean Gaussian with standard deviation σ=0.25, what is the bit error rate? Assume that '0' and '1' bits are transmitted with a probability of 0.5, and that the noise is independent of the bit being transmitted.
    The probability of a bit error is given by
    p(xmit 0)p(receive 1 | xmit 0) + p(xmit 1)p(receive 0 | xmit 1)
    = 0.5*p(0V + noise ≥ 0.5V) + 0.5*p(1V + noise ≤ 0.5V)
    = 0.5*p(noise ≥ 0.5V) + 0.5*p(noise ≤ -0.5V)
    = 0.5*(1 - Φ(0.5/.25)) + 0.5*Φ(-0.5/.25)
    = 0.5*Φ(-2) + 0.5*Φ(-2)
    = Φ(-2) ≈ 0.023
    

  2. If 3 samples/bit are used by the transmitter, the received voltage at the detection sample will be 0.0, 0.2, 0.8, or 1.0 volts (plus noise), depending on the sequence of transmitted bits.

    If the noise is uniformly distributed between the voltage values -1 and 1 volts, what is the bit error rate? Hint: are all four cases of received voltages equally likely?

    The hard part of the problem is figuring out the probabilities of arriving at each of the 4 voltages -- 0.0, 0.2, 0.8, 1.0 -- at the sample corresponding to the widest part of the eye. The following figure shows the bit patterns that correspond to the various segments of the eye diagram.

    Each of the 8 segments occurs with probability 1/8, and since there are 2 segments that terminate at each of the 4 voltages, those 4 voltages are each observed with probability 1/4 in a noise-free transmission.

    So the probability of a bit error is given by

    0.25*p(1.0+noise ≤ 0.5) + 0.25*p(0.8+noise ≤ 0.5) + 0.25*p(0.2+noise ≥ 0.5) + 0.25*p(0.0+noise ≥ 0.5)
    = 0.25*p(-1 ≤ noise ≤ -0.5) + 0.25*p(-1 ≤ noise ≤ -0.3) + 0.25*p(1 ≥ noise ≥ 0.5) + 0.25*p(1 ≥ noise ≥ 0.3)
    = 0.25*(.5/2) + 0.25*(.7/2) + 0.25*(.5/2) + 0.25*(.7/2)
    = 0.3
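Both parts can be verified with a short script; the Monte Carlo loop below assumes the four eye voltages 0.0, 0.2, 0.8, 1.0 are equally likely, as derived above:

```python
import random
from math import erf, sqrt

def Phi(x):
    # CDF of the unit normal
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Part 1: 4 samples/bit, Gaussian noise with sigma = 0.25.
ber_gauss = 0.5 * (1 - Phi(0.5 / 0.25)) + 0.5 * Phi(-0.5 / 0.25)
print(ber_gauss)   # Phi(-2) ≈ 0.023

# Part 2: 3 samples/bit, noise uniform on [-1, 1].
random.seed(5)
N = 200_000
errors = 0
for _ in range(N):
    v = random.choice([0.0, 0.2, 0.8, 1.0])
    y = v + random.uniform(-1.0, 1.0)
    if (y > 0.5) != (v > 0.5):     # 0.8 and 1.0 correspond to '1' bits
        errors += 1
print(errors / N)   # ≈ 0.3
```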
    


Problem .

Suppose a channel has both noise and intersymbol interference, and further suppose the voltage at the receiver is nominally 0, 2, 6, or 8 volts (plus noise), depending on the current and previous transmitted bits.

In answering the following parts, please assume the receiver uses 4.0 volts as the threshold for deciding the bit value.

  1. Suppose noise is Gaussian with standard deviation σ=1 and all bit patterns are equally likely. Please determine the probability of a bit error.
    We need the error probability for each of the nominal received voltages -- 0, 2, 6 and 8 volts. For the voltages below 4V we want the probability that voltage+noise is ≥ 4V. For the voltages above 4V we want the probability that voltage+noise is ≤ 4V.

    0.25*(1 - Φ(4-0)) + 0.25*(1- Φ(4-2)) + 0.25*Φ(4-6) + 0.25*Φ(4-8)
    = 0.5*Φ(-4) + 0.5*Φ(-2)
    = 0.5*(3.2e-5) + 0.5*(0.023) ≈ 0.0115

  2. Again suppose noise is Gaussian with standard deviation σ = 1, and suppose that for a particular set of transmitted data, which we will refer to as checkerboard data, there is an increased probability of unequal contiguous bits. That is, for checkerboard data the nominal received voltages of 2 and 6 volts each occur with probability 1/3, while 0 and 8 volts each occur with probability 1/6.

    What is the probability of bit error for the checkerboard case?

    0.167*(1 - Φ(4-0)) + 0.333*(1- Φ(4-2)) + 0.333*Φ(4-6) + 0.167*Φ(4-8)
    = 0.333*Φ(-4) + 0.667*Φ(-2) ≈ 0.0153
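Both BER computations reduce to evaluating Φ; a quick numeric check:

```python
from math import erf, sqrt

def Phi(x):
    # CDF of the unit normal
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Part 1: levels 0, 2, 6, 8 V equally likely; threshold 4 V; sigma = 1.
ber_equal = (0.25 * (1 - Phi(4 - 0)) + 0.25 * (1 - Phi(4 - 2))
             + 0.25 * Phi(4 - 6) + 0.25 * Phi(4 - 8))
print(ber_equal)     # ≈ 0.0114

# Part 2: checkerboard weighting of the same four levels.
ber_checker = ((1/6) * (1 - Phi(4)) + (1/3) * (1 - Phi(2))
               + (1/3) * Phi(-2) + (1/6) * Phi(-4))
print(ber_checker)   # ≈ 0.0152
```

(The 0.0115 and 0.0153 quoted above come from rounding Φ(-2) to 0.023.)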