6.02 Quiz #1 Review Problems


Problem .

In the following plot of a voltage waveform from a transmitter, the transmitter sends 0 Volts for a zero bit and 1.0 Volts for a one bit, and is sending bits with a certain number of samples per bit.

  1. What is the largest number of samples per bit the transmitter could be using? All the samples shown in the figure are consistent with the transmitter sending 3 samples/bit. It couldn't be larger than 3 since, e.g., samples 8, 9 and 10 wouldn't represent the legal transmission of a 0-bit.

  2. What is the sequence of bits being sent? At 3 samples/bit, the figure shows the transmission of 01101.
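The decoding step can be sketched in a few lines of Python. The sample values below are hypothetical, chosen to be consistent with the stated answer (01101 at 3 samples/bit); the actual samples are in the figure.

```python
# Hypothetical received samples, consistent with 01101 at 3 samples/bit.
samples = [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
spb = 3  # samples per bit

# Decode by reading the middle sample of each bit cell.
bits = "".join(str(samples[i + spb // 2]) for i in range(0, len(samples), spb))
print(bits)  # 01101
```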


Problem .

The input sequence to a linear time-invariant (LTI) system is given by

and the output of the LTI system is given by

  1. Is this system causal? Why or why not? The system is not causal because y becomes nonzero before x does, i.e., y[0]=1 but x[0]=0.

  2. What are the nonzero values of the output of this LTI system when the input is

    The easiest approach is by superposition:

    So y[n] = 1, 2, 2, 2, 1, 0, 0, ... for n ≥ 0, and 0 otherwise.


Problem .

Suppose the bit detection sample at the receiver is V + noise volts when the sample corresponds to a transmitted '1', and 0.0 + noise volts when the sample corresponds to a transmitted '0', where noise is a zero-mean Normal (Gaussian) random variable with standard deviation σNOISE.

  1. If the transmitter is equally likely to send '0''s or '1''s, and V/2 volts is used as the threshold for deciding whether the received bit is a '0' or a '1', give an expression for the bit-error rate (BER) in terms of the zero-mean unit standard deviation Normal cumulative distribution function, Φ, and σNOISE. Here's a plot of the PDF for the received signal where the red-shaded areas correspond to the probabilities of receiving a bit in error.

    so the bit-error rate is given by

    where we've used the fact that Φ[-x] = 1 - Φ[x], i.e., that the unit-normal Gaussian is symmetrical about the 0 mean.

  2. Suppose the transmitter is equally likely to send zeros or ones and uses zero volt samples to represent a '0' and one volt samples to represent a '1'. If the receiver uses 0.5 volts as the threshold for deciding bit value, for what value of σNOISE is the probability of a bit error approximately equal to 1/5? Note that Φ(0.85) ≈ 4/5. From part (A),
    BER = Φ[-0.5/σNOISE] = 1 - Φ[0.5/σNOISE]
    
    If we want BER = 0.2 then
    BER = 1/5 = 1 - Φ[0.5/σNOISE]
    
    which implies
    Φ[0.5/σNOISE] = 4/5
    
    Using the conveniently supplied fact that Φ(0.85) ≈ 4/5, we can solve for σNOISE
    0.5/σNOISE = 0.85   =>   σNOISE = 0.5/.85 = .588
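This can be checked numerically using Python's statistics.NormalDist for Φ (a quick sketch, not part of the original solution):

```python
from statistics import NormalDist

Phi = NormalDist().cdf          # unit-normal CDF, Phi(x)

sigma = 0.5 / 0.85              # the claimed solution, ~0.588
ber = 1 - Phi(0.5 / sigma)      # BER = 1 - Phi(0.5/sigma)
print(round(sigma, 3), round(ber, 3))  # sigma ~ 0.588 gives BER ~ 0.2
```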
    

  3. Will your answer for σNOISE in part (B) change if the threshold used by the receiver is shifted to 0.6 volts? Do not try to determine σNOISE, but justify your answer. If we move Vth higher to 0.6V, we'll be decreasing prob(rcv1|xmit0) and increasing prob(rcv0|xmit1). Considering the shape of the Gaussian PDF, the decrease will be noticeably smaller than the increase, so we'd expect BER to increase for a given σNOISE. Thus to keep BER = 1/5, we'd need to decrease our estimate for σNOISE.

  4. Will your answer for σNOISE in part (B) change if the transmitter is twice as likely to send ones as zeros, but the receiver still uses a threshold of 0.5 volts? Do not try to determine σNOISE, but justify your answer. If we change the probabilities of transmission but keep the same digitization threshold, the various parts of the BER equation in (A) are weighted differently (to reflect the different transmission probabilities), but the total BER remains unchanged:

    BER = (0.667)Φ[(V/2 - V)/σNOISE] + (0.333)Φ[(-V/2)/σNOISE]
        = Φ[(-V/2)/σNOISE]
    
    So the derivation of part (B) is the same and the answer for σNOISE is unchanged. Note that when the transmission probabilities are unequal, the choice of the digitization threshold to minimize BER would no longer be 0.5V (it would move lower), but that's not what this question was asking.


Problem .

Determine the output y[n] for a system with the input x[n] and unit-sample response h[n] shown below. Assume h[n]=0 and x[n]=0 for any times n not shown.

y[n] = Σx[k]h[n-k] = x[0]h[n] + x[1]h[n-1] + x[2]h[n-2]
     = δ[n+1] + 4δ[n] + 8δ[n-1] + 8δ[n-2] + 3δ[n-3]
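The actual x[n] and h[n] are given in the figure (not reproduced here), but the convolution sum itself can be sketched in Python. The sequences below are hypothetical values chosen so that the output matches the result above, with h starting at n = -1:

```python
def convolve(x, h):
    """Discrete convolution y[n] = sum_k x[k] h[n-k] of finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

x = [1, 2, 3]   # hypothetical input samples x[0..2]
h = [1, 2, 1]   # hypothetical unit-sample response, starting at n = -1
print(convolve(x, h))   # [1, 4, 8, 8, 3], i.e., delta[n+1] + 4 delta[n] + ...
```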


Problem . A discrete-time linear system produces output v when the input is the unit step u. What is the output h when the input is the unit-sample δ? Assume v[n]=0 for any times n not shown below.

Note that

δ[n] = u[n] - u[n-1]

Since the system is linear we can compute the response of the system to the input δ[n] using the superposition of the appropriately scaled and shifted v[n]:

h[n] = v[n] - v[n-1]

The result is shown in the figure below:
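The same first-difference trick can be sketched in Python. The step response v below is hypothetical (the actual v is given in the figure); the check confirms that accumulating h, i.e., convolving it with the unit step, recovers v:

```python
# Hypothetical step response v[n] for n = 0..4 (v holds its final value after).
v = [1.0, 3.0, 4.0, 4.0, 4.0]

# h[n] = v[n] - v[n-1], taking v[-1] = 0.
h = [v[0]] + [v[n] - v[n - 1] for n in range(1, len(v))]

# Check: the running sum of h (convolution with the unit step) recovers v.
recovered, acc = [], 0.0
for hn in h:
    acc += hn
    recovered.append(acc)
assert recovered == v
print(h)  # [1.0, 2.0, 1.0, 0.0, 0.0]
```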


Problem .

The following figure shows plots of several received waveforms. The transmitter is sending sequences of binary symbols (i.e., either 0 or 1) at some fixed symbol rate, using 0V to represent 0 and 1V to represent 1. The horizontal grid spacing is 1 microsecond (1e-6 sec).

Answer the following questions for each plot:

  1. Find the slowest symbol rate that is consistent with the transitions in the waveform.

  2. Using your answer in question 1, what is the decoded bit string?
a) 1e6 symbols/sec, 101010101010101010
b) 333,333 symbols/sec, 101101
c) 1e6 symbols/sec, 101100111001110110
d) 5e5 symbols/sec, 100010001


Problem .

Ben Bitdiddle is doing a 6.02 lab on understanding the effect of noise on data receptions, and is confused about the following questions. Please help him by answering them.

In these questions, assume that:

  1. The sender sends 0 Volts for a "0" bit and 1 Volt for a "1" bit
  2. P_ij = Probability that a bit transmitted as "i" was received as a "j" bit (for all four combinations of i and j, 00, 01, 10, 11)
  3. alpha = Probability that the sender sent bit 0
  4. beta = Probability that the sender sent bit 1
  5. and, obviously, alpha + beta = 1

The channel has non-zero random noise, but unless stated otherwise, assume that the noise has 0 mean and that it is a Gaussian with finite variance. The noise affects the received samples in an additive manner, as in the labs you've done.

  1. Which of these properties does the bit error rate of this channel depend on?
    1. The voltage levels used by the transmitter to send "0" and "1"
    2. The variance of the noise distribution
    3. The voltage threshold used to determine if a sample is a "0" or a "1"
    4. The number of samples per bit used by the sender and receiver
    In general the bit error rate is a function of both noise and inter-symbol interference.

    The error probability due to noise is a function of Φ[(vth - (signal_level + noise_mean))/noise_sigma], weighted as appropriate by alpha or beta. So the bit error rate clearly depends on the signal levels, the mean and variance of the noise, and the digitization threshold.

    The number of samples per bit doesn't enter directly into the bit error calculation, but more samples per bit gives each transition more time to reach its final value, reducing inter-symbol interference. This means that the eye will be more open. In the presence of noise, a wider eye means a lower bit error rate.

  2. Suppose Ben picks a voltage threshold that minimizes the bit error rate. For each choice below, determine whether it's true or false.
    1. P_01 + P_10 is minimized for all alpha and beta
    2. alpha * P_01 + beta * P_10 is minimized
    3. P_01 = P_10 for all alpha and beta
    4. if alpha > beta then P_10 > P_01
    5. The voltage threshold that minimizes BER depends on the noise variance if alpha = beta
    (b) is the definition of bit error rate, so that's clearly minimized. Thus (a) is only minimized if alpha = beta.

    The magnitude of the BER is, of course, a function of the noise variance, but for a given noise variance, if alpha = beta, the minimum BER is achieved by setting the digitization threshold at 0.5. So (e) is false.

    As we saw in PSet #3, when alpha ≠ beta, the BER is minimized when the digitization threshold moves away from the more probable signal. Suppose alpha > beta. The digitization threshold would increase, so P_01 would get smaller and P_10 larger. So (c) is not true and (d) is true.

  3. Suppose alpha = beta. If the noise variance doubles, what happens to the bit error rate? When alpha = beta = 0.5, the minimum BER is achieved when the digitization threshold is half-way between the signaling levels, i.e., 0.5V. Using Φ(x), the cumulative distribution function for the unit normal PDF, we can write the following formula for the BER:

    BER = 0.5*(1 - Φ[.5/σ]) + 0.5*Φ[-.5/σ]
        = Φ[-.5/σ]

    Doubling the noise variance is the same as multiplying σ by sqrt(2), so the resulting BER would be

    BERnew = Φ[-.5/(sqrt(2)*σ)]

    The change in the bit error rate is given by BERnew - BER.
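The effect is easy to see numerically with statistics.NormalDist (a sketch; the value of σ below is a hypothetical choice, since the problem doesn't fix one):

```python
from statistics import NormalDist

Phi = NormalDist().cdf

sigma = 0.25                                # hypothetical noise std-dev
ber_old = Phi(-0.5 / sigma)                 # BER = Phi[-.5/sigma]
ber_new = Phi(-0.5 / (2 ** 0.5 * sigma))    # doubled variance => sigma*sqrt(2)
print(ber_old, ber_new)                     # the BER increases
```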


Problem .

The output of a particular communication channel is given by

y[n] = αx[n] + βx[n-1] where α > β

  1. Is the channel linear? Is it time invariant? To be linear the channel must meet two criteria:

    • if we scale the inputs x[n] by some factor k, the outputs y[n] should scale by the same factor.

    • if we get y1[n] with inputs x1[n] and y2[n] with inputs x2[n], then we should get y1[n] + y2[n] if the input is x1[n] + x2[n].

    It's easy to verify both properties given the channel response above, so the channel is linear.

    To be time invariant the channel must have the property that if we shift the input by some number of samples s, the output also shifts by s samples. Again that property is easily verified given the channel response above, so the channel is time invariant.

  2. What is the channel's unit-sample response h? The unit-sample input is x[0]=1 and x[n]=0 for n≠0.

    Using the channel response given above, the channel's unit-sample response can be computed as

    h[0]=α,
    h[1]=β,
    h[n]=0 for all other values of n

  3. If the input is the following sequence of samples starting at time 0:

    x[n] = [1, 0, 0, 1, 1, 0, 1, 1], followed by all 1's.

    then what is the channel's output assuming α=.7 and β=.3? Convolving x[n] with h[n] we get

    y[n] = [.7, .3, 0, .7, 1, .3, .7, 1], followed by all 1's.
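This convolution is short enough to check with a few lines of Python:

```python
alpha, beta = 0.7, 0.3

def channel(x):
    """y[n] = alpha*x[n] + beta*x[n-1], taking x[-1] = 0."""
    return [alpha * xn + beta * (x[n - 1] if n > 0 else 0)
            for n, xn in enumerate(x)]

x = [1, 0, 0, 1, 1, 0, 1, 1]          # followed by all 1's
y = [round(v, 1) for v in channel(x)]
print(y)  # [0.7, 0.3, 0.0, 0.7, 1.0, 0.3, 0.7, 1.0]
```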

  4. Again let α=.7 and β=.3. Derive a deconvolver for this channel and compute the input sequence that produced the following output:

    y[n] = [.7, 1, 1, .3, .7, 1, .3, 0], followed by all 0's.
    w[n] = (1/h[0])(y[n] - h[1]w[n-1]) = y[n]/.7 - (.3/.7)w[n-1]

    so

    w[n] = [1, 1, 1, 0, 1, 1, 0, 0], followed by all 0's
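The deconvolver recursion can likewise be checked directly:

```python
def deconvolve(y, alpha=0.7, beta=0.3):
    """Recover w from y[n] = alpha*w[n] + beta*w[n-1], taking w[-1] = 0."""
    w, prev = [], 0.0
    for yn in y:
        wn = (yn - beta * prev) / alpha   # w[n] = (y[n] - beta*w[n-1])/alpha
        w.append(wn)
        prev = wn
    return w

y = [0.7, 1, 1, 0.3, 0.7, 1, 0.3, 0]      # followed by all 0's
w = [round(wn) for wn in deconvolve(y)]
print(w)  # [1, 1, 1, 0, 1, 1, 0, 0]
```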


Problem .

Suppose four different wires {I, II, III, IV} have four different unit sample responses:

h1 = .25, .25, .25, .25, 0, ...

h2 = 0, .25, .5, .25, 0, ...

h3 = .11, .22, .33, .22, .11, 0, ...

h4 = .04, .08, .12, .16, .20, .12, .12, .12, .04, 0, ...

Each of the following eye diagrams is associated with transmitting bits using one of the four wires, where five samples were used per bit. That is, a one bit is five one-volt samples and a zero bit is five zero-volt samples. Please determine which wire was used in each case.

The eye diagram is from h2. Note that the signal transitions take three samples to complete and that the transitions occur in 3 steps with a larger slope in the middle step, an indicator of a response with 3 taps with a larger middle tap.

The eye diagram is from h4. Note that the signal transitions take more than five samples to complete and hence result in considerable inter-symbol interference. Response h4 is the only response that's non-zero for more than 5 taps.

The eye diagram is from h1. Note that the signal transitions take four samples to complete and that the transitions have constant slope, an indicator of a response with 4 equal taps.

The eye diagram is from h3. Note that the signal transitions take five samples to complete and that the transitions occur in 5 steps with larger slopes in the middle of the transition, an indicator of a response with 5 taps with larger middle taps.


Problem .

Consider the following eye diagram from a transmission where five samples were used per bit. That is, a one bit was transmitted as five one-volt samples and a zero bit was transmitted five zero-volt samples. The eye diagram shows the voltages at the receiver.

The channel is characterized by the following unit-sample response.

Determine the eight unique voltage values for sample number 8 in the eye diagram.

At sample number 8 we can see that there are 4 possible values above and 4 possible values below the nominal threshold at the half-way voltage value. This means the inter-symbol interference is carrying over from the two previously transmitted bits (which can take on 4 possible values: 00, 01, 10, and 11). The eye diagram is showing us what happens when transmitting a 0-bit or a 1-bit in the current bit cell, given the 4 possible choices for the previous two bits.

Looking at the eye diagram, we can see that the first sample in a bit time occurs at receiver samples 1 and 6 (the diagram shows two bit times; note that h[0] = 0). So sample 8 in the eye diagram corresponds to the third sample in the transmission of a bit. Thinking in terms of the convolution equation

y[n] = Σh[k]x[n-k]

and recalling that we're using 5 samples/bit, we can determine that bits start at n = 0, 5, 10, ... So if we want to evaluate what happens in the third sample of a bit time in a channel that has two bits of ISI, y[12] is the first such value, i.e., the third sample of the third bit time.

To determine y[12], it's useful to convolve h4 with a unit step to get the unit-step response:

n           0    1     2     3     4     5     6     7     8     9
hstep[n] = 0.0, 0.04, 0.12, 0.24, 0.40, 0.60, 0.72, 0.84, 0.96, 1.00, ...

Now if we consider all possible values of the current bit and the previous two bits (listed earliest-to-latest in the table below) we can use superposition of hstep to compute the possible values at y[12].

bits     decomposed into unit steps     computation for y[12]
1 1 1    u[n]                           y[12] = hstep[12] = 1
1 1 0    u[n] - u[n-10]                 y[12] = hstep[12] - hstep[2] = 1 - .12 = .88
1 0 1    u[n] - u[n-5] + u[n-10]        y[12] = 1 - .84 + .12 = .28
1 0 0    u[n] - u[n-5]                  y[12] = 1 - .84 = .16
0 1 1    u[n-5]                         y[12] = .84
0 1 0    u[n-5] - u[n-10]               y[12] = .84 - .12 = .72
0 0 1    u[n-10]                        y[12] = .12
0 0 0    (no steps)                     y[12] = 0

So the eight unique values for y[12] are 0, .12, .16, .28, .72, .84, .88 and 1.
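The eight values can be double-checked by direct convolution. The response h below is inferred from the step-response table above (h[0] = 0 and h[1..9] are the successive differences of hstep):

```python
# Channel unit-sample response inferred from hstep's successive differences.
h = [0.0, 0.04, 0.08, 0.12, 0.16, 0.20, 0.12, 0.12, 0.12, 0.04]
SPB = 5  # samples per bit

def y_at(n, bits):
    """y[n] = sum_k h[k] x[n-k] for a stream of SPB samples per bit."""
    x = [b for b in bits for _ in range(SPB)]   # expand bits into samples
    return sum(hk * x[n - k] for k, hk in enumerate(h) if 0 <= n - k < len(x))

# All 8 choices of (previous two bits, current bit), earliest to latest.
values = sorted(round(y_at(12, (b2, b1, b0)), 2)
                for b2 in (0, 1) for b1 in (0, 1) for b0 in (0, 1))
print(values)  # [0.0, 0.12, 0.16, 0.28, 0.72, 0.84, 0.88, 1.0]
```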


Problem .

Messages are transmitted along a noisy channel using the following protocol: a "0" bit is transmitted as -0.5 Volt and a "1" bit as 0.5 Volt. The PDF of the total noise added by the channel, H, is shown below.

  1. Compute H(0), the maximum value of H. The area under the PDF is 1, so (0.5)*H(0)*(1+0.5) = 1 from which we get H(0) = 4/3.

  2. It is known that a "0" bit is 3 times as likely to be transmitted as a "1" bit. The PDF of the message signal, M, is shown below. Fill in the values P and Q.

    We know that Q=3P and that P+Q=1, so Q=0.75 and P=.25.

  3. If the digitization threshold voltage is 0V, what is the bit error rate? The plot below shows the PDF of the received voltage in magenta. For a threshold voltage of 0, only one kind of error is possible: a transmitted "0" received as a "1". The error probability is equal to the area of the triangle formed by the dotted black line and the blue line = 0.5*0.5*0.5 = 0.125.

  4. What digitization threshold voltage would minimize the bit error rate? We'll minimize the bit error rate if the threshold voltage is chosen at the voltage where the red and blue lines intersect. By looking at the plot from the previous answer, let the ideal threshold be x and the value of the PDF at the intersection point be y. Then y/x = 2/3 and y/(0.5 - x) = 1, so (2/3)x = 0.5 - x, which gives x = 0.3V.


Problem .

This question refers to the LTI systems, I, II and III, whose unit-sample responses are shown below:

In this question, the input to these systems are bit streams with eight voltage samples per bit, with eight one-volt samples representing a one bit and eight zero-volt samples representing a zero bit.

  1. Which system (I, II or III) generated the following eye diagram? To ensure at least partial credit for your answer, explain what led you to rule out the systems you did not select.

    The rise or fall time of a transition as seen in the eye diagram is only 3 samples, so it can only be System I since it's the only system with a 3-sample unit-sample response. Note that all the systems have some ISI, but System I's ISI is limited to only 3 samples after which the received signal is stable for the remainder of the bit cell.

This question refers to a fourth LTI system whose unit-sample response, hIV[n], is given below:

where, just like in (A), the input to this system is a bit stream with eight voltage samples per bit, with eight one-volt samples representing a one bit and eight zero-volt samples representing a zero bit.

  1. Determine the voltage level denoted by D in the eye diagram generated from the system with unit-sample response hIV[n].

    The lowest curve above threshold, i.e., the curve the arrow points at, must be due to the transmission of an isolated 1-bit, preceded and followed by 0-bits. This corresponds to a sample stream of 8 zeros, followed by 8 ones, followed by another 8 zeros. We can generate the entire received waveform by convolving this sample stream with the given unit-sample response. But since D is at the maximum value, we can compute its value from the convolution sum when the 8 one samples overlap the 6 values of 0.1 in the response. So

    D = (1)(0.05) + (1)(0.1) + (1)(0.1) + (1)(0.1) + (1)(0.1) + (1)(0.1) + (1)(0.1) + (1)(0.05)
      = 0.7

    where we've left out the terms in the convolution sum where x[n] is 0.


Problem .

Consider a transmitter that encodes pairs of bits using four voltage values. Specifically:

For this problem we will assume a wire that only adds noise. That is,

y[n] = x[n] + noise[n]

where y[n] is the received sample, x[n] the transmitted sample whose value is one of the above four voltages, and noise[n] is a random variable.

Please assume all bit patterns are equally likely to be transmitted.

Suppose the probability density function for noise[n] is a constant, K, from -0.05 volts to 0.05 volts and zero elsewhere.

  1. What is the value of K? The noise has a rectangular PDF of height K:
         +---------------------+  height K
         |                     |
         |                     |
    -----+----------|----------+-------
        -.05        0         +.05
    
    We know the area under the PDF curve must be 1, and since the area is given by (.1)(K), K = 10.

Suppose now Vhigh= 1.0 volts and the probability density function for noise[n] is a zero-mean Normal with standard deviation σ.

  1. If σ = 0.001, what is the approximate probability that 1/3 < y[n] < 2/3? You should be able to give a numerical answer. The shaded areas in the figure below correspond to the probability we're trying to calculate:

    Each of the humps is a Gaussian distribution; each of the two shaded areas is exactly one half of a hump and so has area 1/2. Now we just need to scale those areas by the probabilities that Vxmit = 1/3V and Vxmit = 2/3V:

    prob = (1/2)prob(Vxmit = 1/3) + (1/2)prob(Vxmit = 2/3) = (1/2)(1/4) + (1/2)(1/4) = 1/4
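The same answer falls out of a direct computation with statistics.NormalDist (a sketch; it sums, over the four equally likely levels, the probability that the received voltage lands in (1/3, 2/3)):

```python
from statistics import NormalDist

sigma = 0.001
levels = [0.0, 1/3, 2/3, 1.0]   # the four transmit voltages, each with prob 1/4

prob = sum(0.25 * (NormalDist(v, sigma).cdf(2/3) - NormalDist(v, sigma).cdf(1/3))
           for v in levels)
print(round(prob, 3))  # 0.25
```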

  2. If σ = 0.1, is the probability that a transmitted 01 (nominally 1/3 volts) will be incorrectly received the same as the probability that a transmitted 11 (nominally 1.0 volts) will be incorrectly received? Explain your answer. As you can see in the following figure, the probability of 01 being incorrectly received involves two "tails", while the probability of 11 being incorrectly received involves only one "tail". So the probabilities are NOT THE SAME.