6.02 Quiz #2 Review Problems


Problem . Suppose management has decided to use 20-bit data blocks in the company's new (n,20,3) error-correcting code. What's the minimum value of n that will permit the code to be used for single-bit error correction, i.e., that will achieve a minimum Hamming distance of 3 between code words? The n-k parity bits must provide a distinct syndrome for each of the n possible single-bit error positions plus the no-error case, so n and k=20 must satisfy the constraint n + 1 ≤ 2^(n-k). A little trial-and-error search finds n=25.
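
As a sanity check, here is a minimal Python sketch (not part of the original solution) that searches for the smallest n satisfying the constraint:

    # Find the smallest n with n + 1 <= 2^(n-k): the n-k parity bits must provide
    # a distinct syndrome for each single-bit error position plus the no-error case.
    k = 20
    n = k + 1
    while n + 1 > 2 ** (n - k):
        n += 1
    print(n)   # prints 25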


Problem . Consider the following (n,k,d) block code:

D0  D1  D2  D3  D4   | P0
D5  D6  D7  D8  D9   | P1 
D10 D11 D12 D13 D14  | P2 
-------------------------
P3  P4  P5  P6  P7   |

where D0-D14 are data bits, P0-P2 are row parity bits and P3-P7 are column parity bits. The transmitted code word will be:

D0 D1 D2 ... D13 D14 P0 P1 ... P6 P7

  1. Please give the values for n, k, d for the code above. n=23, k=15, d=3.

    For an explanation of why d=3 see section 6.3.1 of the lecture notes for Lecture 6 and, in particular, the discussion on page 10.

  2. If D0 D1 D2 ... D13 D14 = 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, please compute P0 through P7. P0 through P7 = 0 0 0 1 0 0 1 0

  3. Now we receive the four following code words:
    M1: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 0 0 0 1 1 0 1 0
    M2: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 0 0 1 1 1 0 1 0
    M3: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 1 1 0 1 1 0 1 0
    M4: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1, 1 0 0 1 1 0 1 0 

    For each of the received code words, indicate the number of errors. If there are errors, indicate whether they are correctable and, if they are, what the correction should be. (A code sketch after this problem reproduces the parity and syndrome computations.)

    M1 (each row: received data | received row parity | row syndrome):

    0 1 0 1 0 | 0 | 0
    0 1 0 0 1 | 0 | 0
    1 0 0 0 1 | 0 | 0
    -----------------
    1 1 0 1 0   <- received column parity
    0 1 0 0 0   <- column syndrome

    The syndrome bits are the right-most column (row syndromes) and the bottom row (column syndromes). Only one syndrome bit is set -- the one for column parity bit P4 -- so the error must be in that parity bit itself. So the correct data bits are:

    M1: 0 1 0 1 0, 0 1 0 0 1, 1 0 0 0 1

    M2 (each row: received data | received row parity | row syndrome):

    0 1 0 1 0 | 0 | 0
    0 1 0 0 1 | 0 | 0
    1 0 0 0 1 | 1 | 1
    -----------------
    1 1 0 1 0   <- received column parity
    0 1 0 0 0   <- column syndrome

    Syndrome bits are set for the third row and the second column, so D11 (at their intersection) must be in error. So the correct data bits are:

    M2: 0 1 0 1 0, 0 1 0 0 1, 1 1 0 0 1

    M3 (each row: received data | received row parity | row syndrome):

    0 1 0 1 0 | 1 | 1
    0 1 0 0 1 | 1 | 1
    1 0 0 0 1 | 0 | 0
    -----------------
    1 1 0 1 0   <- received column parity
    0 1 0 0 0   <- column syndrome

    Three syndrome bits are set (first row, second row, and second column). A single-bit error can set at most two syndrome bits, so this indicates an uncorrectable multi-bit error.

    M4 (each row: received data | received row parity | row syndrome):

    0 1 0 1 0 | 1 | 1
    0 1 0 0 1 | 0 | 0
    1 0 0 0 1 | 0 | 0
    -----------------
    1 1 0 1 0   <- received column parity
    0 1 0 0 0   <- column syndrome

    The row and column syndromes intersect at data bit D1, which must be the bit that got flipped. Flipping it back gives us the correct data bits:

    M4: 0 0 0 1 0, 0 1 0 0 1, 1 0 0 0 1
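
Here is a small Python sketch (not from the notes; the helper names encode and decode are made up for illustration) that reproduces the parity and syndrome computations used above for this rectangular (23,15,3) code:

    # 15 data bits arranged as a 3x5 block; P0-P2 are row parities, P3-P7 column parities.
    def encode(data):                                   # data: list of 15 bits, D0..D14
        rows = [data[i*5:(i+1)*5] for i in range(3)]
        row_par = [sum(r) % 2 for r in rows]            # P0..P2
        col_par = [sum(c) % 2 for c in zip(*rows)]      # P3..P7
        return data + row_par + col_par

    def decode(word):                                   # word: 23 received bits
        data, row_par, col_par = word[:15], word[15:18], word[18:]
        rows = [data[i*5:(i+1)*5] for i in range(3)]
        row_syn = [(sum(r) + p) % 2 for r, p in zip(rows, row_par)]
        col_syn = [(sum(c) + p) % 2 for c, p in zip(zip(*rows), col_par)]
        if sum(row_syn) + sum(col_syn) == 0:
            return data                                 # no errors
        if sum(row_syn) == 1 and sum(col_syn) == 1:     # single data-bit error
            data[row_syn.index(1)*5 + col_syn.index(1)] ^= 1
            return data
        if sum(row_syn) + sum(col_syn) == 1:            # single parity-bit error
            return data
        return None                                     # uncorrectable multi-bit error

    data = [0,1,0,1,0, 0,1,0,0,1, 1,0,0,0,1]
    print(encode(data)[15:])    # [0, 0, 0, 1, 0, 0, 1, 0], i.e. P0..P7 as computed above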


Problem . Consider two convolutional coding schemes - I and II. The generator polynomials for the two schemes are

Scheme I: G0 = 1101, G1 = 1110
Scheme II: G0 = 110101, G1 = 111011

The notation is as follows: if the generator polynomial is, say, 1101, then the corresponding parity bit for message bit n is

(x[n] + x[n-1] + x[n-3]) mod 2

where x[n] is the message sequence.

  1. Indicate TRUE or FALSE
    1. Code rate of Scheme I is 1/4.
    2. Constraint length of Scheme II is 4.
    3. Code rate of Scheme II is equal to code rate of Scheme I.
    4. Constraint length of Scheme I is 4.
    The code rate (r) and constraint length (k) for the two schemes are

    I: r = 1/2, k = 4
    II: r = 1/2, k = 6

    So

    1. false
    2. false
    3. true
    4. true

  2. How many states will there be in the state diagram for Scheme I? For Scheme II? The number of states is given by 2^(k-1) where k = constraint length. Following the convention for state machines as outlined in Lecture 8, the number of states in Scheme I is 8 and in Scheme II, 32.

  3. Which code will lead to a lower bit error rate? Why? Scheme II is likely to lead to a lower bit error rate. Both codes have the same code rate but different constraint lengths, so Scheme II encodes more history of the message into each pair of parity bits. Since it is less likely that 6 consecutive message bits will be corrupted than 4, Scheme II is the stronger code.

  4. Alyssa P. Hacker suggests a modification to Scheme I which involves adding a third generator polynomial G2 = 1001. What is the code rate r of Alyssa's coding scheme? What about the constraint length k? Alyssa claims that her scheme is stronger than Scheme I. Based on your computations for r and k, is her statement true? For Alyssa's scheme r = 1/3, k = 4. Alyssa's code has a lower code rate (more redundancy), and given that she's sending additional information about each message bit, the modified Scheme I is indeed stronger in the sense that more information leads to better error detection and correction.
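
A short Python sketch of an encoder built from generator polynomials in this notation (not part of the original solution; the function name is made up):

    # Rate 1/(number of generators) convolutional encoder; constraint length = len(g).
    def conv_encode(msg, generators):
        k = max(len(g) for g in generators)
        state = [0] * (k - 1)                       # x[n-1], x[n-2], ...
        out = []
        for x in msg:
            window = [x] + state                    # x[n], x[n-1], ..., x[n-k+1]
            for g in generators:
                out.append(sum(gi * xi for gi, xi in zip(g, window)) % 2)
            state = window[:k - 1]                  # shift the new bit into the state
        return out

    # Scheme I: G0 = 1101, G1 = 1110  (r = 1/2, k = 4)
    print(conv_encode([1, 0, 1, 1], [[1, 1, 0, 1], [1, 1, 1, 0]]))
    # -> [1, 1, 1, 1, 1, 0, 1, 0]  (p0, p1 for each message bit)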


Problem . Consider a convolutional code that uses two generator polynomials: G0 = 111 and G1 = 110. You are given a particular snapshot of the decoding trellis used to determine the most likely sequence of states visited by the transmitter while transmitting a particular message:

  1. Complete the Viterbi step, i.e., fill in the question marks in the matrix, assuming a hard branch metric based on the Hamming distance between expected and received parity, where the received voltages are digitized using a 0.5 V threshold. The digitized received parity bits are 1 and 0. (A code sketch of this add-compare-select step appears after this problem.)

    For state 0:

      PM[0,n] = min(PM[0,n-1]+BM(00,10), PM[1,n-1]+BM(10,10)) = min(1+1,0+0) = 0
      Predecessor[0,n] = 1

    For state 1:

      PM[1,n] = min(PM[2,n-1]+BM(11,10), PM[3,n-1]+BM(01,10)) = min(2+1,3+2) = 3
      Predecessor[1,n] = 2

    For state 2:

      PM[2,n] = min(PM[0,n-1]+BM(11,10), PM[1,n-1]+BM(01,10)) = min(1+1,0+2) = 2
      Predecessor[2,n] = 0 or 1

    For state 3:

      PM[3,n] = min(PM[2,n-1]+BM(00,10), PM[3,n-1]+BM(10,10)) = min(2+1,3+0) = 3
      Predecessor[3,n] = 2 or 3

  2. Complete the Viterbi step, i.e., fill in the question marks in the matrix, assuming a soft branch metric based on the square of the Euclidean distance between expected and received parity voltages. Note that your branch and path metrics will not necessarily be integers.

    For state 0:

      PM[0,n] = min(PM[0,n-1]+BM([0,0],[0.6,0.4]), PM[1,n-1]+BM([1,0],[0.6,0.4])) = min(1+0.52,0+0.32) = 0.32
      Predecessor[0,n] = 1

    For state 1:

      PM[1,n] = min(PM[2,n-1]+BM([1,1],[0.6,0.4]), PM[3,n-1]+BM([0,1],[0.6,0.4])) = min(2+0.52,3+0.72) = 2.52
      Predecessor[1,n] = 2

    For state 2:

      PM[2,n] = min(PM[0,n-1]+BM([1,1],[0.6,0.4]), PM[1,n-1]+BM([0,1],[0.6,0.4])) = min(1+0.52,0+0.72) = 0.72
      Predecessor[2,n] = 1

    For state 3:

      PM[3,n] = min(PM[2,n-1]+BM([0,0],[0.6,0.4]), PM[3,n-1]+BM([1,0],[0.6,0.4])) = min(2+0.52,3+0.32) = 2.52
      Predecessor[3,n] = 2

  3. Does the soft metric give a different answer than the hard metric? Base your response on the relative ordering of the states in the second column and on the survivor paths. The soft metric certainly gives different path metrics, but the relative ordering of the likelihood of each state remains unchanged. Using the soft metric, the choice of survivor path leading to states 2 and 3 has firmed up (with the hard metric, either of the survivor paths for each of states 2 and 3 could have been chosen).
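
The trellis figure is not reproduced here, so the Python sketch below (not part of the original solution) assumes the prior path metrics [1, 0, 2, 3] used in the worked answer and numbers each state as the binary value of (x[n-1], x[n-2]); the expected branch parity bits are derived from G0=111 and G1=110.

    # One Viterbi add-compare-select step for the G0=111, G1=110 code.
    def expected_parity(state, bit):
        window = (bit,) + state                                  # x[n], x[n-1], x[n-2]
        p0 = sum(w * g for w, g in zip(window, (1, 1, 1))) % 2   # G0 = 111
        p1 = sum(w * g for w, g in zip(window, (1, 1, 0))) % 2   # G1 = 110
        return (p0, p1)

    def viterbi_step(prior_pm, rx, branch_metric):
        new_pm, pred = [], []
        for s in range(4):                            # next state s = (bit, x[n-1])
            bit, first = s >> 1, s & 1
            cands = []
            for old in (0, 1):                        # predecessor state = (first, old)
                exp = expected_parity((first, old), bit)
                cands.append((prior_pm[2*first + old] + branch_metric(exp, rx), 2*first + old))
            pm, best = min(cands)                     # ties go to the lower-numbered predecessor
            new_pm.append(pm)
            pred.append(best)
        return new_pm, pred

    hamming = lambda exp, rx: sum(e != r for e, r in zip(exp, rx))
    sq_dist = lambda exp, rx: sum((e - r) ** 2 for e, r in zip(exp, rx))

    print(viterbi_step([1, 0, 2, 3], (1, 0), hamming))      # hard: PMs [0, 3, 2, 3]
    print(viterbi_step([1, 0, 2, 3], (0.6, 0.4), sq_dist))  # soft: PMs [0.32, 2.52, 0.72, 2.52]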


Problem . Consider a binary convolutional code specified by the generators (1011, 1101, 1111).

  1. What are the values of
    1. constraint length of the code
    2. rate of the code
    3. number of states at each time step of the trellis
    4. number of branches transitioning into each state
    5. number of branches transitioning out of each state
    6. number of expected parity bits on each branch
    1. 4
    2. 1/3
    3. 2^(4-1) = 8
    4. 2
    5. 2
    6. 3

A 10000-bit message is encoded with the above code and transmitted over a noisy channel. During Viterbi decoding at the receiver, the state 010 had the lowest path metric (a value of 621) in the final time step, and the survivor path from that state was traced back to recover the original message.

  1. What is the likely number of bit errors that are corrected by the decoder? How many errors are likely left uncorrected in the decoded message? 621 errors were likely corrected by the decoder to produce the final decoded message. We cannot infer the number of uncorrected errors still left in the message absent more information (like, say, the original transmitted bits).

  2. If you are told that the decoded message had no uncorrected errors, can you guess the approximate number of bit errors that would have occurred had the 10000-bit message been transmitted without any coding on the same channel? 3*10000 bits were transmitted over the channel and the received message had 621 bit errors (all corrected by the convolutional code). Therefore, if the 10000-bit message had been transmitted without coding, it would have had approximately 621/3 = 207 errors.

  3. From knowing the final state of the trellis (010, as given above), can you infer what the last bit of the original message was? What about the last-but-one bit? The last 4 bits? The state gives the last 3 bits of the original message. In general, for a convolutional code with constraint length k, the state indicates the final k-1 bits of the original message. To determine more bits we would need to know the states along the most-likely path as we trace back through the trellis.

Consider a transition branch between two states on the trellis that has 000 as the expected set of parity bits. Assume that 0V and 1V are used as the signaling voltages to transmit a 0 and a 1 respectively, and 0.5V is used as the digitization threshold.

  1. Assuming hard decision decoding, which of the two sets of received voltages will be considered more likely to correspond to the expected parity bits on the transition: (0V, 0.501V, 0.501V) or (0V, 0V, 0.9V)? What if one is using soft decision decoding? With hard decision decoding: (0V, 0.501V, 0.501V) -> 011 -> Hamming distance of 2 from the expected parity bits. (0V, 0V, 0.9V) -> 001 -> Hamming distance of 1. Therefore, (0V, 0V, 0.9V) is considered more likely.

    With soft decision decoding, (0V, 0.501V, 0.501V) will have a branch metric of approximately 0.5. (0V, 0V, 0.9V) will have a metric of approximately 0.8. Therefore, (0V, 0.501V, 0.501V) will be considered more likely.
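
A quick numerical check of both metrics (a sketch, not part of the original solution):

    # Branch metrics against expected parity bits 000, with a 0.5 V digitization threshold.
    expected = (0, 0, 0)
    for rx in [(0.0, 0.501, 0.501), (0.0, 0.0, 0.9)]:
        bits = tuple(1 if v > 0.5 else 0 for v in rx)
        hard = sum(b != e for b, e in zip(bits, expected))      # Hamming distance
        soft = sum((v - e) ** 2 for v, e in zip(rx, expected))  # squared Euclidean distance
        print(rx, bits, hard, round(soft, 3))
    # hard decision prefers (0, 0, 0.9); soft decision prefers (0, 0.501, 0.501)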


Problem . Indicate whether each of the statements below is true or false, and a brief reason why you think so.

  1. If the number of states in the trellis of a convolutional code is S, then the number of survivor paths at any point of time is S. Remember that if there is a "tie" between two incoming branches (i.e., they both result in the same path metric), we arbitrarily choose only one as the predecessor. True. There is one survivor per state.

  2. The path metric of a state s1 in the trellis indicates the number of residual uncorrected errors left along the trellis path from the start state to s1. False. It indicates the number of likely corrected errors.

  3. Among the survivor paths left at any point during the decoding, no two can be leaving the same state at any stage of the trellis. False. In fact, the survivor paths will likely merge at a certain stage in the past, at which point all of them will emerge from the same state.

  4. Among the survivor paths left at any point during the decoding, no two can be entering the same state at any stage of the trellis. Remember that if there is a "tie" between two incoming branches (i.e., they both result in the same path metric), we arbitrarily choose only one as the predecessor. True. When two paths merge at any state, only one of them will ever be chosen as a survivor path.

  5. For a given state machine of a convolutional code, a particular input message bit stream always produces the same output parity bits. False. The same input stream with different start states will produce different output parity bits.


Problem . Consider a convolutional code with two generator polynomials: G0=101 and G1=110.

  1. What is code rate r and constraint length k for this code? We send two parity bits for each message bit, so the code rate r is 1/2. Three message bits are involved in the computation of the parity bits, so the constraint length k is 3.

  2. Draw the state transition diagram for a transmitter that uses this convolutional code. The states should be labeled with the binary string x[n-1]...x[n-k+1] and the arcs labeled with x[n]/p0p1, where x[n] is the next message bit and p0 and p1 are the two parity bits computed from G0 and G1 respectively. (A code sketch after this problem enumerates these transitions.)

The figure below is a snapshot of the decoding trellis showing a particular state of a maximum likelihood decoder implemented using the Viterbi algorithm. The labels in the boxes show the path metrics computed for each state after receiving the incoming parity bits at time t. The labels on the arcs show the expected parity bits for each transition; the actual received bits at each time are shown above the trellis.

  1. Fill in the path metrics in the empty boxes in the diagram above (corresponding to the Viterbi calculations for times 6 and 7).

  2. Based on the updated trellis, what is the most-likely final state of the transmitter? How many errors were detected along the most-likely path to the most-likely final state? The most-likely final state is 01, the state with the smallest path metric. The path metric tells us the total number of errors along the most-likely path leading to the state. In this example there were 3 errors altogether.

  3. What's the most-likely path through the trellis (i.e., what's the most-likely sequence of states for the transmitter)? What's the decoded message? The most-likely path has been highlighted in red below. The decoded message can be read off from the state transitions along the most-likely path: 1000110.

  4. Based on your choice of the most-likely path through the trellis, at what times did the errors occur? The path metric is incremented for each error along the path. Looking at the most-likely path we can see that there were single-bit errors at times 1, 3 and 5.
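
Since the state transition diagram itself is a figure, here is a small Python sketch (not part of the original solution) that enumerates the transitions for G0=101, G1=110:

    # State = (x[n-1], x[n-2]); each arc is labeled x[n]/p0p1.
    for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for bit in (0, 1):
            window = (bit,) + state                                  # x[n], x[n-1], x[n-2]
            p0 = sum(w * g for w, g in zip(window, (1, 0, 1))) % 2   # G0 = 101
            p1 = sum(w * g for w, g in zip(window, (1, 1, 0))) % 2   # G1 = 110
            print(f"{state} --{bit}/{p0}{p1}--> {(bit, state[0])}")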


Problem . Consider an LTI system characterized by the unit-sample response h[n].

  1. Give an expression for the frequency response of the system H(e^jΩ) in terms of h[n].
    H(e^jΩ) = Σ_m h[m] e^(-jΩm)

  2. If h[0]=1, h[1]=0, h[2]=1, and h[n]=0 for all other n, what is H(e^jΩ)?
    H(e^jΩ) = 1 + e^(-j2Ω)

  3. Let h[n] be defined as in the previous part and x[n] = cos(Ω1n). Is there a value of Ω1 such that y[n]=0 for all n and 0 ≤ Ω1 ≤ π?
    Ω1 = π/2

  4. Find the maximum magnitude of y[n] if x[n] = cos(πn/4). Maximum magnitude of y[n] is sqrt(2).

  5. Find the maximum magnitude of y[n] if x[n] = cos(-πn/2). Maximum magnitude of y[n] is 0.
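
A brief numerical check of parts 3-5 (a sketch, not part of the original solution):

    import numpy as np

    # h = [1, 0, 1]  =>  H(e^jΩ) = 1 + e^(-j2Ω), so |H(e^jΩ)| = 2|cos(Ω)|.
    h = np.array([1.0, 0.0, 1.0])
    H = lambda w: np.sum(h * np.exp(-1j * w * np.arange(len(h))))
    print(abs(H(np.pi / 2)))   # ~0      -> y[n] = 0 when x[n] = cos(pi*n/2)
    print(abs(H(np.pi / 4)))   # ~1.414  -> max |y[n]| = sqrt(2) when x[n] = cos(pi*n/4)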


Problem .

In answering the questions below, please consider the unit sample response and frequency response of two filters, H1 and H2, plotted below.

Note: the only nonzero values of the unit sample response for H1 are: h1[0] = 1, h1[1] = 0, h1[2] = 1.

Note: the only nonzero values of the unit sample response for H2 are: h2[0] = 1, h2[1] = -sqrt(3), h2[2] = 1.

In answering the several parts of this review question, consider four linear time-invariant systems, denoted A, B, C, and D, each characterized by the magnitude of its frequency response, |HA(e^jΩ)|, |HB(e^jΩ)|, |HC(e^jΩ)|, and |HD(e^jΩ)| respectively, as given in the plots below. This is a review problem, not an actual exam question, so similar concepts are tested multiple times to give you practice.

  1. Which frequency response (A, B, C or D) corresponds to a unit sample response given by

    h[n] = α δ[n] - h1[n]

    and what is the numerical value of |α|? Must be HB as that is the only frequency response that has the same values at 0 and ±π and extremes at ±π/2.

    |α| = 2 as H1(e^j0) = 2 but HB(e^j0) = 0.

  2. Which frequency response (A, B, C or D) corresponds to a unit sample response given by

    h[n] = Σ_m h1[m]h2[n-m] for m = 0 to n

    and what are the numerical values of h[2], h[3] and H(e^j0)? Must be HA since H1(e^jΩ)H2(e^jΩ) = H(e^jΩ), and therefore |H(e^jΩ)|=0 whenever H1(e^jΩ)=0 or H2(e^jΩ)=0.

    H(e^j0) = H1(e^j0)H2(e^j0) = 2(2 - sqrt(3)) = 4 - 2sqrt(3)

    h[n] = [1,0,1]*[1,-sqrt(3),1] so h[2] = 2 and h[3] = -sqrt(3).

  3. Which frequency response (A, B, C or D) corresponds to a unit sample response given by

    h[n] = α δ[n] - Σ_m h1[m]h2[n-m] for m = 0 to n

    and what is the numerical value of |α|? Since HA is the frequency response for H1*H2, HD must be the solution. It's the only frequency response with enough wiggles.

    Since |HA(e^jΩ)| = |H1(e^jΩ)||H2(e^jΩ)|, |α| = |H1(e^jπ)||H2(e^jπ)| = 2(2 + sqrt(3)) = 4 + 2sqrt(3).

  4. Which frequency response (A, B, C or D) corresponds to a unit sample response given by

    h[n] = α δ[n] - h2[n]

    and what is the numerical value of |α|? Must be HC by elimination, but also because none of the other frequency responses could be generated by a single magnitude shift of H2(e^jΩ).

    HC(e^j0) = 0 so |α| = |H2(e^j0)| = 2 - sqrt(3).

  5. Suppose the input to each of the above four systems is

    x[n]=0 for n < 0 and
    x[n] = cos(nπ/6) + cos(nπ/2) + 1.0 for n ≥ 0

    Which system (A, B, C or D) produced the output y[n] shown below, and what is the value of y[n] for n > 10?

    Must be HA, as |HA(e^jΩ)| = 0 for Ω=±π/6 and Ω=±π/2.

    For n > 10, y[n] = |HA(e^j0)|*1 = 4 - 2sqrt(3).

  6. Suppose the input to each of the above four systems is

    x[n]=0 for n < 0 and
    x[n] = cos(nπ/6) + cos(nπ/2) + 1.0 for n ≥ 0

    Which system (H1 or H2) produced the output y[n] shown below, and what is the value of y[22]?

    Must be H1, since the H2 system would eliminate the cos(nπ/6) component, and since the output will eventually be a cosine offset by H1(e^j0)*1 = 2.

    y[22] = 2.
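
A small numerical check of the quantities used in these answers (a sketch, not part of the original solution):

    import numpy as np

    h1 = np.array([1.0, 0.0, 1.0])
    h2 = np.array([1.0, -np.sqrt(3), 1.0])
    H = lambda h, w: np.sum(h * np.exp(-1j * w * np.arange(len(h))))

    print(H(h1, 0).real, H(h2, 0).real)              # 2.0 and 2 - sqrt(3) ~ 0.268
    print(np.convolve(h1, h2))                       # [1, -sqrt(3), 2, -sqrt(3), 1]
    print((H(h1, 0) * H(h2, 0)).real)                # 4 - 2*sqrt(3) ~ 0.536
    print(abs(H(h1, np.pi/2)), abs(H(h2, np.pi/6)))  # ~0, ~0: H1 nulls pi/2, H2 nulls pi/6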


Problem . Single-sideband (SSB) modulation is a modulation technique designed to minimize the amount of spectrum used to transmit an amplitude-modulated signal. Here's one way to implement an SSB transmitter.

  1. Starting with a band-limited signal s[n], modulate it with two carriers, one phase shifted by π/2 from the other. The modulation frequency is chosen to be B/2, i.e., in the middle of the frequency range of the signal to be transmitted. Sketch the real and imaginary parts of the Fourier coefficients for the signals at points A and B. The figure below shows the Fourier coefficients for the signal s[n].

  2. The modulated signal is now passed through a low-pass filter with a cutoff frequency of B/2. Sketch the real and imaginary parts of the Fourier coefficients for the signals at points C and D.

  3. The signal is modulated once again to shift it up to the desired transmission frequency. Sketch the real and imaginary parts of the Fourier coefficients for the signals at points E and F.

  4. Finally the two signals are summed to produce the signal to be sent over the air. Sketch the real and imaginary parts of the Fourier coefficients for the signal at point G.
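
The answers to this problem are sketches of the figure and are not reproduced here. As a numerical illustration of the signal chain described above (a sketch only; the bandwidth, carrier frequency, and test tones below are made-up values, and whether the sum keeps the upper or lower sideband depends on the sign conventions in the figure):

    import numpy as np

    N = 1024
    n = np.arange(N)
    B, fc = 64, 256                                   # bandwidth and final carrier, in FFT bins
    s = np.cos(2*np.pi*10*n/N) + 0.5*np.cos(2*np.pi*40*n/N)   # band-limited test signal

    def lowpass(x, cutoff):                           # ideal low-pass filter via the FFT
        X = np.fft.fft(x)
        X[np.abs(np.fft.fftfreq(N, 1/N)) > cutoff] = 0
        return np.fft.ifft(X).real

    c = lowpass(s * np.cos(2*np.pi*(B/2)*n/N), B/2)   # point C
    d = lowpass(s * np.sin(2*np.pi*(B/2)*n/N), B/2)   # point D
    g = c * np.cos(2*np.pi*fc*n/N) + d * np.sin(2*np.pi*fc*n/N)   # point G

    print(np.flatnonzero(np.abs(np.fft.fft(g))[:N//2] > 1))
    # -> [234 264]: each input tone maps to the single output frequency fc + (f0 - B/2),
    #    so the transmitted signal occupies only B of spectrum instead of 2B.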


Problem . The diagram for a broken IQ transmitter is shown below. Assume N=1024, M=64 and Ω1=2π/1024.

The IQ receiver was designed assuming the transmitter was working correctly:

  1. The broken transmitter sent the symbols I=1 and Q=1. However, the receiver received the symbols IRX=0.617 and QRX=0.924. What is the value of D?
    x[n]=I[n]cos(MΩ1n) + Q[n]sin(MΩ1n - MΩ1D)

    Recall the identity sin(a-b) = sin(a)cos(b) - cos(a)sin(b):

    x[n] = I[n]cos(MΩ1n) + Q[n] * [cos(MΩ1D)sin(MΩ1n) - sin(MΩ1D)cos(MΩ1n)]
    x[n] = [I[n] - Q[n]sin(MΩ1D)] * cos(MΩ1n) + Q[n]cos(MΩ1D) sin(MΩ1n)

    The receiver demodulates x[n] back to baseband, so that

    IRX[n]= I[n] - Q[n]sin(MΩ1D)
    QRX[n] = Q[n]cos(MΩ1D)

    You can use either of the above formulas to solve for D: D = 1.

  2. The broken transmitter sent the symbols I=1 and Q=1. However, the receiver received the symbols IRX=0 and QRX=0. What is the value of D? D = 4.
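
A quick numerical check of both answers (a sketch, not part of the original solution):

    import numpy as np

    # IRX = I - Q*sin(M*Ω1*D),  QRX = Q*cos(M*Ω1*D),  with M*Ω1 = 64*(2*pi/1024) = pi/8 and I = Q = 1.
    M_omega1 = 64 * 2 * np.pi / 1024
    for D in (1, 4):
        print(D, round(1 - np.sin(M_omega1 * D), 3), round(np.cos(M_omega1 * D), 3))
    # -> 1 0.617 0.924
    # -> 4 0.0 0.0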