Someone picks a name out of a hat known to contain the names of 5
women and 3 men, and tells you a man has been selected.
How much information have they given you about the selection?
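A quick illustrative check (assuming each of the 8 names in the hat is equally likely to be drawn, which the problem implies but does not state):

```python
import math

# Assumption: each of the 8 names in the hat is equally likely to be drawn.
p_man = 3 / 8                      # probability the selected name is a man's

# Learning that an event of probability p occurred conveys log2(1/p) bits.
info_bits = math.log2(1 / p_man)
print(f"{info_bits:.3f} bits")     # about 1.415 bits
```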
X is an unknown 8-bit binary number. You are given another
8-bit binary number, Y, and told that the Hamming distance
between X and Y is one. How many bits of information about
X have you been given?
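One way to sanity-check this (an illustrative sketch, not part of the original text): before the hint X could be any of 2^8 = 256 equally likely values, and afterward it must be one of the 8 values at Hamming distance one from Y, so the uncertainty drops from 8 bits to 3 bits.

```python
import math

total = 2 ** 8     # X could have been any of 256 equally likely 8-bit values
remaining = 8      # after the hint, X differs from Y in exactly one of 8 positions
info_bits = math.log2(total) - math.log2(remaining)
print(info_bits)   # 5.0 -- you have been given 5 bits of information about X
```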

Suppose we were encoding messages with the following probabilities
for each of the 5 symbols:

| Symbol | Probability of occurrence |
|---|---|
| A | p(A) = 0.5 |
| B | p(B) = 0.125 |
| C | p(C) = 0.125 |
| D | p(D) = 0.125 |
| E | p(E) = 0.125 |
Using the probabilities for A, B, C, D and E given above,
construct a variable-length binary decoding tree using a simple
greedy algorithm as follows:
S = {A/0.5 B/0.125 C/0.125 D/0.125 E/0.125}
arbitrarily choose D & E
encoding: "0" => D, "1" => E
S = {A/0.5 B/0.125 C/0.125 DE/0.25}
choose B & C
encoding: "0" => B, "1" => C
S = {A/0.5 BC/0.25 DE/0.25}
choose BC & DE
encoding: "00" => B, "01" => C, "10" => D, "11" => E
S = {A/0.5 BCDE/0.5}
choose A & BCDE
encoding: "0" => A, "100" => B, "101" => C, "110" => D, "111" => E
S = {ABCDE/1.0}
This is Tree #1, shown in the diagram above. The choice of D & E
as the first symbols to combine was arbitrary -- we could have
chosen any two symbols from B, C, D and E. So there are many
equally plausible encodings that might emerge from this algorithm,
corresponding to interchanging B, C, D and E at the leaves of
the tree.
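Below is a small sketch of the greedy procedure described above, using Python's heapq. The function name and the tie-breaking counter are my own; with this insertion order it happens to reproduce the hand-derived encoding, but other tie-breaks give equally good codes with the symbols interchanged.

```python
import heapq
from itertools import count

def greedy_code(probs):
    """Build a variable-length binary code by repeatedly combining the two
    least-probable entries, as in the greedy algorithm described above."""
    tick = count()                    # tie-breaker so heapq never compares dicts
    heap = [(p, next(tick), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)
        p1, _, code1 = heapq.heappop(heap)
        # Prefix "0" onto one group's codewords and "1" onto the other's.
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(tick), merged))
    return heap[0][2]

code = greedy_code({"A": 0.5, "B": 0.125, "C": 0.125, "D": 0.125, "E": 0.125})
print(code)   # A gets a 1-bit codeword; B, C, D, E get 3-bit codewords
```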


| Letter | Probability of occurrence |
|---|---|
| A | p(A) = 0.15 |
| E | p(E) = 0.4 |
| I | p(I) = 0.15 |
| O | p(O) = 0.15 |
| U | p(U) = 0.15 |
If you are told that the first letter of a message is "A", give an
expression for the number of bits of information you have received.
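The expression has the form log2(1/p(A)); as an illustrative check (not part of the original text), it evaluates to about 2.74 bits:

```python
import math

p_A = 0.15
info_bits = math.log2(1 / p_A)   # bits received when the first letter is revealed to be "A"
print(f"{info_bits:.3f} bits")   # about 2.74 bits
```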
Ben is trying to invent a fixed-length binary encoding for DDS that
permits detection and correction of single bit errors. Briefly
describe the constraints on Ben's choice of encodings for each letter
that will ensure that single-bit error detection and correction is
possible. (Hint: think about Hamming distance.)
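The property the hint is driving at is that every pair of codewords must be at Hamming distance 3 or more. The sketch below checks that property for a hypothetical set of 6-bit codewords (the specific codewords are my own example, not Ben's encoding):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def corrects_single_bit_errors(codewords):
    """True when every pair of codewords differs in at least 3 bit positions."""
    return all(hamming(a, b) >= 3 for a, b in combinations(codewords, 2))

# Hypothetical 6-bit codewords for A, E, I, O, U -- an example set, not Ben's.
candidate = ["000000", "111001", "011010", "100011", "101100"]
print(corrects_single_bit_errors(candidate))   # True: single-bit errors correctable
```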
Giving up on error detection and correction, Ben turns his attention
to transmitting DDS messages using as few bits as possible. Assume
that each letter will be separately encoded for transmission. Help
him out by creating a variable-length encoding that minimizes the
average number of bits transmitted for each letter of the message.
S = {A/0.15 E/0.4 I/0.15 O/0.15 U/0.15}
arbitrarily choose O & U
encoding: "0" => O, "1" => U
S = {A/0.15 E/0.4 I/0.15 OU/0.3}
choose A & I
encoding: "0" => A, "1" => I
S = {AI/0.3 E/0.4 OU/0.3}
choose AI & OU
encoding: "00" => A, "01" => I, "10" => O, "11" => U
S = {AIOU/0.6 E/0.4}
choose E & AIOU
encoding: "0" => E, "100" => A, "101" => I, "110" => O, "111" => U
S = {AEIOU/1.0}
Note that the assignments of symbols to encodings "0" and "1"
were arbitrary and could have been swapped at each level. So, for example,
swapping the encoding at the last step would have resulted in
encoding: "1" => E, "000" => A, "001" => I, "010" => O, "011" => Uwhich achieves the same average bits/symbol as the previous encoding.
| Hex | Bits | Hex | Bits | Hex | Bits | Hex | Bits |
|---|---|---|---|---|---|---|---|
| 0 | 0000 | 4 | 0100 | 8 | 1000 | C | 1100 |
| 1 | 0001 | 5 | 0101 | 9 | 1001 | D | 1101 |
| 2 | 0010 | 6 | 0110 | A | 1010 | E | 1110 |
| 3 | 0011 | 7 | 0111 | B | 1011 | F | 1111 |

Give the 8-digit hexadecimal equivalent of the following decimal and binary numbers: 37₁₀, -32768₁₀, 11011110101011011011111011101111₂.
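One way to check these conversions (a sketch; it assumes 32-bit two's complement for the negative value, which is what an 8-hex-digit answer implies):

```python
def to_hex8(value):
    """8-hex-digit string for value, using 32-bit two's complement for negatives."""
    return format(value & 0xFFFFFFFF, "08X")

print(to_hex8(37))                                          # 00000025
print(to_hex8(-32768))                                      # FFFF8000
print(to_hex8(int("11011110101011011011111011101111", 2)))  # DEADBEEF
```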
Calculate the following using 6-bit 2's complement arithmetic (which
is just a fancy way of saying to do ordinary addition in base 2
keeping only 6 bits of your answer). Show your work using binary
(base 2) notation. Remember that subtraction can be performed by
negating the second operand and then adding it to the first operand.
13 + 10, 15 - 18, 27 - 6, -6 - 15, 21 + (-21), 31 + 12

Explain what happened in the last addition and in what sense your answer is "right".
- 13 + 10: 001101 + 001010 = 010111 = 23
- 15 - 18: 001111 + 101110 = 111101 = -3
- 27 - 6: 011011 + 111010 = 010101 = 21
- -6 - 15: 111010 + 110001 = 101011 = -21
- 21 + (-21): 010101 + 101011 = 000000 = 0
- 31 + 12: 011111 + 001100 = 101011 = -21 (!)

In the last addition, 31 + 12 = 43 exceeds the maximum representable positive integer in 6-bit two's complement arithmetic (max int = 2^5 - 1 = 31). The addition caused the most significant bit to become 1, resulting in an "overflow" where the sign of the result differs from the signs of the operands.
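A sketch that redoes these sums by machine, masking each result to 6 bits and flagging overflow by comparing the operand and result signs as described above:

```python
def to6(x):
    """Two's complement value x as a 6-bit pattern (an int 0..63)."""
    return x & 0b111111

def from6(bits):
    """Interpret a 6-bit pattern as a signed two's complement value."""
    return bits - 64 if bits & 0b100000 else bits

def add6(a, b):
    bits = to6(to6(a) + to6(b))          # keep only 6 bits of the sum
    result = from6(bits)
    # Overflow: operands share a sign but the result's sign differs.
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

for a, b in [(13, 10), (15, -18), (27, -6), (-6, -15), (21, -21), (31, 12)]:
    result, ovf = add6(a, b)
    print(f"{a:4} + {b:4} = {result:4}" + ("  (overflow)" if ovf else ""))
```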
Subtracting an N-bit number A(N-1) ... A1 A0 from the all-ones value 2^N - 1 (the bit pattern 1 1 ... 1 1) gives ~A(N-1) ... ~A1 ~A0, where "~" is the bit-wise complement operator. We've used only the regular subtraction rules (1 - 0 = 1, 1 - 1 = 0), since no borrows can occur, and noticed that 1 - Ai = ~Ai. So, finally, -A = (2^N - 1 - A) + 1 = ~A + 1: to negate a two's complement number, complement each bit and add 1.
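The same identity, checked exhaustively for 6-bit values (a small sketch; the width matches the examples above):

```python
N = 6
MASK = (1 << N) - 1                 # 0b111111, the all-ones value 2^N - 1

for a in range(-(1 << (N - 1)), 1 << (N - 1)):   # every representable 6-bit value
    # -A computed as (complement of A) + 1, compared bit-for-bit with true negation.
    assert ((~a + 1) & MASK) == (-a & MASK)
print("complement-then-add-1 negates every 6-bit two's complement value")
```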
The data bits bI,J are arranged in a 3×3 array; a row-parity bit prowI follows each row and a column-parity bit pcolJ sits below each column:

| | Column 0 | Column 1 | Column 2 | Row parity |
|---|---|---|---|---|
| Row 0 | b0,0 | b0,1 | b0,2 | prow0 |
| Row 1 | b1,0 | b1,1 | b1,2 | prow1 |
| Row 2 | b2,0 | b2,1 | b2,2 | prow2 |
| Column parity | pcol0 | pcol1 | pcol2 | |

A single-bit error in one of the data bits (bI,J) will generate two parity errors, one in row I and one in column J. A single-bit error in one of the parity bits will generate just a single parity error for the corresponding row or column. So after computing the parity of each row and column, if both a row and a column parity error are detected, inverting the listed value for the appropriate data bit will produce the corrected data. If only a single parity error is detected, the data is correct (the error was in one of the parity bits).

Give the correct data for each of the following data blocks protected with the row/column ECC shown above.
(1) 1011 (2) 1100 (3) 000 (4) 0111
0110 0000 111 1001
0011 0101 10 0110
011 100 100
(1) 0011 (2) 1100 (3) 000 (4) 0110
0110 0000 101 1001
0011 0101 10 0110
011 100 100
(1) and (3) had single bit errors, (2) had no detectable errors, and (4)
had an error in a parity bit. The corrected bits are b0,0 in block (1),
b1,1 in block (3), and the row-parity bit prow0 in block (4).
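A sketch of the correction procedure described above, applied to blocks (1) and (2). It assumes even parity (consistent with the corrected blocks shown) and the layout of three rows of three data bits plus a row-parity bit, followed by a row of column-parity bits:

```python
def correct(rows, parity_col):
    """rows: three 4-bit strings (3 data bits + row parity);
    parity_col: 3-bit string of column parities. Returns corrected data rows."""
    data = [[int(b) for b in r[:3]] for r in rows]
    prow = [int(r[3]) for r in rows]
    pcol = [int(b) for b in parity_col]

    # A row or column whose bits (including its parity bit) don't sum to an
    # even number has a parity error.
    bad_rows = [i for i in range(3) if (sum(data[i]) + prow[i]) % 2]
    bad_cols = [j for j in range(3) if (sum(data[i][j] for i in range(3)) + pcol[j]) % 2]

    if bad_rows and bad_cols:                    # data-bit error: flip it
        i, j = bad_rows[0], bad_cols[0]
        data[i][j] ^= 1
    # otherwise: either no error, or the error was in a parity bit -- data is fine
    return ["".join(str(b) for b in row) for row in data]

print(correct(["1011", "0110", "0011"], "011"))  # block (1): b0,0 corrected
print(correct(["1100", "0000", "0101"], "100"))  # block (2): no change
```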
| Code | Decimal |
|---|---|
| 11000 | 1 |
| 10100 | 2 |
| 01100 | 3 |
| 10010 | 4 |
| 01010 | 5 |
| 00110 | 6 |
| 10001 | 7 |
| 01001 | 8 |
| 00101 | 9 |
| 00011 | 0 |
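For reference, a quick check of this table (a sketch, not part of the original text): every codeword has exactly two 1 bits, so flipping any single bit produces a word with one or three 1s, which is never a valid codeword; the minimum pairwise Hamming distance is 2, so single-bit errors are detectable but not correctable.

```python
from itertools import combinations

codes = ["11000", "10100", "01100", "10010", "01010",
         "00110", "10001", "01001", "00101", "00011"]

# Every codeword has exactly two 1 bits...
assert all(c.count("1") == 2 for c in codes)
# ...so any two distinct codewords differ in at least 2 positions.
min_dist = min(sum(x != y for x, y in zip(a, b)) for a, b in combinations(codes, 2))
print(min_dist)   # 2: single-bit errors are detectable but not correctable
```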