6.02 - Digital Communications, End-to-end
Lecture #26
Katrina LaCurts, lacurts@mit.edu

======================================================================
Introduction
======================================================================

Today's lecture will show you how all of the material in 6.02 fits together, and answer some questions about how things work in the real world. Over the semester, we've covered much of the technology that is used today to make various forms of communication work, but we have elided some practical details. To illuminate these issues, I will teach you how cellphones actually work. At the end of this lecture, you should understand exactly what happens when you send a text message or make a phone call.

======================================================================
Cell Networks
======================================================================

All of you probably own a cellphone. In fact, you've probably owned multiple cellphones throughout your life. One thing that has changed about cellphones during your lifetime is the networking technology; we've evolved from 2G to 3G to 4G/LTE.

In this lecture, I am going to concentrate on cellphones that use LTE networks, but at various points I will explain to you how an LTE network differs from the others. Surprisingly, there are real differences; it's not just marketing! However, "4G" and "LTE" are basically the same thing -- depending on who you talk to, they might be *exactly* the same thing -- so whenever I say LTE you can feel free to replace that with 4G.

----------------------------------------------------------------------
Cell Networks: An Overview
----------------------------------------------------------------------

We have not explicitly addressed the differences between wireless and wired networks in this class, partly because many of the protocols we study do not depend on the physical layer.
Wireless communications happen on what is known as the radio spectrum, which encompasses frequencies from 3kHz to 300GHz. What you should realize immediately is that if we let everyone use any part of that spectrum that they want, there will be all sorts of interference.

[ In class: Next slide ]

The way this is handled is to have the FCC *license* certain parts of the wireless spectrum. Licensed spectrum is available to only those entities that hold the license. It provides protection from outside interference, but it is very expensive.

The reason I am telling you this is that it is a design decision that influences cellular networks: they are designed to be extremely spectrally efficient. If we were designing a cellular system to be used on the unlicensed part of the spectrum, we'd probably care less about spectral efficiency and more about power constraints.

[ In class: Next slide ]

A cellular network is divided into cells, each of which contains a single base station. Your cellphone communicates with whatever base station it is closest to. As you move around -- remember, cellphones are mobile! -- base stations complete "handoffs", where your cellphone seamlessly disconnects from one base station and connects to another (at least, one hopes it's seamless). In reality, the cells overlap a little bit (they're not perfect hexagons as shown in my slides); this is what makes a handoff possible.

[ In class: Advance slide 2x ]

In an LTE network, these base stations are then connected to a packet-switched core network. (I'm abstracting away the devices that they go through to connect to this core, but you can imagine that there is just some gateway device that bridges the gap between base stations and the packet-switched network.)

There are two important issues we need to solve in cell networks:

1. How nodes access the channel
2. How interference (from many people using the network at once) is handled.
Before we do this, I need to give you a brief history of cell networks, so that you understand the motivations for our solutions.

----------------------------------------------------------------------
3G vs. 4G vs. LTE
----------------------------------------------------------------------

As you know, voice calls in the landline phone network are done over a circuit-switched network. This is because circuit-switched networks are actually *really good* for voice calls. Telephony-quality sound is transmitted at a relatively low, constant bit rate, which is exactly what a circuit-switched network is good at. We can reserve exactly the bandwidth we need for each call, and there is no waste. Moreover, since bandwidth is explicitly reserved -- this is effectively what happens when you set up the circuit -- our communications are guaranteed, not "best effort". Plus we avoid all of the overhead of reliable packet-switched networks.

[ In class: On board ]

As we moved from landline networks to cell networks, circuit-switching was still used.

- 1G networks were entirely analog, and still connected to the regular circuit-switched phone network.

- 2G networks moved from analog to digital, and introduced the notion of cell "data", such as email or text-messaging. This data was still sent over a circuit-switched network.

- "2.5G", also known as GPRS (General Packet Radio Service), moved to transmitting data over a packet-switched network, and this move was also included in the 3G standard.

This means that a 3G network necessarily looks different from an LTE network. Base stations must connect to some type of controller that then sends the data to either a packet-switched core or a circuit-switched core, depending on the data type.

Today, we use cellphones for all sorts of multimedia that is not voice.
We've come to care about:

- Data rate: most services we use today operate at a higher (and more variable) data rate than voice does.

- Delay: many services, for instance video-streaming, have demands on delay, or latency. The round-trip-time in the network must be low, or these applications suffer.

- Capacity: the total capacity of the system has to be much larger than it used to be.

These constraints were the main drivers for the move from 3G to LTE, and partly because of these demands, LTE networks operate with an entirely packet-switched core. This is great for the way we use cellphones today -- I am fairly confident in saying that you all use way more data on your phone than you do voice. *But*, it is terrible for voice data, and we will come to that.

======================================================================
Channel Access and Interference Management
======================================================================

Okay. So what you know so far is that cellphones connect to base stations, which connect to a packet-switched core network. Now let's talk about how the cellphones actually access that channel between them and the base station, and how we deal with interference. Channel access and interference are two separate problems:

- Channel access: How the system is shared by users of the same cell

- Interference: How to deal with simultaneous transmissions in different cells (or, depending on the access scheme, within the same cell)

The solutions to the two problems often depend on one another, so we'll talk about both of these problems at once. The way in which phones access the network and control interference is another way in which 2G/3G/LTE networks differ.

----------------------------------------------------------------------
FDD and TDD
----------------------------------------------------------------------

The first problem we have to figure out is how to handle having just a single phone connected to a base station.
Why is this a problem? The phone and the base station both need to communicate. What we want is a "full duplex" link, where both endpoints can communicate at the same time. However, we don't really have such a link -- simultaneous transmission would interfere -- and so we have to do some work. There are two approaches: one uses time, the other uses frequency.

[ In class: On board ]

Frequency Division Duplex (FDD): Choose two frequencies, f1 and f2. On f1 (say), the base station transmits and the phone receives. On f2, the base station receives and the phone transmits.

Time Division Duplex (TDD): On a particular frequency, divide time into very small timeslots. In some timeslots, the base station transmits; in the others, it receives.

FDD requires more spectrum than TDD, and TDD can also adapt to a skewed traffic workload: if the base station needs to send twice as much traffic as the phone, it can divide timeslots up so that the base station sends for 2 and then the phone sends for 1. TDD is arguably a better choice. However, we use FDD because it's what early technologies/spectrum assignments were based on.

----------------------------------------------------------------------
Narrowband access: FDD + TDMA (used mostly, but not exclusively, in 2G)
----------------------------------------------------------------------

The first approach to channel access is what is known as a narrowband system. Here, we take our bandwidth, W, and divide it into N channels; each channel has a width of W/N. Each particular cell gets to operate on n of the N channels (n < N). Channels are assigned to cells in such a way that nearby cells don't operate on the same channels; this prevents inter-cell interference.

[ In class: Next slide ]

Determining the appropriate value of n, and doing the assignment, depends heavily on the geometry of the network. We won't get into it, but if you're interested, research the "frequency reuse factor".
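To get a feel for the assignment problem, here is a minimal sketch in Python. The cell names and neighbor lists are made up for illustration (real assignments depend on actual cell geometry, as noted above); the idea is simply that no two neighboring cells may share a channel:

```python
# Greedy channel assignment: give each cell the lowest-numbered channel
# not already used by one of its neighbors. The layout below is a
# made-up toy example, not real cell geometry.
neighbors = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def assign_channels(neighbors):
    assignment = {}
    for cell in sorted(neighbors):
        used = {assignment[n] for n in neighbors[cell] if n in assignment}
        channel = 0
        while channel in used:
            channel += 1
        assignment[cell] = channel
    return assignment

print(assign_channels(neighbors))
# No two neighboring cells end up on the same channel.
```

This is just graph coloring; the "frequency reuse factor" mentioned above is, roughly, how many distinct channel groups the geometry forces you to use.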
Now: How do we allocate transmissions to users within a cell? TDMA. A concrete example: GSM networks -- a particular 2G standard -- use a bandwidth of 200kHz and timeslots of 577 microseconds to do TDMA within a band.

----------------------------------------------------------------------
Wideband access: CDMA (used mostly, but not exclusively, in 3G)
----------------------------------------------------------------------

There are a few problems with the above approach:

- Inefficient use of spectrum (a particular cell can't use the entire spectrum)

- Complex planning (determining which cells use which bands; and think about what happens when we add a base station)

- TDMA not great for all workloads

The next approach is a wideband system. This means that a cell is going to get to use the entire spectrum. The basic approach in these systems is to spread transmissions across the spectrum. One such technique is Direct-sequence Spread Spectrum (DSSS). Here, each phone's transmission is multiplied by a pseudorandom sequence. The receiver demodulates using the same pseudorandom sequence for a particular sender.

But how does this solve the problem? Won't transmissions still interfere? The pseudorandom sequences are such that the transmissions of the other senders, collectively, appear as noise. When the receiver demodulates with the correct pseudorandom sequence, it will select out the correct sender's signal and ignore all of the "noise" created by the other senders. A good analogy is to think of a room where everyone is speaking a different language. You can easily find the person speaking your language, and everything else is just noise.

In addition to getting better spectrum usage, a spread-spectrum technique is more robust to narrowband interference. E.g., interference focused on the 400MHz band (say) will only affect a very small part of the signal, and we can use error-correction codes or some similar method to handle that.
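The spreading/despreading idea can be sketched in a few lines of Python. The chip-sequence length (64) and the bit values here are arbitrary illustration parameters: two senders spread their bits with different pseudorandom chip sequences, the channel sums the two signals, and each receiver recovers its sender's bits by correlating against the matching sequence:

```python
import random

def spread(bits, code):
    # Multiply each data bit (+1 or -1) by the sender's chip sequence.
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # Correlate each chip-length window against the code; the sign of
    # the correlation recovers the original bit. The other sender's
    # contribution has near-zero correlation, so it acts like noise.
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

random.seed(2)
code_a = [random.choice([-1, 1]) for _ in range(64)]  # pseudorandom chips
code_b = [random.choice([-1, 1]) for _ in range(64)]

bits_a = [1, -1, 1, 1]
bits_b = [-1, -1, 1, -1]

# The channel simply sums both senders' spread signals.
channel = [x + y for x, y in zip(spread(bits_a, code_a),
                                 spread(bits_b, code_b))]

print(despread(channel, code_a))  # receiver using code_a recovers bits_a
print(despread(channel, code_b))  # receiver using code_b recovers bits_b
```

This is the "room full of languages" analogy in code: correlating with your own code picks your signal out, and everything else averages toward zero.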
There is one particular downside in getting this technique to work: power control. In order for all of the other senders' transmissions to "average out" into noise, they all need to appear to the receiver to have been transmitted at the same power. What that means is that a sender that is far away has to transmit with a lot of power; a sender close by needs to use less.

Analogy: You and your friend are talking in a crowded restaurant. When you are close to each other, you talk quietly. If you move far apart, you must talk more loudly to be heard. If you talk *too* loudly at any point, everyone else in the room will need to talk louder to be heard by their own receivers, and the problem will escalate.

The access technique that uses this spread-spectrum technique is known as CDMA: code-division multiple access. It has some good benefits:

- Frequency reuse

- Statistical multiplexing

- Scalable: as the number of users increases, so does the interference, *but* so does the processing gain

Some CDMA schemes use repetition coding. Better ones use convolutional codes with interleaving (they use Viterbi and everything!).

The operations I described here work more-or-less the same on the uplink and the downlink, with some exceptions. The downlink doesn't experience the "near-far" problem -- the problem that led to our power control issues -- but it takes a capacity hit: because there are only a few interfering base stations, we don't get the nice averaging effects we need for DSSS.

----------------------------------------------------------------------
Wideband access: OFDM (used in LTE)
----------------------------------------------------------------------

The main problem with CDMA is the tight power control that is necessary. This operation can be too expensive for some cellphones. (Also recall the original problem we were fixing when we moved to CDMA: frequency reuse.) A fix to that is to use OFDM: Orthogonal Frequency-division Multiplexing.
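Before describing OFDM, it's worth seeing the mathematical fact it rests on: sinusoids whose frequencies are spaced at multiples of 1/tau are orthogonal over a symbol period of length tau. A quick numerical check in pure Python (the symbol period and subcarrier indices here are arbitrary illustration values):

```python
import math

TAU = 1.0        # symbol period (arbitrary illustration value)
SAMPLES = 1000   # resolution of the numerical integration

def correlate(f1, f2):
    # Numerically integrate cos(2*pi*f1*t) * cos(2*pi*f2*t) over one
    # symbol period [0, TAU], using the midpoint rule.
    dt = TAU / SAMPLES
    total = 0.0
    for i in range(SAMPLES):
        t = (i + 0.5) * dt
        total += (math.cos(2 * math.pi * f1 * t)
                  * math.cos(2 * math.pi * f2 * t) * dt)
    return total

# Two subcarriers whose spacing is exactly 1/TAU:
f0, f1 = 5 / TAU, 6 / TAU

print(correlate(f0, f0))  # ~TAU/2: a carrier overlaps with itself
print(correlate(f0, f1))  # ~0: the two carriers are orthogonal
```

Even though the two carriers overlap heavily in frequency, their product integrates to (essentially) zero over a symbol period, which is exactly why a receiver can demodulate one subcarrier without interference from its neighbors.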
You're familiar with the idea of frequency division by now: we divide up the frequency spectrum, and transmit on different parts of it. But it's not particularly efficient in spectrum usage; there are a lot of gaps between the channels.

OFDM is different. We still divide the channel into frequencies, but there is some overlap. The key is to space the frequencies such that the parts of the signals that overlap cancel each other out. In OFDM, the carrier spacing is 1/tau, where tau is the symbol period. This ensures that the carriers are orthogonal: when the receiver demodulates one subcarrier, the contributions from all of the other subcarriers sum to zero.

[ In class: Next slide ]

There is still an issue of interference here: each sender in a cell uses a particular set of subcarriers, but senders from nearby cells might still interfere. This interference won't necessarily "average out" like it did in CDMA, because it might be from one particular sender in a nearby cell (or a few nearby cells). What we do is re-assign a particular user's subcarriers in each symbol period. This is known as frequency hopping.

----------------------------------------------------------------------
Error correction
----------------------------------------------------------------------

Even though LTE uses OFDM, there is still always the possibility that noise or interference will cause bit errors. LTE uses a method called "hybrid ARQ" to handle this. Hybrid ARQ is a combination of an error-correcting code and a retransmission scheme. The retransmission scheme is based, like virtually all retransmission schemes, on sending ACKs, but it has a few additional components beyond the reliable transport protocols we studied in 6.02.

Note that this means there is a notion of reliable transport between the phone and the base station, not just from one endpoint (the phone) to the other (another phone, a server on the Internet, etc.).
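Here is a heavily simplified sketch of the hybrid-ARQ idea in Python. Everything concrete in it is invented for illustration: the payload, the 5% chip-flip probability, the use of a 3x repetition code (a toy stand-in for LTE's convolutional/Turbo codes), and the frame format. Real LTE also soft-combines retransmissions rather than discarding failed ones. The shape of the scheme is what matters: a forward-error-correcting code plus a checksum, with retransmission when decoding fails:

```python
import random
import zlib

def encode(data):
    # Attach a CRC-32 to the payload, then protect every bit with a 3x
    # repetition code (a toy stand-in for convolutional/Turbo codes).
    framed = data + zlib.crc32(data).to_bytes(4, "big")
    bits = [(byte >> i) & 1 for byte in framed for i in range(8)]
    return [b for b in bits for _ in range(3)]

def decode(chips):
    # Majority-vote each bit, rebuild the bytes, then verify the CRC.
    bits = [1 if sum(chips[i:i + 3]) >= 2 else 0
            for i in range(0, len(chips), 3)]
    data = bytes(sum(bits[j + i] << i for i in range(8))
                 for j in range(0, len(bits), 8))
    payload, crc = data[:-4], data[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        return None  # uncorrectable errors: receiver sends a NAK
    return payload

def noisy_channel(chips, flip_prob):
    # Flip each chip independently with probability flip_prob.
    return [c ^ 1 if random.random() < flip_prob else c for c in chips]

random.seed(0)
payload = b"hello"
attempts, received = 0, None
while received is None:  # sender retransmits until the receiver ACKs
    attempts += 1
    received = decode(noisy_channel(encode(payload), flip_prob=0.05))
print(received, "after", attempts, "attempt(s)")
```

The repetition code silently fixes isolated chip flips; only when errors survive decoding does the CRC fail and trigger a retransmission, which is the "hybrid" in hybrid ARQ.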
The error correction is sometimes done with convolutional codes, just like we studied in 6.02, and other times with Turbo codes, which are a bit more advanced (sort of the next step after convolutional codes). It usually also includes a cyclic redundancy check of the data, which is *sort of* like sending a hash of the packet along with the packet. (Other layers use similar checks: Ethernet frames carry a CRC, and TCP packets carry a simpler checksum.)

======================================================================
Cell networks, end-to-end
======================================================================

[ In class: Next slide; Advance slide ]

So, now that you know all of this technology, what happens, for instance, when you send a text message on your phone?

First, you connect to the base station. Which involves some work!

- Your phone first does a "cell search". It listens for the Primary Synchronization Signal (and, subsequently, the Secondary Synchronization Signal) from the base station. These are broadcast periodically, and give frame timing and the cell's physical-layer identity.

- Once your phone has discovered a cell, it receives and decodes the cell's system information. Part of this information -- the Master Information Block -- is transmitted on a broadcast channel every 40ms. This data is modulated with QPSK and convolutionally encoded. Once the MIB is received, your phone can receive System Information Blocks on the normal downlink. These are typically broadcast every 80ms, though that timing can change.

- Now that your phone has all the system information, it needs to do one last thing: figure out the "uplink timing". This is, in some sense, the latency between the phone and the base station. This timing matters because how far away you are from the station changes a lot about how you send (sometimes it changes power constraints, other times it changes when you start sending in a timeslot, etc.). Think about TDMA: TDMA works such that at the receiver, data from a different sender is received every second (say).
For a sender's data to be received in the correct timeslot, it must start sending slightly before that timeslot. How far away it is changes how far in advance it needs to send. To do this, the sender sends a "random access preamble" and receives a response. The preamble is usually 1ms long, and there is a particular region of a frame in which you transmit it.

After finishing that process, your phone gets assigned a unique identity by the base station (which requires one more round-trip between the phone and the base station). The base station also does some work here to authenticate you with the phone network.

[ In class: Advance slide ]

Now it's time to send a packet. This we've more-or-less covered. Your phone will use OFDM to send to the base station, and the data will be encoded with a CRC and either a convolutional or Turbo code. If the data is received in error, the phone will be notified and will re-send. Once it is received correctly, the base station will send the data to the packet-switched core, which operates like any other packet-switched network.

======================================================================
Dealing with Voice
======================================================================

Way back in the beginning of this lecture, we talked about how packet-switched networks are terrible for voice. So what happens today, with everything being packet-switched? People do in fact still make phone calls. It turns out that there are a couple of different technologies for handling this problem:

1. VoIP, or Voice-over-IP. VoIP traffic uses a protocol called RTP -- the Real-time Transport Protocol -- which, unlike TCP, does *not* favor reliability over performance. And you can imagine why; TCP is not great for interactive applications. RTP forms the basis for many entertainment systems (e.g., streaming), not just VoIP. An RTP sender monitors the quality of its traffic so that it can send packets in sort of a semi-reliable manner.
It's okay if a few packets get lost -- those don't need to be retransmitted -- but it's not okay if loss starts to degrade the quality of the call -- then we should retransmit something. This monitoring is actually done via a separate protocol, RTCP (the RTP Control Protocol), which runs alongside RTP.

Despite this effort to monitor quality, VoIP calls tend to have low quality. In particular, they can experience both high latency and high variation in latency (also known as jitter). In some sense, this level of performance isn't surprising; the Internet was not designed for guarantees on quality.

2. Circuit-switched Fallback, CSFB. An LTE network that uses CSFB will process only data, no voice. For a voice call, it will fall back to a 2G or 3G network, which has a circuit-switched component. Here, the quality of the call will be high, but the setup for the call will take a while.

3. Voice over LTE, VoLTE. This is a not-too-dissimilar technique from VoIP. VoLTE has some advantages over VoIP in that it has less overhead (in particular, the packet headers are smaller), which means that it should be able to do a better job of improving quality. However, it's still very new, so there is not much published research about it, and it's hard to get a sense of whether this technology will solve the problem of voice over a packet-switched network.

So look what has happened: we started out with a phone network that was extremely well-suited to voice traffic, and then the way we use our phones evolved. This caused the networks themselves to evolve and adapt to changing traffic patterns. The result is a phone network that is actually not very well-suited to making phone calls.

======================================================================
Acknowledgments
======================================================================

I used a variety of resources for the content herein.
These two books are very helpful (Chapter 4 in the first provides a particularly good overview of cell networks; the second provides every detail of LTE that you could possibly want):

- D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, 2005

- E. Dahlman, S. Parkvall, J. Skold, 4G LTE/LTE-Advanced for Mobile Broadband

For more about OFDM:

- D. Matiae, OFDM as a Possible Modulation Technique for Multimedia Applications in the Range of MM Waves (http://web.cs.ucdavis.edu/~liu/289I/Material/OFDM_ubicom.pdf)

- OFDM Tutorial (http://www.csie.ntu.edu.tw/~hsinmu/courses/_media/wn_11fall/ofdm_tutorial.pdf)

For VoIP/CSFB/VoLTE:

- RFC 3550 (https://datatracker.ietf.org/doc/rfc3550/)

- Qualcomm, Circuit-switched Fallback White Paper (https://www.qualcomm.com/media/documents/files/circuit-switched-fallback-the-first-phase-of-voice-evolution-for-mobile-lte-devices.pdf)

And thanks to Aaron Schulman and Peter Iannucci for pointing me to resources and answering questions.