The question is: how do we demultiplex a sequence of IP datagrams that need to go to many different application processes? Let's consider a particular host with a single network interface bearing the IP address 24.156.79.20. Normally, every datagram received by the IP layer will have this value in the IP Destination Address field. Consecutive datagrams received by IP may contain a piece of a file you are downloading with your Web browser, an e-mail sent to you by your brother, and a line of text a buddy wrote in an IRC chat channel. How does the IP layer know which datagrams go where, if they all have the same IP address?

The first part of the answer lies in the Protocol field included in the header of each IP datagram. This field carries a code that identifies the protocol that sent the data in the datagram to IP. Since most end-user applications use TCP or UDP at the transport layer, the Protocol field in a received datagram tells IP to pass data to either TCP or UDP as appropriate.

Both UDP and TCP messages contain two addressing fields: a Source Port and a Destination Port. These are analogous to the source address and destination address fields at the IP level, but at a higher level of detail. They identify the originating process on the source machine and the destination process on the destination machine. They are filled in by the TCP or UDP software before transmission, and used to direct the data to the correct process on the destination device.

TCP and UDP port numbers are 16 bits in length, so valid port numbers can theoretically take on values from 0 to 65,535. As we will see in the next topic, these values are divided into ranges for different purposes, with certain ports reserved for particular uses. One fact that is sometimes a bit confusing is that both UDP and TCP use the same range of port numbers, and they are independent.
So, in theory, it is possible for UDP port number 77 to refer to one application process and TCP port number 77 to refer to an entirely different one. There is no ambiguity, at least to the computers, because as mentioned above, each IP datagram contains a Protocol field that specifies whether it is carrying a TCP message or a UDP message. IP passes the datagram to either TCP or UDP, which then sends the message on to the right process using the port number in the TCP or UDP header. This mechanism is illustrated in Figure 198.

In practice, having TCP and UDP use different port numbers for the same application would be confusing, especially for the reserved port numbers used by common applications. For this reason, by convention, most reserved port numbers are reserved for both TCP and UDP. For example, port #80 is reserved for the Hypertext Transfer Protocol (HTTP) for both TCP and UDP, even though HTTP only uses TCP.

In summary, application process multiplexing and demultiplexing in TCP/IP is implemented using the IP Protocol field and the UDP/TCP Source Port and Destination Port fields. Upon transmission, the Protocol field is given a number to indicate whether TCP or UDP was used, and the port numbers are filled in to indicate the sending and receiving software processes. The device receiving the datagram uses the Protocol field to determine whether TCP or UDP was used, and then passes the data to the software process indicated by the Destination Port number.
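The demultiplexing logic described above can be sketched as a lookup keyed on (protocol, port). This is a toy illustration, not a real stack: the registry and process names are invented for the example.

```python
# IP uses the Protocol field to pick TCP (6) or UDP (17); the chosen
# transport then uses the Destination Port to pick the receiving process.
PROTO_TCP, PROTO_UDP = 6, 17

# Hypothetical listener registry: (protocol, port) -> process name.
# Note TCP port 53 and UDP port 53 are independent entries.
listeners = {
    (PROTO_TCP, 80): "web-server",
    (PROTO_UDP, 53): "dns-server",
    (PROTO_TCP, 53): "dns-server-tcp",
}

def demultiplex(protocol: int, dest_port: int) -> str:
    """Return the process a datagram's payload should be delivered to."""
    try:
        return listeners[(protocol, dest_port)]
    except KeyError:
        return "no-listener"  # a real stack would respond with ICMP or RST

print(demultiplex(PROTO_TCP, 80))  # -> web-server
print(demultiplex(PROTO_UDP, 53))  # -> dns-server
```

Because the key includes the protocol, UDP port 77 and TCP port 77 would map to different entries, matching the behavior described above.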

Higher-Layer Data Transfer: An application sends a message to the UDP software.
UDP Message Encapsulation: The higher-layer message is encapsulated into the Data field of a UDP message. The headers of the UDP message are filled in, including the Source Port of the application that sent the data to UDP and the Destination Port of the intended recipient. The checksum value may also be calculated.
Transfer Message to IP: The UDP message is passed to IP for transmission.
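The encapsulation step can be sketched with Python's struct module. The field layout (source port, destination port, length, checksum, each 16 bits) matches the real UDP header; the checksum is simply left at zero here, which IPv4 permits as "no checksum computed" — a real stack would compute it over a pseudo-header plus the data.

```python
import struct

def build_udp_message(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Sketch of UDP encapsulation: an 8-byte header prepended to the data."""
    length = 8 + len(payload)   # Length field covers header plus data
    checksum = 0                # optional in IPv4; left uncomputed here
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

# Illustrative ports: an ephemeral client port sending to DNS (port 53).
msg = build_udp_message(3022, 53, b"query")
assert len(msg) == 13           # 8-byte header + 5-byte payload
```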

TCP is designed to have applications send data to it as a stream of bytes, rather than requiring fixed-size messages to be used. This provides maximum flexibility for a wide variety of uses, because applications don't need to worry about data packaging, and can send files or messages of any size. TCP takes care of packaging these bytes into messages called segments.

Consider for example an application that is sending database records. It needs to transmit record #579 from the Employees database table, followed by record #581 and record #611. It sends these records to TCP, which treats them all collectively as a stream of bytes. TCP will package these bytes into segments, but in a manner the application cannot predict. It is possible that each record will end up in a different segment, but more likely they will all be in one segment, or parts of each will end up in different segments, depending on their length. The records themselves must therefore have some sort of explicit markers so the receiving device can tell where one record ends and the next starts. Since applications send data to TCP as a stream of bytes and not prepackaged messages, each application must use its own scheme to determine where one application data element ends and the next begins.

TCP is said to treat data coming from an application as a stream; thus the description of TCP as stream-oriented. Each application sends the data it wishes to transmit as a steady stream of octets (bytes). It doesn't need to carve them into blocks, or worry about how lengthy streams will get across the internetwork. It just "pumps bytes" to TCP.
Since TCP works with individual bytes of data rather than discrete messages, it must use an identification scheme that works at the byte level to implement its data transmission and tracking system. This is accomplished by assigning each byte TCP processes a sequence number.
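Because TCP preserves no record boundaries, the database-record example needs explicit markers in the application's own data. A common scheme, shown here as an illustrative sketch, is length-prefix framing: each record is preceded by its length, so the receiver can rebuild record boundaries no matter how TCP segmented the stream.

```python
import struct

def frame(record: bytes) -> bytes:
    """Prefix each record with a 2-byte length so the receiver can find
    record boundaries in the byte stream TCP delivers."""
    return struct.pack("!H", len(record)) + record

def unframe(stream: bytes) -> list:
    """Split a received byte stream back into the original records."""
    records, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("!H", stream, i)
        records.append(stream[i + 2 : i + 2 + n])
        i += 2 + n
    return records

# The three records may arrive in one segment or several; the framing,
# not TCP, is what preserves their boundaries.
stream = frame(b"record-579") + frame(b"record-581") + frame(b"record-611")
assert unframe(stream) == [b"record-579", b"record-581", b"record-611"]
```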

Transcript of "Transport Layer [Autosaved]"

1.
Services provided by transport layer protocols <ul><li>Protocols running at the transport layer provide services to upper layers </li></ul><ul><ul><li>To enable software applications in higher layers to work over an internetwork </li></ul></ul><ul><ul><li>To allow connections to be established and maintained between software services on possibly distant machines </li></ul></ul><ul><ul><li>To enable applications to send data in a reliable way </li></ul></ul><ul><ul><ul><li>without needing to worry about error correction, lost data or flow management, even though the underlying network-layer protocols are often unreliable and unacknowledged </li></ul></ul></ul>

2.
Addressing at Network and Transport layers <ul><li>The Internet Protocol (IP) provides the addressing function on a TCP/IP network </li></ul><ul><li>Network-layer addresses uniquely identify each network interface </li></ul><ul><ul><li>They serve as the mechanism by which data is routed to the correct network on the internetwork </li></ul></ul><ul><ul><li>and then to the correct device on that network </li></ul></ul><ul><li>An additional level of addressing occurs at the transport layer in TCP/IP </li></ul><ul><li>TCP and UDP use the concepts of ports and sockets for virtual software addressing </li></ul><ul><ul><li>To enable multiplexing and demultiplexing using ports </li></ul></ul>

5.
Client server ports <ul><li>Well-known and registered port numbers are needed for server processes </li></ul><ul><li>Client processes don't use well-known or registered ports </li></ul><ul><li>The server must still know the client's port number to send a reply </li></ul><ul><li>Each client process uses a temporary port number, called an ephemeral port number </li></ul><ul><li>These port numbers are assigned in a pseudo-random way </li></ul>

8.
Sockets: Process Identification <ul><li>A socket is the combination of </li></ul><ul><ul><li>The IP address of the host the process runs on </li></ul></ul><ul><ul><li>The port number which has been assigned to it </li></ul></ul><ul><li>Notation </li></ul><ul><ul><li><IP Address>:<Port Number> </li></ul></ul><ul><li>E.g., the socket corresponding to the HTTP server would be 41.199.222.3:80 </li></ul>

9.
Socket Pairs: Connection Identification <ul><li>The exchange of data between a pair of devices consists of a series of messages sent from a socket on one device to a socket on the other </li></ul><ul><li>Each device will normally have multiple such simultaneous conversations going on </li></ul><ul><li>A connection is established between each pair of devices for the duration of the communication session </li></ul><ul><li>Each connection is uniquely identified using the combination of the client socket and server socket </li></ul><ul><li>E.g., a connection between two devices can be described using the socket pair (41.199.222.3:80, 177.41.72.6:3022) </li></ul>
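A hypothetical connection table keyed by the socket pair might look like this sketch; the addresses reuse the example above, and the table itself is purely illustrative.

```python
# Each live connection is identified by its (server socket, client socket)
# pair, so one server socket can serve many clients without ambiguity.
connections = {}

def socket_pair(server_ip, server_port, client_ip, client_port):
    """Build the socket-pair key in the <IP>:<port> notation from the slides."""
    return (f"{server_ip}:{server_port}", f"{client_ip}:{client_port}")

key = socket_pair("41.199.222.3", 80, "177.41.72.6", 3022)
connections[key] = {"state": "ESTABLISHED"}

# A second connection from the same client host to the same server socket
# uses a different ephemeral port, so it gets a distinct key.
key2 = socket_pair("41.199.222.3", 80, "177.41.72.6", 3023)
assert key != key2
```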

10.
UDP <ul><li>The User Datagram Protocol (UDP) was developed for use by application protocols that do not require reliability, acknowledgment or flow control features at the transport layer. </li></ul><ul><li>It is designed to be simple and fast, providing only transport layer addressing in the form of UDP ports and an optional checksum capability, and little else. </li></ul>

12.
What UDP Does Not Do <ul><li>UDP does not </li></ul><ul><ul><li>Establish connections before sending data. </li></ul></ul><ul><ul><li>Provide acknowledgments to show that data was received. </li></ul></ul><ul><ul><li>Provide any guarantees that its messages will arrive. </li></ul></ul><ul><ul><li>Detect lost messages and retransmit them. </li></ul></ul><ul><ul><li>Ensure that data is received in the same order that it was sent. </li></ul></ul><ul><ul><li>Provide any mechanism to manage the flow of data between devices, or handle congestion. </li></ul></ul>

13.
Use of UDP <ul><li>When an application values timely delivery over reliable delivery </li></ul><ul><li>TCP's retransmission of lost data would be of limited or even no value in such cases </li></ul><ul><li>When a simple application can handle the potential loss of an IP datagram, and the other features of TCP are not required </li></ul><ul><li>UDP is also used for applications that require multicast or broadcast transmissions </li></ul><ul><ul><li>TCP supports only unicast communication between two devices </li></ul></ul>

16.
TCP <ul><li>The primary transport layer protocol in the TCP/IP suite is the Transmission Control Protocol (TCP) . </li></ul><ul><li>TCP is a connection-oriented, acknowledged, reliable, fully-featured protocol designed to provide applications with a reliable way to send data using the unreliable Internet Protocol. </li></ul>

17.
TCP <ul><li>It allows applications </li></ul><ul><ul><li>To send data as a stream of bytes </li></ul></ul><ul><ul><li>And automatically packages them into appropriately-sized segments for transmission </li></ul></ul><ul><li>It uses a special sliding window acknowledgment system </li></ul><ul><ul><li>To ensure that all data is received by its recipient </li></ul></ul><ul><ul><li>To handle necessary retransmissions </li></ul></ul><ul><ul><li>To provide flow control so connected devices can manage the rate at which data is sent </li></ul></ul>

23.
TCP FSM <ul><li>The TCP finite state machine describes the sequence of steps taken by both devices in a TCP session as they establish, manage and close the connection. </li></ul><ul><li>Three types of message control transitions between states </li></ul><ul><ul><li>SYN: A synchronize message, used to initiate and establish a connection. It is so named since one of its functions is to synchronize sequence numbers between devices. </li></ul></ul><ul><ul><li>FIN: A finish message, which is a TCP segment with the FIN bit set, indicating that a device wants to terminate the connection. </li></ul></ul><ul><ul><li>ACK: An acknowledgment, indicating receipt of a message such as a SYN or a FIN. </li></ul></ul>
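A minimal sketch of a few of these transitions (client-side active open and active close only): the state names follow RFC 793, but the event labels are informal and the table is far from the complete machine.

```python
# Partial TCP state machine: "event/action" strings describe what is
# received or requested, and what is sent in response.
TRANSITIONS = {
    ("CLOSED",      "active_open/send SYN"):   "SYN-SENT",
    ("SYN-SENT",    "recv SYN+ACK/send ACK"):  "ESTABLISHED",
    ("ESTABLISHED", "close/send FIN"):         "FIN-WAIT-1",
    ("FIN-WAIT-1",  "recv ACK"):               "FIN-WAIT-2",
    ("FIN-WAIT-2",  "recv FIN/send ACK"):      "TIME-WAIT",
    ("TIME-WAIT",   "2*MSL timeout"):          "CLOSED",
}

def step(state: str, event: str) -> str:
    # Unknown events leave the state unchanged in this simplified sketch.
    return TRANSITIONS.get((state, event), state)

s = "CLOSED"
for e in ["active_open/send SYN", "recv SYN+ACK/send ACK"]:
    s = step(s, e)
assert s == "ESTABLISHED"
```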

26.
Sequence number synchronisation <ul><li>As part of the process of connection establishment, each of the two devices in a TCP connection informs the other of the sequence number it plans to use for its first data transmission by putting the preceding sequence number in the Sequence Number field of its SYN message. </li></ul><ul><li>The other device confirms this by incrementing that value and putting it into the Acknowledgment Number field of its ACK, telling the first device that this is the sequence number it expects for the first data transmission. </li></ul><ul><li>This process is called sequence number synchronization . </li></ul>
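The synchronization arithmetic can be illustrated with made-up initial sequence numbers (ISNs); the dictionaries stand in for the relevant TCP header fields.

```python
# Illustrative ISNs; real TCP chooses these values per connection.
client_isn = 4091
server_isn = 27600

syn     = {"SEQ": client_isn}                          # client -> server
syn_ack = {"SEQ": server_isn, "ACK": syn["SEQ"] + 1}   # server -> client
ack     = {"ACK": syn_ack["SEQ"] + 1}                  # client -> server

# After the handshake, each side expects the other's first data
# byte to carry sequence number ISN + 1.
assert syn_ack["ACK"] == client_isn + 1
assert ack["ACK"] == server_isn + 1
```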

29.
Normal Connection Termination <ul><li>One device sends a FIN message to tell the other device that it wants to end the connection </li></ul><ul><li>This FIN is acknowledged by the other device </li></ul><ul><li>When the responding device is ready, it too sends a FIN </li></ul><ul><li>This response FIN is acknowledged as well </li></ul><ul><li>After waiting a period of time for that ACK to be received, the session is closed </li></ul>

31.
The TIME-WAIT State <ul><li>The TIME-WAIT state is required for two main reasons. </li></ul><ul><ul><li>The first is to provide enough time to ensure that the final ACK is received by the other device, and to retransmit it if it is lost. </li></ul></ul><ul><ul><li>The second is to provide a “buffering period” between the end of this connection and any subsequent one. Without this period, it is possible that packets from different connections could be mixed, creating confusion. </li></ul></ul><ul><li>The standard specifies that the client should wait double a particular length of time, called the maximum segment lifetime (MSL), before finishing the close of the connection. </li></ul>

32.
Transmission control block <ul><li>Since each connection is distinct, data about each connection must be maintained separately. </li></ul><ul><li>TCP uses a special data structure for this purpose, called a transmission control block (TCB) . </li></ul><ul><li>Each device maintains its own TCB for the connection. </li></ul><ul><li>The TCB contains all information about the connection, such as: </li></ul><ul><ul><li>The two socket numbers that identify it </li></ul></ul><ul><ul><li>Pointers to the buffers where incoming and outgoing data are held </li></ul></ul><ul><ul><li>The TCB is also used to implement the sliding window mechanism. </li></ul></ul><ul><ul><ul><li>It holds variables that keep track of the number of bytes received and acknowledged, </li></ul></ul></ul><ul><ul><ul><li>Bytes received and not yet acknowledged, </li></ul></ul></ul><ul><ul><ul><li>The current window size, and so forth </li></ul></ul></ul>

33.
TCB set up <ul><li>Before the process of setting up a TCP connection can begin, the devices on each end must perform some “prep work”. </li></ul><ul><li>One of the tasks required to prepare for the connection is to set up the TCB that will be used to hold information about it. </li></ul><ul><li>This is done at the very start of the connection establishment process, when each device first transitions out of the CLOSED state. </li></ul>

34.
TCP requirements <ul><li>Two key requirements of the protocol: </li></ul><ul><ul><li>Reliability: Ensuring that data that is sent actually arrives at its destination, and if not, detecting this and re-sending the data. </li></ul></ul><ul><ul><li>Data Flow Control: Managing the rate at which data is sent so that it does not overwhelm the device that is receiving it. </li></ul></ul>

35.
PAR <ul><li>Reliability in communications follows a basic rule </li></ul><ul><ul><li>A device sends back an acknowledgment each time it successfully receives a transmission </li></ul></ul><ul><li>If a transmission is not acknowledged after a period of time, it is retransmitted by its sender </li></ul><ul><li>This system is called positive acknowledgment with retransmission (PAR) </li></ul><ul><li>One drawback: the transmitter cannot send the next message until the previous one is acknowledged </li></ul>

37.
TCP ACK & Retransmission <ul><li>TCP acknowledgments are cumulative </li></ul><ul><li>They tell the transmitter that all the bytes up to the sequence number indicated in the acknowledgment were received successfully </li></ul><ul><li>If bytes are received out of order, they cannot be acknowledged until all the preceding bytes have been received </li></ul><ul><li>TCP includes a method for timing transmissions and retransmitting lost segments if necessary </li></ul>

38.
Managing Retransmissions <ul><li>Each time a segment is sent, a copy is placed on the retransmission queue </li></ul><ul><li>A timer starts at a predetermined value and counts down over time </li></ul><ul><li>If an acknowledgment is received for a segment before its timer expires, the segment is removed from the retransmission queue </li></ul><ul><li>If the timer expires before an acknowledgment is received, the segment is retransmitted </li></ul><ul><li>There is no guarantee that a retransmitted segment will be received </li></ul><ul><li>If it is not, the retransmission timer is reset, the segment is retransmitted again, and the process repeats </li></ul>
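The queue logic above can be sketched as follows; the class, sequence numbers, and timing values are illustrative, not a real TCP implementation, and time is supplied explicitly so the behavior is deterministic.

```python
class RetransmissionQueue:
    """Toy retransmission queue: hold a copy of each sent segment with
    an expiry time, drop it on ACK, report it for retransmission on timeout."""

    def __init__(self, timeout: float):
        self.timeout = timeout
        self.pending = {}                  # seq -> (segment copy, expiry time)

    def on_send(self, seq, segment, now):
        self.pending[seq] = (segment, now + self.timeout)

    def on_ack(self, seq):
        self.pending.pop(seq, None)        # acknowledged in time: drop the copy

    def expired(self, now):
        """Sequence numbers whose timers ran out; these must be retransmitted."""
        return [s for s, (_, t) in self.pending.items() if now >= t]

q = RetransmissionQueue(timeout=2.0)
q.on_send(100, b"segment-A", now=0.0)
q.on_send(200, b"segment-B", now=0.5)
q.on_ack(100)                              # A acknowledged before its timer expired
assert q.expired(now=3.0) == [200]         # B timed out; retransmit it
```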

40.
Retransmission Time <ul><li>The length of the retransmission timer is very important </li></ul><ul><li>If it is set too low </li></ul><ul><ul><li>A segment that was actually received might be retransmitted </li></ul></ul><ul><ul><li>because the sender didn't wait long enough for the acknowledgment </li></ul></ul><ul><li>If it is set too long </li></ul><ul><ul><li>The sender wastes time waiting for an acknowledgment that will never arrive </li></ul></ul><ul><ul><li>reducing overall performance </li></ul></ul>

41.
Choosing the Retransmission Time <ul><li>Ideally, the retransmission timer should be set just slightly larger than the round-trip time (RTT) </li></ul><ul><li>Why is the RTT difficult to determine? </li></ul><ul><ul><li>Differences in TCP connection distances: the RTT varies enormously between connections to nearby and faraway hosts </li></ul></ul><ul><ul><li>Transient delays and variability: the amount of time it takes to send data between any two devices will vary over time due to various happenings on the internetwork: fluctuations in traffic, router loads and so on </li></ul></ul>

43.
RTT Calculation by Karn's Algorithm <ul><li>Karn's algorithm (named for its inventor, Phil Karn) </li></ul><ul><ul><li>Does not use round-trip times measured on retransmitted segments </li></ul></ul><ul><ul><li>This eliminates the problem of acknowledgment ambiguity (not knowing whether an ACK is for the original transmission or a retransmission) </li></ul></ul><ul><li>Start by setting the timer based on the current average round-trip time </li></ul><ul><li>On retransmission, the timer is not reset to the same value but is “backed off” (increased) using a multiplier (typically 2) to give the retransmission more time to be received </li></ul><ul><li>The timer continues to be increased until a retransmission is successful, up to a certain maximum value </li></ul>

44.
RTT Calculation by Karn's Algorithm <ul><li>The round-trip timer is kept at the longer (backed-off) value until a valid round-trip time can be measured on a segment that is sent and acknowledged without retransmission </li></ul><ul><li>This permits a device to respond with longer timers temporarily, while eventually having the round-trip time settle back to a long-term average when normal conditions resume </li></ul>
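The backoff described on these slides can be sketched as a small function; the doubling multiplier and the 60-second cap are typical choices, not mandated values.

```python
def backed_off_rto(base_rto: float, retransmissions: int,
                   multiplier: float = 2.0, max_rto: float = 60.0) -> float:
    """Exponential backoff: each retransmission multiplies the timer
    (typically by 2), up to a maximum; the value stays backed off until a
    segment is acknowledged without retransmission."""
    return min(base_rto * multiplier ** retransmissions, max_rto)

assert backed_off_rto(1.0, 0) == 1.0     # no retransmissions: base timer
assert backed_off_rto(1.0, 3) == 8.0     # doubled three times
assert backed_off_rto(1.0, 10) == 60.0   # capped at the maximum
```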

45.
TCP sliding window <ul><li>A variation on the enhanced PAR system </li></ul><ul><li>Adapted to support TCP's stream orientation </li></ul><ul><li>Each device keeps track of the status of the byte stream </li></ul><ul><li>dividing the data into four conceptual categories: </li></ul><ul><ul><li>Bytes sent and acknowledged </li></ul></ul><ul><ul><li>Bytes sent but not yet acknowledged </li></ul></ul><ul><ul><li>Bytes not yet sent but that can be sent immediately </li></ul></ul><ul><ul><li>Bytes not yet sent that cannot be sent until the recipient signals that it is ready for them </li></ul></ul>

47.
Send, usable windows <ul><li>The send window is the key to the entire TCP sliding window system: </li></ul><ul><ul><li>it represents the maximum number of unacknowledged bytes a device is allowed to have outstanding at once. </li></ul></ul><ul><li>The usable window is the amount of the send window that the sender is still allowed to send at any point in time; </li></ul><ul><ul><li>it is equal to the size of the send window less the number of unacknowledged bytes already transmitted. </li></ul></ul>
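The relationship between the send window and the usable window can be shown with illustrative byte counts (all numbers here are made up for the example).

```python
# Illustrative snapshot of a sender's view of the byte stream.
send_window = 360    # max unacknowledged bytes allowed outstanding at once
last_acked  = 31     # highest byte number sent and acknowledged
last_sent   = 170    # highest byte number sent so far

sent_unacked  = last_sent - last_acked       # bytes sent but not yet acknowledged
usable_window = send_window - sent_unacked   # bytes that may still be sent now

assert sent_unacked == 139
assert usable_window == 221
```

When an acknowledgment arrives, `last_acked` advances, `sent_unacked` shrinks, and the usable window grows: the window "slides" to the right.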

50.
Implementing sliding window <ul><li>Three essential fields in the TCP segment </li></ul><ul><ul><li>The Sequence Number field indicates the number of the first byte of data being transmitted. </li></ul></ul><ul><ul><li>The Acknowledgment Number is used to acknowledge data received by the device sending this segment. </li></ul></ul><ul><ul><li>The Window field tells the recipient of the segment the size to which it should set its send window </li></ul></ul>

51.
Window slide <ul><li>When a device gets an acknowledgment for a range of bytes, it knows they have been successfully received by their destination. </li></ul><ul><li>It moves them from the “sent but unacknowledged” to the “sent and acknowledged” category. </li></ul><ul><li>This causes the send window to slide to the right, allowing the device to send more data. </li></ul>

53.
Flow control <ul><li>The TCP sliding window system is used not just for ensuring reliability through acknowledgments and retransmission </li></ul><ul><li>it is also the basis for TCP’s flow control mechanism. </li></ul><ul><li>By increasing or reducing the size of its receive window </li></ul><ul><ul><li>a device can raise or lower the rate at which its connection partner sends it data. </li></ul></ul><ul><ul><li>In the case where a device becomes extremely busy, it can even reduce the receive window to zero, closing it </li></ul></ul><ul><ul><li>this will halt any further transmissions of data until the window is reopened </li></ul></ul>

55.
Silly window syndrome <ul><li>The sliding window mechanism does not ensure a minimum segment size </li></ul><ul><li>A shrinking window can result in the inefficient transmission of many small segments </li></ul>

57.
SWS avoidance algorithm <ul><li>Receiver SWS avoidance </li></ul><ul><ul><li>Restrict moving the right edge of the window by too small an amount </li></ul></ul><ul><ul><li>Instead, reduce the advertised window size to 0 until the window can grow by a worthwhile amount </li></ul></ul><ul><ul><li>The right edge should only be moved by at least half the buffer size or one MSS, whichever is less </li></ul></ul>
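The receiver-side rule can be sketched as follows; the buffer and MSS values in the checks are illustrative.

```python
def advertised_window(free_buffer: int, buffer_size: int, mss: int) -> int:
    """Receiver SWS avoidance sketch: advertise zero rather than a window
    smaller than min(MSS, half the receive buffer)."""
    threshold = min(mss, buffer_size // 2)
    return free_buffer if free_buffer >= threshold else 0

# Only 100 bytes free: too small to be worth advertising, so advertise 0.
assert advertised_window(free_buffer=100, buffer_size=8192, mss=1460) == 0
# 2000 bytes free exceeds the threshold, so it can be advertised.
assert advertised_window(free_buffer=2000, buffer_size=8192, mss=1460) == 2000
```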

58.
Sender SWS avoidance algorithm <ul><li>Nagle's algorithm (named for John Nagle) </li></ul><ul><ul><li>Data can be sent immediately as long as all previously sent data has been acknowledged </li></ul></ul><ul><ul><li>When there is unacknowledged data outstanding </li></ul></ul><ul><ul><ul><li>Do not send until all outstanding data is acknowledged, or </li></ul></ul></ul><ul><ul><ul><li>until a full segment's worth of data has accumulated </li></ul></ul></ul>
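The decision Nagle's algorithm makes on each write can be sketched like this; it is an illustrative simplification of the rule on this slide, not the full RFC 896 logic.

```python
def nagle_should_send(data_len: int, unacked_bytes: int, mss: int) -> bool:
    """Return True if the pending data may be transmitted now."""
    if unacked_bytes == 0:
        return True            # everything sent so far is acknowledged: send now
    return data_len >= mss     # otherwise only send full-size segments

# Nothing outstanding: even a 1-byte write goes out immediately.
assert nagle_should_send(data_len=1, unacked_bytes=0, mss=1460) is True
# Data outstanding and only a tiny amount buffered: hold it back.
assert nagle_should_send(data_len=1, unacked_bytes=500, mss=1460) is False
# A full segment has accumulated: send it even with data outstanding.
assert nagle_should_send(data_len=1460, unacked_bytes=500, mss=1460) is True
```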