Posts

Network Address Translation (NAT)

Network Address Translation is a popular network engineering technique. It brings its own terminology and can be implemented in different flavours.

Why network address translation? The basic advantage of network address translation in the Internet world is to allow a set of internal hosts to communicate with hosts on the Internet using a single public IP address. Recall that, conceptually, a private IP address cannot communicate with a public IP address:

Public to Private IP communication © Geoff Huston, Cisco.com

Network address translation terminology

Public-side IP address : the public IP address to which the internal IP address is mapped during NAT traversal
Public-side source port : the source port number to which the original source port number is mapped when an internal host communicates with an external host, in NAPT
Trigger outgoing packet : the outgoing packet, sent from the internal host towards the external host, that triggers the NAT mapping. Without a trigger o...
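The terminology above can be sketched as a tiny NAPT translation table. This is an illustrative sketch only, not a real NAT implementation: the class name, the public IP address, and the port pool are all assumptions made for the example.

```python
# Minimal sketch of a NAPT translation table (illustrative; names, addresses,
# and the port pool are invented for this example). A trigger outgoing packet
# creates a mapping from the internal (IP, port) pair to the single public IP
# plus an allocated public-side source port; return traffic is translated back
# using that mapping.
import itertools

class Napt:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)   # pool of public-side ports
        self.out = {}    # (internal_ip, internal_port) -> public-side port
        self.back = {}   # public-side port -> (internal_ip, internal_port)

    def translate_outgoing(self, internal_ip, internal_port):
        """Map the internal source (IP, port) to (public IP, public-side port)."""
        key = (internal_ip, internal_port)
        if key not in self.out:                # trigger packet: create the mapping
            port = next(self._ports)
            self.out[key] = port
            self.back[port] = key
        return self.public_ip, self.out[key]

    def translate_incoming(self, public_port):
        """Map a packet arriving on a public-side port back to the internal host."""
        return self.back.get(public_port)      # None if there was no trigger packet

nat = Napt("203.0.113.10")                     # assumed public-side IP address
ip, port = nat.translate_outgoing("192.168.1.5", 51000)
print(ip, port)
print(nat.translate_incoming(port))            # ('192.168.1.5', 51000)
```

Note how an inbound packet with no prior trigger outgoing packet simply has no mapping, which is exactly why unsolicited traffic from the Internet cannot reach an internal host.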

Dynamics Of AIMD For Multiple Flows

We have seen in AIMD for a single flow that R = W(t) / RTT(t). In the case of multiple flows, RTT(t) tends to be constant, so the sending rate varies with the window size: R = W(t) / RTT, i.e. R is proportional to W(t).

The queue occupancy B for multiple flows has a smooth geometric intuition and is almost constant in time, compared with the queue occupancy for a single flow.

The sending rate tends to depend solely on the evolution of the window size over time – © Nick McKeown, Stanford University

AIMD is sensitive to packet loss, since R can be written as a function of the drop probability p:

R = √(3/2) * 1/(RTT * √p)

Regarding packet loss, remember that lost packets on one flow do not impact another flow.

References
CS144, Stanford University
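The rate formula above is easy to explore numerically. A small sketch (the RTT and drop-probability values are made up for illustration):

```python
# The AIMD throughput relation R = sqrt(3/2) / (RTT * sqrt(p)):
# rate falls with RTT and with the square root of the drop probability p.
import math

def aimd_rate(rtt_s, p):
    """Sending rate in packets per second for a given RTT (seconds) and drop probability p."""
    return math.sqrt(3 / 2) / (rtt_s * math.sqrt(p))

r1 = aimd_rate(0.1, 0.01)    # 100 ms RTT, 1% loss
r2 = aimd_rate(0.1, 0.005)   # same RTT, half the loss
print(round(r2 / r1, 3))     # halving p raises the rate by sqrt(2) ≈ 1.414
```

This makes the sensitivity concrete: throughput scales as 1/√p, so even modest loss changes move the sending rate noticeably.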

Dynamics of AIMD For A Single Flow

AIMD is a mechanism used by TCP to manage congestion on a bottleneck link. The figure below shows a sample bottleneck link, where both hosts compete for the bottleneck link. © Ratul Mahajan, University of Washington

AIMD increases and decreases the size of the window to control the rate of the transmitter (the sending rate). AIMD increases the amount of outstanding segments up to the point of network congestion. Once congestion occurs, AIMD decreases that amount. AIMD does not impact the egress link rate; the egress link stays at 100% usage.

With AIMD, if we drew the geometric intuition of the cumulative bits of outstanding segments sent over time, we would get a sawtooth shape. That's why we call it the AIMD sawtooth.

AIMD sawtooth – © imada.sdu.dk

Does WFQ solve the congestion problem? No. It's true that each flow will be put in its own queue and that the scheduler will serve queues in a fair manner, but the source host will not be notified in times of congestion. So it will still sen...
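The sawtooth can be reproduced with a few lines of simulation. A minimal sketch, where the fixed loss threshold is an illustrative stand-in for the point at which the bottleneck queue overflows:

```python
# AIMD sawtooth sketch: the window grows by one segment per RTT (additive
# increase) and is halved when a loss occurs (multiplicative decrease).
# The loss_threshold is an invented stand-in for bottleneck-queue overflow.

def aimd_window(rtts, start=1, loss_threshold=16):
    """Return the congestion window (in segments) after each RTT."""
    w, trace = start, []
    for _ in range(rtts):
        w += 1                     # additive increase: +1 segment per RTT
        if w >= loss_threshold:    # congestion detected (packet loss)
            w = w // 2             # multiplicative decrease: halve the window
        trace.append(w)
    return trace

# the window climbs to 15, is halved to 8, then climbs again: the sawtooth
print(aimd_window(20))
```

Plotting the returned trace over time gives exactly the sawtooth shape described above.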

Network-based Congestion Control vs Host-based Congestion Control

ECN is a network-based congestion control mechanism. ECN is distributed because any packet switch can signal congestion. Host-based congestion control occurs at the source host (the transmitter). The transmitter can detect congestion through signals such as packet retransmissions, duplicate ACKs, or a reduced window size.

With congestion control, TCP tries to determine the amount of outstanding segments that can be sent at a time without overwhelming the network. TCP relies on the Sliding Window mechanism to perform congestion control. In fact, TCP defines the concept of a congestion window (cwnd), and the relationship between cwnd and the sender's window size is given by the following formula:

Sender's window size = min(Advertised window, congestion window)

where:
Advertised window: the window size as announced by the receiver (this is the flow control window)
congestion window (cwnd): the window size calculated by the AIMD algorithm at the transmitter side
the congestion window c...
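The min() formula above is simple but worth making concrete: whichever limit is tighter, flow control or congestion control, wins. A small sketch with invented byte values:

```python
# The sender's effective window combines flow control and congestion control:
# it is the minimum of the receiver's advertised window and cwnd.

def sender_window(advertised_window, cwnd):
    """Effective send window in bytes."""
    return min(advertised_window, cwnd)

# receiver allows 64 KB, but congestion limits the sender to 16 KB
print(sender_window(65_535, 16_384))   # 16384

# congestion allows 32 KB, but a slow receiver only advertises 8 KB
print(sender_window(8_192, 32_768))    # 8192
```

The first case is congestion-limited; the second is receiver-limited. The same formula covers both.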

Queueing Properties

We study in this network engineering article how queueing mechanisms work, and we take one example: Weighted Fair Queueing.

In a queue, we can model an individual arrival process as a deterministic process. However, when aggregated, arrival processes are random events, so we model them with a random process. The study of random arrival processes is part of a discipline called Queueing theory. This study shows that random arrival processes have interesting properties:

Traffic burstiness increases delay.
Determinism reduces delay.

Poisson process: models an aggregation of random events, e.g. the incoming phone calls at a telecom switch. Generally we cannot use the Poisson process to model queue arrivals. However, we can use the Poisson process for new flows in some types of events, such as Web requests or new user connections. Packet arrival on the network is not a Poisson process.

Little's Result is a simplistic queueing model, given: λ, the arrival rate; L, the number of packets in the queue waiting ...
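Little's result is the standard queueing-theory identity L = λ · d: the average number of packets in the system equals the arrival rate times the average time a packet spends there. A quick sketch (the traffic numbers are made up for illustration):

```python
# Little's result, L = λ * d: average occupancy equals arrival rate times
# average sojourn time. The values below are invented for the example.

def packets_in_system(arrival_rate_pps, avg_delay_s):
    """Average number of packets in the system (Little's result)."""
    return arrival_rate_pps * avg_delay_s

# 1000 packets/s arriving, each spending 20 ms in the system on average
print(packets_in_system(1000, 0.020))   # 20.0
```

The identity is striking because it holds regardless of the arrival distribution or the service discipline, which is why it is so widely used for back-of-the-envelope queue sizing.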

Circuit Switching Concepts

Circuit switching technology appeared long before packet switching. It appeared to support telephone calls first, then was used for computer communication. In the past, a dedicated physical circuit was formed by switchboard operators each time a phone A wanted to communicate with a phone B: the switchboard operator connected the ingress circuit to the egress circuit to form a dedicated switched circuit. Later, switchboard operators were replaced with Central Office switches that automatically "switch" between circuits.

There was also the concept of virtual private circuits, which are dedicated circuits established over physical shared circuits in the operator's network. The term virtual comes from the fact that they do not physically exist from end to end.

In traditional telephony, each voice call is transported over a 64 kbps channel. Voice calls are carried over trunks or "big fat pipes" whose data rates can be up to 10 Gbps. These trunks are usually high-density copper cables or fiber optic ...
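The 64 kbps and 10 Gbps figures above invite a quick back-of-the-envelope calculation. A sketch, ignoring framing and signalling overhead that a real trunk would reserve:

```python
# How many 64 kbps voice channels fit in a 10 Gbps trunk?
# (Idealized arithmetic; real trunks lose some capacity to framing/overhead.)

trunk_bps = 10_000_000_000   # 10 Gbps trunk
channel_bps = 64_000         # one voice call: a 64 kbps channel

print(trunk_bps // channel_bps)   # 156250 simultaneous calls
```

This is the essence of circuit switching's capacity planning: the trunk is carved into fixed-size channels, and each call gets one channel for its entire duration whether it is speaking or silent.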

Congestion Control Basics And the Max-min Fairness Allocation

Let us understand some basics of congestion control and learn what max-min fairness allocation means. Remember from the Packet Switching article that:

in a packet switch, only one packet traverses a link at instant "t"
with statistical multiplexing, the link is used at its full capacity.

When, at a node, the sum of incoming rates is greater than or equal to the rate of the outgoing link, network congestion will occur. Congestion is the normal state of the network. Congestion is inevitable; it's an indication that the network is efficiently used. If there is no congestion, then delay is low and the network is used inefficiently. If there is congestion, then queues are used heavily and the network is used efficiently. We want to use the maximum capacity of the links to take full advantage of the network.

Congestion examples:
two packets collide at a router interface and both want to take the wire at the same time
two or more incoming flows at their maximum rates, over a prolonged pe...
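Max-min fairness on a single shared link can be sketched with the classic progressive-filling idea: flows with small demands get everything they ask for, and the leftover capacity is split evenly among the still-unsatisfied flows. A minimal sketch (the link capacity and flow demands are invented for the example):

```python
# Max-min fair allocation on one shared link (progressive filling sketch):
# process demands from smallest to largest; each flow gets the lesser of its
# demand and an equal share of the capacity remaining for the unsatisfied flows.

def max_min_fair(capacity, demands):
    """Return the max-min fair allocation for each demand, in the original order."""
    alloc = [0.0] * len(demands)
    remaining = capacity
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    for pos, i in enumerate(order):
        fair_share = remaining / (len(demands) - pos)  # equal split of what is left
        alloc[i] = float(min(demands[i], fair_share))
        remaining -= alloc[i]
    return alloc

# a 10 Mbps link shared by flows asking for 2, 4 and 8 Mbps
print(max_min_fair(10, [2, 4, 8]))   # [2.0, 4.0, 4.0]
```

Note how the 2 Mbps flow is fully satisfied, which frees capacity that the larger flows then share equally: no flow can get more without taking from a flow that already has less.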