DHCP and TCP/IP
Lesson 3: Optimizing IP performance on a network
Objective: Define how to optimize IP performance on a network.

Optimizing IP Network Performance

Optimizing the flow of TCP/IP data within an internetwork requires classifying the traffic flow so that you can understand where configuring and tuning the TCP/IP implementation might provide performance gains.

Recognizing Traffic Patterns

Transactions between hosts across a network vary from simple datagram interactions with low packet counts to complex authenticated transfers involving security negotiation and verification. In general, you can categorize packet traffic into two major groups, both of which are sensitive to particular characteristics of a network:
  1. Delay- and latency-sensitive traffic
  2. Bandwidth-sensitive traffic

The following points describe and provide examples of each traffic pattern.
  1. Delay- and latency-sensitive traffic consists mainly of single-packet transfers that must be acknowledged before communication can continue. Logon, authentication, and encryption negotiations are extreme examples of this form of traffic. For example, when a user logs on, a packet is sent to the domain controller for authentication. The logon transaction cannot continue until the domain controller acknowledges the request.

  2. Bandwidth-sensitive traffic consists principally of unidirectional communications where a large amount of traffic flows in one direction and acknowledgments flow in the other. Client/server, thin-client, and web-based applications exhibit this type of traffic flow. Streaming point-to-point audio and video is an example of such a bandwidth-sensitive application.

  3. Consider a latency-sensitive example on a 10 megabits per second (Mbps) local area network (LAN) segment where delay is essentially zero. Suppose the transaction requires 18 packets, with an average of 120 bytes per packet, and the domain controller processing overhead is 150 milliseconds (ms). A rough calculation of the resulting transaction time appears after this list.

  4. The transaction time in the previous example is dominated by the domain controller response time; the time spent on the wire is negligible by comparison. This level of performance is typical in LAN-based environments.
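To make the arithmetic behind points 3 and 4 concrete, here is a minimal Python sketch using the figures from the example (18 packets of 120 bytes, 150 ms of domain controller overhead, a 10 Mbps segment); treating wire time as pure serialization delay, with no propagation or queuing component, is an assumption made only for illustration.

# Rough transaction-time estimate for the LAN logon example above.
# Assumption: wire time is pure serialization delay (propagation ~0 on a LAN).
PACKETS = 18                # packets in the transaction
BYTES_PER_PACKET = 120      # average packet size
LINK_BPS = 10_000_000       # 10 Mbps LAN segment
DC_OVERHEAD_S = 0.150       # domain controller processing time (150 ms)

wire_time_s = PACKETS * BYTES_PER_PACKET * 8 / LINK_BPS   # about 1.7 ms
total_s = wire_time_s + DC_OVERHEAD_S

print(f"wire time : {wire_time_s * 1000:.2f} ms")
print(f"total time: {total_s * 1000:.2f} ms")

The roughly 150 ms of server processing dwarfs the roughly 1.7 ms spent on the wire, which is why the domain controller response time dominates the transaction.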



Many applications exhibit characteristics of both traffic types and must be designed to minimize performance limitations when used in a wide area network (WAN) environment.

TCP/IP Performance Factors

The TCP/IP implementation in Windows® 2000 is largely self-tuning, but some design choices made for both the network infrastructure and the software installation can influence the performance you ultimately achieve. In particular, when WAN links span large distances, the delay through the network becomes a significant factor in any design consideration. The principal factors that influence TCP/IP performance are:
  1. TCP/IP receive window size: This is the amount of buffer space available to receive packets in a TCP stream before an acknowledgment must be sent. For Ethernet-based TCP connections, the window is normally set to 17,520 bytes, which is 16 KB rounded up to an even multiple of the Maximum Segment Size (MSS), or twelve 1,460-byte segments. Where network delay is high, you can increase the window size offered for a connection by modifying the registry.
  2. Delay/bandwidth product: High-bandwidth, high-delay networks, such as satellite links, require special consideration when you are configuring the network transports and designing the applications being used. When network delay becomes significant, always select the largest bandwidth available for links to maximize performance; a sketch comparing the delay/bandwidth product with the default receive window follows this list.
  3. Packet loss on the network: This is usually caused by network errors or congestion in routers.
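As a rough illustration of the first two factors, the sketch below compares the default Ethernet receive window (17,520 bytes) with the bandwidth-delay product of a high-delay link. The 100 Mbps bandwidth and 500 ms round-trip time are assumed figures chosen only to show the effect, not values from this lesson.

# Bandwidth-delay product vs. TCP receive window (illustrative figures only).
DEFAULT_WINDOW_BYTES = 17_520   # default Windows 2000 receive window for Ethernet
link_bps = 100_000_000          # assumed 100 Mbps satellite link
rtt_s = 0.5                     # assumed 500 ms round-trip delay

# Bytes that can be "in flight" before the first acknowledgment returns.
bdp_bytes = link_bps * rtt_s / 8

# With a fixed window, throughput is capped at roughly window / RTT.
max_throughput_bps = DEFAULT_WINDOW_BYTES * 8 / rtt_s

print(f"bandwidth-delay product: {bdp_bytes:,.0f} bytes")
print(f"default receive window : {DEFAULT_WINDOW_BYTES:,} bytes")
print(f"throughput ceiling     : {max_throughput_bps / 1e6:.2f} Mbps of {link_bps / 1e6:.0f} Mbps")

When the window is much smaller than the bandwidth-delay product, the sender idles waiting for acknowledgments no matter how much bandwidth the link offers, which is why raising the window size is the lever that recovers throughput on high-delay links.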

Certain other factors also influence performance, but because they are part of the existing OSI layer one and layer two infrastructure, you may not be able to configure them.

Factors that Influence Performance

  1. Maximum Transmission Unit (MTU). This is usually set by the underlying network technology. For example, Ethernet provides a 1,500-byte MTU, whereas Token Ring can support up to 17,914 bytes.
  2. Maximum Segment Size (MSS). This is the TCP payload that can be carried in the MTU. For example, the MSS for an Ethernet MTU of 1,500 bytes is 1,460 bytes, as illustrated in the sketch after this list.
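The MSS figures above follow from subtracting the IP and TCP header sizes from the MTU. The minimal sketch below assumes IPv4 and TCP headers without options (20 bytes each), which is what yields the 1,460-byte Ethernet value.

# MSS = MTU minus IP and TCP headers (assumes IPv4 and TCP with no options).
IP_HEADER = 20    # bytes, IPv4 header without options
TCP_HEADER = 20   # bytes, TCP header without options

def mss_for(mtu: int) -> int:
    """Return the TCP payload that fits in one packet of the given MTU."""
    return mtu - IP_HEADER - TCP_HEADER

for medium, mtu in [("Ethernet", 1_500), ("Token Ring", 17_914)]:
    print(f"{medium:12} MTU {mtu:>6} -> MSS {mss_for(mtu):>6} bytes")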

Note:
In network environments that include links with large delay components, your network design may require placing network services, authentication, and application servers on both sides of those links to achieve acceptable client performance. This situation is common when deciding where to place domain controllers, WINS servers, DHCP servers, and DNS servers.
The next lesson focuses on how to optimize remote subnets.

Optimizing IP Network Performance - Exercise

Click the Exercise link below to apply what you know about identifying traffic patterns as a first step in improving IP performance.
Optimizing IP Network Performance - Exercise
