

Network Daemons

Iterative server

A server process may respond to incoming connections in two different ways. The first response pattern looks like this:
  1. The server receives the incoming connection.
  2. The server handles the connection.
  3. The server closes the connection.
  4. The server returns to listening on its well-known port.
A server that operates like this is called an iterative server. When an iterative server is handling a request, other connections to that port are blocked. The incoming connections must be handled one after another. For example, a system in which the telnet server is set up as an iterative server can handle only one incoming telnet connection at a time.
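To make the four steps concrete, here is a minimal sketch of an iterative TCP echo server in C. The port number (7777) and buffer size are illustrative assumptions, not details from the text. While the inner loop services one client, the process never returns to accept(), so other clients must wait:

#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);                 /* illustrative port */

    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); exit(1);
    }
    if (listen(listen_fd, 5) < 0) { perror("listen"); exit(1); }

    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);    /* step 1 */
        if (conn_fd < 0) continue;

        char buf[512];                                  /* step 2 */
        ssize_t n;
        while ((n = read(conn_fd, buf, sizeof(buf))) > 0)
            write(conn_fd, buf, (size_t)n);

        close(conn_fd);                                 /* step 3 */
        /* step 4: loop back to accept() on the well-known port */
    }
}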

Iterative model

In the Iterative model, the Listener and Server portions of the application coexist in the same CICS or IMS transaction program (TP) and run as part of the same CICS task. The server application therefore holds the socket until all application processing has completed. This means that after a client TP starts a server TP, no other client TP can access the Listener or the server TP until the first client is finished.

Concurrent server

The second type of response pattern looks like this:
  1. The server receives the incoming connection.
  2. The server calls fork() to split itself into two processes, a parent and a child.
  3. The child process handles the connection, while the parent returns to listen on the original port.
  4. When the child process is finished with the connection, it terminates.
A server that operates like this is called a concurrent server. A concurrent server is always available for incoming connections. For example, a system in which the telnet server is set up as a concurrent server can handle multiple telnet connections, each of which is managed by a different child of the listening server process.
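The same echo service can be sketched as a fork()-based concurrent server; the port number is again an illustrative assumption. The parent process never touches client data: it forks a child for each connection and returns immediately to accept():

#include <netinet/in.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);   /* auto-reap children so no zombies remain */

    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);                 /* illustrative port */

    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); exit(1);
    }
    if (listen(listen_fd, 5) < 0) { perror("listen"); exit(1); }

    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);    /* step 1 */
        if (conn_fd < 0) continue;

        pid_t pid = fork();                             /* step 2 */
        if (pid == 0) {
            /* child: handle the connection, then terminate (step 4) */
            close(listen_fd);          /* child never accepts connections */
            char buf[512];
            ssize_t n;
            while ((n = read(conn_fd, buf, sizeof(buf))) > 0)
                write(conn_fd, buf, (size_t)n);
            close(conn_fd);
            _exit(0);
        }
        close(conn_fd);   /* parent: drop its copy, return to accept() (step 3) */
    }
}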

Concurrent vs. Iterative Servers

We use the term iterative server to describe a server implementation that processes one request at a time, and the term concurrent server to describe a server that handles multiple requests at one time. Although most concurrent servers achieve apparent concurrency, we will see that a concurrent implementation may not be required; it depends on the application protocol. In particular, if a server performs a small amount of processing relative to the amount of I/O it performs, it may be possible to implement the server as a single process that uses asynchronous I/O to allow simultaneous use of multiple communication channels. From the perspective of the client, the server appears to communicate with multiple clients concurrently. The point is:
Question: Does the server handle multiple requests concurrently?
The term concurrent server refers to whether the server handles multiple requests concurrently, not to whether the underlying implementation uses multiple concurrent processes.
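As a sketch of that single-process approach, the following C program uses select() to multiplex the listening socket and all connected clients in one process; the port number is again an illustrative assumption. No child process is ever forked, yet each client sees a server that responds concurrently:

#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);                 /* illustrative port */

    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); exit(1);
    }
    if (listen(listen_fd, 5) < 0) { perror("listen"); exit(1); }

    fd_set all_fds;
    FD_ZERO(&all_fds);
    FD_SET(listen_fd, &all_fds);
    int max_fd = listen_fd;

    for (;;) {
        fd_set read_fds = all_fds;       /* select() modifies its argument */
        if (select(max_fd + 1, &read_fds, NULL, NULL, NULL) < 0) continue;

        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &read_fds)) continue;
            if (fd == listen_fd) {                  /* new connection */
                int conn_fd = accept(listen_fd, NULL, NULL);
                if (conn_fd < 0) continue;
                FD_SET(conn_fd, &all_fds);
                if (conn_fd > max_fd) max_fd = conn_fd;
            } else {                                /* data from a client */
                char buf[512];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) { close(fd); FD_CLR(fd, &all_fds); }
                else write(fd, buf, (size_t)n);
            }
        }
    }
}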

TCP/IP Illustrated

Concurrent Servers are more complex

In general, concurrent servers are more difficult to design and build, and the resulting code is more complex and difficult to modify. Most programmers choose concurrent server implementations, however, because iterative servers cause unnecessary delays in distributed applications[1] and can become a performance bottleneck that affects many client applications. We can summarize:
Iterative server implementations, which are easier to build and understand, may result in poor performance because they make clients wait for service or for a thread to become available. In contrast, concurrent server implementations, which are more difficult to design and build, yield better performance.

[1]distributed application: A distributed application consists of one or more local or remote clients that communicate with one or more servers on several machines linked through a network. With this type of application, business operations can be conducted from any geographical location.