Introduction to Asynchronous Transfer Mode

This chapter briefly describes and explains the key features, functions, and benefits of asynchronous transfer mode networking, and discusses how Microsoft® NetShow® Theater Server operates on this type of network. This information is presented in the topics that follow.

Understanding Asynchronous Transfer Mode

Asynchronous transfer mode (ATM) is a digital packet-switching technology originally developed to integrate voice, video, and data traffic into one network. An ATM network is a connection-oriented environment that can easily be optimized for video-on-demand delivery systems. It offers extremely high throughput and guaranteed transmission of sequential packets.

ATM uses very short, fixed-length packets called cells, each 53 bytes long. By using fixed-length cells, information can be transported in a predictable manner. The ATM cell is divided into two main sections, the header and the payload. The header (5 bytes) is the addressing mechanism, and is significant for networking purposes because it defines how the cell is to be delivered. The payload (48 bytes) is the portion that carries the actual voice, data, or video information.
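
The 53-byte split described above can be illustrated with a short sketch. The constant and function names here are illustrative only, not part of any ATM API:

```python
HEADER_LEN = 5
PAYLOAD_LEN = 48
CELL_LEN = HEADER_LEN + PAYLOAD_LEN  # 53 bytes total

def split_cell(cell: bytes):
    """Split one 53-byte ATM cell into its 5-byte header and 48-byte payload."""
    if len(cell) != CELL_LEN:
        raise ValueError("an ATM cell is exactly %d bytes" % CELL_LEN)
    return cell[:HEADER_LEN], cell[HEADER_LEN:]

header, payload = split_cell(bytes(CELL_LEN))  # an all-zero example cell
```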

For information about configuring your ATM network or configuring NetShow Theater Server for an ATM network, see the Microsoft® NetShow Theater Server Installation Guide.

Understanding the OSI Reference Model

ATM networks are described in a layered architecture defined by the Open Systems Interconnection (OSI) reference model, the International Organization for Standardization (ISO) structure for the "ideal" network architecture. This model outlines seven areas, or layers, for the network. These layers (from highest to lowest) are: application, presentation, session, transport, network, data link, and physical.

The OSI reference model is a generalized description of networking functions. It provides a common set of terms that can be related to a specific networking system. If you understand how one network operates and understand how that network is related to the OSI model, then you can quickly understand a new network by seeing how that network also relates to the OSI model.

Understanding ATM Network Architecture

Asynchronous transfer mode (ATM) layering is made up of three basic layers: the physical layer, the ATM layer, and the ATM adaptation layer. The following table relates the general Open Systems Interconnection (OSI) reference model to the layers and sublayers of ATM networks.
 
OSI layer          ATM layer  Sublayer  Functions
Transport/Network  AAL        CS        Convergence
                              SAR       Segmentation and reassembly
Network/Data link  ATM                  Generic flow control; cell VPI/VCI
                                        translation; cell multiplex and
                                        demultiplex
Physical           PHY        TC        Cell rate decoupling; header error
                                        control (HEC); cell delineation;
                                        transmission frame adaptation;
                                        transmission frame generation
                              PM        Bit timing; physical medium

In this table: AAL is the ATM adaptation layer, CS is the convergence sublayer, SAR is the segmentation and reassembly sublayer, ATM is the ATM layer, PHY is the physical layer, TC is the transmission convergence sublayer, and PM is the physical medium sublayer.

About Virtual Paths and Virtual Circuits

On a connection-oriented network, the routing between the endpoints of the connection is negotiated when the connection is established. This routing persists as a service reservation for the duration of the connection. This ensures cell sequence at point of reception, and also allows the negotiation of the quality of service transmission parameters that optimize the connection for video-on-demand service.

Each cell contains a connection identifier in its header that uniquely identifies the connection throughout the network. Two such hierarchical identifiers are located in the header of each cell: the virtual path identifier (VPI) and the virtual circuit identifier (VCI). A virtual path (VP) is the generic term for a collection of virtual circuit links, and a virtual circuit (VC) is a unidirectional transport of ATM cells. The VCI identifies a particular VC link within a given VP connection.

Organization of the cell frame header

Suppose a virtual circuit consisting of a four-hop path is selected between users A and B by a routing algorithm. After the network finds a path between the two nodes, it assigns the VCI values to be used at each node along the path and sets up routing tables at nodes N1 to N5. When the transmission starts, all cells of the connection follow the same path in the network. Once the communication is completed, one of the two users releases the connection, and the VC also is terminated. The end-to-end connection defined by the concatenation of VC links is called a virtual circuit connection (VCC).

About the Physical Layer

The physical layer of the ATM network takes care of two separate classes of functions: those that depend on the physical medium, handled by the physical medium (PM) sublayer, and those that convert between bits and cells, handled by the transmission convergence (TC) sublayer. The first task for any transmission system is to synchronize timing at the bit level; for ATM systems operating over optical fiber, this is 155.52 megabits per second (Mbps). Once the bits are available to the next sublayer, it is possible first to convert the bits to the frames of the transmission system used, and then to convert the frames to the actual cells.

The transmission convergence sublayer is standardized to generate and extract frames at the rates specified for the transmission system. The ATM cells are found by locating the header error control (HEC) field in the cell header and checking its error correction code. Once the cells are checked, the cell stream must be decoupled from the transmission rate of the physical medium: idle cells are inserted into the stream on transmission when no data cells are waiting, and deleted from it on reception. The data cells then are handed off to the ATM layer.
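
As a minimal sketch of the HEC check, the HEC byte is a CRC-8 over the first four header bytes (generator polynomial x^8 + x^2 + x + 1), with the result XORed with the coset value 0x55, per ITU-T Recommendation I.432. The function names here are illustrative:

```python
def hec(header4: bytes) -> int:
    """CRC-8 (polynomial x^8 + x^2 + x + 1) over 4 header bytes, XOR 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def header_ok(header5: bytes) -> bool:
    # A receiver recomputes the HEC and compares it with the fifth header byte.
    return hec(header5[:4]) == header5[4]
```

Cell delineation works by sliding a 5-byte window over the bit stream until this check succeeds consistently, which is how "the ATM cells are found" without any framing marker inside the cell itself.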

About the ATM Layer

The ATM layer deals with cells and their transport; it defines cell layout and the meaning of header fields. The ATM layer includes two interfaces: the user-network interface (UNI) and the network-network interface (NNI). The UNI defines the boundary between a host and an ATM network (or the customer and the carrier). The NNI refers to the line between two ATM switches. The header field varies slightly between the UNI and NNI.

The ATM layer routes cells across the network and multiplexes cells from many virtual paths onto one physical carrier. It also is responsible for delivering a quality of service to the higher layers. The cell loss priority (CLP) attribute designates whether a cell is high or low priority. The payload type (PT) indicator identifies whether a cell carries user data or network management information. The header error control (HEC) detects, and in some cases corrects, errors in the header itself.
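
At the UNI, the first four header bytes break down as GFC (4 bits), VPI (8 bits), VCI (16 bits), PT (3 bits), and CLP (1 bit), followed by the HEC byte. A sketch of unpacking these fields (the function name is illustrative):

```python
def parse_uni_header(h: bytes) -> dict:
    """Unpack the 5-byte UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1), HEC byte."""
    b0, b1, b2, b3, b4 = h
    return {
        "gfc": b0 >> 4,                                      # generic flow control
        "vpi": ((b0 & 0x0F) << 4) | (b1 >> 4),               # virtual path identifier
        "vci": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),  # virtual circuit identifier
        "pt":  (b3 >> 1) & 0x07,                             # payload type
        "clp": b3 & 0x01,                                    # cell loss priority
        "hec": b4,                                           # header error control
    }

# The signaling circuit (VPI 0, VCI 5) encodes its VCI low nibble in byte 3:
fields = parse_uni_header(bytes([0x00, 0x00, 0x00, 0x50, 0x00]))
```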

About the ATM Adaptation Layer

The ATM adaptation layer (AAL) is used to adapt the ATM layer to the services that are using the network. It has two sublayers: the convergence sublayer (CS) and the segmentation and reassembly sublayer (SAR). The CS converts user service information into a protocol data unit (PDU). The SAR places these PDUs into cells: each 48-byte SAR protocol data unit, together with a 5-byte header, makes up a 53-byte cell.
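
A hedged sketch of AAL5-style segmentation: the PDU is padded so that, with an 8-byte trailer (a UU byte, a CPI byte, a 2-byte length, and a 4-byte CRC-32), it fills a whole number of 48-byte cell payloads. zlib.crc32 stands in for the AAL5 CRC-32 here; the standard specifies its own CRC conventions, so treat this purely as an illustration of the padding and segmentation arithmetic:

```python
import zlib

def aal5_payloads(pdu: bytes, uu: int = 0, cpi: int = 0) -> list:
    """Segment a PDU into 48-byte cell payloads, AAL5-style (illustrative)."""
    # Pad so that PDU + 8-byte trailer fills a whole number of 48-byte payloads.
    pad_len = (-(len(pdu) + 8)) % 48
    padded = pdu + bytes(pad_len)
    trailer_head = bytes([uu, cpi]) + len(pdu).to_bytes(2, "big")
    # Stand-in CRC; the real AAL5 CRC-32 conventions differ in detail.
    crc = zlib.crc32(padded + trailer_head) & 0xFFFFFFFF
    frame = padded + trailer_head + crc.to_bytes(4, "big")
    return [frame[i:i + 48] for i in range(0, len(frame), 48)]

payloads = aal5_payloads(b"x" * 100)  # 100-byte PDU -> 3 cell payloads
```

The length field in the trailer lets the receiver discard the padding after reassembly.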

There are four different classes of service delivered by the AAL based on timing between source and destination, the bit rate of the source, and the connection mode. The following diagram describes how these service classes are defined.

Functional definition of ATM service classes

All Microsoft® NetShow Theater Server services function over AAL5. The AAL5 format standardizes how the cells are generated and which headers they include.

Understanding ATM Services

Asynchronous transfer mode (ATM) provides quality of service (QoS) to connections. QoS is a set of parameters specified by the user service, such as delivery time and bandwidth. While setting up the connection, the service and the network negotiate the QoS parameters that govern that connection. Microsoft® NetShow Theater Server video streaming traffic is in AAL5 format and uses the constant bit rate (CBR) service. CBR service specifies the mean bit rate and the acceptable cell loss, as well as the allowable variation in the cell delay.

Each time a new connection is opened, ATM Connection Admission Control (CAC) reviews the current demands on the network, evaluates the required service against the remaining network resources available, and accepts the new connection only if sufficient resources are available. In addition, ATM switches are configured to accept new traffic only up to a maximum percentage (typically 90 percent) of their theoretical capacity.
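
The admission decision described above can be expressed as a small check. The names and the 90 percent default are illustrative, not an actual switch interface:

```python
def admit(requested_mbps: float, committed_mbps: float,
          link_capacity_mbps: float, utilization_limit: float = 0.90) -> bool:
    """Accept a new connection only if total committed bandwidth stays
    within a configured fraction of the link capacity."""
    return committed_mbps + requested_mbps <= link_capacity_mbps * utilization_limit

# With 120 Mbps already committed on a 155.52-Mbps link, a 10-Mbps request
# fits under the 90 percent ceiling, but a 25-Mbps request does not.
ok = admit(10, 120, 155.52)
rejected = admit(25, 120, 155.52)
```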

About the Network Perspective

ATM networks scale effectively in both local area network (LAN) and wide area network (WAN) environments. Network control functions can be automated or distributed efficiently. Network control and management functions try to satisfy three competing objectives, as the following diagram shows.

Balancing competing functions in network management

NetShow Theater Server assumes a dedicated networking environment. For example, NetShow Theater Server Setup reports the calculated maximum number of streams supported. This assumes that there is no competing network traffic that would reduce the available network resources.

Understanding ATM Connection Setup

Asynchronous transfer mode (ATM) connections are established as either permanent virtual circuits (PVCs) or switched virtual circuits (SVCs). As their name implies, PVCs are always present, whereas SVCs must be established each time a connection is set up.

To set up a connection, a signaling circuit is used first. This is a predefined circuit (with VPI equal to 0 and VCI equal to 5) that is used to transfer signaling messages, which are used for making and releasing calls or connections. If a connection request is successful, a new set of VPI and VCI values is allocated on which the parties that set up the call can send and receive data.

Six message types are used to establish virtual circuits. Each message occupies one or more cells, and contains the message type, length, and parameters.
 
Message Significance if sent by host Significance if sent by network
SETUP Request to establish a call Indicates an incoming call
CALL PROCEEDING Acknowledges an incoming call Indicates a call request will be attempted
CONNECT Indicates acceptance of a call Indicates a call was accepted
CONNECT ACK Acknowledges acceptance of a call Acknowledges making a call
RELEASE Requests that a call be terminated Terminates a call
RELEASE ACK Acknowledges releasing a call Acknowledges releasing a call

The sequence for establishing and releasing a call is:

  1. The host sends a SETUP message on the signaling circuit.
  2. The network responds by sending a CALL PROCEEDING message to acknowledge receiving the request.
  3. Along the route to the destination, each switch receiving the SETUP message acknowledges it by sending the CALL PROCEEDING message.
  4. When the SETUP message reaches its final destination, the receiving host responds by sending the CONNECT message to accept the call.
  5. Next, the network sends a CONNECT ACK message to acknowledge receiving the CONNECT message.
  6. Along the route back to the sender, each switch that receives the CONNECT message acknowledges it by sending CONNECT ACK.
  7. To terminate the call, a host (either the caller or the receiver) sends a RELEASE message. This causes the message to propagate to the other end of the connection, and then releases the circuit. Again, the message is acknowledged at each switch along the way.

About Sending Data to Multiple Receivers

In ATM networks, users can set up point-to-multipoint (P/MP) calls, with one sender and multiple receivers. A P/MP VC allows an endpoint called the root node to exchange data with a set of remote endpoints called leaves. To set up a point-to-multipoint call, a connection to one of the destinations is set up in the usual way. Once the connection is established, users can send the ADD PARTY message to attach a second destination to the VC returned by the previous call. To add receivers, users then can send additional ADD PARTY messages.

This process is similar to a user dialing multiple parties to set up a telephone conference call. One difference is that an ATM P/MP call doesn't allow data to be sent by parties toward the root (or the originator of the call). This is because the ATM Forum Standard UNI 3.1 only allows data on P/MP VCs to flow from the root toward the leaves.

Understanding ATM Switching

An asynchronous transfer mode (ATM) switch transports cells from the incoming links to the outgoing links, using information from the cell header and information stored at the switching node by the connection setup procedure. As mentioned previously, the VPI and VCI are the connection identifiers used in ATM cells. To uniquely identify each connection, VPIs are defined at each link and VCIs are defined at each virtual path (VP). To establish an end-to-end connection, a path from source to destination must be determined first. Once the path is established, so is the sequence of links to be used for the connection and their identifiers.

VPIs are used to reduce the amount of processing required at an ATM switch by routing on the VPI field only. For example, VPI routing is useful when many VCs share a common physical route (similar to all phone connections between Seattle, WA, and Chicago, IL).
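
The per-switch translation that call setup installs can be sketched as a lookup table mapping an incoming (port, VPI, VCI) triple to an outgoing one; the table entries here are hypothetical:

```python
# Hypothetical translation table installed at one switch at call setup time:
# (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci)
table = {
    (1, 0, 100): (3, 7, 42),
}

def forward(in_port: int, vpi: int, vci: int):
    """Look up the outgoing link and rewrite the cell's VPI/VCI."""
    return table[(in_port, vpi, vci)]

out_port, out_vpi, out_vci = forward(1, 0, 100)
```

Because the identifiers are rewritten hop by hop, a VPI/VCI pair only needs to be unique per link, not across the whole network.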

Understanding ATM Service Types

Asynchronous transfer mode (ATM) standards define the types of services most commonly used to control the various categories of network traffic. The service categories are listed below.

About Constant Bit Rate

Constant Bit Rate (CBR) services generate traffic at a constant rate. With this class of service, the end station must specify bandwidth at the time a network connection is established. The network then commits that bandwidth along the connection route; this ensures that no network traffic is lost due to traffic congestion because connections are admitted only if the requested bandwidth can be guaranteed. Microsoft® NetShow Theater Server requires CBR.

About Variable Bit Rate

Variable Bit Rate (VBR) is divided into two subclasses: real-time VBR (RT-VBR) and non-real-time VBR (NRT-VBR). RT-VBR is for applications that have variable bit rates and stringent real-time requirements, such as interactive compressed video (video conferencing, for example). Real-time services can deteriorate in quality, or become unintelligible, if the associated information is delayed; they are sensitive to the time it takes for the ATM cells to be transferred.

NRT-VBR is for traffic where timely delivery is not as important; that is, the quality of non-real-time services is not affected by delays in information transfer. An example of a non-real-time service is data transmission.

About Available Bit Rate

Available Bit Rate (ABR) service is for bursty traffic for which an approximate bandwidth is known. Burstiness is the ratio of peak-to-average traffic generation rate. With ABR, you can specify, for example, a fixed capacity of 5 megabits per second (Mbps) between two points, with peaks of up to 10 Mbps. The network guarantees 5 Mbps will be provided at all times, and tries to provide the peak capacity when needed, but does not guarantee this.

With ABR service, the network provides feedback to the sender, asking it to slow down when traffic congestion occurs.

About Unspecified Bit Rate

Unspecified Bit Rate (UBR) service allows a connection to be established without specifying the bandwidth expected from the connection. The network does not guarantee UBR service; it establishes the route, but does not commit bandwidth. UBR service can be used for applications that have no delivery constraints, and do their own error and flow control, such as e-mail and file transfer, which have no real-time characteristics.

Understanding ATM Traffic Control

Traffic control refers to a set of actions performed by the network to avoid congestion, and to achieve predefined network performance objectives, for example, in terms of cell transfer delays or cell loss probability.

The International Telecommunication Union (ITU; its standardization sector was formerly the International Telegraph and Telephone Consultative Committee, or CCITT) has defined a standard rule, called the Generic Cell Rate Algorithm (GCRA), to define the traffic parameters. This rule is used to differentiate clearly between conforming and nonconforming cells; that is, it provides a formal definition of traffic conformance to the negotiated traffic parameters.

ITU recommendation I.371 defines two equivalent versions of the Generic Cell Rate Algorithm: the virtual scheduling (VS) algorithm, and continuous-state leaky bucket (LB) algorithm. For any sequence of cell arrival times, both algorithms determine the same cells to be conforming or nonconforming.

GCRA uses the following parameters:

  I    The increment: the nominal spacing, in time units, between cells.
  L    The limit: the tolerance allowed around that spacing.
  TAT  The theoretical arrival time of the next cell.
  AT   The actual arrival time of a cell.

When a cell arrives, the VS algorithm calculates the TAT of the cell, assuming that cells are equally spaced (I time units apart) when the source is active. If the AT of a cell is greater than or equal to TAT - L, then the cell is conforming; otherwise, the cell arrived too early and is considered nonconforming.

The continuous-state leaky bucket algorithm can be viewed as a finite-capacity bucket whose contents leak out at a continuous rate of 1 per time unit and whose contents are increased by I for each conforming cell. If the contents of the bucket are no greater than L when a cell arrives, then the cell is conforming; otherwise, it is nonconforming. The capacity of the bucket equals L + I.
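
A minimal sketch of the continuous-state leaky bucket form, using the parameters above (I, L, and cell arrival times AT). For cells nominally spaced I = 10 time units apart with limit L = 2, a cell arriving 5 units early is nonconforming:

```python
def leaky_bucket_conforming(arrival_times, I, L):
    """Classify each cell as conforming (True) or nonconforming (False)."""
    results = []
    x, lct = 0.0, None  # bucket contents and last conformance time
    for ta in arrival_times:
        # The bucket leaks 1 unit per time unit since the last conforming cell.
        xp = 0.0 if lct is None else max(0.0, x - (ta - lct))
        if xp > L:
            results.append(False)       # arrived too early: nonconforming,
        else:                           # bucket state is left unchanged
            results.append(True)        # conforming: pour I into the bucket
            x, lct = xp + I, ta
    return results

verdicts = leaky_bucket_conforming([0, 10, 15, 20], I=10, L=2)
```

Note that a nonconforming cell does not update the bucket state, so it cannot "earn" conformance for later cells.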

For more information about the GCRA, see ITU recommendation I.371.

About Traffic Control Objectives

Asynchronous transfer mode (ATM) traffic control pursues the performance objectives described in the previous topic. Two basic traffic control functions that ATM networks use to manage traffic are connection admission control (CAC) and usage parameter control (UPC).

About Connection Admission Control

Connection admission control (CAC) represents a set of actions carried out by the network during the call setup phase to accept or reject an ATM connection. If sufficient resources exist to accept the call request, and if the call assignment does not affect the performance quality of existing network services, then the call is granted. At call setup time, the user negotiates with the network to select the desired traffic characteristics.

About Usage Parameter Control and Network Parameter Control

Using usage parameter control (UPC) and network parameter control (NPC), the network monitors user traffic volume and cell path validity. It monitors users' traffic parameters to ensure that they do not exceed the values negotiated at call setup time. It also monitors all connections crossing the user-network interface (UNI) or network-network interface (NNI).

The UPC algorithm must be capable of monitoring illegal traffic conditions, determining whether or not the confirmed parameters exceed the limits of the negotiated range, and quickly dealing with parameter usage violations. To deal with such violations, the network can apply several measures, for example, discarding the cells in question, or removing the connection that contains those cells.

About the Priority Control Function

The Priority Control (PC) function is used to support the actions of CAC and UPC/NPC. Users can employ the cell loss priority (CLP) bit to create traffic flows of different priorities, allowing the network to selectively discard cells with low priority if necessary to protect those with high priority.

About Network Resource Management

Network resource management (NRM) represents provisions made to allocate network resources to separate network traffic flows according to service characteristics.

© 1996-1998 Microsoft Corporation. All rights reserved.