The quality of service (QoS) refers to several related aspects of telephony and computer networks that allow the transport of traffic with special requirements. In particular, much technology has been developed to allow computer networks to become as useful as telephone networks for audio conversations, as well as supporting new applications with even stricter service demands.
The Internet is a series of exchange points interconnecting private networks. Hence the Internet's core is owned and managed by a number of different network service providers, not a single entity, and its behavior is much more stochastic and unpredictable. Therefore, research continues on QoS procedures that are deployable in large, diverse networks.
There are two principal approaches to QoS in modern packet-switched IP networks: a parameterized system based on an exchange of application requirements with the network, and a prioritized system in which each packet identifies a desired service level to the network.
- Integrated services (“IntServ”) implements the parameterized approach. In this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network.
- Differentiated services (“DiffServ”) implements the prioritized model. DiffServ marks packets according to the type of service they desire. In response to these markings, routers and switches use various queueing strategies to tailor performance to expectations. DiffServ Code Point (DSCP) markings use the first 6 bits in the ToS field of the IP(v4) packet header.
Early work used the integrated services (IntServ) philosophy of reserving network resources. In this model, applications used the Resource Reservation Protocol (RSVP) to request and reserve resources through a network. While IntServ mechanisms do work, it was realized that in a broadband network typical of a larger service provider, core routers would be required to accept, maintain, and tear down thousands or possibly tens of thousands of reservations. It was believed that this approach would not scale with the growth of the Internet, and in any event was antithetical to the notion of designing networks so that core routers do little more than switch packets at the highest possible rates.
The second and currently accepted approach is differentiated services (DiffServ). In the DiffServ model, packets are marked according to the type of service they need. In response to these markings, routers and switches use various queuing strategies to tailor performance to requirements. At the IP layer, differentiated services code point (DSCP) markings use 6 bits in the IP packet header; at the MAC layer, VLAN IEEE 802.1Q and IEEE 802.1p can be used to carry essentially the same information.
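The DSCP occupies the upper six bits of the former IPv4 ToS octet (the remaining two bits are now used for ECN). A minimal sketch of extracting and building such a marking:

```python
# The DSCP is the upper 6 bits of the IPv4 ToS / IPv6 Traffic Class octet;
# the lower 2 bits are used for ECN (RFC 3168).

def dscp_from_tos(tos: int) -> int:
    """Extract the 6-bit DSCP from an 8-bit ToS/Traffic Class octet."""
    return (tos >> 2) & 0x3F

def tos_from_dscp(dscp: int, ecn: int = 0) -> int:
    """Build a ToS octet from a DSCP value and (optional) ECN bits."""
    return ((dscp & 0x3F) << 2) | (ecn & 0x03)

# Expedited Forwarding (EF) is DSCP 46, i.e. a ToS octet of 0xB8.
assert tos_from_dscp(46) == 0xB8
assert dscp_from_tos(0xB8) == 46
```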
Routers supporting DiffServ use multiple queues for packets awaiting transmission from bandwidth-constrained (e.g., wide area) interfaces. Router vendors provide different capabilities for configuring this behavior, including the number of queues supported, the relative priorities of queues, and the bandwidth reserved for each queue.
In practice, when a packet must be forwarded from an interface with queuing, packets requiring low jitter (e.g., VoIP or videoconferencing) are given priority over packets in other queues. Typically, some bandwidth is allocated by default to network control packets (such as Internet Control Message Protocol and routing protocols), while best effort traffic might simply be given whatever bandwidth is left over.
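In code, such a strict-priority scheduler can be sketched as follows. The queue names and packet labels are purely illustrative; real routers typically combine priority queues with per-queue bandwidth guarantees rather than serving one queue unconditionally first:

```python
from collections import deque

# Queues in strict priority order: low-jitter traffic (e.g. VoIP) first,
# then network control traffic, then best effort.
QUEUES = {name: deque() for name in ("voice", "control", "best_effort")}

def enqueue(queue_name, packet):
    QUEUES[queue_name].append(packet)

def dequeue():
    """Serve the highest-priority non-empty queue first."""
    for name in ("voice", "control", "best_effort"):
        if QUEUES[name]:
            return QUEUES[name].popleft()
    return None  # all queues empty

enqueue("best_effort", "web1")
enqueue("voice", "rtp1")
enqueue("control", "ospf1")
# Voice drains before control, which drains before best effort.
assert [dequeue(), dequeue(), dequeue()] == ["rtp1", "ospf1", "web1"]
```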
Queueing theory models have been developed for the performance analysis and QoS of MAC layer protocols.
The Internet has so far worked with a best-effort traffic model: every packet is treated (forwarded or discarded) equally. This is a very simple and efficient model, and several arguments have been stated against any need for a more complicated system:
- ``Bandwidth will be infinite.''
Optical fiber has enormous transmission capacity, tens if not hundreds of terabits per second in a single 0.5 mm thin (including primary coating) strand of fiber. However, installing new capacity and developing faster equipment takes time. Also, networks are generally designed in a cost-effective manner, balancing between over-engineering and over-subscribing.
As wireless systems become more and more common, they limit available bandwidth because capacity on usable radio frequencies is limited. Energy conservation in portable equipment may also limit available bandwidth.
Corollary of Moore's Law: As you increase the capacity of any system to accommodate user demand, user demand will increase to consume system capacity.
- ``Simple priority is sufficient.''
This is very much true: QoS is all about giving some traffic higher priority than other traffic. The problem is where to assign the priority. User terminals cannot generally be trusted to give ``fair'' priorities to different traffic. If there is some billing and policing mechanism, then we already have some kind of QoS mechanism. In some cases it is useful to give a busy signal to the user in order to protect the network from being over-subscribed.
There are two approaches to assigning priority to Internet traffic: hop-by-hop, based on reservation, and packet marking at the edges of the network.
- ``Applications can adapt.''
- While applications and protocols can adapt to even extreme delays, human users can adapt much less. For example, to maintain dialogue in a telephone conversation, end-to-end delays cannot exceed 300 ms without causing the ``man-on-the-moon'' effect.
Quality of Service (QoS) is an often much-abused term. If we look at traditional circuit-switched telecommunication networks, QoS is formed by several factors, which can be divided into two groups, ``human'' and ``technical'' factors, as shown in the table below.
| Human factors | Technical factors |
| --- | --- |
| stability of service quality | reliability |
| availability of subscriber lines | expandability |
| fault clearance times | maintainability of the system |
| subscriber information | congestion waiting |
| stability of operation of the system | transmission quality |
In packet-switched networks there are many more factors than in circuit-switched networks that must be agreed on. Asynchronous Transfer Mode (ATM) networks have very extensive QoS control, as ATM is intended for real-time traffic. For IP networks the ITU is developing recommendation I.380, which defines quite similar metrics for IP packet transfer performance:
- IP packet transfer delay (IPTD)
- This is the delay of an IP datagram (or the delay of its last fragment) between two reference points: typically an end-to-end delay or a delay within one network.
- Mean IP packet transfer delay
- An arithmetic average of the IP packet transfer delays for the packets of interest.
- IP packet delay variation
- It is useful for streaming applications to know how much the delay varies in the network, to avoid buffer overflows and underflows (Figure 1). For elastic applications small delay variations are not important, but large ones may cause either unnecessary packet retransmissions or unnecessarily long delays before retransmitting.
- IP packet error ratio (IPER)
- This is the ratio of errored packets to all received packets.
- IP packet loss ratio (IPLR)
The ratio of lost packets to all packets transmitted in the population of interest. The packet loss ratio affects the quality of a connection, and applications can react to packet loss in different ways. Applications can also be divided into similar categories by required bandwidth and delay.
- If the packet loss exceeds a certain threshold, the value of the application is lost.
- The application can tolerate packet loss, but the higher the packet loss, the lower the value of the application. Certain threshold levels are critical.
- The application can tolerate even a very high packet loss ratio, but its performance may then be very low.
- Spurious IP packet rate
- As the number of spurious packets is not expected to be proportional to the number of packets transmitted, this is expressed as a rate: the number of spurious packets in a time interval.
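Several of the metrics above can be computed directly from per-packet records. A minimal sketch, with purely illustrative field names and values (note that I.380 defines delay variation relative to a reference packet; the max-minus-min spread below is a simplification):

```python
from statistics import mean

# Per-packet records: (sent_time_ms, recv_time_ms or None if lost, errored).
packets = [
    (0.0, 20.0, False),
    (10.0, 32.0, False),
    (20.0, 44.0, True),   # received but errored
    (30.0, None, False),  # lost
]

delays = [rx - tx for tx, rx, _ in packets if rx is not None]
ip_td_mean = mean(delays)                  # mean IP packet transfer delay
ip_dv = max(delays) - min(delays)          # simplified delay-variation spread
received = [p for p in packets if p[1] is not None]
ip_er = sum(1 for p in received if p[2]) / len(received)  # IPER
ip_lr = (len(packets) - len(received)) / len(packets)     # IPLR

assert ip_td_mean == 22.0   # delays are 20, 22, 24 ms
assert ip_dv == 4.0
assert ip_er == 1 / 3
assert ip_lr == 0.25
```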
For ATM networks there are also metrics to characterize traffic flow for call admission control (CAC) purposes:
- Peak Cell Rate (PCR)
- The maximum cell rate that a connection may have while keeping jitter less than defined by the Cell Delay Variation Tolerance (CDVT).
- Sustainable Cell Rate (SCR)
- The long-term maximum cell rate that the connection may sustain.
- Maximum Burst Size (MBS)
- The number of cells in a burst that may exceed the SCR but not the PCR.
- Minimum Cell Rate (MCR)
- The minimum rate at which the connection must be able to send at any time.
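Conformance against PCR and CDVT is policed in ATM with the Generic Cell Rate Algorithm (GCRA). A sketch of its ``virtual scheduling'' form, with illustrative time units:

```python
# GCRA in virtual-scheduling form: increment I = 1/PCR (nominal inter-cell
# spacing), limit L = CDVT (tolerated clumping). A cell arriving earlier
# than TAT - L is non-conforming.

class Gcra:
    def __init__(self, increment: float, limit: float):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0   # theoretical arrival time of the next cell

    def conforms(self, arrival: float) -> bool:
        if arrival < self.tat - self.limit:
            return False  # cell arrived too early: non-conforming
        self.tat = max(arrival, self.tat) + self.increment
        return True

# PCR of 1 cell per 10 time units, CDVT of 2 time units.
gcra = Gcra(increment=10.0, limit=2.0)
assert gcra.conforms(0.0)       # first cell; TAT becomes 10
assert gcra.conforms(9.0)       # 9 >= 10 - 2: slightly early but tolerated
assert not gcra.conforms(12.0)  # 12 < 20 - 2: too early, non-conforming
```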
The IETF Internet Protocol Performance Metrics (IPPM) working group is working to define metrics for Internet performance. The framework document defines criteria for those metrics, terminology, the metrics themselves, the methodology, and practical considerations including sources of uncertainty and error. There are some differences in time-related terminology between the ITU-T definitions and the IPPM working group definitions; a short summary of the differences is presented in the table below.
| IPPM term | ITU-T term | Meaning |
| --- | --- | --- |
| synchronization | time error | difference between two clocks |
| accuracy | time error from UTC | difference to real time |
| resolution | sampling period | the precision of the clock |
| skew | time drift | change in synchronization or in accuracy |
As full traffic analysis is not always feasible, the IPPM metrics are based on random sampling of traffic. The framework document recommends that Internet properties not be considered in probabilistic terms, as there is no static state in the Internet.
Grade of Service (GoS) has been used in the telecommunications industry to indicate the components that contribute to the overall quality of service the user experiences. Many components have both a human and a technical part: the technical part can be measured (like the bandwidth of voice) while the human part is subjective. There is a relation between the human and technical parts, but the exact mapping depends on many factors, for example the language used and other cultural factors. The GoS is defined as follows:
It may happen that in a network, or in part of a network, the volume of telephone traffic that arises exceeds the capacity for handling it without limitations, with the result that congestion occurs. These limitations affect the service provided to customers, and the degree of these limitations is expressed by an appropriate GoS parameter (e.g. probability of loss, average delay, failure rate due to congestion, etc.). GoS should therefore be regarded as providing information on the traffic aspects of the ``quality of service''.
In a circuit-switched network the GoS has been divided into two standards:
- Loss grade of service
- This standard has as a component the internal loss probability: for any call attempt, it is the probability that an overall connection cannot be set up between a given incoming circuit and any suitable free outgoing circuit within the switching network. For international digital telephone exchanges the internal loss probability may not exceed 0.2 % under normal load and 1 % under high load. The figures are higher for end-to-end connections: a mean of 2 % and 5 % for local and international connections respectively.
- Delay grade of service
- There are several components in this standard, depending on the technology used for signaling information. For ISDN circuit-switched services the delay components are defined for pre-selection, post-selection, the answer signal, and call release.
The Class of Service (CoS) concept divides network traffic into different classes and provides class-dependent service to each packet depending on the class it belongs to. While strict QoS has absolute measures for quality, CoS has relative measures: for example, one class gets a packet drop probability of 10⁻⁶ while another class gets a packet drop probability of 10⁻³.
To differentiate network traffic into different classes, the differentiation must be based on some factor. The factors include:
Differentiation can be made based on (possibly several) protocols. The protocol information may be more or less accurate and includes:
- Protocol identifier
- One can differentiate IP from other network-level protocols using link-level information, and TCP from UDP and ICMP using the protocol field in the IP header.
- Source port number
The only way to identify applications running over TCP or UDP is to look at port numbers and compare them to the list of well-known port numbers maintained by IANA/ICANN. While in most cases the mapping is correct, there are many cases where a service or client uses a port reserved for another application.
The source port identifies traffic originating from the server.
- Destination port number
- The destination port identifies traffic originating from the client to the server.
- Type of service and priority or precedence
The IPv4 header has a 3-bit precedence field, so there are 8 possible levels. In addition there is a type of service field, which was originally a bit pattern, then an enumeration value, and is now, together with the precedence field, an 8-bit Differentiated Services field (DS field).
IPv6 originally had a 4-bit priority field (8 levels for real-time traffic and 8 levels for elastic traffic), but it has been replaced with an 8-bit traffic class field (DS field).
- Source host address
- We can identify the end system sending data and classify traffic based on that (we can identify the customer).
- Destination host address
- We can identify the end system receiving data.
A flow is defined as a sequence of packets that have some common denominator. Depending on the granularity, it can be anything between:
- Source and destination network
- The packets between two networks share the same routing information (in single-class routing). In practice, the addresses have the same prefix (source and destination individually).
- The most fine-grained descriptor for a flow is the 5-tuple, which consists of source and destination addresses, transport protocol identifier, and source and destination ports, for example (192.0.2.1, 198.51.100.2, 6 (TCP), 33877, 80).
It is also possible that some other descriptor is used to identify a flow. IPv6 defines a 20-bit flow label that can be used, together with the source host address, to identify a flow.
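Grouping packets into flows by their 5-tuple can be sketched as follows; the addresses are RFC 5737 documentation addresses and the traffic is purely illustrative:

```python
from collections import Counter

# A flow key: (src addr, dst addr, protocol, src port, dst port).
packets = [
    ("192.0.2.1", "198.51.100.2", 6, 33877, 80),     # TCP to a web server
    ("192.0.2.1", "198.51.100.2", 6, 33877, 80),     # same flow
    ("192.0.2.1", "198.51.100.2", 17, 5004, 5004),   # a UDP (e.g. RTP) flow
]

flows = Counter(packets)  # per-flow packet counts keyed by 5-tuple
assert len(flows) == 2    # two distinct flows between the same hosts
assert flows[("192.0.2.1", "198.51.100.2", 6, 33877, 80)] == 2
```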
Development of the Internet Protocol (IP) started in the DARPA ARPANET in 1969 using IMP nodes. As the network grew, development work on a common fault-tolerant protocol started in 1974. The architecture and the core protocols were ready by the end of the 1970s and beginning of the 1980s. In January 1983 the ARPANET changed from the Network Control Protocol (NCP) to the Internet Protocol family (TCP/IP). The most commonly used application programming interface (API) for TCP/IP services was developed for BSD 4.2 UNIX in 1983; essentially the same interface is used on most platforms.
IP works roughly at the network layer of the ISO Open Systems Interconnection (OSI) model. The supporting protocols are the Internet Control Message Protocol (ICMP) (error reporting, configuration, and diagnosis), the Internet Group Management Protocol (IGMP) (multicast management), and the Address Resolution Protocol (ARP), which maps between link-layer addresses and IP addresses where needed.
IP is a datagram delivery service: each datagram contains enough information to carry it to its destination. There is no call setup: the service is connectionless. The network, however, makes no guarantee that a datagram is delivered to its destination. If a packet is lost, corrupted, misdelivered, or for some other reason not delivered, the network does nothing to recover from the failure. This service model is commonly called best effort or unreliable service. IP guarantees neither the order in which packets arrive at the destination nor that packets are delivered at most once (datagrams may be duplicated). However, each datagram has a limited lifetime.
Best-effort connectionless service is the simplest service an internetwork can provide, and this makes it possible to transfer datagrams over any link-layer technology. If the link loses some packets, that's fine; if the link delivers all packets, even better.
Transmission Control Protocol (TCP)
If we want to transport data reliably on top of IP, we need a transport protocol that hides the unreliability of IP from the application. The simplest solution is for the sender to send one data segment; if the receiver receives it successfully, it acknowledges the received segment. When the sender receives the acknowledgment it may send the second segment. If either the segment or the acknowledgment is lost, or the receiver is not online, the sender does not receive an acknowledgment within a specified time and retransmits the segment.
This is not very efficient, as the transmission speed is restricted by the round-trip delay. A better approach is to use a sliding window scheme: the receiver announces how much it is prepared to receive, and the sender can send that much without waiting for an acknowledgment.
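The throughput difference is easy to quantify: with at most one window of unacknowledged data in flight, throughput is bounded by the window size divided by the round-trip time. A small sketch with illustrative numbers:

```python
# Upper bound on throughput, ignoring transmission time and losses:
# at most one window of data can be in flight per round trip.

def max_throughput(window_bytes: float, rtt_s: float) -> float:
    """Throughput bound in bytes/second."""
    return window_bytes / rtt_s

rtt = 0.25  # a 250 ms round trip

# Stop-and-wait is a sliding window of one segment (say 1460 bytes):
assert max_throughput(1460, rtt) == 5840.0       # ~5.8 kB/s

# A 64 kB sliding window over the same path:
assert max_throughput(65535, rtt) == 262140.0    # ~262 kB/s
```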
The Transmission Control Protocol (TCP) provides a reliable byte stream using a sliding window flow control scheme. The original scheme, however, worked badly under congestion, as was first seen in the 1986 ``congestion collapses''. The scheme was then improved by introducing the following methods:
- Round-trip-time variance estimation
- Better estimates of when a segment is lost versus merely late. During a rising congestion event the delay increases sharply. The retransmit timer is set to the mean plus four times the variation.
- Exponential retransmit timer backoff
- Limit data rate sent to network to help clear out the congestion.
- Slow-start
- Probe for available bandwidth.
- More aggressive receiver ack policy
- Receiver acknowledges data as soon as possible to avoid retransmits.
- Dynamic window sizing on congestion
- Adapts for changed situation on network.
- Karn's clamped retransmit backoff
- Limit data rate.
- Fast retransmit
- Fast recovery if only one segment is lost.
In addition to the window used for flow control (i.e., so the sender does not overrun the receiver), the concept of a congestion window was introduced. The congestion window tells the sender how much it can send to the network, and the sender selects the minimum of the two windows.
The improvements led to more graceful operation under congestion. The remaining problem is that it is difficult to estimate (especially in the case of retransmissions) the round-trip delay, which is vital for maintaining a steady flow. This has been addressed with the timestamp option. Current TCP congestion control algorithms and discussion can be found in the literature.
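The ``mean plus four times variation'' retransmit-timer rule mentioned above is the Jacobson-style estimator later standardized for TCP (RFC 6298). A sketch with the conventional gains:

```python
# Jacobson-style RTO estimation: smoothed RTT plus four times the mean
# deviation, with the conventional gains alpha = 1/8 and beta = 1/4.

class RtoEstimator:
    def __init__(self):
        self.srtt = None     # smoothed round-trip time
        self.rttvar = None   # mean deviation of the RTT

    def update(self, sample: float) -> float:
        """Feed one RTT measurement; return the new retransmit timeout."""
        if self.srtt is None:            # first measurement
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - sample)
            self.srtt = 0.875 * self.srtt + 0.125 * sample
        return self.srtt + 4 * self.rttvar

est = RtoEstimator()
assert est.update(100.0) == 300.0  # srtt = 100 ms, rttvar = 50 ms
rto = est.update(120.0)            # a late sample pulls the timer up
assert rto > est.srtt              # timeout stays above the smoothed RTT
```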
TCP is an elastic transport protocol: it adapts its transfer rate to the available bandwidth on the network. It makes no effort to maintain a minimum rate; it only delivers data in a reliable manner.
The User Datagram Protocol (UDP) provides program addressing (ports) and an optional data integrity check (a checksum for the payload). It does not add any reliability to delivery, hence the nickname ``Unreliable''.
UDP is suitable as a carrier for real-time traffic, as it has no flow control or retransmissions that could affect timing. With real-time traffic, retransmissions are generally useless, as retransmitted data arrives too late to be of any use.
For a real-time application, there is more need for control than UDP provides. The Real-time Transport Protocol (RTP) and the accompanying RTP Control Protocol (RTCP) are designed for this purpose. RTP packets, encapsulated in UDP packets, carry the actual real-time data and have a sequence number and a timestamp. The sequence number makes it possible for the receiver to detect dropped packets. The timestamp is used to detect jitter introduced by the network and end systems.
The lack of any QoS guarantees or levels in the Internet is considered one of the main limitations on wider use of the Internet. To solve this problem the IETF Internet Integrated Services working group was formed. It defined a framework for resource reservation and performance guarantees. This framework is independent of the protocols used for signaling and of implementation details.
Each network node is divided into two parts: a background process and traffic forwarding. The background process takes care of routing, reservation setup, and admission control, in addition to management. The traffic forwarding part classifies traffic based on information in the traffic control database, and based on this classification the traffic is scheduled into the right queues.
Currently there are two service classes. Both of them support merging of flows (aggregation) for scalability and have rules for substituting ``as good or better'' service if the requested service is not available.
Guaranteed Quality of Service
The guaranteed service is designed for applications that require a certain minimum bandwidth and maximum delay. Since the service provides firm (mathematically provable) bounds on end-to-end queuing delay, it makes it possible to guarantee both the delay and the bandwidth.
The traffic is considered in terms of a fluid model: delivered queuing delays do not exceed the fluid delays by more than the specified error bounds. When the resource reservation is being made, each node calculates its values for the error terms C and D.
As long as the traffic conforms to the traffic specification (TSpec: b, r, p, m (a minimum policed unit, used to estimate link-level overhead), and M), the network element must transmit the packets conforming to the receiver specification (RSpec: R and S (a slack term: ``extra'' time by which the node can delay the datagram)). If the traffic exceeds the traffic specification, the non-conforming datagrams must be treated as best-effort datagrams. They should not be given any precedence over other best-effort datagrams (to avoid misuse), nor should they be discarded as erroneous packets, since originally conforming traffic may become non-conforming within the network.
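For reference, the end-to-end delay bound published for the guaranteed service (RFC 2212) takes roughly the following form, with the accumulated per-node error terms written Ctot and Dtot; the numeric values below are purely illustrative:

```python
# End-to-end queuing delay bound for the guaranteed service, roughly as in
# RFC 2212. TSpec: bucket depth b, token rate r, peak rate p, maximum packet
# size M; R is the reserved rate; Ctot and Dtot are the summed C and D
# error terms of the nodes along the path.

def delay_bound(b, r, p, M, R, Ctot, Dtot):
    if R >= p:
        return (M + Ctot) / R + Dtot
    # case r <= R < p: the bucket can drain at the peak rate for a while
    return ((b - M) / R) * ((p - R) / (p - r)) + (M + Ctot) / R + Dtot

# A flow with r = 1 Mbit/s (125000 B/s), p = 2 Mbit/s, reserving R = 1.5 Mbit/s.
bound = delay_bound(b=10000.0, r=125000.0, p=250000.0,
                    M=1500.0, R=187500.0, Ctot=3000.0, Dtot=0.002)
assert 0 < bound < 1  # on the order of tens of milliseconds here
```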
The controlled-load service provides, independent of network element load, the client data flow with QoS closely approximating the QoS the flow would receive in an unloaded network. It uses capacity (admission) control to assure this.
As in the guaranteed service, the service is provided for a flow conforming to a TSpec. Applications may assume that only very few if any packets are lost, and that only very few if any packets greatly exceed the minimum transit delay. If a non-conforming packet is received, the network element must ensure that
- 1. the other controlled-load flows receive the expected QoS,
- 2. the excess traffic does not have an unfair impact on best-effort traffic,
- 3. the excess traffic is delivered on a best-effort basis if sufficient resources exist.
All resource reservation systems need a setup (signaling) protocol to allocate the needed resources from the network. For IP networks, the Resource ReSerVation Protocol (RSVP) is specified. RSVP is independent of the Integrated Services model and can be used with a variety of QoS services, just as Integrated Services can be used with a variety of setup protocols.
RSVP requests are receiver-initiated: this provides better scalability for large multicast receiver groups, more flexible group membership, and support for diverse receiver requirements. The sender sends Path messages, which record the route packets travel to the receiver and carry traffic characterization information. On reception of a Path message, the receiver sends a Resv message to reserve the needed capacity from the network. This message travels hop-by-hop along the same route (in the other direction) that the Path message traveled.
The RSVP supports three types of reservations:
- Wildcard-Filter (WF)
- The WF reservation is shared by all senders: it is propagated towards all senders and is extended automatically to new senders as they appear.
- Fixed-Filter (FF)
- The reservation is distinct (not shared between senders) and the sender is specified explicitly.
- Shared-Explicit (SE)
- The reservation is shared by selected senders. Compared to the WF reservation, the receiver can select the set of senders.
One of the crucial problems is charging for resource reservations. If resources can be reserved without charge, all network users will want to reserve all of the bandwidth for themselves. For a network to provide QoS, there must be some monetary cost associated with a reservation that corresponds to the amount of resources reserved. There is currently no billing mechanism defined, but there is a mechanism for cryptographic authentication. Key management and accounting are still big issues to be solved.
In larger networks there are generally several alternative routes between two hosts. These alternative routes provide added fault tolerance and the possibility to share network load. Routing algorithms are based on graph theory, and practical implementations within one administrative domain are based either on distance vectors or, more commonly today, on topology and link state, for example Open Shortest Path First (OSPF).
If we include QoS parameters, for example needed bandwidth or delay bounds, the picture changes: we must take those requirements into account when calculating the network topology. ATM Private Network-Network Interface (PNNI) routing uses the source traffic descriptor to determine which links to include in the topology and which to exclude. There can be a great number of different topologies depending on the requirements, so handling them can be a difficult task.
Another way QoS constraints affect routing is through changes in route tables during the lifetime of a connection. If the network and the routing algorithm are not stable, the route may change very often, especially if the network is congested. If routing information changes after reservation setup, there are no resources reserved for this particular connection on the new route. For this reason, RSVP requires that the route for a given session is fixed: the concept is called path pinning, and it relies on periodic RSVP Path updates to move the reservation to the new route.
The IP and ATM worlds have one basic difference: ATM is connection-oriented while IP is connectionless. There are two basic ways to realize IP-over-ATM: using permanent virtual circuits (PVCs), which emulate point-to-point links (leased lines), and switched virtual circuits (SVCs), which are set up on demand. A reservation in ATM is ``hard state'' (active until it is released) while in RSVP the reservation is ``soft state'' (active as long as there are periodic updates).
The mapping of Integrated Services to ATM service classes is quite straightforward and is presented in Table 3. The ATM traffic descriptors (PCR, SCR, MBS) are set to values based on the peak rate, the bucket depth, the RSpec, and the receiver TSpec.
| Integrated Service Class | ATM Service Class |
| --- | --- |
| Guaranteed Service | CBR or rt-VBR |
| Controlled Load | nrt-VBR or ABR (with minimum rate MCR) |
| Best Effort | UBR or ABR |
For point-to-point links, bandwidth allocation and prioritization are done by the router. In multipoint networks there may be shared media where each host can send as much data as it wants. Some mechanism is needed to make sure that the bandwidth allocations do not exceed the available bandwidth in the network. This is the task of a bandwidth broker or Subnet Bandwidth Manager (SBM).
Each RSVP request for bandwidth goes through the SBM, and if sufficient capacity exists in the network, it grants the request. It does not, however, police requests in any way; it is just a bookkeeper.
One of the main problems with any resource reservation technology is the burden of maintaining state information for each flow. In some central network nodes the number of simultaneous flows may exceed a hundred thousand. If we estimate that each flow lasts 10 seconds, more than 10,000 flows come and go every second. For reference, a large telephone exchange can handle up to a million BHCA (busy hour call attempts), which equals a few hundred calls per second on average.
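The arithmetic above can be checked directly; in steady state the flow arrival rate equals the number of simultaneous flows divided by the mean flow duration (a Little's-law style relation):

```python
# Steady state: arrival_rate = simultaneous_flows / mean_duration.
simultaneous_flows = 100_000
mean_duration_s = 10
flow_arrivals_per_s = simultaneous_flows / mean_duration_s
assert flow_arrivals_per_s == 10_000  # "more than 10,000 flows per second"

# A large exchange handling one million busy-hour call attempts:
bhca = 1_000_000
calls_per_s = bhca / 3600
assert 250 < calls_per_s < 300        # "a few hundred calls per second"
```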
The number of flows makes maintaining per-flow state information infeasible in core routers. The time needed to look up the database entry for the 5-tuple of each packet is considerable overhead compared to a normal destination address lookup in the routing table. The solution is to use per-packet stateless information.
For the differentiated service approach several requirements were identified and addressed. The key requirements are:
- Independence of applications, services, and policing.
- Incremental deployability (only some part(s) of the path), and interoperability with other QoS technologies.
- No customer or microflow information or state in core network nodes, and no hop-by-hop signaling. Core nodes use only a small set of simple aggregated classification policies.
The differentiated services (DS) architecture is presented in Figure 4. Traffic is classified at the edges of the network (Figure 5). Each datagram is possibly conditioned and assigned to one of the behavior aggregates, which are identified by DS codepoints. In the core of the network, packets are forwarded according to the per-hop behavior (PHB) associated with the DS codepoint.
A customer (possibly other network operator) makes a service level agreement (SLA) with the network operator. The SLA can be either qualitative (``Traffic offered at service level A will be delivered with low latency'') or quantitative (``90% of in profile traffic delivered at service level B will experience no more than 50 ms latency'') .
Based on the SLAs, the network provider assigns proper service level specifications (SLSs) to boundary nodes. The nodes have four components, as seen in Figure 5. Meters measure whether submitted traffic conforms to a profile; based on the measurements, the other components implement the policing. Markers re-mark traffic: to demote out-of-profile traffic to a different PHB, to perform codepoint mutation required by the SLS, and to ensure that only valid codepoints are used. Shapers delay traffic so that it does not exceed the profile, and droppers discard non-conforming traffic.
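A meter and marker pair can be sketched as a token bucket that demotes out-of-profile packets rather than dropping them. This is a simplified single-bucket sketch with illustrative codepoints; real DiffServ meters such as srTCM (RFC 2697) use two buckets and three colors:

```python
# Single-rate token-bucket meter: in-profile packets keep AF11, packets
# exceeding the profile are demoted to AF13 (not dropped).

class TokenBucketMarker:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8     # token fill rate in bytes/second
        self.burst = burst_bytes     # bucket depth
        self.tokens = burst_bytes    # bucket starts full
        self.last = 0.0

    def mark(self, now: float, size: int) -> str:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return "AF11"            # in profile
        return "AF13"                # out of profile: demoted, not dropped

m = TokenBucketMarker(rate_bps=8000.0, burst_bytes=2000.0)  # 1000 B/s fill
assert m.mark(0.0, 1500) == "AF11"  # fits in the initial burst
assert m.mark(0.1, 1500) == "AF13"  # bucket nearly empty: out of profile
assert m.mark(2.0, 1500) == "AF11"  # refilled after ~1.9 s of idle time
```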
The per-hop behavior (PHB) groups are the actual mechanism for implementing the needed service differentiation in core networks. There should not be too many PHB groups, as that complicates efficient router design. Currently there are proposals for two PHB groups:
- Assured Forwarding PHB Group (AF)
provides four independently forwarded traffic classes, each with three drop precedences. A single DS node does not reorder packets of the same microflow if they belong to the same AF class. Note that this does not guarantee that packets are not reordered as they travel through the network, as datagrams may take different paths when routing information changes.
Each class is assigned some partition of bandwidth and buffer capacity. One way to use the classes is an ``Olympic service'': packets are assigned to ``gold'', ``silver'', and ``bronze'' classes, with the ``gold'' class carrying a lighter load than the other two. The customer may select one of these classes (each of which has a different cost).
- An Expedited Forwarding PHB Group (EF)
- can be used to build a low-loss, low-latency, low-jitter, assured-bandwidth, end-to-end service through a DS domain. This makes it possible to provide end-to-end ``virtual leased lines'' or a Premium service.
Some service examples have been defined; the codepoints used in the examples are not officially assigned.
- Better than Best-Effort (BBE)
- provides a service prioritized relative to best effort. Traffic conforming to the contract (for example: 1 Mbps, any egress point) is marked AF11, and non-conforming traffic is marked AF13. A web service provider can use this, for example, to provide better performance to its clients.
- Leased Line Emulation
- uses EF. A traffic contract ``1 Mbps, egress point B, discard non-conforming'' provides a virtual 1 Mbps leased line to the destination at egress point B.
- Quantitative Assured Media Playback
- is similar to Leased Line Emulation, but bursts are allowed and no traffic is immediately discarded. Traffic not exceeding the basic rate (say 100 kbps) is marked with AF11, bursts at a maximum rate of 200 kbps and size up to 100 kB are marked with AF12, and larger bursts with AF13.
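The media-playback example above amounts to a two-rate, three-level marker. The sketch below uses the example's parameters (100 kbps basic rate, 200 kbps/100 kB burst allowance); the classes and implementation are illustrative, not a standardized algorithm.

```python
AF11, AF12, AF13 = 10, 12, 14  # standard AF class-1 DSCP values

class Bucket:
    """Simple token bucket used to test conformance to one rate/burst pair."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def take(self, size, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False


class PlaybackMarker:
    """Marks traffic AF11 within the basic rate, AF12 for allowed bursts,
    AF13 for everything beyond; nothing is discarded at the marker."""

    def __init__(self):
        self.committed = Bucket(100_000, 10_000)   # basic rate 100 kbps
        self.peak = Bucket(200_000, 100_000)       # bursts up to 100 kB

    def mark(self, size, now):
        if self.committed.take(size, now):
            return AF11
        if self.peak.take(size, now):
            return AF12
        return AF13   # largest bursts get the highest drop precedence
```

Under congestion, the network then drops AF13-marked bursts first and protects the basic-rate traffic marked AF11.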
It is commonly agreed that RSVP and the integrated services do not provide enough scalability in high-speed core networks. The differentiated services, on the other hand, may not have enough granularity to work on just a few flows, resulting in poor-quality service in the access network. One proposed solution is to use RSVP/IntServ at the edges of the Internet (in the access networks) and differentiated services in the core networks. A differentiated services network is seen by RSVP/IntServ connections as a single hop. The RSVP/IntServ network edge nodes and the DiffServ network border nodes take care of mapping the RSVP requests and flows to the proper differentiated services PHB group.
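A border node's mapping step can be sketched as a simple lookup. The particular mapping shown (Guaranteed Service onto EF, Controlled-Load onto AF) is a common choice rather than a mandated one, and the function name is invented for the example.

```python
EF = 46           # Expedited Forwarding DSCP
AF11 = 10         # Assured Forwarding, class 1, lowest drop precedence
BEST_EFFORT = 0   # default PHB

# Assumed mapping from IntServ service types to DiffServ PHBs.
INTSERV_TO_PHB = {
    "guaranteed": EF,        # hard delay/bandwidth bound -> Expedited Forwarding
    "controlled-load": AF11, # "lightly loaded network" service -> Assured Forwarding
    "best-effort": BEST_EFFORT,
}

def map_reservation(rsvp_service):
    """Return the DSCP the border node stamps on packets of this flow."""
    return INTSERV_TO_PHB.get(rsvp_service, BEST_EFFORT)
```

The border node also has to admission-control the aggregate: it may only accept an RSVP reservation if the corresponding PHB still has capacity in the core.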
The use of the IP Security (IPsec) protocols causes some problems for both RSVP and differentiated services.
If the Encapsulating Security Payload (ESP) is used, the upper protocol layers are encrypted and the network nodes cannot see the port numbers or protocols. RSVP uses port numbers to distinguish between different flows between two hosts (for example, one flow for data communication and one for an audio transfer).
Since the transport layer is encrypted, the network nodes cannot see the ports and thus cannot differentiate between flows requiring real-time handling and bulk transfers. If the datagram is only authenticated, the port numbers are visible, but they are at a different position in the packet, which may cause performance problems in routers.
This problem was solved by introducing a ``virtual destination port'', which is actually the IPsec Security Parameter Index (SPI). It must then be ensured that flows needing different QoS have different SPIs.
If services are classified for differentiated services networks based on port numbers, the encryption hides the needed information. In this case the end system should be able to tell the network what kind of service each packet needs.
There are three main uses for tunneling: the first is to build (possibly secure) virtual private networks (VPNs); the second is to provide transport for protocols that the intervening network does not support (currently, for example, IPv6); the third is tunneling subscriber traffic from the access server to the Internet service provider (ISP). The access server can be located in local telephone exchanges so that Internet connections do not reserve circuit-switched capacity in the telephone network.
As the original IP packets are encapsulated in IP packets, the port numbers or DS codepoints are not visible to routing nodes. This problem has been solved for RSVP by using IP-in-UDP encapsulation: the UDP source ports are used to identify individual (or aggregated) flows, the RSVP reservations are tunneled, and corresponding reservations are also made for the tunnel. For differentiated services the solution is simpler: the DS codepoint is copied to the IP datagram that carries the original IP datagram. This way the datagram receives the same service as the original would. Packet reordering may cause problems with some tunneling protocols, so packets in the same flow should have the same DS codepoint.
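The DiffServ tunneling step is a one-byte copy: the ToS/DS byte is the second byte of the IPv4 header, and the DSCP is its top 6 bits. A minimal byte-level sketch (function name invented; a real implementation would also recompute the outer header checksum):

```python
def copy_dscp_to_outer(outer_header: bytearray, inner_packet: bytes) -> None:
    """Copy the inner datagram's DSCP into the outer (tunnel) header so the
    encapsulated packet receives the same PHB. The outer header's own ECN
    bits (the low 2 bits of the ToS/DS byte) are preserved."""
    dscp = inner_packet[1] >> 2                               # top 6 bits
    outer_header[1] = (dscp << 2) | (outer_header[1] & 0x03)  # keep outer ECN
```

For example, an inner packet marked AF11 (DSCP 10) yields an outer ToS byte of 10 << 2 = 40 plus whatever ECN bits the outer header already carried.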
WiMAX provides quality of service at both the physical and MAC layers using the following techniques:
- Frequency division duplex (FDD) - The application of frequency-division multiple access to separate outward and return signals. The uplink and downlink sub-bands are said to be separated by the "frequency offset". Frequency division duplex is much more efficient in the case of symmetric traffic.
- Time division duplex (TDD) - The application of time-division multiple access to separate outward and return signals. Time division duplex has a strong advantage in the case where the asymmetry of the uplink and downlink data speed is variable.
- Forward Error Correction (FEC) - The technique to allow the receiver to correct some errors without having to request a retransmission of data.
- Orthogonal frequency-division multiplexing (OFDM) - The frequencies and modulation of FDM are arranged to be orthogonal to each other, which eliminates most of the interference between channels.
- Orthogonal Frequency Division Multiple Access (OFDMA) - Works by assigning a subset of subcarriers to individual users.
MAC Layer - Service Types
- Unsolicited Grant Service (UGS) - Supports real-time data streams with fixed-size packets issued at regular intervals, e.g. VoIP
- Real Time Polling Service (rtPS) - Supports real-time data streams with variable-size packets issued at regular intervals, e.g. MPEG video
- Non Real Time Polling Service (nrtPS) - Supports delay-tolerant data streams with variable packet sizes, for which a minimum data rate is specified, e.g. FTP
- Best Effort (BE) - Supports data streams where no minimum service level is required and packets are handled on a space-available basis
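One simple way to realize these service types in a MAC scheduler is strict priority: UGS grants first, then the polled real-time and non-real-time classes, with BE served from whatever capacity remains. This is only an assumed policy for illustration; real WiMAX schedulers are considerably more elaborate.

```python
# Assumed strict-priority ordering of the four 802.16 service types.
PRIORITY = {"UGS": 0, "rtPS": 1, "nrtPS": 2, "BE": 3}

def schedule(pending):
    """pending: list of (service_type, payload) tuples awaiting uplink grants.
    Returns them in the order the scheduler would serve them."""
    return sorted(pending, key=lambda item: PRIORITY[item[0]])
```

Python's sort is stable, so requests within the same service type keep their arrival order.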