Networks Horizon


Thursday, 26 April 2012


Quality of Service (QoS)-Part9

IP Precedence and DSCP

Why is Marking Necessary?

Cisco routers have the built-in capability to classify and re-classify traffic at every hop along the way, but the problem is that we have to look inside the packet at every hop, which is processor intensive and can become a big problem in a large network such as an ISP's.


So it is always good to mark your traffic at the trust boundary. The trust boundary is the first place where you decide to mark your traffic; that marking will be trusted throughout the network. The first device at the trust boundary classifies and marks the traffic, and the rest of the devices in the path can make decisions based on that marking without opening up the whole packet. In a large ISP network, this is a scalable solution.


According to Cisco, we should mark the traffic as close to the source as we possibly can, so the trust boundary should be near the source. That device might be the source device itself, a switch, or a router.


But in practice, based on requirements, the best idea is to apply QoS before the traffic enters the Wide Area Network, because this is the point that is most likely to be congested. Alternatively, it is a good idea to apply QoS on LAN devices (switches, PCs, or QoS-enabled IP phones) only if the LAN is prone to frequent congestion. Another thing that might prevent applying QoS in the LAN is that many LAN devices are not QoS capable.
Layer 2 and Layer 3 Marking
Layer 2 Marking
The layer 2 markings below are stripped at each router, which then applies QoS to the exposed IP packet:
Class of Service (CoS)
Frame-relay DE bit
ATM CLP bit
MPLS Exp bits


Layer 3 Marking
The markings below are encapsulated inside layer 2 frames/cells/labels and do not change while passing through routers, so they are more reliable.
  • IP Precedence
  • DSCP (Differentiated Services Code Point)

IP Precedence

It is placed in the layer 3 header, in the Type of Service (ToS) field. The ToS field is 8 bits, but IP Precedence uses only the first 3 bits of it. Therefore, IP Precedence marks traffic with values 0-7. Converting all 8 available bits from binary to decimal would give values 0-255, but IP Precedence uses only 0-7, because 3 bits can express a maximum of 8 values (0 through 7).

Binary review:

first bit from right ON means=1
second bit from right ON means=2
third bit from right ON means=4
fourth bit from right ON means=8
fifth bit from the right ON means=16
Sixth bit from the right ON means=32
Seventh bit from the right ON means=64
Eighth bit from the right ON means=128
So:
0=000=0
1=001=1
2=010=2
3=011=2+1
4=100=4
5=101=4+1
6=110=4+2
7=111=4+2+1
Thus,
Available IP Precedence classes:


7- Reserved (BPDU, Hello) ============================= 111
6- Reserved (routing/synchronisation protocols) ========== 110
5- Voice Bearer ======================================= 101
4- Video-conferencing ================================== 100
3- Call-signalling (statistics for the audio calls) ========== 011
2- High-priority Data (like Citrix, SAP) ================== 010
1- Medium-priority Data =============================== 001
0- Best-effort Data (web surfing, peer-to-peer) =========== 000
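The table above can be sanity-checked with a couple of lines of Python (the function and dictionary names below are mine, purely for illustration): IP Precedence is simply the top 3 bits of the ToS byte.

```python
# Illustrative sketch: extract IP Precedence from the 8-bit ToS byte.
# PRECEDENCE_NAMES mirrors the class list above; the names are not an API.
PRECEDENCE_NAMES = {
    7: "Reserved (BPDU, Hello)",
    6: "Reserved (routing/synchronisation protocols)",
    5: "Voice Bearer",
    4: "Video-conferencing",
    3: "Call-signalling",
    2: "High-priority Data",
    1: "Medium-priority Data",
    0: "Best-effort Data",
}

def ip_precedence(tos_byte: int) -> int:
    """IP Precedence is the top 3 bits of the ToS byte (values 0-7)."""
    return (tos_byte >> 5) & 0b111

# ToS byte 10100000 -> top 3 bits are 101 -> precedence 5 (Voice Bearer)
print(ip_precedence(0b10100000))  # 5
```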


DSCP (Differentiated Services Code Point)


DSCP is the newer method of marking. DSCP also uses the ToS field, but to maintain compatibility with the older IP Precedence standard, its designers arranged the bits so that an IP Precedence-only router can still understand the importance of a packet coming from a DSCP-configured router. In DSCP, the 8 bits of the ToS field are divided into 3 groups: the first group contains the first 3 bits, the second group contains the next 3 bits, and the third group contains the remaining 2 bits.


The first group is called the Major class.
The second group is called the Minor class or Drop Precedence class.
The third group is called the ECN (Explicit Congestion Notification) bits.


The first two groups are mainly responsible for QoS marking, which means the first 6 bits can be used in DSCP. Converting to decimal, DSCP can carry values 0-63.
111111=63  


Instead of treating all 64 values (0-63) as a flat space, Cisco defined two classes, major and minor, to make DSCP backward compatible with IP Precedence.
Major Class
The major class occupies the first 3 bits, giving values between 0-7. This way DSCP becomes
compatible with IP Precedence and provides the same set of markings that IP Precedence defined.


When Cisco defined DSCP, they defined some new classes. Like IP Precedence, higher-numbered classes are treated better than lower-numbered classes, meaning DSCP AF4 is better than DSCP AF3.
For example:


Default Class=========DSCP class 0======IP Prec 0===000
Assured Forwarding (AF)-
AF1========DSCP class 1 ==========IP Prec 1 =====001
AF2========DSCP class 2 ==========IP Prec 2 =====010
AF3========DSCP class 3 ==========IP Prec 3 =====011
AF4========DSCP class 4 ==========IP Prec 4 =====100
Expedited Forwarding (EF)=======DSCP class 5 =====IP Prec 5==101


DSCP classes 6 and 7 are reserved and should never be used, so they are not shown in DSCP configuration either.

Minor Class (Drop Precedence)

It is a 3-bit field that expresses the drop preference within a major class. The last bit of the minor class is always 0, so DSCP effectively only uses the left 2 bits of it. With those two bits we use a maximum of 3 values.


But unlike the major class, in the minor class lower is better. This is why it is also called drop precedence. In the minor class, three values are allowed (1, 2, 3) for each major AF class (1-4). For example:


AF1 --------DSCP class 1 ------IP Prec 1
  AF11
  AF12
  AF13
AF2 --------DSCP class 2 ------IP Prec 2
  AF21
  AF22
  AF23
AF3 --------DSCP class 3 ------IP Prec 3
  AF31
  AF32
  AF33
AF4 --------DSCP class 4 ------IP Prec 4
  AF41
  AF42
  AF43


Some calculations


AF11 means :


Major=1=3bits================001
Minor=1=2bits+1 bit (unused)==01 +0==010


In decimal=major+minor=001010=8+2=10


AF31 means :


Major=3=3bits==011
Minor=1=2bits+1 bit (unused)==01+0=010


In decimal=major +minor=011010=16+8+2=26


AF43 means :


Major=4=3bits=100
Minor=3=2bits+1 bit (unused)=11+0=110


In decimal=100110=32+4+2=38
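The calculations above all follow one pattern: the major class sits in bits 5-3 and the drop precedence in bits 2-1, so AFxy always works out to 8x + 2y. A short Python sanity check (the function name is mine):

```python
def af_dscp(major: int, minor: int) -> int:
    """Decimal DSCP of AF<major><minor>: major class in the top 3 bits,
    drop precedence in the next 2 bits, last bit unused -> 8*major + 2*minor."""
    if not (1 <= major <= 4 and 1 <= minor <= 3):
        raise ValueError("AF classes use major 1-4 and minor 1-3")
    return (major << 3) | (minor << 1)

print(af_dscp(1, 1))  # AF11 = 001010 = 10
print(af_dscp(3, 1))  # AF31 = 011010 = 26
print(af_dscp(4, 3))  # AF43 = 100110 = 38
```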


So with the major and minor classes we get 12 DSCP AF classes. Adding one default class and one EF (Expedited Forwarding) class to these 12 AF classes gives 14 commonly used DSCP classes, against the 8 classes of IP Precedence. In decimal, DSCP values can range between 0-63.
Class Selector
Class Selector is used when you want IP Precedence-like behaviour on a DSCP-enabled router. It is another term used in DSCP: it is the functional equivalent of IP Precedence, they just call it a class selector in DSCP speak.


Note that drop preference isn't used in any of the 7 available CS classes, just as it isn't in IP Precedence.
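Since a class selector only sets the three major-class bits and leaves the drop-precedence bits at zero, CSn always equals 8 × n in decimal. A one-line Python illustration (the function name is mine):

```python
def class_selector_dscp(n: int) -> int:
    """CSn: major-class bits only, drop precedence zero -> DSCP = 8 * n."""
    return n << 3

# CS1..CS7 line up with IP Precedence 1..7
print([class_selector_dscp(n) for n in range(1, 8)])  # [8, 16, 24, 32, 40, 48, 56]
```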





Quality of Service (QoS)-Part10
Implementing QoS on switches


Why do we require QoS in a LAN where we already have lots of bandwidth available?


Reason 1: It allows traffic to be classified and marked as close to the source as possible, which is Cisco's recommended strategy.
Reason 2: QoS on Catalyst switches has to be really fast. Unlike on a router, QoS queuing on a switch is performed in hardware because of its architecture. This is why it is recommended to run features like NBAR on switches: it saves a lot of processing on routers, which then only need to treat pre-tagged packets.

Although it is much faster to run QoS inside a switch, the major drawback is that all of the QoS is integrated into the switch ASIC (the switching engine in hardware). So every Catalyst switch is different, in the sense that each model has a different hardware engine.

Reason 3: Policing can be performed as close to the source as possible. We can police per port and per VLAN.

Switching Capabilities for QoS

  1. Queues (Priority or Standard)
  2. Threshold: Dropping method (Tail Drop, WRED)

For example:

  • 2Q2T = 2 standard queues with 2 thresholds per queue
  • 1P2Q2T = 1 priority queue, 2 standard queues with 2 thresholds per queue

Queues configuration methods:

  1. Priority Queuing-True Priority
  2. Weighted Round Robin (WRR)-Custom Queuing
  3. WRR with Priority Queue-PQ+CQ  ( this is LLQ for switches)


In CAT2950 switches, queue number 4 is a priority queue by default.

for example: 1PQ+3CQ

queues   1     2    3    4
packets  100   20  50    0

for example: 4CQ (WRR)

queues   1     2    3    4
packets  100   20  40    10

Threshold

Some switch models do not have thresholds defined, but those that do use them as below:

  1. Tail Drop (if one threshold is defined)
  2. WRED (if 2 thresholds are available)

How to enable QoS on a switch:

Command:

switch(config)#mls qos

Once this command is applied another command is automatically added into configuration:

switch(config)#mls qos map cos-dscp 0 8 16 26 32 46 48 56

This means that when we receive CoS markings, they are automatically converted into the corresponding DSCP values. For example, CoS value 2 maps to a DSCP value of 16; similarly, CoS value 7 maps to a DSCP value of 56.
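Read as a lookup table, the auto-generated map above is just "position = CoS value, entry = DSCP value". A Python illustration of the same data (not a Cisco API):

```python
# "mls qos map cos-dscp 0 8 16 26 32 46 48 56": the i-th value in the list
# is the DSCP assigned to incoming CoS i.
COS_TO_DSCP = dict(enumerate([0, 8, 16, 26, 32, 46, 48, 56]))

print(COS_TO_DSCP[2])  # 16
print(COS_TO_DSCP[7])  # 56
```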

Other commands:

switch(config)#mls qos map ?
       cos-dscp
       dscp-cos
       ip-prec-dscp
       more..

For example::

switch(config)#mls qos map dscp-cos <dscp> <dscp> <dscp>.. to <cos_value>
switch(config)#mls qos map dscp-cos 4 5 6 7 to 1

Marking when the connected device is an IP phone

switch(config)# int fa0/1
switch(config-if)#mls qos trust device cisco-phone

The command above will detect a connected Cisco phone through CDP and trust its marking. By default, Cisco phone packets will carry a marking of IP Prec 3. If any other device is connected, the switch will mark the traffic with a CoS value of zero. As already discussed, CDP must be enabled on the switch to detect the Cisco phone.

If you are not using a Cisco phone but some other IP phone, you can simply use the command below to trust the IP Precedence value of all incoming IP traffic:

switch(config-if)#mls qos trust ip-precedence


Queuing (Generally we have 4 queues)

Starting Weighted Round Robin
switch(config-if)# wrr-queue cos-map 1 0 1 2
switch(config-if)# wrr-queue cos-map 2 3 4
switch(config-if)# wrr-queue cos-map 3 6 7
switch(config-if)# wrr-queue cos-map 4 5


Assigning Bandwidth (In Relative values)

switch(config-if)# wrr-queue bandwidth 4 5 10 1

The command above defines how much weight (traffic) we grab from each queue on a round-robin basis. With weights 4, 5, 10, and 1 (a total of 20), the third queue has been assigned 50% of the bandwidth.
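The weights are relative, so each queue's share is its weight divided by the sum of all the weights. A quick Python check (the function name is mine):

```python
def wrr_shares(weights):
    """Relative bandwidth share of each queue under Weighted Round Robin."""
    total = sum(weights)
    return [w / total for w in weights]

# "wrr-queue bandwidth 4 5 10 1": total weight 20, so queue 3 gets 10/20 = 50%
print([f"{share:.0%}" for share in wrr_shares([4, 5, 10, 1])])
# ['20%', '25%', '50%', '5%']
```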

In the CAT 2950, the fourth queue is the priority queue by default, so the configuration should look like this:

switch(config-if)# wrr-queue bandwidth 4 5 10 0

The first three queues will work in weighted round robin; the fourth queue's bandwidth weight is zero and its traffic always gets priority.

In the CAT 3550, the ASIC behaviour is different, as discussed, so there is no need to put zero when the priority queue is defined; an additional command is needed, as below:

switch(config-if)# wrr-queue bandwidth 4 5 10 1
switch(config-if)# priority-queue out

In the case above, the fourth queue's bandwidth value will be ignored, and it then behaves like the CAT 2950.

Other scenarios

If it is required to hardcode on a switch port that everything received on that port should have a CoS value of 2:

switch(config-if)# mls qos cos 2




Tuesday, 10 April 2012

Quality of Service (QoS)-Part8
Compression and LFI


Types of compression:


1.Payload Compression
2.Header Compression


Payload Compression
1. Traditional compression
   STAC = sacrifices processor load on the router to get more bandwidth on the link
   PRED = Predictor = sacrifices router memory to get more bandwidth on the link
   MPPC = Microsoft Point-to-Point Compression, for Microsoft-enabled devices, like old
   DOS systems


2. We also have compression via the audio codec, for example:
   G.711 (primary) = 64 kbps for the voice
   G.723 = 6.3 kbps for the voice; it's being phased out.
   G.729 = 8 kbps for the voice


Header Compression
Header compression is primarily for voice. With an audio codec compression technique like G.729, the header is sometimes larger than the payload itself, so by compressing the header we gain a lot with RTP. Strictly speaking, this shouldn't be called header compression but header suppression.


TCP Header Compression


It is true header compression. TCP generally sends packets with large payloads; a TCP packet is typically around 1500 bytes in size. So by compressing the header we generally do not save much bandwidth but do add router processing. But in some conditions we can usefully enable TCP header compression: for example, a Telnet packet carries a payload of 1 byte and the rest is header.




RTP Header Compression (cRTP)


RTP header compression is not a true compression. During a single session, e.g. when we pick up the phone and start talking, the IP addresses, port numbers, DLCI, or MAC addresses generally do not change. RTP header suppression suppresses the header instead of compressing it: it caches the header information that is being repeated and keeps the session open based on the session sequence. It can keep track of incoming packets on the basis of the particular session number they belong to, so it does not need to process the full header every time. On average, it takes a 40-byte header and compresses/suppresses it down to a 2-3 byte header.
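To see what that saves on a voice stream, here is a back-of-the-envelope Python estimate for G.729 (20 bytes of voice every 20 ms, i.e. 50 packets per second; layer-2 overhead ignored; all names below are mine):

```python
def voice_stream_bps(payload_bytes: int, header_bytes: int, pps: int) -> int:
    """Layer-3 bandwidth of one voice stream in bits per second."""
    return (payload_bytes + header_bytes) * 8 * pps

PAYLOAD = 20  # G.729: 8 kbps * 20 ms = 20 bytes of voice per packet
PPS = 50      # one packet every 20 ms

print(voice_stream_bps(PAYLOAD, 40, PPS))  # full 40-byte IP/UDP/RTP header: 24000 bps
print(voice_stream_bps(PAYLOAD, 2, PPS))   # cRTP 2-byte header: 8800 bps
```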


Link Fragmentation and Interleaving (LFI)


Serialization Delay
Link Fragmentation
Interleaving

Serialization Delay

It is the time taken, in milliseconds, by an interface to put a packet on the wire for transmission. For example, on a 56 kbps link the serialization delay for a 1500-byte packet is about 214 ms: it takes 214 ms for the interface to move that packet from the router onto the physical line itself. This delay adds to the end-to-end delay between two points for a given speed.

Cisco recommends that serialization delay be no more than 10-15 milliseconds.
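The 214 ms figure falls straight out of the arithmetic: delay = packet size in bits divided by link speed. A Python sketch (the function name is mine):

```python
def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

print(round(serialization_delay_ms(1500, 56_000)))   # 1500 bytes at 56 kbps: ~214 ms
print(round(serialization_delay_ms(1500, 768_000)))  # 1500 bytes at 768 kbps: ~16 ms
```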

Link Fragmentation
Cisco also recommends not enabling LFI on links faster than 768 kbps, because we generally do not need it there. We might need this tool on slow WAN connections where small voice/video packets need priority over large data packets. Queuing cannot help in this case, because it is only useful while a data packet has not yet been put onto the wire; once a large data packet has already started transmitting, LFI is the useful tool.
Generally, it takes around 15 ms of serialization delay to put a 1500-byte packet onto an idle 768 kbps line. If you enable link fragmentation on higher-capacity connections, you add extra header information to each small packet, which can hurt in the long run. It is a double-edged sword: if you fragment a 1500-byte packet into 3 small packets, you also add 3 additional headers between the two endpoints, which means more traffic; this way you sacrifice bandwidth. So Link Fragmentation and Interleaving is really meant for slow, temporarily congested WAN connections. In the long run, it is advisable to upgrade the bandwidth wherever this technique is being implemented.

Interleaving
Interleaving simply puts the small voice packets in between the fragments, so they can be transmitted before the rest of the large data packet.


The LFI feature is only applicable to Frame Relay and PPP links; ATM links have an inbuilt fragmentation feature provided by their 53-byte cells.

Sunday, 8 April 2012

Quality of Service (QoS)-Part7
Congestion Avoidance -WRED

Tail drop-Consequences

We first need to understand the crisis of tail drop. Tail drop is how we have handled queuing for decades: packet streams came into the router, and the router managed them as best it could, because routers are made to bridge dissimilar networks. A lot of the time this also presents a speed mismatch, when we go from a much faster segment like a LAN connection to a slower segment like a WAN.

Now, the crisis of tail drop comes in with the basic question: how does TCP work? For data transfer, a TCP device uses a system called windowing. It initially sends one packet, and as soon as that packet is acknowledged reliably, the window size continues to grow until it eventually reaches a level where the link is completely utilized. Once the link is over-utilized, packets start getting dropped from the tail of the full queue; this is called tail drop. Once acknowledgements fail and packets start getting dropped, the TCP mechanism cuts its window size in half.

After some time the same thing happens: traffic slowly ramps up to the same level and then drops again. This repeats, and when another device tries to send data in this situation, it synchronizes its window size the same way. Traffic from the second device behaves almost the same even though the bandwidth allocated to the second application is not yet over-utilized. With this process we get gaps where bandwidth remains unutilized during the periods when window sizes are dropped. When all the applications on the network behave the same way and follow the same pattern, it becomes a global problem, called global synchronisation.

This is how TCP networks have worked for decades, resulting in poor utilization and the global synchronization problem caused by tail drop.

Consequences of TCP Tail Drop Problem:
  • Global Synchronization and non-utilized networks
  • Traffic starvation
  • Unbiased Dropping
Random Early Detection(RED)

To overcome this problem with tail drop, the RED technique was developed. It has the features below:
  • It randomly drops packets from TCP flows to minimize synchronization and help utilize the whole bandwidth properly. It is nothing but sacrificing a few TCP packets for the sake of the rest of the network.
  • RED's dropping becomes more aggressive as queues fill up and approach the maximum available bandwidth limit.
Limitation of RED: it works only for TCP, where window size has an impact on end-to-end packet delivery.

Weighted Random Early Detection (WRED)

WRED is Cisco's implementation of RED and allows multiple RED profiles based on:
IP Precedence (8 profiles)
DSCP (64 profiles)

HOW WRED works:

When the traffic stream reaches and crosses the minimum threshold, packets begin to drop at a rate based on the MPD. Once traffic crosses the maximum threshold, all the exceeding traffic starts getting dropped.

So the minimum threshold is where IP packets start getting dropped; the maximum threshold is where all traffic above it is dropped.

The Mark Probability Denominator (MPD) is the number of packets out of which 1 packet will be dropped. If MPD is 100, it means 1 packet out of 100 will be dropped once traffic crosses the minimum threshold.
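The usual textbook model of the WRED drop curve can be sketched as follows (a simplified linear ramp on the average queue depth; the function name and exact shape are illustrative, not Cisco's internal algorithm):

```python
def wred_drop_probability(avg_queue: float, min_th: float,
                          max_th: float, mpd: int) -> float:
    """Simplified WRED: no drops below min_th; a linear ramp up to 1/MPD
    between the thresholds; tail drop (probability 1) at or above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return (avg_queue - min_th) / (max_th - min_th) / mpd

print(wred_drop_probability(5, 10, 40, 100))   # 0.0, below the min threshold
print(wred_drop_probability(25, 10, 40, 100))  # 0.005, halfway up the ramp
print(wred_drop_probability(40, 10, 40, 100))  # 1.0, everything above max drops
```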

Terms:
MPD=Mark Probability Denominator
Maximum Threshold
Minimum Threshold

We can make WRED work in different classes of traffic. For example, we can set different Minimum/Maximum Threshold and MPD values based on class priority for different classes.

For example, for IP Prec 1 we might want to start dropping at a minimum threshold of 10, i.e. when 10 packets are in the queue; similarly, for IP Prec 2, when there are 15 packets in the queue.

Dropping of packets is based on the MPD value defined. Beyond the maximum threshold, all packets that exceed it are dropped.

Config example:

R3 (config)#policy-map WRED1
R3 (config-pmap)#class Match_FTP
R3 (config-pmap-c)#random-detect ?
dscp
dscp-based
ecn
prec-based
precedence
R3 (config-pmap-c)#random-detect dscp-based

If we want to use the Cisco-recommended WRED configuration, we just enable dscp-based or prec-based to act on the DSCP or IP Precedence markings.
If we need to tweak the WRED configuration, we can configure it like below:

R3 (config-pmap-c)#random-detect dscp af11 <min_threshold> <maximum_threshold> <mpd>

Explicit Congestion Notification (ECN)

It makes WRED a little proactive, in the sense that it tries to tell the sender to slow down instead of dropping packets at random. It uses the last two bits of the ToS byte in the IP packet for this purpose; these are called the ECN bits. A Cisco router can set these bits to one of the four markings given below:
  • 00-Non ECN-capable
  • 01-Endpoints are ECN Capable (nothing else to be done)
  • 10-Endpoints are ECN Capable (nothing else to be done)
  • 11-Congestion Experienced (tells sending device to slow down)


For ECN to work properly, both routers and sending devices (PCs, phones) should be ECN compatible. The router sends ECN bits 11 to the router at the other end; the receiving end sends an ECN-ECHO back to the sending device in response. Once the sending device receives the ECN-ECHO, it slows down its sending.
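Those four markings live in the last two bits of the ToS byte, so checking them is a two-bit mask. A tiny Python illustration (the names are mine):

```python
CONGESTION_EXPERIENCED = 0b11  # the "slow down" marking

def ecn_bits(tos_byte: int) -> int:
    """ECN occupies the last two bits of the ToS byte."""
    return tos_byte & 0b11

tos = 0b10110011  # some DSCP marking with the CE bits set
print(ecn_bits(tos) == CONGESTION_EXPERIENCED)  # True
print(ecn_bits(0b10110000))                     # 0, not ECN-capable
```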

Config:

R3 (config-pmap-c)#random-detect ecn

For this configuration to work properly, devices must be ECN compatible. Whenever congestion is detected, the ECN bits (11, Congestion Experienced) are sent.

Saturday, 7 April 2012

Quality of Service (QoS)-Part6

Congestion Management-Configuration

NBAR (Network Based Application Recognition) has a built-in protocol discovery feature to monitor traffic statistics.
By default, NBAR monitors a 5-minute (300-second) traffic average; with the load-interval command we can change this. Here the load interval is set to 60 seconds.


Router(config)# int S0/0
Router(config-if)#ip nbar protocol discovery
Router(config-if)#load-interval 60

After enabling the commands above, we can monitor the traffic going through Serial0 on the router. The command is:

Router# show ip nbar protocol-discovery stats bit-rate top-n 10

The command above will display the top 10 senders by input/output on the Serial0 interface. NBAR continuously monitors the network traffic and generates this information for the user. In the output, "unknown" is traffic that NBAR is unable to identify.

Router3(config)#class-map Match_HTTP
Router3(config-cmap)#match protocol http

As soon as we use the protocol keyword, NBAR is operational.

Now, create second class-map as below:

Router3(config)#class-map Match_FTP
Router3(config-cmap)#match protocol ftp

Defining Policy Map:

Router3(config)#policy-map Mark_traffic

Calling class-maps under Policy Map:


Router3(config-pmap)#class Match_HTTP
Router3(config-pmap-c)#set dscp af21 =========>>First class-map

Router3(config-pmap)#class Match_FTP
Router3(config-pmap-c)#set dscp af11 =========>>Second class-map

Note: We can have a maximum of 256 class-maps under one policy-map.

Router3#show policy-map
Policy Map Mark_traffic
  Class Match_HTTP
    set dscp af21
  Class Match_FTP
    set dscp af11

Marking is usually done in the inbound direction, so here we apply the policy inbound:

Router3 (config)# int e0/0
Router3 (config-if)#service-policy input Mark_traffic

This completes marking of different kinds of traffic.

Checking the marking of traffic on a per-interface basis:

Router3#show policy-map interface ethernet 0/0
On Router2, we will re-classify the traffic based on the different marking labels (AF21, AF11), like below:

Router2(config)#class-map Match_AF11
Router2(config-cmap)#match dscp af11

Router2(config)#class-map Match_AF21
Router2(config-cmap)#match dscp af21


Now implementing queuing:

Router2(config)#policy-map LLQ
Router2(config-pmap)#class Match_AF11
Router2(config-pmap-c)#bandwidth ?
<8-2000000> Kilo Bits per second
percent    % of total bandwidth
remaining   % of remaining bandwidth
Router2(config-pmap-c)#bandwidth percent 10

Router2(config-pmap)#class class-default
Router2(config-pmap-c)# fair-queue  ===================>>>enabling WFQ for default class

Configuring Priority Queuing

Router2(config-pmap)#class Match_AF21
Router2(config-pmap-c)#priority ?
<8-2000000> Kilo Bits per second
percent    % of total bandwidth

If we assign 70 percent here to the priority queue:

Router2(config-pmap-c)#priority percent 70

But when we try to apply this policy map in the input direction on S0, it gives an error.
The important thing to remember is that we can only do 3 things to inbound traffic: classify, mark, and police. We cannot prioritize or re-queue inbound traffic, as that is completely dependent on the sending device, so we cannot apply this policy in the inbound direction. Therefore, when we try to apply this policy map, the router rejects it with an error.

Moreover, by default we can only assign a maximum of 75% of the total available bandwidth, but in this case we have requested a total of 80% (10% for AF11, 70% for AF21). We can change the bandwidth limitation on a per-interface basis with the command below:

Router(config-if)#max-reserved-bandwidth 90

By applying this, we can use 90% of the total available bandwidth for QoS purposes. Another thing to remember is that congestion management only takes effect when there is congestion in the network. When we apply an outbound policy on an Ethernet interface, we mostly do not see any QoS applied. Therefore, we need to apply the policy map inbound on the WAN interface of the receiving device and outbound on the WAN interface of the sending router.
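The check the router performs here is just a sum against the reserved limit. A Python sketch of it (the function name is mine):

```python
def fits_reservation(class_percents, max_reserved=75):
    """True if the requested class bandwidths fit within the reservable
    limit (75% by default; changed with 'max-reserved-bandwidth')."""
    return sum(class_percents) <= max_reserved

print(fits_reservation([10, 70]))                   # False: 80% > 75%
print(fits_reservation([10, 70], max_reserved=90))  # True after raising the limit
```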

Using same class-map for inbound/outbound 

Moreover, if we try to use the same class-map in both the inbound and outbound directions, it means more processor work for the router. The whole point of applying QoS is to avoid opening a packet that is already marked; but if a class-map is used inbound and marks traffic with some marking, and we then apply the same class-map outbound, the packet will be re-opened and re-marked on the way out.


Friday, 6 April 2012

Quality of Service (QoS)-Part5

Congestion Management-Queuing Methods


Legacy Queuing Methods

These methods are still available and still used on some routers in Cisco networks. These are the queuing methods you would have deployed via the CLI before the MQC standard came into existence. FIFO is the default queuing method; the other queuing methods are not used unless congestion is observed on the router interface. That is why these queuing methods are sometimes called congestion management.

First In First Out (FIFO) 

It is one of the easiest queuing methods to understand: the first packet to get in line is the first one to get sent. The problem with FIFO is that it is sometimes a little too fair, in the sense that the first one in is the first one out, but the first one in might not be the packet we most want out first. For example, a bandwidth-hungry data application may be sending thousands of large packets while we want some of our voice packets sent first; in that case this method is not the best way to go. A lot of people accurately refer to this method as a best-effort delivery system, and it is on by default on every Cisco router for high-speed links.

Problem with FIFO:
  • We can only have a single queue.
  • No Delay Guarantee
  • No Bandwidth Guarantee
  • Not recommended for Voice and live video.
Priority Queuing

In Priority Queuing, if we have lots of high-priority or medium-priority traffic, low-priority traffic is never sent. We generally have 4 traffic queues: High, Medium, Normal, and Low. You define what traffic goes into each of those queues. As long as there is traffic in the High queue, it will be serviced, and the Medium, Normal, and Low queues will be completely neglected. Likewise, if there is traffic in the Medium queue, the Normal and Low queues get neglected. It is not very flexible, because if there is heavy traffic waiting in the High queue, the rest of the queues may never get serviced.
  • We can have up to 4 traffic queues.
  • It is a strict priority method: high-priority queues get priority.
  • Gives a delay guarantee for the high-priority queue.
  • Sometimes not recommended for voice and live video: although they get high priority, if the high-priority queue is congested due to too many calls or a huge video stream, the rest of the network will be in chaos.
Custom Queuing

The idea of custom queuing is that the router services requests in a round-robin fashion; it is not brutal like priority queuing. In custom queuing, you have up to 16 queues to define. It is not a good solution for time-sensitive traffic like live voice/video. For example:
1st queue (voice,video)-600kbps
2nd queue (data-time sensitive)-300 kbps
3rd queue (junk traffic)-100 kbps

Due to its round-robin behaviour, after sending 600 kbps of time-sensitive data it blocks the stream from the 1st queue and starts the 300 kbps transfer from the 2nd queue. After the 2nd queue's 300 kbps of data, it transfers 100 kbps from the 3rd queue, and then starts again with the 1st queue. The delay caused by this round-robin behaviour can badly affect time-sensitive applications like voice/video, so it is also not an efficient queuing method for them. As already stated, beyond the 3 queues above, we can have a maximum of 16 queues configured.
  • Up to 16 traffic queues.
  • Round-robin method is used.
  • The number of queues contributes delay, so there is no delay guarantee for time-sensitive data, voice, and live video streams.
  • Not recommended for voice.
  • There is a bandwidth guarantee due to its behaviour.
Weighted Fair Queuing

It is the default queuing method on Cisco routers for any link slower than 2048 kbps (slow WAN connections), so it is also the default queuing method on T1 serial links. Cisco routers are able to identify high talkers on the network, which use most of the bandwidth, and low talkers too, like Telnet sessions. With this method, the Cisco router gives priority to low talkers over high-volume senders.
  • No delay guarantee
  • No bandwidth Guarantee
  • Not recommend for VOICE
New Queuing Methods

Class-Based Weighted Fair Queuing (CBWFQ)
(Enhanced Custom Queuing + Weighted Fair Queuing)

It is the combination of enhanced custom queuing and Weighted Fair Queuing. First of all, we define our class-maps; we can have a maximum of 256 class-maps for different traffic. Using the enhanced custom queuing, we can assign a specific amount of bandwidth (in percent) to each of these classes. Previously, in custom queuing, we could only assign amounts of traffic in bytes. This way we assign a specific amount of guaranteed bandwidth to a data class. After defining all the classes, whatever bandwidth is left is treated with the default queuing method, which is Weighted Fair Queuing, or, if we want, we can treat the default queue with FIFO.

For example, say we have 600 kbps of bandwidth available and have distributed it across four identified queues like below:

35 % for Voice and Video=210 kbps
20 % for Critical Data=120 kbps
15 % for Internet=90 kbps
30 % unclassified=180 kbps

First of all, the 180 kbps of unclassified traffic is treated with Weighted Fair Queuing (the default method). The rest of the bandwidth has been assigned to different classes. The classification above means that in case of congestion, 210 kbps of voice and video, 120 kbps of critical data, and 90 kbps of internet traffic are transferred in round-robin fashion. What this means is that calls using more than 210 kbps will be cut off and will not resume until the 120+90+180 = 390 kbps of information from the other classes has been transferred.
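The per-class kbps figures above come straight from the percentages; in Python:

```python
LINK_KBPS = 600
CLASS_PERCENTS = {
    "voice/video": 35,
    "critical data": 20,
    "internet": 15,
    "unclassified": 30,  # left to Weighted Fair Queuing by default
}

# Guaranteed kbps per class = its percentage of the 600 kbps link
guaranteed_kbps = {name: LINK_KBPS * pct // 100
                   for name, pct in CLASS_PERCENTS.items()}
print(guaranteed_kbps)
# {'voice/video': 210, 'critical data': 120, 'internet': 90, 'unclassified': 180}
```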

This way, there can be delays in voice calls and live streaming video, and that delay can cause disruption. So CBWFQ is not considered appropriate for such classes.

LLQ (Low Latency Queuing)

PQ + CBWFQ = PQ + Custom Queuing + Weighted Fair Queuing. We need to give voice and live video the ultimate priority: no matter what, if a voice packet comes in, it always gets priority over other classes.
We as administrators need to specify how much bandwidth the priority queue can consume; as soon as that limit is reached, its packets start getting dropped. The remaining bandwidth, after the priority-queue reservation, is treated with CBWFQ (Class-Based Weighted Fair Queuing). CBWFQ classes can use the unused bandwidth of other classes based on their relative values (percentages).
Features:
  • Only one Priority Queue can be configured.
  • Up to 256 custom queues, if required.
  • Delay guarantee for the Priority Queue.
  • Bandwidth guarantee for all queues.
Cisco guidelines (recommended, not mandatory):
  • By default, the bandwidth available for QoS is 75% of the total bandwidth. This upper limit can be changed in configuration.
  • Cisco recommends leaving 25% for the default class (routing protocols, synchronisation, others).
  • You should allocate a maximum of 33% of the available bandwidth to priority applications (PQ).
  • Bandwidth is policed in the Priority Queue, so the applications in the PQ cannot use more than their allocation.
  • After the PQ reservation, the remaining bandwidth will be 75%-33%=42% of the total available bandwidth.
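A minimal LLQ sketch following these guidelines (class-map names are illustrative and assumed to be defined elsewhere; `priority percent 33` both guarantees and polices the voice class, while the other classes fall back to CBWFQ behaviour):

```
policy-map LLQ-EXAMPLE
 class VOICE
  priority percent 33          ! strict priority, policed at 33%
 class CRITICAL-DATA
  bandwidth percent 20         ! CBWFQ guarantee
 class class-default
  fair-queue
!
interface Serial0
 service-policy output LLQ-EXAMPLE
```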



Thursday, 5 April 2012

Quality of Service (QoS)-Part9

IP Precedence and DSCP

Why is Marking necessary?

Cisco routers have the inbuilt capability of classifying and re-classifying traffic on every hop along the way, but the problem is that we then have to look inside the packet on every hop, which is processor intensive and can become a big problem in a large network such as an ISP's.


So it is always good to mark your traffic at the trust boundary. The trust boundary is the first place where you decide to mark your traffic; that marking will then be trusted throughout the network. The first device at the trust boundary classifies and marks the traffic, and the rest of the devices in the path can take decisions based on that marking without opening the whole packet. In a large ISP network, this is a scalable solution.


According to Cisco, we should mark the traffic as close to the source as we possibly can, so the trust boundary should be near the source. That device might be the source device itself, a switch, or the router.


But in practice, the best idea is usually to apply QoS before the traffic enters the Wide Area Network, because this is the point that is most likely to be congested. Alternatively, it is worth applying QoS on LAN devices (switches, PCs or QoS-enabled IP phones) only if the LAN is prone to frequent congestion. Another thing that may prevent applying QoS in the LAN is that many LAN devices are not QoS capable.
Layer 2 and Layer 3 Marking
Layer 2 Marking
The layer 2 markings below are stripped off at the router before QoS is applied to the exposed IP packet:
Class of Service (CoS)
Frame-relay DE bit
ATM CLP bit
MPLS Exp bits


Layer 3 Marking
The markings below are encapsulated inside layer 2 frames/cells/labels and do not change while passing through routers, so they are more reliable:
  • IP Precedence
  • DSCP (Differentiated Services Code Point)

IP Precedence

It is placed in the layer 3 header, in the Type of Service (ToS) field, which is an 8-bit field. IP Precedence uses only the first 3 bits of this field, so it marks with values 0-7. Converting all 8 available bits from binary to decimal would give values 0-255, but IP Precedence uses only 0-7, because 3 bits can express a maximum of 8 values (0 through 7).

Binary review:

first bit from right ON means=1
second bit from right ON means=2
third bit from right ON means=4
fourth bit from right ON means=8
fifth bit from the right ON means=16
Sixth bit form the right ON means=32
Seventh bit from the right ON means=64
Eighth bit from the right ON means=128
So:
0=000
1=001=1
2=010=2
3=011=2+1
4=100=4
5=101=4+1
6=110=4+2
7=111=4+2+1
Thus,
Available IP precedence classes :


7- Reserved (BPDU, Hello)============================ 111
6- Reserved (routing/synchronisation protocols)========== 110
5- Voice Bearer======================================= 101
4- Video-conferencing================================= 100
3- Call-signalling (provides statistics for the audio calls)=== 011
2- High-priority Data (like Citrix, SAP)================== 010
1- Medium-priority Data =============================== 001
0- Best-effort Data (web surfing, peer-to-peer)=========== 000
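As a sketch, marking voice bearer traffic with IP Precedence 5 at the trust boundary might look like this (class-map and policy-map names are illustrative):

```
class-map match-any VOICE
 match protocol rtp audio      ! NBAR-based match; illustrative criterion
!
policy-map MARK-VOICE
 class VOICE
  set ip precedence 5          ! 101 in the first 3 ToS bits
!
interface FastEthernet0/0
 service-policy input MARK-VOICE
```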


DSCP (Differentiated Services Code Point)


DSCP is the newer method of marking. DSCP uses the same 8 bits of the ToS field, but to maintain compatibility with the old IP Precedence standard, its designers arranged it so that an IP Precedence-only router can still understand the importance of a packet coming from a DSCP-configured router. In DSCP, these 8 bits are divided into 3 groups: the first group contains the first 3 bits, the second group contains the next 3 bits, and the third group contains the remaining 2 bits.


The first group is called the Major class.
The second group is called the Minor class or Drop Precedence.
The third group contains the ECN (Explicit Congestion Notification) bits.


The first two groups are mainly responsible for QoS marking, which means the first 6 bits can be used by DSCP. Converting 6 bits to decimal, we can have values 0-63 in DSCP (111111 = 63).


Instead of using all 64 values (0-63) freely, two classes were defined (major and minor) to make DSCP backward compatible with IP Precedence.
Major Class
The Major class uses the first 3 bits, giving values between 0 and 7. This way DSCP stays
compatible with IP Precedence, providing the same set of markings that IP Precedence defined (for example, 011 = 3).


When DSCP was defined, some new classes were defined as well. Like IP Precedence, higher-numbered classes are treated better than lower-numbered classes; that means DSCP AF4 is better than DSCP AF3.
For example:


Default Class=========DSCP class 0=======IP Prec 0===000
Assured Forwarding (AF):
AF1========DSCP class 1==========IP Prec 1=====001
AF2========DSCP class 2==========IP Prec 2=====010
AF3========DSCP class 3==========IP Prec 3=====011
AF4========DSCP class 4==========IP Prec 4=====100
Expedited Forwarding (EF)=======DSCP class 5=====IP Prec 5==101


DSCP classes 6 and 7 are reserved and should never be used, so they are not shown in DSCP configuration either.

Minor Class (Drop Precedence)

It is a 3-bit field that expresses the drop preference within a Major class. The last bit of the Minor class is always 0, so only the left 2 bits are used. With those two bits we get a maximum of 3 usable values.


But unlike the Major class, in the Minor class lower is better. This is why it is also called drop precedence: the higher the value, the more likely the packet is to be dropped. In the Minor class, three values are allowed (1, 2, 3) for each major AF class (1-4). For example:


AF1 --------DSCP class 1 ------IP Prec 1
  AF11
  AF12
  AF13
AF2 --------DSCP class 2 ------IP Prec 2
  AF21
  AF22
  AF23
AF3 --------DSCP class 3 ------IP Prec 3
  AF31
  AF32
  AF33
AF4 --------DSCP class 4 ------IP Prec 4
  AF41
  AF42
  AF43


Some calculations


AF11 means :


Major=1=3 bits================001
Minor=1=2 bits+1 bit (unused)==01+0==010


In decimal: major+minor=001010=8+2=10


AF31 means :


Major=3=3 bits==011
Minor=1=2 bits+1 bit (unused)==01+0=010


In decimal: major+minor=011010=16+8+2=26


AF43 means :


Major=4=3 bits=100
Minor=3=2 bits+1 bit (unused)=11+0=110


In decimal: major+minor=100110=32+4+2=38
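The arithmetic above generalises to DSCP = 8 × major + 2 × minor (the major class sits in bits 5-3, the drop precedence in bits 2-1). A small Python sketch to check the values:

```python
def af_dscp(major: int, minor: int) -> int:
    """Decimal DSCP value of AFxy: major class shifted left 3 bits,
    drop precedence shifted left 1 bit (the last bit is always 0)."""
    assert 1 <= major <= 4 and 1 <= minor <= 3
    return (major << 3) | (minor << 1)

print(af_dscp(1, 1))  # AF11 = 10
print(af_dscp(3, 1))  # AF31 = 26
print(af_dscp(4, 3))  # AF43 = 38
```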


So with the use of Major and Minor classes we can have 12 DSCP AF classes. Adding one default class and one EF (Expedited Forwarding) class to these 12 AF classes gives 14 DSCP classes, against the 7 classes of IP Precedence. In decimal, DSCP values range between 0 and 63.
Class Selector
Class Selector is used when you want IP Precedence-like behaviour on a DSCP-enabled router. This is another term used in DSCP: it is the functional equivalent of IP Precedence, just called Class Selector in DSCP speak.


Note that drop preference is not used in any of the 7 available CS classes, just as it is not used in IP Precedence.
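For example, matching CS5 (101000, decimal 40) is functionally equivalent to matching IP Precedence 5, since the first 3 bits are the same (class-map name is illustrative):

```
class-map match-any VOICE-CS
 match ip dscp cs5        ! same first 3 bits as 'match ip precedence 5'
```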




Wednesday, 4 April 2012

Quality of Service (QoS)-Part4


Classification, Marking and NBAR


Classification: inspecting one or more aspects of a packet to see what that packet is carrying.
Classification is primarily done with access lists, based on source/destination IP addresses and port numbers. Here are the limitations of access lists:

  • A few applications keep changing IP addresses and even port numbers.
  • You can allow or block all traffic, or some kind of traffic based on port number, but you cannot police or shape the traffic generated by different applications/protocols.
Marking: writing information into a packet to record the classification decision. It is just colouring the packets; colouring or marking helps identify a packet without opening the actual data. Marking is not strictly necessary for running QoS: its only job is colouring the packet for identification. That's it.


For example:
RTP and RTCP are two protocols that are used to manage audio sessions. We can prioritize traffic for these protocols before it enters the router. Our first job will be to classify this traffic. Classification can be done at different levels:


Layer 2 = based on source ATM VCC, Frame Relay DLCI, or MAC address
Problem: QoS based on layer 2 information creates a lot of CPU and configuration overhead. What about a PC that runs multiple services (voice, video, data) but has a single MAC address? Or a DLCI that is used to transfer both data and voice packets? So this is not scalable.


Layer 3 = based on source IP address
Problem: devices keep changing IP addresses, which becomes difficult when DHCP keeps reassigning them. So this is also not scalable.


Layer 4 = based on source port numbers
Problem: RTP and RTCP can change UDP port numbers dynamically, and prioritizing UDP traffic would also prioritize other applications that use UDP as their transport protocol. So it is also unreliable.






Layer 5-7 = NBAR PDLM (the application itself)
NBAR is used to classify based on the application itself (like RTP or RTCP). But the router has to do a lot of work to dig through all the layers (2 to 7) before it sees the actual payload, so very high processing overhead is attached to classification based on NBAR, and we do not want the same job to be done by every layer 3 device in the path.


Most likely we will not do this everywhere. We will only configure the first router at the trust boundary to classify the traffic and mark it, so that the other devices won't have to do classification again. The other routers will only look at the ToS field in the IP header and forward the traffic accordingly.


Trust boundary: the first incoming router is known as the trust boundary. The trust boundary is where we first begin to trust and mark the traffic. As recommended by Cisco, the trust boundary should be as close to the source as possible.


Commands:
router#show class-map
  Class Map match-all test (id 1)
    Description: test class
      Match any
      Match not access-group 5


The class map above matches traffic from any source that does not match access-group 5.
Router(config-cmap)#match ?
    access-group
    any
    class-map  ===> based on another class map (a nested class map)
    cos  =======>> match the layer 2 CoS marking
    destination-address  =====>> destination MAC address
    dscp
    source-address ====>> source MAC address
    fr-de   ====> match the Frame Relay DE bit (e.g. mark a kind of traffic with the DE bit before it gets to the ISP)


Router(config)#class-map test2   
Router(config-cmap)#match class-map test


In the two commands above, the new class map test2 will match only traffic that is already matched by class map test. Suppose class map test matches Internet traffic and in the new class map we want to match only FTP traffic, not HTTP traffic: this method can be used.
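A sketch of that nesting (the FTP match uses NBAR and the class-map name is illustrative):

```
class-map match-all FTP-ONLY
 match class-map test        ! traffic already matched by class-map test
 match protocol ftp          ! of that traffic, only FTP
```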


Router(config)#class-map test3   
Router(config-cmap)#match input-interface s0


The class map test3 above will match all the traffic coming in on serial 0.


Router(config)#class-map test4   
Router(config-cmap)#match ip ?
dscp
precedence
rtp   ===>>rtp ports


Router(config)#class-map test5
Router(config-cmap)#match packet ?    ===>> packet length in bytes


Router(config)#class-map test6
Router(config-cmap)#match mpls ?    =====>> match experimental bits in the MPLS label


Network Based Application Recognition (NBAR)
By using this feature you can classify traffic without knowing IP addresses or port numbers. NBAR can recognise an application based on its signature in the packet itself, which is why it is very powerful. This requires deep packet inspection, and the signature definitions must be kept current for the latest protocol/application versions.
The NBAR feature allows you to quickly create class maps that match specific applications. NBAR is extendable through a system of PDLMs.
For example:
Router(config)#class-map test7
Router(config-cmap)#match protocol ?
    arp
    bgp
    cdp
    gnutella
    eigrp
    icmp
    http
    citrix
    gopher
    gre
    exchange
    fasttrack
    edonkey
    jpeg
    xwindows
    telnet
    bittorrent
    sap
    etc.
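Putting it together, a sketch of an NBAR-matched class used in a policy (class-map and policy-map names, protocols, and the police rate are illustrative):

```
class-map match-any P2P
 match protocol bittorrent
 match protocol edonkey
!
policy-map LIMIT-P2P
 class P2P
  police 64000 conform-action transmit exceed-action drop
!
interface Serial0
 service-policy input LIMIT-P2P
```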


NBAR:- Network Based Application Recognition.
PDLM=(Packet Description Language Module)
e.g
If there is any change in a protocol name or version, you can download the PDLM for that application version, as well as definitions for new applications as they come out. These are small files (only 1-3 kB); after downloading them from the Cisco site, copy them into the flash of the router.


Once it is copied into flash, apply the command below:
Router(config)#ip nbar pdlm flash://bittorrent.pdlm


NBAR has a built-in packet sniffing capability: it can watch the traffic traversing the router (input and output). By turning this feature on, we also turn on some processing overhead in the router.


Router(config)# int s0
Router(config-if)# ip nbar protocol-discovery


To show the traffic flows discovered by NBAR, the command below is used:
Router#show ip nbar protocol-discovery stats bit-rate top-n <n>


To check what type of traffic falls into the unknown category, the command below is used:
router#show ip nbar unclassified-port-stats