Quality of Service (QoS) - Part 5
Congestion Management-Queuing Methods
Legacy Queuing Methods
These methods are still available and still in use on some routers in Cisco networks. They are the queuing methods you would have deployed from the CLI before the MQC (Modular QoS CLI) standard came into existence. FIFO is the default queuing method; the other queuing methods come into play only when congestion is observed on a router interface. That is why these queuing methods are sometimes called congestion management.
First In First Out (FIFO)
It is one of the easiest queuing methods to understand: the first packet into the line is the first one sent. The problem with FIFO is that it is sometimes a little too fair. The first packet to arrive is the first one out, but it may not be the packet we most want to send first. For example, a bandwidth-hungry data application may be sending thousands of large packets while we want some of our voice packets sent first; in that case this method is not the best way to go. Many people accurately refer to this method as a best-effort delivery system, and it is on by default on every Cisco router for high-speed links.
Problem with FIFO:
- We can only have a single queue.
- No delay guarantee.
- No bandwidth guarantee.
- Not recommended for voice and live video.
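The head-of-line blocking described above can be sketched in a few lines of Python. This is purely an illustration of the FIFO behaviour, not anything Cisco-specific; the packet names and sizes are invented:

```python
from collections import deque

# Hypothetical packets: (name, size_bytes). A real interface queues
# frames, not tuples; this only models the ordering.
fifo = deque()
for i in range(3):
    fifo.append((f"bulk-data-{i}", 1500))   # large data packets arrive first
fifo.append(("voice", 60))                  # small voice packet arrives last

# FIFO transmits strictly in arrival order, so the voice packet
# waits behind every bulk packet that got there before it.
sent = [fifo.popleft()[0] for _ in range(len(fifo))]
print(sent)  # voice is transmitted last
```

No matter how small or urgent the voice packet is, FIFO has no way to move it ahead of the bulk traffic already in the queue.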
Priority Queuing (PQ)
In Priority Queuing, when we have lots of high-priority or medium-priority traffic, low-priority traffic is never sent. We generally have four queues of traffic: High, Medium, Normal and Low. You need to define which traffic goes into each of those queues. As long as there is traffic in the High queue, it is serviced; the Medium, Normal and Low queues are completely neglected. Likewise, if there is traffic in the Medium queue, the Normal and Low queues are neglected. It is not very flexible, because if there is a steady stream of traffic waiting in the High queue, the rest of the queues will never be serviced.
- We can have up to 4 traffic queues.
- It is a strict-priority method: higher-priority queues always get serviced first.
- Gives a delay guarantee for the high-priority queue.
- Sometimes not recommended even for voice and live video: although voice and video get high priority, if the high-priority queue is congested by too many calls or a huge video stream, the rest of the network is in chaos.
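The strict-priority behaviour, and the starvation it can cause, can be sketched as a toy scheduler. The queue names and packet labels below are invented for illustration:

```python
# Toy strict-priority scheduler: always drain the highest-priority
# non-empty queue before even looking at the lower ones.
queues = {
    "high":   ["voip-1", "voip-2", "voip-3"],
    "medium": ["sql-1"],
    "normal": ["web-1"],
    "low":    ["ftp-1"],
}
order = ["high", "medium", "normal", "low"]

def next_packet(queues):
    # The low queue is only reached once every higher queue is empty.
    for name in order:
        if queues[name]:
            return queues[name].pop(0)
    return None

sent = []
while any(queues.values()):
    sent.append(next_packet(queues))
print(sent)  # every 'high' packet goes out before any other queue is touched
```

If the high queue were refilled continuously, `next_packet` would never reach the lower queues at all, which is exactly the starvation problem described above.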
Custom Queuing (CQ)
The idea of Custom Queuing is that the router services requests in a round-robin fashion; it is not brutal queuing like Priority Queuing. In Custom Queuing, you can define up to 16 queues. It is not a good solution for time-sensitive traffic like live voice and video. For example,
1st queue (voice, video) - 600 kbps
2nd queue (time-sensitive data) - 300 kbps
3rd queue (junk traffic) - 100 kbps
Due to its round-robin behaviour, after sending 600 kbps worth of time-sensitive data the router blocks the stream from the 1st queue and starts the 300 kbps transfer from the 2nd queue. After 300 kbps of data from the 2nd queue, it starts transferring 100 kbps from the 3rd queue, and only then does the transfer start again from the 1st queue. The delay caused by this round-robin behaviour can badly affect time-sensitive applications like voice and video, so it is not an efficient queuing method for them either. As already stated, beyond the 3 queues above, we can have a maximum of 16 queues configured.
- Up to 16 traffic queues.
- A round-robin method is used.
- The number of queues contributes to delay, so there is no delay guarantee for time-sensitive data such as voice and live video streams.
- Not recommended for voice.
- There is a bandwidth guarantee due to its round-robin behaviour.
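The round-robin cycle from the three-queue example above can be simulated as follows. The per-cycle budgets mirror the 600/300/100 figures; queue contents and numbers are invented for illustration and this is not IOS syntax:

```python
# Toy Custom Queuing round robin: each queue gets a byte budget per cycle.
queues = [
    {"name": "voice-video", "budget": 600, "packets": [200] * 6},
    {"name": "data",        "budget": 300, "packets": [150] * 4},
    {"name": "junk",        "budget": 100, "packets": [100] * 3},
]

schedule = []
while any(q["packets"] for q in queues):
    for q in queues:                      # visit queues in round-robin order
        sent = 0
        while q["packets"] and sent < q["budget"]:
            sent += q["packets"].pop(0)   # send until the byte budget is used
        if sent:
            schedule.append((q["name"], sent))
print(schedule)
```

Notice that after the voice-video queue spends its 600-unit budget, it must sit idle while the data and junk queues take their turns; that forced idle time is the delay that hurts voice and video.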
Weighted Fair Queuing (WFQ)
It is the default queuing method on any Cisco router link slower than 2048 kbps (slow WAN connections), so it is also the default queuing method on T1 serial links. Cisco routers are able to identify the high talkers on the network, the flows that use most of the bandwidth, as well as low talkers such as Telnet sessions. With this method, the router gives priority to low-volume talkers over high-volume senders.
- No delay guarantee.
- No bandwidth guarantee.
- Not recommended for voice.
Class-Based Weighted Fair Queuing (CBWFQ)
(Enhanced Custom Queuing + Weighted Fair Queuing)
It is the combination of enhanced Custom Queuing and Weighted Fair Queuing. First of all, we need to define our class-maps; we can have a maximum of 256 class-maps for different kinds of traffic. Using the enhanced Custom Queuing part, we can assign a specific amount of bandwidth (as a percentage) to each of these classes. Previously, in Custom Queuing, we could only assign an amount of traffic in bytes. This way we assign a specific amount of guaranteed bandwidth to each data class. After all the classes are defined, whatever bandwidth is left is treated with the default queuing method, which is Weighted Fair Queuing, or, if we want, we can treat the default queue with FIFO.
For example, suppose we have 600 kbps of bandwidth available and have distributed it across four queues like this:
35% for Voice and Video = 210 kbps
20% for Critical Data = 120 kbps
15% for Internet = 90 kbps
30% unclassified = 180 kbps
First of all, the 180 kbps of unclassified traffic is treated with Weighted Fair Queuing (the default method); the rest of the bandwidth has been assigned to the defined classes. The classification above means that, in case of congestion, 210 kbps of voice and video packets, 120 kbps of critical data and 90 kbps of Internet traffic are transferred in round-robin fashion. In practice, calls that use more than their 210 kbps guarantee have their excess packets held back, and they are not serviced again until the 120 + 90 + 180 = 390 kbps of information from the other classes has been transferred. This introduces delay into voice calls and live streaming video, and that delay can cause disruption, so CBWFQ on its own is not considered appropriate for such classes.
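The arithmetic behind this example can be checked directly. The class names below simply label the four queues from the example:

```python
# Reproducing the bandwidth split above: percentages of a 600 kbps link.
link_kbps = 600
classes = {"voice-video": 35, "critical-data": 20, "internet": 15, "unclassified": 30}

allocation = {name: link_kbps * pct // 100 for name, pct in classes.items()}
print(allocation)
# {'voice-video': 210, 'critical-data': 120, 'internet': 90, 'unclassified': 180}

# Bandwidth the voice class must wait behind when it exceeds its guarantee:
print(sum(v for k, v in allocation.items() if k != "voice-video"))  # 390
```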
LLQ (Low Latency Queuing)
PQ + CBWFQ = PQ + Custom Queuing + Weighted Fair Queuing. We need to give voice and live video the ultimate priority: no matter what, if a voice packet comes in, it always gets priority over the other classes.
As administrators, we need to specify how much bandwidth the Priority Queue can consume. As soon as that limit is reached, packets in the Priority Queue start getting dropped. The bandwidth remaining after the Priority Queue reservation is treated with CBWFQ (Class-Based Weighted Fair Queuing), and CBWFQ classes can use the unused bandwidth of other classes in proportion to their relative values (percentages).
Features:
- Only one Priority Queue can be configured.
- Up to 256 custom queues if required.
- Delay guarantee for the Priority Queue.
- Bandwidth guarantee for all queues.
- By default, the bandwidth available for QoS is 75% of the total bandwidth; this upper limit can be changed with configuration.
- Cisco recommends leaving 25% for the default class (i.e. routing protocols, synchronization and other overhead).
- You should allocate at most 33% of the available bandwidth to priority applications (the PQ).
- Bandwidth is policed in the Priority Queue, so the applications in the PQ cannot use more than that.
- After the PQ reservation, the remaining bandwidth is 75% - 33% = 42% of the total available bandwidth.
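The arithmetic in the last two bullets works out as follows (the variable names are just labels for this calculation):

```python
# By default 75% of the link is available to QoS, and Cisco recommends
# capping the priority queue at 33% of the link.
qos_available = 75          # default reservable bandwidth, percent of link
pq_reserved   = 33          # recommended ceiling for the priority queue

cbwfq_remaining = qos_available - pq_reserved
print(cbwfq_remaining)  # 42 -> percent of total bandwidth left for CBWFQ classes
```

So on a 600 kbps link, for example, the CBWFQ classes would share 42% of 600 kbps, i.e. 252 kbps, after the priority reservation.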