In production you often need to simulate cross-region calls between applications, or verify that a protocol-stack optimization still helps under packet loss. netem can be used for this; the introduction below covers the specifics.
Examples
Emulating wide area network delays
This is the simplest example, it just adds a fixed amount of delay to all packets going out of the local Ethernet.
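The later examples all use `tc qdisc change`, which assumes a netem qdisc was already added. A minimal sketch of that base rule (eth0 and the ping target are placeholder values):

```shell
# Add a netem qdisc on eth0 that delays every outgoing packet by a fixed 100ms.
# Requires root; substitute your own interface name for eth0.
tc qdisc add dev eth0 root netem delay 100ms

# A ping to a host on the local network should now show round-trip times
# increased by roughly 100ms (192.168.1.1 is a placeholder address).
ping -c 3 192.168.1.1
```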
Real wide area networks show variability so it is possible to add random variation.
tc qdisc change dev eth0 root netem delay 100ms 10ms

This causes the added delay to be 100ms ± 10ms. Network delay variation isn’t purely random, so to emulate that there is a correlation value as well.
tc qdisc change dev eth0 root netem delay 100ms 10ms 25%

This causes the added delay to be 100ms ± 10ms, with the next random element depending 25% on the last one. This isn’t true statistical correlation, but an approximation.
Delay distribution
Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution.
tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal

The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible with some effort to make your own distribution based on experimental data.
Packet loss
Random packet loss is specified in the ‘tc’ command in percent. The smallest possible non-zero value is:
2^-32 = 0.0000000232%
tc qdisc change dev eth0 root netem loss 0.1%

This causes 1/10th of a percent (i.e., 1 out of 1000) of packets to be randomly dropped.
An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses.
tc qdisc change dev eth0 root netem loss 0.3% 25%

This will cause 0.3% of packets to be lost, and each successive probability depends by a quarter on the last one.
Prob(n) = 0.25 * Prob(n-1) + 0.75 * Random
Caveats
When loss is used locally (not on a bridge or router), the loss is reported to the upper level protocols. This may cause TCP to resend and behave as if there was no loss. When testing protocol response to loss, it is best to use netem on a bridge or router.
Packet duplication
Packet duplication is specified the same way as packet loss.
tc qdisc change dev eth0 root netem duplicate 1%
Packet corruption
Random noise can be emulated (in 2.6.16 or later) with the corrupt option. This introduces a single bit error at a random offset in the packet.
tc qdisc change dev eth0 root netem corrupt 0.1%
Packet re-ordering
There are two different ways to specify reordering. The first method, gap, uses a fixed sequence and reorders every Nth packet. A simple usage of this is:

tc qdisc change dev eth0 root netem gap 5 delay 10ms

This causes every 5th (10th, 15th, …) packet to be sent immediately and every other packet to be delayed by 10ms.

The second form of reordering is more like real life: enough jitter, relative to the inter-packet spacing, will cause packets to overtake each other.

tc qdisc change dev eth0 root netem delay 100ms 75ms

If the first packet gets a random delay of 100ms (100ms base – 0ms jitter) and the second packet is sent 1ms later and gets a delay of 50ms (100ms base – 50ms jitter), the second packet will be sent first. This is because the queue discipline tfifo inside netem keeps packets in order by time to send.
Caveats
Mixing forms of reordering may lead to unexpected results. For any method of reordering to work, some delay is necessary. If the delay is less than the inter-packet arrival time, then no reordering will be seen.
Rate control
There is no rate control built into the netem discipline; instead, use one of the other disciplines that do rate control. In this example, we use Token Bucket Filter (TBF) to limit output.
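A sketch of nesting TBF under netem (interface name, rate, and buffer sizes are illustrative):

```shell
# Attach netem as the root qdisc, with an explicit handle so a
# child qdisc can be nested beneath it.
tc qdisc add dev eth0 root handle 1:0 netem delay 100ms

# Nest a Token Bucket Filter under netem to cap output at 256kbit/s.
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000

# Inspect the resulting qdisc tree and its statistics.
tc -s qdisc ls dev eth0
```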
Delaying only some traffic
Here is a simple example that only controls traffic to one IP address.
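A sketch of that setup using the prio qdisc plus a u32 filter (the interface and destination address are placeholder values):

```shell
# prio creates three bands (1:1, 1:2, 1:3); by default no traffic is
# classified into band 3 unless a filter sends it there.
tc qdisc add dev eth0 root handle 1: prio

# Attach netem to band 3 only, so only filtered traffic is delayed.
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 200ms 10ms distribution normal

# Steer packets destined for one IP (10.0.0.12 is a placeholder) into band 3.
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 \
    match ip dst 10.0.0.12/32 flowid 1:3
```

All other traffic on eth0 passes through bands 1 and 2 undisturbed.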
FAQ
How come first ping takes longer?
The first ICMP packet in a ping requires an ARP request/response as well.
How come TCP is so slow over netem?
When you run TCP over large Bandwidth Delay Product links, you need to do some TCP tuning to increase the maximum possible buffer space.
How can I use netem on incoming traffic?
You need to use the Intermediate Functional Block pseudo-device (IFB). This network device allows attaching queuing disciplines to incoming packets.
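One common sketch of this, redirecting ingress traffic to ifb0 with a mirred action (interface names and delay are illustrative):

```shell
# Load the IFB module and bring up the pseudo-device.
modprobe ifb
ip link set dev ifb0 up

# Attach an ingress qdisc to eth0 and redirect all incoming IP
# packets to ifb0 (the u32 match here matches everything).
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 flowid 1:1 \
    action mirred egress redirect dev ifb0

# Incoming traffic now traverses ifb0's egress path, where netem can shape it.
tc qdisc add dev ifb0 root netem delay 75ms
```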
How to reorder packets based on jitter?
Starting with version 1.1 (in 2.6.15), netem will reorder packets if the delay value has lots of jitter.
If you don’t want this behaviour then replace the internal queue discipline tfifo with a pure packet fifo pfifo. The following example has lots of jitter, but the packets will stay in order.
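A sketch of that substitution (interface name and limit are illustrative):

```shell
# netem with a large jitter (10ms ± 100ms), which would normally reorder.
tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms

# Replace netem's internal tfifo with a pure packet fifo, so packets
# are dequeued in arrival order despite the jitter.
tc qdisc add dev eth0 parent 1:1 pfifo limit 1000
```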
How does the value of HZ impact Netem?
In the 2.6 line of kernels, HZ is a configurable parameter that takes values of either 100, 250, or 1000. Because it affects the granularity with which Netem is able to delay packets, it is most beneficial to set HZ to 1000, which will allow for delays in increments of 1ms. See this mailing list post for a more detailed discussion of the impact of HZ.
In kernel versions 2.6.22 or later, netem will use high resolution timers, if they are enabled. This allows for finer (sub-jiffie) granularity.