linux-network-performance-parameters

Linux network performance parameters explained, with tuning methods

This project is a comprehensive introduction to Linux network performance parameters, covering network queues, buffers, interrupt coalescence and QDiscs. It explains what each parameter does and shows how to inspect and tune it, along with advanced topics such as TCP read/write buffers and congestion control, and the relevant monitoring tools. It is aimed at system administrators and developers who want a deep understanding of Linux network stack performance tuning.


Introduction

Sometimes people go looking for cargo-cult sysctl values that promise high throughput and low latency, with no trade-offs, on every occasion. That's not realistic, although it is fair to say that newer kernel versions are already very well tuned by default. In fact, you might hurt performance if you mess with the defaults.

This brief tutorial shows where some of the most used and quoted sysctl/network parameters sit in the Linux network flow. It was heavily inspired by the illustrated guide to the Linux networking stack and by many of Marek Majkowski's posts.

Feel free to send corrections and suggestions! :)

Linux network queues overview

(figure: Linux network queues overview diagram)

Fitting the sysctl variables into the Linux network flow

Ingress - they're coming

  1. Packets arrive at the NIC
  2. NIC will verify the MAC address (if not in promiscuous mode) and the FCS, and decide to drop the packet or to continue
  3. NIC will DMA packets into RAM, in a region previously prepared (mapped) by the driver
  4. NIC will enqueue references to the packets in the receive ring buffer queue rx until the rx-usecs timeout or the rx-frames threshold is reached
  5. NIC will raise a hard IRQ
  6. CPU will run the IRQ handler that runs the driver's code
  7. Driver will schedule a NAPI, clear the hard IRQ and return
  8. Driver raises a soft IRQ (NET_RX_SOFTIRQ)
  9. NAPI will poll data from the receive ring buffer until netdev_budget_usecs timeout or netdev_budget and dev_weight packets
  10. Linux will also allocate memory to sk_buff
  11. Linux fills in the metadata: protocol, interface, sets the MAC header, removes the ethernet header
  12. Linux will pass the skb to the kernel stack (netif_receive_skb)
  13. It will set the network header, clone skb to taps (i.e. tcpdump) and pass it to tc ingress
  14. Packets are handed to a qdisc sized by netdev_max_backlog, with its algorithm defined by default_qdisc
  15. It calls ip_rcv and packets are handed to IP
  16. It calls netfilter (PREROUTING)
  17. It looks at the routing table, if forwarding or local
  18. If it's local it calls netfilter (LOCAL_IN)
  19. It calls the L4 protocol (for instance tcp_v4_rcv)
  20. It finds the right socket
  21. It goes to the tcp finite state machine
  22. Enqueues the packet to the receive buffer, sized according to the tcp_rmem rules
    1. If tcp_moderate_rcvbuf is enabled, the kernel will auto-tune the receive buffer
  23. Kernel will signal that there is data available to apps (epoll or any polling system)
  24. Application wakes up and reads the data
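
The budget limits in step 9 can be illustrated with a toy model (a minimal Python sketch, not kernel code; the packet counts and per-packet cost are made up):

```python
# Toy model of one NAPI poll cycle: drain the ring buffer until either
# netdev_budget packets are processed or netdev_budget_usecs elapse.
def napi_poll(ring, netdev_budget=300, budget_usecs=2000, usecs_per_pkt=5):
    processed, elapsed = 0, 0
    while ring and processed < netdev_budget and elapsed < budget_usecs:
        ring.pop(0)            # hand one packet up the stack
        processed += 1
        elapsed += usecs_per_pkt
    squeezed = len(ring) > 0   # leftover work counts as "squeezed" in softnet_stat
    return processed, squeezed

ring = list(range(500))        # 500 packets waiting in the ring buffer
done, squeezed = napi_poll(ring)
print(done, squeezed)          # 300 True: budget exhausted with work remaining
```

When the poll exits with work remaining, the cycle is re-scheduled; persistent "squeezed" counts are the signal that netdev_budget or netdev_budget_usecs may be too low for the traffic rate.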

Egress - they're leaving

  1. Application sends message (sendmsg or other)
  2. TCP send message allocates an sk_buff
  3. It enqueues the skb to the socket write buffer, which is tcp_wmem sized
  4. Builds the TCP header (src and dst port, checksum)
  5. Calls L3 handler (in this case ipv4 on tcp_write_xmit and tcp_transmit_skb)
  6. L3 (ip_queue_xmit) does its work: build ip header and call netfilter (LOCAL_OUT)
  7. Calls output route action
  8. Calls netfilter (POST_ROUTING)
  9. Fragment the packet (ip_output)
  10. Calls L2 send function (dev_queue_xmit)
  11. Feeds the output (QDisc) queue of txqueuelen length, with its algorithm defined by default_qdisc
  12. The driver code enqueues the packets at the ring buffer tx
  13. The driver will do a soft IRQ (NET_TX_SOFTIRQ) after tx-usecs timeout or tx-frames
  14. Re-enable hard IRQ to NIC
  15. Driver will map all the packets (to be sent) to some DMA'ed region
  16. NIC fetches the packets (via DMA) from RAM to transmit
  17. After the transmission NIC will raise a hard IRQ to signal its completion
  18. The driver will handle this IRQ (turn it off)
  19. And schedule (soft IRQ) the NAPI poll system
  20. NAPI will handle the receive packets signaling and free the RAM
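
The queueing in step 11 can be sketched as a tail-drop FIFO (a toy model assuming a plain pfifo discipline; the packet counts are hypothetical):

```python
# Toy model of the egress device queue: once txqueuelen packets are
# queued, further packets are tail-dropped (visible in `ip -s link`).
def enqueue(queue, pkt, txqueuelen):
    if len(queue) >= txqueuelen:
        return False           # dropped: queue is full
    queue.append(pkt)
    return True

q, drops = [], 0
for pkt in range(1500):        # a burst larger than the queue
    if not enqueue(q, pkt, txqueuelen=1000):
        drops += 1
print(len(q), drops)           # 1000 500: the burst overflowed the queue
```

Smarter qdiscs such as fq_codel drop earlier and more selectively to fight bufferbloat, but the capacity limit works the same way.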

How to check - perf

If you want to see the network tracing within Linux you can use perf.

```bash
docker run -it --rm --cap-add SYS_ADMIN --entrypoint bash ljishen/perf
apt-get update
apt-get install iputils-ping

# this is going to trace all events (not syscalls) of the subsystem net:* while performing the ping
perf trace --no-syscalls --event 'net:*' ping globo.com -c1 > /dev/null
```

(figure: perf trace of network events during a ping)

What, Why and How - network and sysctl parameters

Ring Buffer - rx,tx

  • What - the driver receive/send queues, single or multiple, with a fixed size, usually implemented as FIFOs and located in RAM
  • Why - a buffer to smoothly absorb bursts of traffic without dropping packets; you might need to increase these queues when you see drops or overruns, i.e. more packets are arriving than the kernel is able to consume. The side effect might be increased latency.
  • How:
    • Check command: ethtool -g ethX
    • Change command: ethtool -G ethX rx value tx value
    • How to monitor: ethtool -S ethX | grep -e "err" -e "drop" -e "over" -e "miss" -e "timeout" -e "reset" -e "restar" -e "collis" | grep -v "\: 0"
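
If you would rather post-process those counters in a script, a small sketch (the sample output and counter names are illustrative; they vary per driver):

```python
import re

# illustrative `ethtool -S ethX` output; real counter names differ per driver
sample = """\
NIC statistics:
     rx_packets: 8353742
     rx_dropped: 121
     rx_over_errors: 0
     rx_missed_errors: 17
     tx_timeout_count: 0
"""
# keep only the non-zero counters whose name suggests trouble
stats = {k: int(v) for k, v in re.findall(r"(\S+): (\d+)", sample)}
suspects = {k: v for k, v in stats.items()
            if v > 0 and re.search(r"err|drop|over|miss|timeout", k)}
print(suspects)  # {'rx_dropped': 121, 'rx_missed_errors': 17}
```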

Interrupt Coalescence (IC) - rx-usecs, tx-usecs, rx-frames, tx-frames (hardware IRQ)

  • What - number of microseconds/frames to wait before raising a hard IRQ; from the NIC's perspective, it will DMA data packets until this timeout or frame count is reached
  • Why - reduces CPU usage and the number of hard IRQs; might increase throughput at the cost of latency.
  • How:
    • Check command: ethtool -c ethX
    • Change command: ethtool -C ethX rx-usecs value tx-usecs value
    • How to monitor: cat /proc/interrupts
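
The effect of coalescing can be illustrated with a toy model (a sketch, not driver code; the arrival pattern is made up):

```python
# Toy model: the NIC raises one hard IRQ once rx_frames packets are
# pending OR rx_usecs have elapsed since the first pending packet.
def hard_irqs(arrivals_usec, rx_usecs, rx_frames):
    irqs, pending, first = 0, 0, None
    for t in arrivals_usec:
        if first is None:
            first = t
        pending += 1
        if pending >= rx_frames or t - first >= rx_usecs:
            irqs += 1
            pending, first = 0, None
    if pending:
        irqs += 1              # the coalescing timer fires after the burst
    return irqs

arrivals = list(range(0, 1000, 10))              # 100 packets, 10 us apart
print(hard_irqs(arrivals, rx_usecs=0, rx_frames=1))   # 100: one IRQ per packet
print(hard_irqs(arrivals, rx_usecs=50, rx_frames=8))  # 17: packets batched per IRQ
```

Fewer IRQs means less per-packet CPU overhead, but each batched packet waits up to rx-usecs before being seen, which is exactly the throughput/latency trade-off described above.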

Interrupt Coalescing (soft IRQ) and Ingress QDisc

  • What - maximum number of microseconds in one NAPI polling cycle. Polling will exit when either netdev_budget_usecs have elapsed during the poll cycle or the number of packets processed reaches netdev_budget.
  • Why - instead of reacting to tons of soft IRQs, the driver keeps polling data; keep an eye on dropped (# of packets that were dropped because netdev_max_backlog was exceeded) and squeezed (# of times ksoftirqd ran out of netdev_budget or time slice with work remaining).
  • How:
    • Check command: sysctl net.core.netdev_budget_usecs
    • Change command: sysctl -w net.core.netdev_budget_usecs value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - netdev_budget is the maximum number of packets taken from all interfaces in one polling cycle (NAPI poll). In one polling cycle interfaces which are registered to polling are probed in a round-robin manner. Also, a polling cycle may not exceed netdev_budget_usecs microseconds, even if netdev_budget has not been exhausted.
  • How:
    • Check command: sysctl net.core.netdev_budget
    • Change command: sysctl -w net.core.netdev_budget value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - dev_weight is the maximum number of packets that kernel can handle on a NAPI interrupt, it's a Per-CPU variable. For drivers that support LRO or GRO_HW, a hardware aggregated packet is counted as one packet in this.
  • How:
    • Check command: sysctl net.core.dev_weight
    • Change command: sysctl -w net.core.dev_weight value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - netdev_max_backlog is the maximum number of packets, queued on the INPUT side (the ingress qdisc), when the interface receives packets faster than kernel can process them.
  • How:
    • Check command: sysctl net.core.netdev_max_backlog
    • Change command: sysctl -w net.core.netdev_max_backlog value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
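
The /proc/net/softnet_stat rows referenced above can be decoded like this (a sketch: the first three columns are processed, dropped and time_squeeze per the kernel docs, but the total column count varies across kernel versions, and the sample values below are made up):

```python
def parse_softnet(text):
    """Decode /proc/net/softnet_stat: one row of hex values per CPU.
    Column 0 = packets processed, 1 = dropped, 2 = time_squeeze."""
    rows = []
    for cpu, line in enumerate(text.splitlines()):
        cols = [int(v, 16) for v in line.split()]
        rows.append({"cpu": cpu, "processed": cols[0],
                     "dropped": cols[1], "squeezed": cols[2]})
    return rows

# made-up sample: CPU1 dropped one packet and got squeezed 0xfb times
sample = ("00358fe3 00000000 000002bc 00000000 00000000 00000000 "
          "00000000 00000000 00000000\n"
          "000967d7 00000001 000000fb 00000000 00000000 00000000 "
          "00000000 00000000 00000000")
for row in parse_softnet(sample):
    print(row)
```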

Egress QDisc - txqueuelen and default_qdisc

  • What - txqueuelen is the maximum number of packets, queued on the OUTPUT side.
  • Why - a buffer/queue to face connection burst and also to apply tc (traffic control).
  • How:
    • Check command: ip link show dev ethX
    • Change command: ip link set dev ethX txqueuelen N
    • How to monitor: ip -s link
  • What - default_qdisc is the default queuing discipline to use for network devices.
  • Why - each application has different load and need to traffic control and it is used also to fight against bufferbloat
  • How:
    • Check command: sysctl net.core.default_qdisc
    • Change command: sysctl -w net.core.default_qdisc value
    • How to monitor: tc -s qdisc ls dev ethX
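
The dropped/overlimits/requeues counters from tc -s qdisc can be scraped in a script like this (the sample output below is illustrative, not captured from a real device):

```python
import re

# illustrative `tc -s qdisc ls dev eth0` output
sample = """\
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024
 Sent 1514342 bytes 2048 pkt (dropped 7, overlimits 0 requeues 3)
"""
m = re.search(r"Sent (\d+) bytes (\d+) pkt "
              r"\(dropped (\d+), overlimits (\d+) requeues (\d+)\)", sample)
sent_bytes, sent_pkts, dropped, overlimits, requeues = map(int, m.groups())
print(dropped, requeues)  # 7 3
```

A steadily growing dropped counter on the root qdisc is the egress-side equivalent of watching dropped in softnet_stat on ingress.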

TCP Read and Write Buffers/Queues

The policy that defines what is memory pressure is specified at tcp_mem and tcp_moderate_rcvbuf.

  • What - tcp_rmem - min (size used under memory pressure), default (initial size), max (maximum size) - size of receive buffer used by TCP sockets.
  • Why - this is the buffer/queue where received data waits to be read by the application; understanding its consequences can help a lot.
  • How:
    • Check command: sysctl net.ipv4.tcp_rmem
    • Change command: sysctl -w net.ipv4.tcp_rmem="min default max"; when changing default value, remember to restart your user space app (i.e. your web server, nginx, etc)
    • How to monitor: cat /proc/net/sockstat
  • What - tcp_wmem - min (size used under memory pressure), default (initial size), max (maximum size) - size of send buffer used by TCP sockets.
  • How:
    • Check command: sysctl net.ipv4.tcp_wmem
    • Change command: sysctl -w net.ipv4.tcp_wmem="min default max"; when changing default value, remember to restart your user space app (i.e. your web server, nginx, etc)
    • How to monitor: cat /proc/net/sockstat
  • What - tcp_moderate_rcvbuf - if set, TCP performs receive buffer auto-tuning, attempting to automatically size the buffer.
  • How:
    • Check command: sysctl net.ipv4.tcp_moderate_rcvbuf
    • Change command: sysctl -w net.ipv4.tcp_moderate_rcvbuf value
    • How to monitor: cat /proc/net/sockstat
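
The min/default/max triple and the auto-tuning cap can be sketched like this (a simplification: the real auto-tuner grows the buffer toward the bandwidth-delay product, but it never leaves the tcp_rmem bounds; the values are typical examples, not recommendations):

```python
def parse_triple(s):
    """tcp_rmem / tcp_wmem hold three values: min, default (initial), max."""
    mn, default, mx = (int(x) for x in s.split())
    return mn, default, mx

mn, default, mx = parse_triple("4096 131072 6291456")

def autotuned_rcvbuf(target):
    # clamp the auto-tuner's target into the [min, max] window
    return max(mn, min(target, mx))

print(autotuned_rcvbuf(10_000_000))  # 6291456: capped at tcp_rmem max
```

This is why raising only the default rarely helps a high-BDP link: the max is what bounds how far auto-tuning can grow the buffer.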

Honorable mentions - TCP FSM and congestion algorithm

Accept and SYN Queues are governed by net.core.somaxconn and net.ipv4.tcp_max_syn_backlog. Nowadays net.core.somaxconn caps both queue sizes.

  • sysctl net.core.somaxconn - provides an upper limit on the value of the backlog parameter passed to the listen() function, known in userspace as SOMAXCONN. If you change this value, you should also change your application to a compatible value (i.e. nginx backlog).
  • cat /proc/sys/net/ipv4/tcp_fin_timeout - this specifies the number of seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification but required to prevent denial-of-service attacks.
  • cat /proc/sys/net/ipv4/tcp_available_congestion_control - shows the available congestion control choices that are registered.
  • cat /proc/sys/net/ipv4/tcp_congestion_control - sets the congestion control algorithm to be used for new connections.
  • cat /proc/sys/net/ipv4/tcp_max_syn_backlog - sets the maximum number of queued connection requests which have still not received an acknowledgment from the connecting client; if this number is exceeded, the kernel will begin dropping requests.
  • cat /proc/sys/net/ipv4/tcp_syncookies - enables/disables syn cookies, useful for protecting against syn flood attacks.
  • cat /proc/sys/net/ipv4/tcp_slow_start_after_idle - enables/disables tcp slow start.
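
The cap that somaxconn puts on the listen() backlog boils down to a min() (a sketch; 4096 is the default somaxconn on recent kernels, older ones default to 128):

```python
# The kernel silently caps the backlog argument of listen() at
# net.core.somaxconn, so raising only the application side has no effect.
def effective_backlog(listen_backlog, somaxconn=4096):
    return min(listen_backlog, somaxconn)

# e.g. an nginx configured with backlog=65535 still gets only somaxconn slots
print(effective_backlog(65535))  # 4096
```

This is why the text above says to keep the application's backlog (i.e. nginx's) and net.core.somaxconn at compatible values.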

How to monitor:

  • netstat -atn | awk '/tcp/ {print $6}' | sort | uniq -c - summary by state
  • ss -neopt state time-wait | wc -l - counters by a specific state: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listening, closing
  • netstat -st - tcp stats summary
  • nstat -a - human-friendly tcp stats summary
  • cat /proc/net/sockstat - summarized socket stats
  • cat /proc/net/tcp - detailed stats, see each field meaning at the kernel docs
  • cat /proc/net/netstat - ListenOverflows and ListenDrops are important fields to keep an eye on
    • cat /proc/net/netstat | awk '(f==0) { i=1; while ( i<=NF) {n[i] = $i; i++ }; f=1; next} (f==1){ i=2; while ( i<=NF){ printf "%s = %d\n", n[i], $i; i++}; f=0} ' | grep -v "= 0" - a human-readable /proc/net/netstat
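
The same name/value pairing that the awk one-liner does (headers on one line, values on the next) can be written in Python (the sample lines below are abridged and the values made up):

```python
def parse_netstat(text):
    """Pair each /proc/net/netstat header line with the value line below it."""
    lines = text.splitlines()
    stats = {}
    for hdr, val in zip(lines[::2], lines[1::2]):
        names, values = hdr.split()[1:], val.split()[1:]  # skip 'TcpExt:' prefix
        stats.update({n: int(v) for n, v in zip(names, values)})
    return stats

sample = ("TcpExt: SyncookiesSent ListenOverflows ListenDrops\n"
          "TcpExt: 0 12 12\n"
          "IpExt: InNoRoutes InTruncatedPkts\n"
          "IpExt: 3 0")
stats = parse_netstat(sample)
print({k: v for k, v in stats.items() if v})  # only the non-zero counters
```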

![tcp finite state machine](https://upload.wikimedia.org/wikipedia/commons/a/a2/Tcp_state_diagram_fixed.svg)
