TCP pacing
In computer networking, TCP pacing is a set of techniques for making the pattern of packet transmission generated by the Transmission Control Protocol less bursty. Where switches and routers along the path have insufficient buffer memory, TCP pacing is intended to avoid packet loss caused by buffer exhaustion.[1] It can be performed by the network scheduler.
Bursty traffic can lead to higher queuing delays, more packet losses and lower throughput.[2] It has been observed that TCP's congestion control mechanisms can produce bursty traffic on high-bandwidth, highly multiplexed networks.[3] TCP pacing has been proposed as a solution to this problem: instead of sending a whole window of data back-to-back, the sender spaces its transmissions evenly across a round-trip time.[1]
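The spacing idea can be illustrated with a minimal sketch: if a sender may transmit cwnd segments per round-trip time (RTT), pacing inserts a gap of RTT/cwnd between consecutive segments (equivalently, it limits the sending rate to cwnd × MSS / RTT) rather than emitting the window as one burst. The following Python sketch contrasts the two behaviours; the segment sizes, window and RTT values, and the send_segment stand-in are hypothetical and are not taken from any real TCP implementation, which performs pacing inside the operating-system kernel.

```python
import time

# Illustrative sketch only: pace one congestion window of segments evenly
# across a round-trip time instead of sending the whole window as a burst.
# All constants below are assumed example values, not measured ones.

MSS = 1460            # segment size in bytes (assumed)
CWND_SEGMENTS = 10    # congestion window, in segments (assumed)
RTT = 0.050           # round-trip time in seconds (assumed)

def send_segment(seq: int) -> None:
    """Stand-in for handing one segment to the network interface."""
    print(f"t={time.monotonic():.4f}s  segment {seq} ({MSS} bytes)")

def send_window_burst() -> None:
    # Un-paced behaviour: the entire window leaves as one burst.
    for seq in range(CWND_SEGMENTS):
        send_segment(seq)

def send_window_paced() -> None:
    # Paced behaviour: spread the window evenly over one RTT,
    # i.e. inter-segment gap = RTT / cwnd.
    gap = RTT / CWND_SEGMENTS
    for seq in range(CWND_SEGMENTS):
        send_segment(seq)
        time.sleep(gap)

if __name__ == "__main__":
    print("burst:")
    send_window_burst()
    print("paced:")
    send_window_paced()
```

In practice the same arithmetic is applied by the operating system rather than by the application, for example by Linux's fq queuing discipline, which paces outgoing TCP segments according to a per-socket rate.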
References
- ^ Wei, D.; Cao, P.; Low, S. (2006). "TCP pacing revisited". Proceedings of IEEE INFOCOM. Vol. 2.
- ^ Kleinrock, L. (1975). Queueing Systems. Wiley. OCLC 25403139.
- ^ Zhang, Lixia; Shenker, Scott; Clark, David D. (August 1991). "Observations on the dynamics of a congestion control algorithm". Proceedings of the conference on Communications architecture & protocols. New York, NY, USA: ACM. pp. 133–147. doi:10.1145/115992.116006. ISBN 0897914449. S2CID 7824777.