bgpd: Support tcp-mss for bgp neighbors
author Abhinay Ramesh <rabhinay@vmware.com>
Thu, 8 Apr 2021 10:28:35 +0000 (10:28 +0000)
committer Abhinay Ramesh <rabhinay@vmware.com>
Tue, 4 May 2021 06:21:24 +0000 (06:21 +0000)
commit 4ab467017e922b6b32565952523051862e636e4e
tree 82de506fa2ec51222ea96e16ad0ed04273434341
parent f71e1ff6a98d0e244c7da11d870d14e31b517811
bgpd: Support tcp-mss for bgp neighbors

Problem Statement:
=================
In a scale setup, BGP sessions start flapping.

RCA:
====
In a virtualized environment there are multiple places where the
MTU needs to be set. If the MTU is not set correctly at some of
these places, BGP packets can get fragmented, and in a scale setup
this leads to BGP session flaps.

Fix:
====
A new TCP option is provided as part of this implementation,
which can be configured per neighbor and sets the TCP maximum
segment size. The user needs to derive the path MTU between the
BGP neighbors and configure that value via the tcp-mss setting.

1. CLI Configuration:
[no] neighbor <A.B.C.D|X:X::X:X|WORD> tcp-mss (1-65535)

2. Running config
    frr# show running-config
    router bgp 100
     neighbor 198.51.100.2 tcp-mss 150       => new entry
     neighbor 2001:DB8::2 tcp-mss 400        => new entry

3. Show command
    frr# show bgp neighbors 198.51.100.2
    BGP neighbor is 198.51.100.2, remote AS 100, local AS 100, internal link
    Hostname: frr
      Configured tcp-mss is 150, synced tcp-mss is 138     => new display

4. Show command json output

    frr# show bgp neighbors 2001:DB8::2 json
    {
      "2001:DB8::2":{
        "remoteAs":100,
        "bgpTimerKeepAliveIntervalMsecs":60000,
        "bgpTcpMssConfigured":400,                               => new entry
        "bgpTcpMssSynced":388,                                  => new entry

Risk:
=====
Low - This is a config-driven feature that only sets the maximum
segment size for the TCP session between BGP peers.

Tests Executed:
===============
Manual testing was done with a three-router topology.
1. Executed basic config and unconfig scenarios.
2. Verified that the config is updated in the running config
   during config and unconfig operations.
3. Verified the show command output in both CLI format and
   JSON format.
4. Verified that TCP SYN messages carry the configured max
   segment size in their initial packets.
5. Verified the behaviour during a clear bgp session.
6. Captured packets to confirm that the new segment size
   takes effect.

Signed-off-by: Abhinay Ramesh <rabhinay@vmware.com>
13 files changed:
bgpd/bgp_network.c
bgpd/bgp_vty.c
bgpd/bgpd.c
bgpd/bgpd.h
doc/user/bgp.rst
lib/sockopt.c
lib/sockopt.h
tests/topotests/bgp_tcp_mss/__init__.py [new file with mode: 0644]
tests/topotests/bgp_tcp_mss/r1/bgpd.conf [new file with mode: 0644]
tests/topotests/bgp_tcp_mss/r1/zebra.conf [new file with mode: 0644]
tests/topotests/bgp_tcp_mss/r2/bgpd.conf [new file with mode: 0644]
tests/topotests/bgp_tcp_mss/r2/zebra.conf [new file with mode: 0644]
tests/topotests/bgp_tcp_mss/test_bgp_tcp_mss.py [new file with mode: 0644]