Trey Aspelund [Mon, 13 Feb 2023 22:14:16 +0000 (22:14 +0000)]
tests: update tests using 'show bgp json detail'
There were a few tests using "show bgp ... json detail" that compared output
against predefined json structures. This updates those predefined structures
to match the new output format (the path array moves under a "paths" key and
header keys are added).
Trey Aspelund [Fri, 10 Feb 2023 19:05:27 +0000 (19:05 +0000)]
bgpd: fix 'json detail' output structure
"show bgp <afi> <safi> json detail" was incorrectly displaying header
information from route_vty_out_detail_header() as an element of the
"paths" array. This corrects the behavior for 'json detail' so that a
route holds a dictionary with keys for "paths" and header info, which
aligns with how we structure the output for a specific prefix, e.g.
"show bgp <afi> <safi> <prefix> json".
Before:
```
ub20# show ip bgp json detail
{
"vrfId": 0,
"vrfName": "default",
"tableVersion": 3,
"routerId": "100.64.0.222",
"defaultLocPrf": 100,
"localAS": 1,
"routes": { "2.2.2.2/32": [
{ <<<<<<<<< should be outside the array
"prefix":"2.2.2.2/32",
"version":1,
"advertisedTo":{
"192.168.122.12":{
"hostname":"ub20-2"
}
}
},
{
"aspath":{
"string":"Local",
"segments":[
],
"length":0
},
<snip>
```
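After (a sketch of the corrected layout described above; the values reuse the
Before output and the tail is elided the same way):
```
ub20# show ip bgp json detail
{
<snip>
"routes": { "2.2.2.2/32": {
"prefix":"2.2.2.2/32",
"version":1,
"advertisedTo":{
"192.168.122.12":{
"hostname":"ub20-2"
}
},
"paths":[
{
"aspath":{
"string":"Local",
<snip>
```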
Donald Sharp [Thu, 2 Feb 2023 21:28:27 +0000 (16:28 -0500)]
lib: Fix non-use of option
Commit d7c6467ba2f55d1055babbb7fe82716ca3efdc7e added the
ability to specify non-pretty printing but unfortunately
forgot to use the option variable, so the option had no effect.
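As an illustration of this class of bug (a minimal json-c sketch, not the
actual FRR code), an option that is accepted but never consulted, with the
fix shown in the comment:
```c
#include <json-c/json.h>
#include <stdbool.h>

/* Minimal sketch of the bug class: 'pretty' is accepted but never
 * consulted, so output is always pretty-printed regardless of what
 * the caller asked for.
 */
const char *render_json(struct json_object *obj, bool pretty)
{
	/* Bug: 'pretty' is ignored; the flag is hardcoded. */
	return json_object_to_json_string_ext(obj, JSON_C_TO_STRING_PRETTY);

	/* Fix: derive the flag from the option.
	 *
	 * return json_object_to_json_string_ext(
	 *         obj, pretty ? JSON_C_TO_STRING_PRETTY
	 *                     : JSON_C_TO_STRING_PLAIN);
	 */
}
```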
vivek [Fri, 18 Dec 2020 18:55:40 +0000 (10:55 -0800)]
bgpd: Prevent multipathing among EVPN and non-EVPN paths
Ensure that a multipath set is composed entirely of EVPN paths (i.e.,
paths imported into the VRF from the EVPN address-family) or entirely of
non-EVPN paths. This condition already existed in the code
but was not properly enforced.
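A minimal sketch of the enforced invariant (types and names are illustrative,
not FRR's actual structures):
```c
#include <stdbool.h>

/* Illustrative stand-in for the real path structure. */
struct path {
	bool evpn_imported; /* imported into the VRF from EVPN? */
};

/* A candidate may join the multipath set only if it matches the
 * bestpath's EVPN-ness, keeping the set all-EVPN or all-non-EVPN.
 */
static bool mpath_compatible(const struct path *best,
			     const struct path *cand)
{
	return best->evpn_imported == cand->evpn_imported;
}
```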
This change, as a side effect, eliminates the known trigger condition
for bad or missing RMAC programming in an EVPN deployment, described
in tickets CM-29043 and CM-31222. Routes (actually, paths) in a VRF
routing table that require VXLAN tunneling to the next hop currently
need some special handling in zebra to deal with the nexthop (neigh)
and RMAC programming, and this is implemented for the entire route
(prefix), not per-path. This can lead to the bad or missing RMAC
situation, which is now eliminated by ensuring all paths in the route
are 'similar'.
The longer-term solution in CL 5.x will be to deal with the special
programming by means of explicit communication between bgpd and zebra.
This is already implemented for EVPN-MH via CM-31398. These changes
will be extended to non-MH also and the special code in zebra removed
or refined.
Signed-off-by: Vivek Venkatraman <vivek@nvidia.com>
Acked-by: Trey Aspelund <taspelund@nvidia.com>
Acked-by: Anuradha Karuppiah <anuradhak@nvidia.com>
Acked-by: Chirag Shah <chirag@nvidia.com>
Ticket: CM-29043
Testing Done:
1. Manual testing
2. precommit on both MLX and BCM platforms
3. evpn-smoke - BCM and VX
vivek [Thu, 3 Dec 2020 04:04:19 +0000 (20:04 -0800)]
bgpd: Fix deterministic-med check for stale paths
When performing deterministic-MED processing, ensure that the peer
status is not checked when we encounter a stale path. Otherwise, the
stale path is skipped during DMED consideration, potentially leading
to it not being installed.
Test scenario: Consider a prefix with 2 (multi)paths. The peer that
announces the path with the winning DMED undergoes a graceful restart.
Before it comes back up, the other path goes away. Prior to the fix, a
third router receiving both paths would end up with no path installed
for the prefix after these events.
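A hedged sketch of the check (illustrative types; the real logic lives in
bgpd's deterministic-MED comparison):
```c
#include <stdbool.h>

/* Illustrative types, not FRR's actual structures. */
struct peer { bool established; };
struct path {
	struct peer *peer;
	bool stale; /* retained across a graceful restart */
};

/* Keep stale paths in DMED consideration even though their peer is
 * not currently Established; only check peer status for live paths.
 */
static bool dmed_consider(const struct path *p)
{
	if (p->stale)
		return true;
	return p->peer->established;
}
```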
```
% ./gobgp neighbor 192.168.10.124
BGP neighbor is 192.168.10.124, remote AS 65001
BGP version 4, remote router ID 200.200.200.202
BGP state = ESTABLISHED, up for 00:01:49
BGP OutQ = 0, Flops = 0
Hold time is 3, keepalive interval is 1 seconds
Configured hold time is 90, keepalive interval is 30 seconds
Neighbor capabilities:
multiprotocol:
ipv4-unicast: advertised and received
ipv6-unicast: advertised
route-refresh: advertised and received
extended-nexthop: advertised
Local: nlri: ipv4-unicast, nexthop: ipv6
UnknownCapability(6): received
UnknownCapability(9): received
graceful-restart: advertised and received
Local: restart time 10 sec
ipv6-unicast
ipv4-unicast
Remote: restart time 120 sec, notification flag set
ipv4-unicast, forward flag set
4-octet-as: advertised and received
add-path: received
Remote:
ipv4-unicast: receive
enhanced-route-refresh: received
long-lived-graceful-restart: advertised and received
Local:
ipv6-unicast, restart time 10 sec
ipv4-unicast, restart time 20 sec
Remote:
ipv4-unicast, restart time 0 sec, forward flag set
fqdn: advertised and received
Local:
name: donatas-pc, domain:
Remote:
name: spine1-debian-11, domain:
software-version: advertised and received
Local:
GoBGP/3.10.0
Remote:
FRRouting/8.5-dev-MyOwnFRRVersion-gdc92f44a45-dirt
cisco-route-refresh: received
Message statistics:
```
Donald Sharp [Tue, 14 Feb 2023 20:26:44 +0000 (15:26 -0500)]
bgpd: Remove unnecessary all_digit() call
The all_digit() call is unnecessary: the local preference
must be entered as a digit string, so we cannot reach this
point unless the string is already all digits.
Stephen Worley [Thu, 9 Feb 2023 19:57:31 +0000 (14:57 -0500)]
zebra: add VNI info to flood entry
When we install the flood entry for a VTEP in SVD,
ensure the VNI is set on the ctx object so that it gets
sent to the kernel and src_vni is set appropriately.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
sharathr [Tue, 19 Oct 2021 11:01:50 +0000 (04:01 -0700)]
zebra: Fix for mcast-group update and delete per vni for svd
Ticket: 2698649
Testing Done: precommit and evpn-min
Problem:
When the mcast-group is updated, the change is read from netlink and
populated by zebra, but when the kernel then sends the fdb delete for the
old group, we delete the mcast-group that we newly updated. This is because
we currently reset the mcast-group blindly during fdb delete, without checking
it against the mcast-group associated with the vni.
Fix is to separate the add/update and delete mcast-group functions and to check
the mcast-group before resetting it during delete.
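A small sketch of the delete-side check (names and types are hypothetical,
not zebra's actual structures):
```c
#include <netinet/in.h>

/* Hypothetical per-VNI state. */
struct vni_state { struct in_addr mcast_grp; };

/* On an fdb delete, only reset the stored group if the delete refers
 * to the group we actually hold; a delete for a previously replaced
 * group must not wipe out the newly updated one.
 */
static void vni_mcast_grp_del(struct vni_state *vni, struct in_addr grp)
{
	if (vni->mcast_grp.s_addr != grp.s_addr)
		return; /* stale delete for an old group; ignore */
	vni->mcast_grp.s_addr = INADDR_ANY;
}
```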
sharathr [Fri, 8 Oct 2021 14:27:50 +0000 (07:27 -0700)]
zebra: fix for unexpected fdb entry showing up during ifdown/ifup events
Ticket: 2674793
Testing Done: precommit, evpn-min and evpn-smoke
The problem in this case is that whenever we trigger ifdown
followed by ifup of the bridge, remote mac entries
are programmed with vlan-1 in the fdb by zebra and never cleaned up.
The bridge has vlan_default_pvid 1, which means any port that gets added
initially has vlan 1; ifupdown2 then deletes it and adds
the proper vlan.
The problem lies in zebra, where we are not cleaning up the remote
macs during the vlan change.
Fix is to uninstall the remote macs and then reinstall them
during the vlan change.
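A sketch of the fix's shape, with hypothetical helpers standing in for
zebra's remote-MAC install/uninstall paths:
```c
struct vni; /* opaque per-VNI state (illustrative) */

/* Hypothetical stand-ins for zebra's MAC programming paths. */
static void uninstall_remote_macs(struct vni *v, int vlan) { (void)v; (void)vlan; }
static void install_remote_macs(struct vni *v, int vlan) { (void)v; (void)vlan; }

/* On an access-VLAN change, first remove the remote MACs programmed
 * against the old VLAN, then reinstall them against the new one, so
 * no entries linger on vlan-1 (the transient default pvid).
 */
static void access_vlan_change(struct vni *v, int old_vlan, int new_vlan)
{
	uninstall_remote_macs(v, old_vlan);
	install_remote_macs(v, new_vlan);
}
```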
Signed-off-by: Stephen Worley <sworley@nvidia.com>
When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
Signed-off-by: Vivek Venkatraman <vivek@nvidia.com>
Ticket: #2613048
Reviewed By:
Testing Done:
1. Manual verification - logs in the ticket
2. Precommit (user job #171) and evpn-min (user job #170)
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ticket: 2730328, 2724075
Reviewed By: CCR-11741, CCR-11746
Testing Done: Unit Test
2730328: At high bridge-vids counts, VNI devices are not added in FRR if
FRR restarts after loading e/n/i
The issue is with the receive buffer size for netlink_recv_msg.
We have defined the kernel receive-message buffer on the stack with a size of 32768 (32K).
When the configuration is applied without an FRR restart, things work fine
because the receive messages from the kernel are well within the 32K limit.
However, with this configuration, when FRR was restarted I could see that
some receive messages were crossing the 32K limit and hence weren't processed.
The error log below was seen when FRR was restarted with the configuration.
2021/08/09 05:59:55 ZEBRA: [EC 4043309092] netlink-cmd (NS 0) error: data remnant size 32768
Fix is to increase the buffer size by another 2K
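A sketch of the truncation failure mode and the size bump (sizes mirror the
description above; the MSG_TRUNC handling is added for illustration, not taken
from zebra):
```c
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

/* 32K proved too small for restart-time dumps; grow it by another 2K
 * (value illustrative of the fix described above). */
#define NL_RCV_BUF_SIZE (32768 + 2048)

static ssize_t netlink_read(int fd)
{
	static char buf[NL_RCV_BUF_SIZE];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
	ssize_t len = recvmsg(fd, &msg, 0);

	/* A message larger than the buffer is truncated and lost;
	 * this is the failure mode behind the "data remnant" log. */
	if (len > 0 && (msg.msg_flags & MSG_TRUNC))
		return -1;
	return len;
}
```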
2724075: evpn mh/SVD - some of the remote neighs/macs aren't installed
in the kernel post ifdown/ifup of the bridge
The issue was specific to SVD. During ifdown/ifup of the bridge,
I could see that the access-bd was not associated with the vni, and hence
the remote neighs were not getting programmed in the kernel.
Fix is to reference (i.e., associate) the vxlan vni to the access-bd when
the vni is reported up. With this fix, I was able to see the remote
neighs getting programmed in the kernel.
Stephen Worley [Fri, 9 Dec 2022 22:23:32 +0000 (17:23 -0500)]
bgpd: SA set labels/num_labels to NULL/0
Static analysis caught a bug where we could be reading
garbage values for labels/num_labels. Fix that by
ensuring they are reset to NULL/0 on each iteration of the mpath loop.
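The shape of the fix, sketched with illustrative types:
```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the real mpath element. */
struct mpath { const uint32_t *labels; uint8_t num_labels; };

static void walk_mpaths(const struct mpath *mp, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		/* Reset on every iteration so a label-less path can't
		 * read the previous iteration's (garbage) values. */
		const uint32_t *labels = NULL;
		uint8_t num_labels = 0;

		if (mp[i].labels) {
			labels = mp[i].labels;
			num_labels = mp[i].num_labels;
		}
		(void)labels;
		(void)num_labels;
	}
}
```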
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Tue, 22 Nov 2022 21:41:54 +0000 (16:41 -0500)]
zebra: ignore zero_mac without VNI deletes
Ignore zebra_mac updates if they do not contain a VNI for the vxlan
interface. We don't have anything we can do with them.
```
==443593== Process terminating with default action of signal 6 (SIGABRT): dumping core
==443593== at 0x4E1156C: __pthread_kill_implementation (in /usr/lib64/libc.so.6)
==443593== by 0x4DC4D15: raise (in /usr/lib64/libc.so.6)
==443593== by 0x49823C7: core_handler (sigevent.c:261)
==443593== by 0x4DC4DBF: ??? (in /usr/lib64/libc.so.6)
==443593== by 0x4E1156B: __pthread_kill_implementation (in /usr/lib64/libc.so.6)
==443593== by 0x4DC4D15: raise (in /usr/lib64/libc.so.6)
==443593== by 0x4D987F2: abort (in /usr/lib64/libc.so.6)
==443593== by 0x49C3064: _zlog_assert_failed (zlog.c:700)
==443593== by 0x4F5E6D: zebra_vxlan_if_vni_find (zebra_vxlan_if.c:661)
==443593== by 0x4EEAC3: zebra_vxlan_check_readd_vtep (zebra_vxlan.c:4244)
==443593== by 0x450967: netlink_macfdb_change (rt_netlink.c:3722)
==443593== by 0x450011: netlink_neigh_change (rt_netlink.c:4458)
```
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Wed, 28 Apr 2021 19:45:29 +0000 (15:45 -0400)]
zebra: handle STP state change for SVD per vlan ID
Read in STP state changes for a Single Vxlan Device
via bridge vlan netlink messages. Map the vlan ID to a
VNI in the SVD table and treat it similarly to how
we traditionally handle protodown of the VXLAN device
in a non-SVD scenario.
Forwarding == Interface UP
Blocking == Interface DOWN
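The mapping, sketched as a tiny predicate (names illustrative):
```c
#include <stdbool.h>

enum stp_state { STP_FORWARDING, STP_BLOCKING }; /* illustrative */

/* Forwarding == Interface UP, Blocking == Interface DOWN */
static bool vni_oper_up(enum stp_state state)
{
	return state == STP_FORWARDING;
}
```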
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Thu, 8 Apr 2021 23:20:53 +0000 (19:20 -0400)]
bgpd: add mpath label stack helper functions for dvni
Add some bgp_path_info helper functions for getting the correct L3VNI
label, getting the VNI from the label stack, and determining if
the mpath is D-VNI based.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Mon, 5 Apr 2021 21:16:38 +0000 (17:16 -0400)]
zebra: nhg resolution handler for d-vni
Add code in the nhg resolution path for determining if Downstream
VNI is in play. This is the only place in all of zebra where
we should be arbitrarily setting the ifindex/labels since
this is where new nhgs are created/destroyed. If something
changes, it must happen here.
We determine if D-VNI is being used by matching the carried
label (VNI) on the nexthop with the vrf VNI from the route.
If they do not match, we can assume this is a D-VNI labeled
nexthop.
We loop through all of the group's nexthops to see if any are D-VNI. If even
one is, we must treat them all as such. Otherwise, fall back to
traditional EVPN route handling and remove all the labels.
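A sketch of that decision (illustrative types; the real logic sits in zebra's
NHG resolution):
```c
#include <stdbool.h>
#include <stdint.h>

struct nh { uint32_t vni_label; }; /* 0 == no label (illustrative) */

/* A nexthop whose carried VNI differs from the route's VRF VNI is
 * Downstream VNI; one such member makes the whole group D-VNI. */
static bool group_is_dvni(const struct nh *nhs, int n, uint32_t vrf_vni)
{
	for (int i = 0; i < n; i++)
		if (nhs[i].vni_label && nhs[i].vni_label != vrf_vni)
			return true;
	return false;
}
```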
If they are going to be treated as D-VNI, we retain the labels and
verify that the underlying VRF vxlan interface is a Single VXlan Device.
If it is not, we cannot use D-VNI. If it is, continue on. The VNI label
will be encapsulated via LWTUNNEL and sent to the kernel.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Thu, 1 Apr 2021 16:00:04 +0000 (12:00 -0400)]
zebra: install neigh entries on SVD
Always install neigh entries on the SVD if it exists in
zebra. If zebra is using a Single Vxlan Device, we must
duplicate the install of our neigh entries to it so that
vxlan communication also works across it in the downstream-VNI
case.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Thu, 1 Apr 2021 15:55:05 +0000 (11:55 -0400)]
lib,sharpd: add ability for sharpd to install vni labels
Add the ability for sharpd to install VNI labels for testing.
This patch is just for testing/dev purposes with EVPN.
It adds some code to the vty for nexthop-groups so we can explicitly
add a label to nexthops and then let sharpd encode them to zebra.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Thu, 1 Apr 2021 15:50:31 +0000 (11:50 -0400)]
zebra: encode vni label via lwt encap
Encode the VNI label during route install on Linux
systems via the 64-bit LWTUNNEL_IP_ID lwt encap attribute. The kernel
expects this in network byte order, so we convert it.
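The byte-order conversion, sketched (the attribute name is from the text
above; the helper itself is illustrative):
```c
#include <endian.h>
#include <stdint.h>

/* LWTUNNEL_IP_ID is a 64-bit tunnel id the kernel expects in network
 * byte order, so widen the 24-bit VNI and convert before encoding it
 * into the netlink attribute.
 */
static uint64_t vni_to_tunnel_id(uint32_t vni)
{
	return htobe64((uint64_t)vni);
}
```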
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Thu, 1 Apr 2021 15:43:23 +0000 (11:43 -0400)]
bgpd: send L3VNI as route labels to zebra
Add functionality to always send the L3VNI to zebra as a label
on the route. It will be zebra's job to determine how to use it (i.e.
via a Single Vxlan Device or not).
The L3VNI, per the RFC, should always be the second label for a Type-2 route
and the only label available for a Type-5 route. Hence, we can just grab the
last label in the stack here and add it onto the route.
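That rule reduces to taking the last label in the stack, e.g. (illustrative
helper):
```c
#include <stdint.h>

/* Type-2: the L3VNI is the second (last) label; Type-5: it is the
 * only label. Either way, the last label in the stack is the L3VNI.
 */
static uint32_t l3vni_from_stack(const uint32_t *labels, int num)
{
	return num > 0 ? labels[num - 1] : 0;
}
```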
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Mon, 5 Apr 2021 21:12:01 +0000 (17:12 -0400)]
lib: add label_type as field in zapi_nexthop
Add the ability to specify the label type along with the labels
you are passing to zebra in zapi_nexthop. This is needed as we
abstract the label code to be re-used by evpn as well as mpls.
Protocols need to be able to set the type of label they have attached.
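A sketch of the idea (names illustrative, not the exact zapi definitions):
```c
#include <stdint.h>

/* Illustrative label-type tag and nexthop label block; the real
 * zapi_nexthop fields differ. */
enum label_type { LABEL_NONE, LABEL_MPLS, LABEL_EVPN_VNI };

struct nh_labels {
	enum label_type type; /* what the 'label' values mean */
	uint8_t num;
	uint32_t label[16];
};
```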
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Stephen Worley [Thu, 1 Apr 2021 15:31:44 +0000 (11:31 -0400)]
lib,zebra,bgpd,staticd: use label code to store VNI info
Use the already existing MPLS label code to store VNI
info for vxlan. VNIs are defined as labels just like MPLS;
we should be using the same code for both.
This patch is the first part of that. Next we will need to
abstract the label code so it is not so MPLS-specific. For now,
we just treat VXLAN as a label type and store it that way.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
zebra: fix for issues found during static analysis
This patch addresses issues found during static analysis:
- rt_netlink: initialise vtep only if the NDA_DST attribute is present
- if_netlink: initialise vni_start and vni_end
zebra: Bug fixes in fdb read for flooded traffic and remote fdb cleanup upon vni removal
This patch addresses the following issues:
- When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
- When VNI configuration is performed using VLAN-VNI mapping (i.e., without
individual VXLAN interfaces) and flooded traffic is handled via multicast,
the multicast group corresponding to the VNI needs to be explicitly read
from the bridge FDB. This is relevant in the case of the netlink interface to
the kernel and for the scenario where a new VNI is provisioned or comes up.
zebra: Handle vni determination for non-vlan-aware bridges
This patch addresses the following:
- Remove the unused VLAN Id parameter when trying to determine the VNI associated
with a non-VLAN-aware bridge. Also, add a check to ensure that in this case,
we have a per-VNI VXLAN interface. Due to the sequence of events, it is possible
that we may have VLAN-VNI mappings, in which case the code should return
gracefully.
- With support for a container VXLAN interface that has VLAN-VNI mappings,
the VXLAN interface itself may be up but a particular VNI might have
been removed. Ensure that the VNI mapping exists before proceeding with
further processing.