Donald Sharp [Wed, 12 Jun 2024 19:16:08 +0000 (15:16 -0400)]
zebra: Limit queue depth in dplane_fpm_nl
The dplane providers have a concept of input queues
and output queues. These queues are chained together
during normal operation. The code in zebra also has
a feedback mechanism where the MetaQ will not run when
the first input queue is backed up. Having the dplane_fpm_nl
code grab all contexts when it is backed up prevents
this system from behaving appropriately.
Modify the code to not add to the dplane_fpm_nl's internal
queue when it is already full. This will allow the backpressure
to work appropriately in zebra proper.
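A minimal sketch of the intended behavior, using invented names (fpm_provider_run, FPM_QUEUE_LIMIT) and a toy FIFO rather than zebra's real dplane structures: the provider only pulls contexts into its private queue while that queue has room, so the remainder stays visible upstream and the MetaQ feedback can engage.
```
/*
 * Sketch only, not zebra's actual code.  Names such as fpm_provider_run
 * and FPM_QUEUE_LIMIT are invented for illustration.
 */
#include <stddef.h>

#define FPM_QUEUE_LIMIT 200	/* assumed cap on the private queue */

struct ctx {			/* stand-in for a dplane context */
	struct ctx *next;
};

struct ctx_queue {		/* toy singly linked FIFO */
	struct ctx *head, *tail;
	size_t count;
};

static struct ctx *queue_pop(struct ctx_queue *q)
{
	struct ctx *c = q->head;

	if (!c)
		return NULL;
	q->head = c->next;
	if (!q->head)
		q->tail = NULL;
	q->count--;
	c->next = NULL;
	return c;
}

static void queue_push(struct ctx_queue *q, struct ctx *c)
{
	c->next = NULL;
	if (q->tail)
		q->tail->next = c;
	else
		q->head = c;
	q->tail = c;
	q->count++;
}

static void fpm_provider_run(struct ctx_queue *in, struct ctx_queue *priv)
{
	/* Stop dequeueing once the private queue is full; do not grab
	 * every pending context just because it is available. */
	while (priv->count < FPM_QUEUE_LIMIT) {
		struct ctx *c = queue_pop(in);

		if (!c)
			break;
		queue_push(priv, c);
	}
}
```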
Donald Sharp [Wed, 12 Jun 2024 18:14:48 +0000 (14:14 -0400)]
zebra: Modify dplane loop to allow backpressure to filter up
Currently when the dplane_thread_loop is run, it moves contexts
from the dg_update_list and puts the contexts on the input queue
of the first provider. This provider is given a chance to run
and then the items on the output queue are pulled off and placed
on the input queue of the next provider. Rinse/Repeat down through
the entire list of providers. Now imagine that we have a list
of multiple providers and the last provider is getting backed up.
Contexts will end up sticking in the input queue of the `slow`
provider and that queue can grow without bound. This is a real problem
when you have a situation where an interface is flapping and an
upper level protocol is sending a continuous stream of route
updates to reflect the change in ecmp. You can end up with
a very, very large backlog of contexts. This is bad because
zebra can easily grow to a very large memory size and on
restricted systems you can run out of memory. Fortunately
for us, the MetaQ already participates in this process
by not doing more route processing until the dg_update_list
goes below the working limit of dg_updates_per_cycle. Thus,
if FRR modifies the behavior of this loop to not move more
contexts onto the input queue when either the input queue
or output queue of the next provider has reached this limit,
FRR will naturally start handling backpressure for the dplane
context system and memory will not grow out of control.
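A hedged sketch of the loop shape this describes, with assumed names (provider, in_count/out_count, and WORK_LIMIT standing in for dg_updates_per_cycle); it is not the real dplane_thread_loop, only the gating logic:
```
/*
 * Sketch only: run each provider, then hand finished contexts to the next
 * provider while both of its queues sit below the working limit.  Whatever
 * remains on out_count is the backpressure signal seen further upstream.
 */
#include <stddef.h>

#define WORK_LIMIT 128		/* stand-in for dg_updates_per_cycle */

struct provider {
	size_t in_count;	/* contexts queued for this provider */
	size_t out_count;	/* contexts it has finished */
	struct provider *next;
	void (*run)(struct provider *p);
};

static void dplane_loop_sketch(struct provider *head)
{
	for (struct provider *p = head; p; p = p->next) {
		p->run(p);

		struct provider *nxt = p->next;

		if (!nxt)
			break;

		/* Only feed the next provider while it has room. */
		while (p->out_count > 0 && nxt->in_count < WORK_LIMIT &&
		       nxt->out_count < WORK_LIMIT) {
			p->out_count--;
			nxt->in_count++;
		}
	}
}
```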
In the near future, some daemons may only register SIDs. This may be
the case for the pathd daemon when creating SRv6 binding SIDs.
When a locator is deleted at the ZEBRA level, the daemon needs an
easy way to find out which SIDs to unregister.
This commit proposes to add the locator name to the SRV6_SID_NOTIFY
message whenever possible. Only in the case of an allocation failure
will the locator not be present. In all other places, the notify API
at the protocol level takes the locator name as an extra parameter.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
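A rough sketch of the message shape being described, with an invented struct and encoder (this is not zebra's real ZAPI encoding): the locator name is appended whenever it is known, and its length is simply zero in the allocation-failure case.
```
/*
 * Illustrative only: a notify record carrying the SID plus an optional
 * locator name.  Real zebra code uses its own ZAPI stream encoding.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LOC_NAME_MAX 64

struct sid_notify_sketch {
	uint8_t status;			/* result of the SID operation */
	uint8_t sid[16];		/* the SRv6 SID (IPv6 address) */
	char locator[LOC_NAME_MAX];	/* empty only on allocation failure */
};

static size_t encode_sid_notify(const struct sid_notify_sketch *msg,
				uint8_t *buf, size_t buflen)
{
	size_t loc_len = strlen(msg->locator);
	size_t need = 1 + 16 + 1 + loc_len;

	if (buflen < need || loc_len > 255)
		return 0;

	buf[0] = msg->status;
	memcpy(buf + 1, msg->sid, 16);
	buf[17] = (uint8_t)loc_len;	/* 0 when no locator is known */
	memcpy(buf + 18, msg->locator, loc_len);
	return need;
}
```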
Zebra sends a `SRV6_SID_NOTIFY` notification to inform clients about the
result of a SID alloc/release operation. This commit adds a handler to
process a `SRV6_SID_NOTIFY` notification received from zebra.
If the notification indicates that a SID allocation operation was
successful, then it stores the allocated SID in the SRv6 database,
installs the SID into the RIB, and advertises the SID to the other BGP
routers.
If the notification indicates that an operation has failed, it logs the
error.
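A minimal sketch of that handler's decision flow, with hypothetical helper names (store_sid, install_sid_in_rib, advertise_sid); the real bgpd callback parses the ZAPI message and works on its SRv6 database.
```
/*
 * Sketch only: on a successful allocation, remember the SID, install it
 * and advertise it; on failure, just report the error.
 */
#include <stdint.h>
#include <stdio.h>

enum sid_notify_status { SID_ALLOC_OK, SID_ALLOC_FAIL, SID_RELEASE_FAIL };

struct srv6_sid_sketch {
	uint8_t addr[16];
};

static void store_sid(const struct srv6_sid_sketch *sid) { (void)sid; }
static void install_sid_in_rib(const struct srv6_sid_sketch *sid) { (void)sid; }
static void advertise_sid(const struct srv6_sid_sketch *sid) { (void)sid; }

static void handle_srv6_sid_notify(enum sid_notify_status status,
				   const struct srv6_sid_sketch *sid)
{
	switch (status) {
	case SID_ALLOC_OK:
		store_sid(sid);
		install_sid_in_rib(sid);
		advertise_sid(sid);
		break;
	case SID_ALLOC_FAIL:
	case SID_RELEASE_FAIL:
		fprintf(stderr, "SRv6 SID operation failed (status %d)\n",
			status);
		break;
	}
}
```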
Currently, when SRv6 is enabled in BGP, BGP requests a locator chunk
from Zebra. Zebra assigns a locator chunk to BGP, and then BGP can
allocate SIDs from the locator chunk.
Recently, the implementation of SRv6 in Zebra has been improved, and a
new API has been introduced for obtaining/releasing the SIDs.
Now, the daemons no longer need to request a chunk.
Instead, the daemons interact with Zebra to obtain information about the
locator and subsequently to allocate/release the SIDs.
This commit extends BGP to use the new SRv6 API. In particular, it
removes the chunk throughout the BGP code and modifies BGP to
request/save/advertise the locator instead of the chunk.
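A hedged sketch of the new client-side flow, using invented wrappers (request_locator_info, request_sid_alloc) rather than the actual ZAPI calls: BGP learns the locator's parameters and then asks zebra for individual SIDs, with the result arriving asynchronously via SRV6_SID_NOTIFY instead of being carved out of a locally held chunk.
```
/*
 * Sketch only; the real code sends ZAPI requests to zebra's SRv6 manager
 * and receives the SID later through an SRV6_SID_NOTIFY callback.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct locator_info_sketch {
	char name[64];
	uint8_t prefix[16];
	uint8_t block_len, node_len, func_len;
};

static bool request_locator_info(const char *name,
				 struct locator_info_sketch *out)
{
	/* Stand-in for the "get locator" request. */
	memset(out, 0, sizeof(*out));
	snprintf(out->name, sizeof(out->name), "%s", name);
	return true;
}

static bool request_sid_alloc(const struct locator_info_sketch *loc)
{
	/* Stand-in for the "get SID" request; the result is asynchronous. */
	(void)loc;
	return true;
}

static bool bgp_srv6_enable_sketch(const char *locator_name)
{
	struct locator_info_sketch loc;

	/* New flow: learn the locator's parameters, then ask zebra for each
	 * SID instead of carving one out of a locally held chunk. */
	if (!request_locator_info(locator_name, &loc))
		return false;
	return request_sid_alloc(&loc);
}
```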
Loïc Sang [Tue, 3 Sep 2024 16:36:25 +0000 (18:36 +0200)]
bgpd: remove redundant loopback check in label update
The "if_is_vrf" check is unnecessary because it’s already handled by
"if_get_vrf_loopback". Additionally, it ignores the default loopback and
could introduce potential bugs.
Fixes: 8b81f32e9787 ("bgpd: fix label lost when vrf loopback comes back")
Signed-off-by: Loïc Sang <loic.sang@6wind.com>
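A simplified sketch of the shape of the change, with stand-in types and helpers that are not FRR's real prototypes: the loopback lookup is left to decide, instead of first requiring the triggering interface to be a VRF device (which would skip the default VRF's loopback).
```
/*
 * Stand-ins only, not bgpd's real code or prototypes.
 */
#include <stddef.h>

struct if_sketch {
	int vrf_id;
};

/* Assumed to return the loopback for the given VRF, default VRF included. */
static struct if_sketch *vrf_loopback_lookup_sketch(int vrf_id)
{
	(void)vrf_id;
	return NULL;	/* stub */
}

static void label_update_sketch(struct if_sketch *ifp)
{
	/* Before: the update bailed out unless ifp itself was a VRF device,
	 * which also skipped the default VRF's loopback.  After: just let
	 * the loopback lookup decide. */
	struct if_sketch *lo = vrf_loopback_lookup_sketch(ifp->vrf_id);

	if (!lo)
		return;
	/* ... continue the label update using lo ... */
}
```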
show bgp vrf afi unicast statistics json output is not returned in JSON
format for a non-existent vrf.
Fix:
Format the output as JSON for the non-existent vrf cases.
Command supported:
```
show bgp vrf <VRFNAME> ipv4/ipv6 unicast statistics json
show bgp vrf <VRFNAME> l2vpn evpn statistics json
```
Before Fix:
```
leaf11#
leaf11# show bgp vrf test ipv4 unicast statistics json
View/Vrf test is unknown
leaf11#
leaf11#
leaf11# show bgp vrf test ipv6 unicast statistics json
View/Vrf test is unknown
leaf11#
leaf11#
leaf11# show bgp vrf default1 l2vpn evpn statistics json
View/Vrf default1 is unknown
leaf11#
```
After Fix:
```
leaf11#
leaf11# show bgp vrf test ipv4 unicast statistics json
{
"warning":"View/Vrf is unknown"
}
leaf11#
leaf11#
leaf11# show bgp vrf test ipv6 unicast statistics json
{
"warning":"View/Vrf is unknown"
}
leaf11#
leaf11# show bgp vrf default1 l2vpn evpn statistics json
{
"warning":"View/Vrf is unknown"
}
leaf11#
```
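A sketch of the fix's shape using plain libjson-c (which FRR wraps); the function and variable names here are illustrative, not the actual bgpd routine:
```
/*
 * Sketch only: when the VRF lookup fails and JSON output was requested,
 * emit the warning as a JSON object instead of the bare text line.
 */
#include <stdbool.h>
#include <stdio.h>
#include <json-c/json.h>

static void print_unknown_vrf(const char *name, bool use_json)
{
	if (!use_json) {
		printf("View/Vrf %s is unknown\n", name);
		return;
	}

	json_object *json = json_object_new_object();

	json_object_object_add(json, "warning",
			       json_object_new_string("View/Vrf is unknown"));
	printf("%s\n",
	       json_object_to_json_string_ext(json, JSON_C_TO_STRING_PRETTY));
	json_object_put(json);	/* drop the reference */
}
```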
Donald Sharp [Fri, 30 Aug 2024 13:05:11 +0000 (09:05 -0400)]
*: Create termtable specific temp memory
When trying to track down an MTYPE_TMP memory leak
it's harder to search for it when you happen to
have some usage of ttable_dump. Let's just give
it its own memory type so that we can avoid
confusion in the future.
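A toy, self-contained illustration of why a dedicated memory type helps when hunting leaks; it mirrors the idea behind FRR's MTYPE accounting without using the real macros, and all names here are invented:
```
/*
 * Allocations are counted per type, so a leak shows up under its own
 * counter instead of being buried in a shared "tmp" bucket.
 */
#include <stdio.h>
#include <stdlib.h>

struct mem_type {
	const char *name;
	size_t live_allocs;
};

static struct mem_type MTYPE_TMP_SKETCH = { "tmp", 0 };
static struct mem_type MTYPE_TTABLE_SKETCH = { "termtable", 0 };

static void *typed_calloc(struct mem_type *mt, size_t sz)
{
	mt->live_allocs++;
	return calloc(1, sz);
}

static void typed_free(struct mem_type *mt, void *p)
{
	mt->live_allocs--;
	free(p);
}

int main(void)
{
	void *row = typed_calloc(&MTYPE_TTABLE_SKETCH, 64);

	typed_free(&MTYPE_TTABLE_SKETCH, row);
	/* Leak hunting: each type reports its own live count. */
	printf("%s: %zu live, %s: %zu live\n", MTYPE_TMP_SKETCH.name,
	       MTYPE_TMP_SKETCH.live_allocs, MTYPE_TTABLE_SKETCH.name,
	       MTYPE_TTABLE_SKETCH.live_allocs);
	return 0;
}
```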
Donald Sharp [Fri, 22 Mar 2024 20:04:19 +0000 (16:04 -0400)]
bgpd: Ensure evpn local table display shows route send status
evpn has a concept of `local` tables where the evpn routes
are actually converted into underlying routes/neighbor
table entries (or vice versa). Then this local route
is propagated to the global evpn l2vpn table and sent
to the peers. Certain show commands in evpn
operate on the local table but make the output look
like the data has not been sent to the peer. This
is confusing for the operator. Modify the code
such that local tables get a `Local BGP table not advertised`
line in the place where the code talks about who has received
the data or not.
Example:
torm11# show bgp l2vpn evpn route vni 1000 mac 8a:a1:cc:73:a3:ac ip 45.0.0.5
BGP routing table entry for [2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5]
Paths: (2 available, best #2)
Local BGP table not advertised
Route [2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5] VNI 1000
Imported from 192.168.100.18:2:[2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5], VNI 1000
65101 65005
192.168.100.18(leaf2) from leaf2(192.168.5.1) (192.168.100.14)
Origin IGP, valid, external
Extended Community: RT:65005:1000 ET:8
Last update: Thu Mar 21 14:29:04 2024
Route [2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5] VNI 1000
Imported from 192.168.100.18:2:[2]:[0]:[48]:[8a:a1:cc:73:a3:ac]:[32]:[45.0.0.5], VNI 1000
65101 65005
192.168.100.18(leaf1) from leaf1(192.168.1.1) (192.168.100.13)
Origin IGP, valid, external, bestpath-from-AS 65101, best (Router ID)
Extended Community: RT:65005:1000 ET:8
Last update: Thu Mar 21 14:29:04 2024
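A small sketch of the display change with invented helpers (not the actual bgpd show code): when the command is walking a local table, print the fixed note instead of the usual advertised-to-peers output.
```
/*
 * Sketch only: choose the per-path header based on whether the show
 * command is operating on a local table.
 */
#include <stdbool.h>
#include <stdio.h>

static void show_advertised_to_peers(void)
{
	/* Placeholder for the normal "Advertised to ..." output. */
	printf("  Advertised to peers ...\n");
}

static void show_path_header(bool local_table)
{
	if (local_table)
		printf("  Local BGP table not advertised\n");
	else
		show_advertised_to_peers();
}
```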
Louis Scalbert [Fri, 30 Aug 2024 12:08:51 +0000 (14:08 +0200)]
tests: fix nhc1 route check after nhs1 down
After setting down nhs1, the test checks that the nhc1 routing table
matches the routes in nhc1/nhrp_route.json. This is incorrect because it
checks that the NHRP route to nhs1 is still present, when it should have
disappeared.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>