At the source, have a 'match evpn vni' clause plus a 'set' statement in a route-map.
Validate that the originating router's outbound update correctly applies
the 'set' statement based on the 'match vni' filter.
At origin:
route-map RM-EVPN-TE-Matches permit 10
match evpn vni 4001
set large-community 10:10:119
Receiving end:
Route [5]:[0]:[24]:[78.41.1.0] VNI 4001
5550
27.0.0.15 from TORS1(downlink-5) (27.0.0.15)
Origin incomplete, metric 0, valid, external, bestpath-from-AS 5550, best (First path received)
Extended Community: RT:5550:4001 ET:8 Rmac:00:02:00:00:00:4d
Large Community: 10:10:119 <--- Large community stamped
Last update: Thu Dec 10 22:19:26 2020
Mark Stapp [Mon, 21 Dec 2020 15:10:40 +0000 (10:10 -0500)]
zebra: nht resolve-via-default doesn't need force
We don't need to use the 'force' flag when processing the
resolve-via-default CLIs for ip and ipv6: we can just do normal
nht processing. [7.5 version]
Donald Sharp [Fri, 18 Dec 2020 19:22:09 +0000 (14:22 -0500)]
lib: Fix dependency of match types in route-map code
Route-maps contain a hash of hashes: the outer hash is keyed by the
container type name (say, community or access-list or whatever),
and each entry in turn holds a hash of the route-maps that reference it.
Suppose you have this:
!
frr version 7.3.1
frr defaults traditional
hostname eva
log stdout
!
debug route-map
!
router bgp 239
neighbor 192.168.161.2 remote-as external
!
address-family ipv4 unicast
neighbor 192.168.161.2 route-map foo in
exit-address-family
!
bgp community-list standard 7000:40002 permit 7000:40002
bgp community-list standard 7000:40002 permit 7000:40003
!
route-map foo deny 20
match community 7000:40002
!
route-map foo permit 10
!
line vty
!
end
You have a community hash which has a
7000:40002 entry.
This entry has a hash of the route-maps that are referencing it. In the above
example it would have `foo` as the single entry.
Given the above config if you do this:
eva# conf
eva(config)# route-map foo deny 20
eva(config-route-map)# match community 7000:4003
eva(config-route-map)#
We would expect the `7000:40002` community hash to no longer have
a reference to the `foo` route-map. Instead we see the code doing this:
2020/12/18 13:47:12 BGP: bgpd 7.3.1 starting: vty@2605, bgp@<all>:179
2020/12/18 13:47:47 BGP: Add route-map foo
2020/12/18 13:47:47 BGP: Route-map foo add sequence 10, type: permit
2020/12/18 13:47:57 BGP: Route-map foo add sequence 20, type: deny
2020/12/18 13:48:05 BGP: Adding dependency for filter 7000:40002 in route-map foo
2020/12/18 13:48:05 BGP: route_map_print_dependency: Dependency for 7000:40002: foo
2020/12/18 13:48:41 BGP: bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.161.2 in vrf default
2020/12/18 13:49:19 BGP: Deleting dependency for filter 7000:4003 in route-map foo
2020/12/18 13:49:19 BGP: Adding dependency for filter 7000:4003 in route-map foo
2020/12/18 13:49:19 BGP: route_map_print_dependency: Dependency for 7000:4003: foo
Note how the code attempts to remove the dependency for `7000:4003` instead of the
dependency for `7000:40002`. It then creates a new hash entry for `7000:4003` and
installs the route-map name in it.
This is wrong. We should remove the `7000:40002` dependency and then install
a dependency for `7000:4003`.
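For illustration, here is a minimal C sketch of the intended dependency update; the
dep_entry type and the helpers (dep_hash_find, dep_hash_get, dep_entry_del_map,
dep_entry_add_map) are hypothetical stand-ins, not FRR's actual route-map code:
/* Sketch only: replace a route-map's match filter in the dependency
 * table by removing it from the OLD filter's entry first, then adding
 * it under the NEW filter (the bug removed it from the new one). */
static void dep_replace_filter(struct hash *dep_hash,
                               const char *old_filter,
                               const char *new_filter,
                               const char *rmap_name)
{
        struct dep_entry *dep;

        dep = dep_hash_find(dep_hash, old_filter);  /* e.g. "7000:40002" */
        if (dep)
                dep_entry_del_map(dep, rmap_name);  /* drop "foo" here   */

        dep = dep_hash_get(dep_hash, new_filter);   /* e.g. "7000:4003"  */
        dep_entry_add_map(dep, rmap_name);          /* register "foo"    */
}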
Duncan Eastoe [Fri, 11 Dec 2020 11:07:59 +0000 (11:07 +0000)]
zebra: routes stuck with 'q' when using dplane FPM
New work enqueued to the dplane_fpm_nl provider is initially de-queued
and re-enqueued, in fpm_nl_process(), to be processed by the provider's
own thread.
After performing this initial de-queue/enqueue we return to
dplane_thread_loop() and check the dplane_fpm_nl output queue for any
work which has been completed.
Since this work is being processed in another thread it is very likely
that there will be some (or all) work still outstanding at this point.
The dataplane thread finishes up any other tasks and then waits until
it is next scheduled. In the meantime the dplane_fpm_nl thread is
processing its work queue until completion.
The issue arises here as the dataplane thread is not explicitly
re-scheduled once dplane_fpm_nl has drained its work queue and
populated its output queue with completed work.
This completed work can sit in the output queue for an indeterminate
period of time, depending upon when the dataplane thread is next
scheduled for other work. If the RIB has reached a stable state then
this could be a significant period of time. During this period zebra
marks these routes as queued, even though they have actually been
processed by all dataplane providers.
An unrelated RIB change which triggers a FIB update will result in
the dataplane thread being scheduled and this completed work then
being processed. At this point the routes will then no longer be
marked as queued by zebra. However this new FIB update might itself
then fall victim to the same scenario!
We can observe the above behaviour in these detailed dplane logs.
11:24:47 zebra[7282]: dplane: incoming new work counter: 2
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:47 zebra[7282]: dplane provider 'Kernel': processing
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:47 zebra[7282]: dplane dequeues 1 completed work from provider dplane_fpm_nl
11:24:47 zebra[7282]: dplane has 1 completed, 0 errors, for zebra main
2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
1 completed context was de-queued, so there is outstanding work.
11:24:58 zebra[7282]: dplane: incoming new work counter: 2
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:58 zebra[7282]: dplane provider 'Kernel': processing
11:24:58 zebra[7282]: ID (193) Dplane nexthop update ctx 0x55c429b6fed0 op NH_INSTALL
11:24:58 zebra[7282]: 0:5.5.5.5/32 Dplane route update ctx 0x55c429b79690 op ROUTE_INSTALL
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:24:58 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
A further 2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
2 completed contexts were de-queued, which sounds good as that is what we en-queued.
However, there is an outstanding context from earlier, so there is still outstanding
work.
Indeed the new 5.5.5.5/32 route is marked as queued:
O>q 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:01:19
This remains the case until we trigger a FIB update by installation of the
(e.g.) 10.10.10.10/32 route:
11:26:41 zebra[7282]: dplane: incoming new work counter: 2
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:26:41 zebra[7282]: dplane provider 'Kernel': processing
11:26:41 zebra[7282]: ID (195) Dplane nexthop update ctx 0x55c429b78ce0 op NH_INSTALL
11:26:41 zebra[7282]: 0:10.10.10.10/32 Dplane route update ctx 0x55c429b7a040 op ROUTE_INSTALL
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:26:41 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
11:26:41 zebra[7282]: zebra2proto: Please add this protocol(2) to proper rt_netlink.c handling
11:26:41 zebra[7282]: Nexthop dplane ctx 0x55c429b6fed0, op NH_INSTALL, nexthop ID (193), result SUCCESS
11:26:41 zebra[7282]: default(0:254):5.5.5.5/32 Processing dplane result ctx 0x55c429b79690, op ROUTE_INSTALL result SUCCESS
We observe the same 2 enqueues and 2 dequeues as before, which again suggests
that there is outstanding work.
As expected, the 5.5.5.5/32 route is no longer marked as queued:
O>* 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:02:06
But the 10.10.10.10/32 route is, as we have not yet processed the completed
context:
C>q 10.10.10.10/32 is directly connected, lo, 00:26:05
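A minimal sketch of the kind of fix this implies: once the provider thread has drained
its input queue, kick the main dataplane thread if completed contexts are waiting. The
queue-length helper is the one described in the next entry; treat both names here as
assumptions rather than the actual patch:
/* Sketch only: after the provider thread finishes its work, reschedule
 * the main dataplane thread so completed contexts do not sit in the
 * output queue until an unrelated FIB update comes along. */
static void fpm_nl_process_done(struct zebra_dplane_provider *prov)
{
        if (dplane_provider_out_ctx_queue_len(prov) > 0)
                dplane_provider_work_ready();   /* wake the dplane thread */
}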
Duncan Eastoe [Fri, 11 Dec 2020 11:03:53 +0000 (11:03 +0000)]
zebra: dplane API to get provider output q length
Returns the current number of (completed) contexts in the provider's
output queue (dp_ctx_out_q), allowing access to this data from the
provider itself.
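A sketch of what such an accessor might look like; the lock helpers and the queue
counter are assumptions, while dp_ctx_out_q is the field named above:
/* Sketch only: return the number of completed contexts currently in
 * the provider's output queue, taken under the provider's lock. */
uint32_t dplane_provider_out_ctx_queue_len(struct zebra_dplane_provider *prov)
{
        uint32_t len;

        dplane_provider_lock(prov);                   /* assumed lock helper */
        len = ctx_queue_count(&prov->dp_ctx_out_q);   /* assumed counter     */
        dplane_provider_unlock(prov);

        return len;
}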
zebra: anticipate zns creation at vrf creation when backend is vrf-lite
In case the namespace pointer is already available, feed it at vrf
creation. This prevents a crash if netlink parsing has already
begun and vrf-lite is not enabled yet.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
frr-reload: ignore-case in the es-id and es-sys-mac config comparisons
A MAC address can be configured with lower- or upper-case hex characters but is
always rendered as lower case in "show run". Avoid incorrect "change
detection" by ignoring case.
The condition to normalize ipv6 addresses was accidentally broken via -
[ e238920df07be0b61e483f0a58e0b99ab3d2e0ea tools: Fix reload with 'ipv6 address...' in interface
]
The condition was supposed to be skipped only if "ipv6 add" was present
in the line.
Donald Sharp [Fri, 4 Dec 2020 13:01:31 +0000 (08:01 -0500)]
bgpd: Let's actually track if the nh was updated
In bgp_zebra_announce, when iterating over multipath nexthops,
we were checking whether the nexthop was updated
but never initially clearing the nh_updated variable,
leading to a situation where we could crash.
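The shape of the fix, as a simplified sketch; the walk and check helpers are
hypothetical, and only the nh_updated initialization reflects the commit:
/* Sketch only: clear the flag before the multipath walk instead of
 * reading whatever happened to be on the stack. */
static bool multipath_nexthop_changed(struct bgp_path_info *mpinfo)
{
        struct bgp_path_info *pi;
        bool nh_updated = false;        /* the missing initialization */

        for (pi = mpinfo; pi; pi = next_multipath(pi))  /* assumed walk  */
                if (nexthop_was_updated(pi))            /* assumed check */
                        nh_updated = true;

        return nh_updated;
}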
Donald Sharp [Sat, 28 Nov 2020 20:35:18 +0000 (15:35 -0500)]
ospfd: Restore POINTOMULTIPOINT to working order
Commit 1d376ff539508f336cb5872c5592b780e3db180b removed
the code to find nexthops for POINTOMULTIPOINT and
replaced it with a generic bit of code that was
supposed to handle both POINTOPOINT and POINTOMULTIPOINT.
The problem is that the OSPF RFC states that the
network mask on point-to-multipoint interfaces should be /32,
which will not allow you to properly do a prefix match
on it against the network.
Restore original behavior as much as possible and leave
the new POINTOPOINT code alone.
Fixes: #7624
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Donald Sharp [Tue, 1 Dec 2020 20:37:03 +0000 (15:37 -0500)]
ospfd: Set curr_mtu when we get the mtu
Currently if you start ospfd, bring up neighbors and then issue
a tcpdump on an interface ospf is peering over, this causes the neighbor
relationship to be restarted:
root@spectrum301(mlx-4600c-01):mgmt:~# tcpdump -i vlan402
2020-11-13T21:25:38.059671+00:00 spectrum301 ospfd[29953]: AdjChg: Nbr 202.0.0.3(default) on vlan402:200.0.3.1: Full -> Deleted (KillNbr)
2020-11-13T21:25:38.065520+00:00 spectrum301 ospfd[29953]: ospfTrapNbrStateChange: trap sent: 200.0.3.2 now Deleted/DROther
2020-11-13T21:25:38.065922+00:00 spectrum301 ospfd[29953]: ospfTrapIfStateChange: trap sent: 200.0.3.1 now Down
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vlan402, link-type EN10MB (Ethernet), capture size 262144 bytes
21:25:38.072330 IP 200.0.3.1 > igmp.mcast.net: igmp v3 report, 1 group record(s)
2020-11-13T21:25:38.080430+00:00 spectrum301 ospfd[29953]: ospfTrapIfStateChange: trap sent: 200.0.3.1 now Point-To-Point
2020-11-13T21:25:38.080654+00:00 spectrum301 ospfd[29953]: SPF Processing Time(usecs): 9734
2020-11-13T21:25:38.080829+00:00 spectrum301 ospfd[29953]: SPF Time: 6422
2020-11-13T21:25:38.080991+00:00 spectrum301 ospfd[29953]: InterArea: 1572
2020-11-13T21:25:38.081152+00:00 spectrum301 ospfd[29953]: Prune: 67
2020-11-13T21:25:38.081329+00:00 spectrum301 ospfd[29953]: RouteInstall: 1396
2020-11-13T21:25:38.081548+00:00 spectrum301 ospfd[29953]: Reason(s) for SPF: N, S, ABR, ASBR
21:25:38.092510 IP 200.0.3.1 > ospf-all.mcast.net: OSPFv2, Hello, length 44
This is happening because the curr_mtu is not being properly stored. It was being set
at interface creation (but we had not actually read in the mtu part of the interface data yet, so
it was still 0).
Modify the code to store the curr_mtu at a point in interface creation *after* we have read
in the interface data.
Ticket: CM-32276
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
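A rough sketch of the ordering fix; the back-pointer and field placement here are
assumptions, and only the name curr_mtu comes from the commit text:
/* Sketch only: record the MTU after zebra has delivered the interface
 * data, not at structure-creation time when ifp->mtu is still 0. */
static void ospf_if_store_mtu(struct interface *ifp)
{
        struct ospf_if_info *oii = ifp->info;   /* assumed back-pointer */

        if (oii && ifp->mtu != 0)
                oii->curr_mtu = ifp->mtu;       /* assumed field */
}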
The lookup for non-default VRFs was always using a tableId; if not
provided, we were defaulting to RT_TABLE_MAIN. This is fine for the
default VRF but not for others. As a result, the command was silently
failing for non-default VRFs unless we also specified the correct tableId.
Fix this by only performing the lookup using the tableId if it is
provided; else use zebra_vrf_table.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
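A condensed sketch of the described logic; zebra_vrf_table() is named in the text
above, while the table-id lookup helper and the table_id_given flag are assumptions:
/* Sketch only: use the table-id path only when the user supplied one,
 * otherwise fall back to the VRF's own table. */
static struct route_table *show_route_table(struct zebra_vrf *zvrf,
                                            vrf_id_t vrf_id, afi_t afi,
                                            safi_t safi, bool table_id_given,
                                            uint32_t table_id)
{
        if (table_id_given)
                return zebra_router_find_table(zvrf, table_id, afi, safi);

        return zebra_vrf_table(afi, safi, vrf_id);
}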
Mark Stapp [Wed, 9 Dec 2020 14:17:25 +0000 (09:17 -0500)]
zebra: use zserv_send_message instead of writen
The following functions use writen to dispatch messages
into the socket, while another function uses zserv_send_message.
This commit does a small unification of zapi's socket messaging.
Igor Ryzhov [Wed, 2 Dec 2020 00:36:10 +0000 (03:36 +0300)]
ospf: fix instance initialization when using multi-instance mode
OSPF instance initialization was moved from the "router ospf" vty command to
the ospf_get function some time ago, but the same thing must be done in
the ospf_get_instance function, used when multi-instance mode is enabled.
Renato Westphal [Fri, 20 Nov 2020 22:26:45 +0000 (19:26 -0300)]
isisd: check vertex type before checking its data
vertex->N is a union whose "id" and "ip" fields are only valid
depending on the vertex type (IS adjacency or IP reachability
information). As such, add a vertex type check before consulting
vertex->N.id in order to prevent unexpected behavior from happening.
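As a sketch of the guard, assuming VTYPE_IS()/VTYPE_IP()-style classification macros
and hypothetical printer helpers (vertex->N.id and N.ip are from the text above):
/* Sketch only: consult the union member that matches the vertex type. */
static void vertex_print_id(const struct isis_vertex *vertex)
{
        if (VTYPE_IS(vertex->type))              /* IS adjacency vertex    */
                print_sysid(vertex->N.id);       /* N.id is valid here     */
        else if (VTYPE_IP(vertex->type))         /* IP reachability vertex */
                print_prefix(&vertex->N.ip);     /* N.ip is valid here     */
}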
Renato Westphal [Fri, 6 Nov 2020 13:02:16 +0000 (10:02 -0300)]
isisd: fix some crashes with --tcli
The "ifp" variable returned by nb_running_get_entry() might be
NULL when using the transactional CLI mode. Make the required
modifications to avoid null pointer dereferences.
vdhingra [Thu, 19 Nov 2020 12:46:39 +0000 (04:46 -0800)]
bgpd: sh running config is not considering values provided via -e for max-paths
Problem:
1. run bgpd with the -e1 option
2. c t
router bgp 100
3. show running config
!
address-family ipv6 multicast
maximum-paths 1
maximum-paths ibgp 1
exit-address-family
!
Address families should not dump maximum-paths if their
value is the same as the value provided at run time.
Fix:
If the maxpaths_ebgp value is the same as the multipath_num global
object, don't dump maximum-paths.
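A sketch of the fix in the address-family config-write path; the field and global names
are taken from the commit text, the surrounding structure is simplified, and the ibgp
knob would be handled the same way:
/* Sketch only: skip "maximum-paths" when it still equals the
 * process-wide default supplied via -e (multipath_num). */
if (bgp->maxpaths[afi][safi].maxpaths_ebgp != multipath_num)
        vty_out(vty, "  maximum-paths %d\n",
                bgp->maxpaths[afi][safi].maxpaths_ebgp);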
ckishimo [Fri, 20 Nov 2020 21:53:20 +0000 (13:53 -0800)]
ospfd: fix NSSA translate-always
When an ABR NSSA router is configured to be ALWAYS the translator:
r22(config-router)# area 1 nssa translate-always
It will advertise this condition in its type-1 LSA by setting the Nt
bit, taking over the translator role from r33:
r22# show ip ospf
We are an ABR and always an NSSA Translator.
r33# show ip ospf
We are an ABR, but not the NSSA Elected Translator.
However when the command above is removed:
r22(config-router)# no area 1 nssa translate-always
the Nt bit needs to be cleared, otherwise we end up with no translator
in the area:
r22# show ip ospf
We are an ABR, but not the NSSA Elected Translator.
r33# show ip ospf
We are an ABR, but not the NSSA Elected Translator.
This PR forces the ABR to send a type-1 LSA with the Nt bit updated
according to the translator role
ckishimo [Thu, 19 Nov 2020 07:23:14 +0000 (23:23 -0800)]
ospfd: fix NSSA translator
Having 2 ABRs in an NSSA area where R3 is the elected translator:
R3# show ip ospf
We are an ABR and the NSSA Elected Translator.
R2# show ip ospf
We are an ABR, but not the NSSA Elected Translator.
When R3 loses the Border condition by shutting down the interface
to the backbone, we end up with no translator in the NSSA area. R2
is expected to take over the translator role:
R3# sh ip ospf
It is not ABR, therefore not Translator.
R2# show ip ospf
We are an ABR, but not the NSSA Elected Translator.
This PR forces the ABR to reevaluate the translator condition, so
R2 becomes the elected Translator
Donald Sharp [Thu, 19 Nov 2020 13:04:51 +0000 (08:04 -0500)]
ospf6d: More lists being leaked
Apparently the person who wrote this code was big into
cut-n-paste. Commit 710a61d57c8f1b0ea66a37f09bad2161d7e2ddb7
found the first instance, but upon code inspection this morning
it became evident that 2 other functions had the exact same
problem.
Fix. Note I have not cleaned up the cut-n-paste code for
two reasons: a) I'm chasing something else b) this code
has been fairly un-maintained for a very long time. No
need to start up now.
When deleting a whole l2vpn context in ldpd which also had pseudowires
in it, we were first deleting the l2vpn with a 'no l2vpn XXX' command,
and then adding it back by running 'l2vpn XXX\n no member pseudowire YYY',
which obviously was not needed. As a result the l2vpn would be reinstated.
Signed-off-by: Emanuele Di Pascale <emanuele@voltanet.io>
Donald Sharp [Wed, 25 Nov 2020 14:49:28 +0000 (09:49 -0500)]
ospfd: Prevent crash by accessing memory not owned.
When allocating memory for the `struct ospf_metric` we
were using `uint32_t` instead of the actual size of this
structure. When we wrote to it we would be writing
into other people's memory.
Found-by: Amol Lad
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
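A minimal illustration of the allocation fix; the MTYPE and surrounding context are
simplified, with XCALLOC being FRR's allocation macro:
/* Sketch only: size the allocation by the structure, not by uint32_t,
 * so later writes to its fields stay inside memory we own. */
struct ospf_metric *metric;

metric = XCALLOC(MTYPE_TMP, sizeof(struct ospf_metric));
/* previously: XCALLOC(MTYPE_TMP, sizeof(uint32_t)) -- too small */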
Igor Ryzhov [Tue, 17 Nov 2020 18:21:09 +0000 (21:21 +0300)]
changelog: add debian revision
It is optional, but lintian complains when a package mixes versions with
and without revision number. All previous versions have it so 7.5 should
have it too.
Rafael Zalamena [Sun, 4 Oct 2020 21:04:27 +0000 (18:04 -0300)]
lib: notify BFD when adding new profile
When a BFD integrated session already exists, setting the profile
doesn't cause a session update (or vice versa): fix this issue by
handling the other cases.
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Carlo Galiotto [Fri, 13 Nov 2020 16:35:06 +0000 (17:35 +0100)]
ospfd: reset mpls-te prior to ospf router removal
This commit attempts to fix a problem that occurs when mpls-te gets
removed from ospfd config. Mpls-te has an inter-as option, which can be
set to Off/Area/AS. Whenever the inter-as takes "Area" or "AS" as a
value, this value will not be cleaned after removing mpls-te or after
removing the ospf router. Therefore, if mpls-te is configured with
inter-as AS or Area and we remove mpls-te or the ospf router, the
inter-as will still preserve its value; therefore, next time mpls-te is
enabled, it will automatically inherit the previous inter-as value
(either Area or AS). This leads to a wrong configuration, which can be a
problem for frr_reload.py.
The commit forces mpls-te to reset inter-as to Off before mpls-te
gets removed from the configuration and before the ospf router gets
removed.
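A rough sketch of the cleanup this describes; the OspfMplsTE global and its fields are
assumptions recalled from ospfd's MPLS-TE code rather than verified here:
/* Sketch only: force inter-as back to Off before MPLS-TE or the OSPF
 * router is torn down, so a later enable starts from a clean state. */
static void ospf_mpls_te_inter_as_reset(void)
{
        OspfMplsTE.inter_as = Off;              /* assumed enum value */
        OspfMplsTE.interas_areaid.s_addr = 0;   /* assumed field      */
}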
Donald Sharp [Mon, 16 Nov 2020 20:12:43 +0000 (15:12 -0500)]
lib: When aborting log data
When an FRR process dies due to SIGILL/SIGABRT/etc., attempt
to drain the log buffer. This code change captures
some missing logs that were not making it into the log file on
a crash.
Donald Sharp [Sat, 14 Nov 2020 22:33:43 +0000 (17:33 -0500)]
bgpd: on debug esi was not properly setup
There exists a code path where the esi would be passed
to a debug without the esi being set up with any values,
causing us to display whatever is on the stack.
Donald Sharp [Sat, 14 Nov 2020 20:32:49 +0000 (15:32 -0500)]
bgpd: Fix missed unlocks
When iterating over the bgp_dest table, using this pattern:
for (dest = bgp_table_top(table); dest;
dest = bgp_route_next(dest)) {
If the code breaks or returns in the middle we will not have
properly unlocked the node, since bgp_table_top locks the top
dest and bgp_route_next locks the next dest and unlocks the old
dest.
From code inspection I have found a bunch of places where
we either return in the middle of the loop or issue a break.
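As a sketch of the corrected pattern, using the iteration shown above with
bgp_dest_unlock_node() as the unlock call and a placeholder early-exit condition:
/* Sketch only: drop the iterator's lock on the current dest before
 * leaving the loop early. */
for (dest = bgp_table_top(table); dest; dest = bgp_route_next(dest)) {
        if (should_stop(dest)) {                /* placeholder condition */
                bgp_dest_unlock_node(dest);     /* release iterator lock */
                break;                          /* or return             */
        }
}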
Michael Hohl [Wed, 11 Nov 2020 15:56:15 +0000 (16:56 +0100)]
docs: mention activate keyword in user docs
As of now, the BGP user documentation does not explicitly mention how
to use IPv6. This commit adds documentation of the activate command to
the user documentation which is crucial to get IPv6 networks announced
using FRRouting.
Pat Ruddy [Thu, 29 Oct 2020 16:38:42 +0000 (16:38 +0000)]
bgpd: withdraw any exported routes when deleting a vrf
When a BGP vrf instance is deleted, the routes it exported into the
main VPN table are not deleted and they remain as stale routes
attached to an unknown bgp instance. When the new vrf instance comes
along, it imports these routes from the main table and thus we see
duplicates alongside its own identical routes.
The solution is to call the unexport logic when a BGP vrf instance is
being deleted.
problem example
---------------
volta1# sh bgp vrf VRF-a ipv4 unicast
BGP table version is 4, local router ID is 18.0.0.1, vrf id 5
Default local pref 100, local AS 567
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 7.0.0.6/32 7.0.0.5@0< 10 100 0 ?
*> 7.0.0.8/32 18.0.0.8 0 0 111 ?
*> 18.0.0.0/24 18.0.0.8 0 0 111 ?
*> 56.0.0.0/24 7.0.0.5@0< 0 100 0 ?
Displayed 4 routes and 4 total paths
volta1# conf t
volta1(config)# no router bgp 567 vrf VRF-a
volta1(config)#
volta1(config)# router bgp 567 vrf VRF-a
volta1(config-router)# bgp router-id 18.0.0.1
volta1(config-router)# no bgp ebgp-requires-policy
volta1(config-router)# no bgp network import-check
volta1(config-router)# neighbor 18.0.0.8 remote-as 111
volta1(config-router)# !
volta1(config-router)# address-family ipv4 unicast
volta1(config-router-af)# label vpn export 12345
volta1(config-router-af)# rd vpn export 567:111
volta1(config-router-af)# rt vpn both 567:100
volta1(config-router-af)# export vpn
volta1(config-router-af)# import vpn
volta1(config-router-af)# exit-address-family
volta1(config-router)# !
volta1(config-router)# end
volta1# sh bgp vrf VRF-a ipv4 unicast
BGP table version is 4, local router ID is 18.0.0.1, vrf id 5
Default local pref 100, local AS 567
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 7.0.0.6/32 7.0.0.5@0< 10 100 0 ?
* 7.0.0.8/32 18.0.0.8 0 0 111 ?
*> 18.0.0.8@-< 0 0 111 ?
* 18.0.0.0/24 18.0.0.8 0 0 111 ?
*> 18.0.0.8@-< 0 0 111 ?
*> 56.0.0.0/24 7.0.0.5@0< 0 100 0 ?
Displayed 4 routes and 6 total paths
The @- routes, indicating an unknown bgp instance, have been imported.
Mark Stapp [Tue, 10 Nov 2020 14:50:50 +0000 (09:50 -0500)]
tests: only test count of nexthops in bgp max-paths test
Add support to compare the number of RIB nexthops, rather than the
specific nexthop addresses. Use this in the bgp_ecmp topotests that
test maximum-paths - testing the specific nexthops is wrong there,
it's not deterministic and we get spurious failures.