Wesley Coakley [Tue, 5 Jan 2021 09:22:57 +0000 (04:22 -0500)]
bgpd: separate lcommunity validation from tokenizer
`lcommunity_gettoken` expects a space-delimited list of 0 or more large
communities. `lcommunity_list_valid` can perform this check.
`lcommunity_list_valid` now validates large community lists more
accurately, based on the following conditions. Each value in a BGP
large community must:
1. Contain at least one digit
2. Fit within 4 octets
3. Contain only digits unless the lcommunity is "expanded"
4. Contain a valid regex if the lcommunity is "expanded"
Moreover, we validate that each large community contains exactly 3
such values, separated by a single colon each.
One quirk of our validation is worth documenting: given the same
malformed value, a standard list entry will throw an error complaining
about a "malformed community-list value", while an expanded list entry
will be accepted, because each value in an expanded list is treated as
a regex when matching large communities. It simply will never match
anything, so it's rather useless.
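For illustration, here is a minimal sketch of the validation rules
above. Function names are hypothetical, not the real FRR
implementation, and the regex compilation for expanded values is
omitted:

#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Validate one colon-separated value per the rules above. */
static bool lcommunity_value_valid(const char *val, size_t len,
                                   bool expanded)
{
    bool has_digit = false;

    for (size_t i = 0; i < len; i++) {
        if (isdigit((unsigned char)val[i]))
            has_digit = true;
        else if (!expanded)
            return false; /* standard lists: digits only (3) */
    }
    if (!has_digit)
        return false; /* at least one digit (1) */
    /* Standard values must fit within 4 octets (2); strtoull
     * stops parsing at the next colon on its own. */
    if (!expanded && strtoull(val, NULL, 10) > UINT32_MAX)
        return false;
    /* An expanded value would additionally be regex-compiled
     * here (4). */
    return true;
}

/* A large community is exactly 3 such values: A:B:C */
static bool lcommunity_valid(const char *str, bool expanded)
{
    int parts = 0;
    const char *p = str;

    for (;;) {
        const char *colon = strchr(p, ':');
        size_t len = colon ? (size_t)(colon - p) : strlen(p);

        if (!lcommunity_value_valid(p, len, expanded))
            return false;
        parts++;
        if (!colon)
            break;
        p = colon + 1;
    }
    return parts == 3;
}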
Duncan Eastoe [Tue, 22 Dec 2020 21:23:43 +0000 (21:23 +0000)]
zebra: zebra2proto() handle kernel/connect type
When dplane_fpm_nl is used the "Please add this protocol(n) to proper
rt_netlink.c handling" debug message is emitted for any route of type
kernel or connected.
This severely reduces performance of dplane_fpm_nl when large numbers
of these routes are present in the RIB.
The messages are not observed when using the original fpm module since
this uses a custom function, netlink_proto_from_route_type().
zebra2proto() now returns RTPROT_KERNEL for ZEBRA_ROUTE_CONNECT and
ZEBRA_ROUTE_KERNEL. This should only impact dplane_fpm_nl's use of
the common netlink routines, since these routes are generally ignored
via the RSYSTEM_ROUTE() check.
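Not the verbatim diff, but the shape of the change is roughly this
(ZEBRA_ROUTE_* are FRR's route-type constants; RTPROT_* come from the
kernel's rtnetlink headers):

#include <linux/rtnetlink.h> /* RTPROT_KERNEL, RTPROT_ZEBRA */

static int zebra2proto(int proto)
{
    switch (proto) {
    /* ... existing ZEBRA_ROUTE_* to RTPROT_* mappings ... */
    case ZEBRA_ROUTE_CONNECT:
    case ZEBRA_ROUTE_KERNEL:
        /* Mapping these avoids the noisy "Please add this
         * protocol(n) ..." debug path for every kernel or
         * connected route pushed through dplane_fpm_nl. */
        return RTPROT_KERNEL;
    default:
        /* Unknown: emit the "Please add this protocol(n) ..."
         * debug message and fall back. */
        return RTPROT_ZEBRA;
    }
}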
GalaxyGorilla [Sat, 19 Dec 2020 21:06:24 +0000 (21:06 +0000)]
pathd: un-guard clippy files
The relevant clippy machinery in python/makevars.py expects to receive
'raw' Makefile text containing all `clippy_scan` variables. Whether the
files listed in the `clippy_scan` variable are actually used later in
the compilation process does not matter.
Donald Sharp [Fri, 18 Dec 2020 19:22:09 +0000 (14:22 -0500)]
lib: Fix dependency of match types in route-map code
Route-maps contain a hash of hashes: the outer hash is keyed by the
container type name (say, community or access-list or whatever), and
each entry holds a hash of the route-maps that reference it.
Suppose you have this:
!
frr version 7.3.1
frr defaults traditional
hostname eva
log stdout
!
debug route-map
!
router bgp 239
neighbor 192.168.161.2 remote-as external
!
address-family ipv4 unicast
neighbor 192.168.161.2 route-map foo in
exit-address-family
!
bgp community-list standard 7000:40002 permit 7000:40002
bgp community-list standard 7000:40002 permit 7000:40003
!
route-map foo deny 20
match community 7000:40002
!
route-map foo permit 10
!
line vty
!
end
You have a community hash which has a
7000:40002 entry
This entry has a hash of routemaps that are referencing it. In the above
example it would have `foo` as the single entry.
Given the above config if you do this:
eva# conf
eva(config)# route-map foo deny 20
eva(config-route-map)# match community 7000:4003
eva(config-route-map)#
We would expect the `7000:40002` community hash to no longer have
a reference to the `foo` routemap. Instead we see the code doing this:
2020/12/18 13:47:12 BGP: bgpd 7.3.1 starting: vty@2605, bgp@<all>:179
2020/12/18 13:47:47 BGP: Add route-map foo
2020/12/18 13:47:47 BGP: Route-map foo add sequence 10, type: permit
2020/12/18 13:47:57 BGP: Route-map foo add sequence 20, type: deny
2020/12/18 13:48:05 BGP: Adding dependency for filter 7000:40002 in route-map foo
2020/12/18 13:48:05 BGP: route_map_print_dependency: Dependency for 7000:40002: foo
2020/12/18 13:48:41 BGP: bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.161.2 in vrf default
2020/12/18 13:49:19 BGP: Deleting dependency for filter 7000:4003 in route-map foo
2020/12/18 13:49:19 BGP: Adding dependency for filter 7000:4003 in route-map foo
2020/12/18 13:49:19 BGP: route_map_print_dependency: Dependency for 7000:4003: foo
Note how the code attempts to remove the dependency for `7000:4003` instead of the
dependency for `7000:40002`. Then we create a new hash for `7000:4003` and then
install the routemap name in it.
This is wrong. We should remove the `7000:40002` dependency and then install
a dependency for `7000:4003`.
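Conceptually, the fix keys the delete on the clause's previous
argument rather than the new one. A sketch, using FRR's
route_map_upd8_dependency() hook inside a hypothetical wrapper:

static void match_community_replace(struct route_map_index *index,
                                    const char *old_arg,
                                    const char *new_arg)
{
    /* The bug: the delete was keyed on new_arg, so the old hash
     * entry (7000:40002) kept its reference to the route-map. */

    /* Correct: drop the dependency for the old filter first... */
    route_map_upd8_dependency(RMAP_EVENT_CLIST_DELETED, old_arg,
                              index->map->name);
    /* ...then register the dependency for the new filter. */
    route_map_upd8_dependency(RMAP_EVENT_CLIST_ADDED, new_arg,
                              index->map->name);
}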
Sebastien Merle [Fri, 16 Oct 2020 14:55:51 +0000 (16:55 +0200)]
pathd: Add optional support for PCEP to pathd
This new dynamic module makes pathd behave as a PCC for dynamic candidate paths
using the external library pceplib: https://github.com/volta-networks/pceplib .
The candidate paths defined as dynamic will trigger computation requests to the
configured PCE, and the PCE response will be used to update the policy.
It supports multiple PCEs. The one with the smallest precedence will be
elected as the master PCE; only if the connection repeatedly fails will
the PCC switch to another PCE.
Duncan Eastoe [Fri, 18 Dec 2020 14:35:21 +0000 (14:35 +0000)]
zebra: reduce atomic ops in fpm_process_queue()
Maintain the count of contexts which have been processed in a local
variable, and perform a single atomic update after we have consumed
all queued contexts.
Generally this results in at least one fewer atomic operation per
context.
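The pattern, sketched with hypothetical helper names:

#include <stdatomic.h>
#include <stdint.h>

static void fpm_process_queue_sketch(struct fpm_nl_ctx *fnc)
{
    uint64_t processed = 0;
    struct zebra_dplane_ctx *ctx;

    /* Drain the queue, counting locally... */
    while ((ctx = ctxqueue_pop(fnc)) != NULL) { /* hypothetical */
        fpm_handle_ctx(fnc, ctx);               /* hypothetical */
        processed++;
    }

    /* ...then publish the whole batch with one atomic add.
     * (The counter field name is an assumption.) */
    atomic_fetch_add_explicit(&fnc->counters.dplane_contexts,
                              processed, memory_order_relaxed);
}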
Duncan Eastoe [Fri, 18 Dec 2020 14:29:36 +0000 (14:29 +0000)]
zebra: local var in fpm_process_queue() sched cond
Don't use an atomic operation to determine whether fpm_process_queue()
needs to be re-scheduled. Instead we can simply use a local variable
to determine if we stopped processing because we ran out of buffers.
In the case where we would have re-scheduled due to new context objects
in the queue (enqueued after we stopped processing), fpm_nl_process()
will schedule us (or will have done already).
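Sketched, with hypothetical helpers:

struct zebra_dplane_ctx *ctx;
bool no_bufs = false;

while ((ctx = ctxqueue_pop(fnc)) != NULL) {
    if (!fpm_buffer_available(fnc)) {  /* hypothetical */
        ctxqueue_push_front(fnc, ctx); /* hypothetical */
        no_bufs = true;
        break;
    }
    fpm_handle_ctx(fnc, ctx);
}

/* Re-schedule ourselves only if we stopped for lack of buffers;
 * contexts enqueued after we stopped are fpm_nl_process()'s job. */
if (no_bufs)
    fpm_reschedule(fnc);               /* hypothetical */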
This new daemon manages Segment-Routing Traffic-Engineering
(SR-TE) Policies and installs them into zebra. It provides
the usual yang support and vtysh commands to define or change
SR-TE Policies.
In a nutshell, SR-TE Policies provide the possibility to steer
traffic through a (possibly dynamic) list of Segment Routing
segments to the endpoint of the policy. This list of segments
is part of a Candidate Path, which in turn belongs to the SR-TE
Policy. SR-TE Policies are uniquely identified by their color
and endpoint. The color can be used to e.g. match BGP
communities on incoming traffic.
There can be multiple Candidate Paths for a single
policy; the active Candidate Path is chosen according to
certain conditions, of which the most important is its
preference. Candidate Paths can be explicit (a fixed list of
segments) or dynamic (the list of segments comes from e.g. PCEP,
see below).
Configuration example:
segment-routing
traffic-eng
segment-list SL
index 10 mpls label 1111
index 20 mpls label 2222
!
policy color 4 endpoint 10.10.10.4
name POL4
binding-sid 104
candidate-path preference 100 name exp explicit segment-list SL
candidate-path preference 200 name dyn dynamic
!
!
!
There is an important connection between dynamic Candidate
Paths and the overall topic of Path Computation. Later on a
dynamic module will be introduced for pathd that is capable
of communicating via the PCEP protocol with a PCE (Path
Computation Element), which in turn is capable of calculating
paths according to its local TED (Traffic Engineering Database).
This dynamic module will be able to inject the mentioned
dynamic Candidate Paths into pathd based on paths calculated
by a PCE.
Reduce code in the critical sections of fpm_nl_process() and
fpm_process_queue() to the bare minimum - basically only enqueue
and dequeue operations on the shared ctxqueue.
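The resulting shape is roughly the following (the mutex and queue
field names are assumptions):

struct zebra_dplane_ctx *ctx;

/* Hold the queue lock only for the dequeue itself... */
frr_with_mutex (&fnc->ctxqueue_mutex) {
    ctx = dplane_ctx_dequeue(&fnc->ctxqueue);
}
if (ctx == NULL)
    return;

/* ...and do all actual processing outside the critical section. */
fpm_handle_ctx(fnc, ctx); /* hypothetical */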
Donald Sharp [Thu, 17 Dec 2020 21:49:20 +0000 (16:49 -0500)]
bgpd: Remove awful test of strmatch + get_afi_safi_str
Remove the awful test of a strmatch against a call to get_afi_safi_str.
These are the easy ones, in that the real decision point is/was
underneath this test; this is just duplicated, expensive testing.
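The removed pattern was roughly of this shape (handle_ipv4_unicast()
is a stand-in):

/* Before: an expensive string build and compare... */
if (strmatch(get_afi_safi_str(afi, safi, false), "IPv4 Unicast"))
    handle_ipv4_unicast();

/* ...guarding a decision the afi/safi values already make directly: */
if (afi == AFI_IP && safi == SAFI_UNICAST)
    handle_ipv4_unicast();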
Donald Sharp [Tue, 15 Dec 2020 19:21:56 +0000 (14:21 -0500)]
lib, vtysh: Modify start/end configuration commands to be more hidden
There exists a world where some people have put `end` in their
configuration. vtysh will then search its command table for it, find
it, and bad things happen.
Ticket: CM-32665
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When a new ES is created it is held in a non-DF state for 3 seconds,
as specified by RFC 7432. This gives the switch time to import the
Type-4 routes from the peers, and gives the peers time to receive the
new Type-4 route.
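For example, the hold could be implemented with a timer along these
lines (callback, helper, and field names are assumptions; the thread
API is FRR's):

#define EVPN_MH_DF_DELAY 3 /* seconds, per RFC 7432 */

static int zebra_evpn_es_df_delay_exp(struct thread *t)
{
    struct zebra_evpn_es *es = THREAD_ARG(t);

    /* Delay over; run DF election with whatever Type-4 routes
     * have been imported in the meantime. */
    zebra_evpn_es_run_df_election(es); /* hypothetical */
    return 0;
}

/* On ES creation: stay non-DF until the delay expires. */
thread_add_timer(zrouter.master, zebra_evpn_es_df_delay_exp, es,
                 EVPN_MH_DF_DELAY, &es->df_delay_timer);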
zebra: restart start-up delay timer when the first uplink comes up
When all the uplinks go down the VTEP is disconnected from the
VxLAN overlay; this is handled by proto-downing the ES bonds. When
the uplinks come up again we need to re-enable the ES bonds, but that
needs to be done after a delay, to allow the EVPN network to converge.
That is done by firing off the startup-delay timer on the first
uplink-up.
zebra: re-sync protodown state with the dplane on new ES add
1. When a bond is associated with an ES we may need to re-sync
the dplane protodown state (which may be stale, or set by some other
app).
2. Also change the uplink state display to avoid confusion with
protodown reason code (both used to show uplink-up).
protodown state is a combination of the dplane and zebra states.
protodown reason is maintained exclusively by zebra. Display this
information on two separate lines to make that ownership clearer.
Also display n/a for bonds as the dplane doesn't support protodowning
the bond device.
zebra: re-sync protodown state when a port/mbr is linked to an ES-bond
The code for this was already there but was not kicking in because of a
zebra local reason-code dup check. Even if the reason-code is the same,
if the dplane and zebra disagree about the protodown state zebra will
need to re-program the dplane.
Fixed a couple of spelling errors in the protodown logs to make greps
easy.
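Roughly, the dup check needed an extra condition (names are
assumptions):

/* Before, this returned early whenever the reason code was
 * unchanged. It is only a true duplicate if the dplane's view of
 * the protodown state also agrees with zebra's: */
if (zif->protodown_rc == new_protodown_rc &&
    dplane_protodown == new_protodown)
    return 0; /* nothing to re-program */

/* Otherwise fall through and re-program the dplane. */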
At the source, have a match vni plus a set statement in a route-map.
Validate that the route's outbound advertisement at the origin
correctly applies the 'set' statement based on the match vni filter.
At origin:
route-map RM-EVPN-TE-Matches permit 10
match evpn vni 4001
set large-community 10:10:119
Receiving end:
Route [5]:[0]:[24]:[78.41.1.0] VNI 4001
5550
27.0.0.15 from TORS1(downlink-5) (27.0.0.15)
Origin incomplete, metric 0, valid, external, bestpath-from-AS 5550, best (First path received)
Extended Community: RT:5550:4001 ET:8 Rmac:00:02:00:00:00:4d
Large Community: 10:10:119 <--- Large community stamped
Last update: Thu Dec 10 22:19:26 2020
Donald Sharp [Thu, 10 Dec 2020 15:18:00 +0000 (10:18 -0500)]
pbrd: Pay attention to interface up/down events with nht
When an interface goes up/down we need to pay attention to this
in PBR. In the past we were relying *only* on the nht events
but this is not sufficient for cases where an interface is flapping
up and down. If this is happening it could be happening fast enough
that zebra is not sending nht events, because they are consolidated
into a single event from its perspective, and that is the right thing
to do. This commit will allow us to back out commit:
As that commit introduced extra processing in zebra that is actually
causing issues in other places. The problem that commit was trying
to solve should always have been handled in pbrd, instead of making
zebra do work that is unnatural to its actual flow.
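A sketch of the idea: hook interface up/down events in pbrd and
re-evaluate the affected nexthops directly (handler and helper names
are assumptions):

static int pbr_ifp_up(struct interface *ifp)
{
    /* Re-check nexthop groups resolving over this interface,
     * independently of any (possibly coalesced) NHT update. */
    pbr_nht_interface_update(ifp); /* hypothetical */
    return 0;
}

static int pbr_ifp_down(struct interface *ifp)
{
    pbr_nht_interface_update(ifp);
    return 0;
}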
Duncan Eastoe [Fri, 11 Dec 2020 11:07:59 +0000 (11:07 +0000)]
zebra: routes stuck with 'q' when using dplane FPM
New work enqueued to the dplane_fpm_nl provider is initially de-queued
and re-enqueued, in fpm_nl_process(), to be processed by the provider's
own thread.
After performing this initial de-queue/enqueue we return to
dplane_thread_loop() and check the dplane_fpm_nl output queue for any
work which has been completed.
Since this work is being processed in another thread it is very likely
that there will be some (or all) work still outstanding at this point.
The dataplane thread finishes up any other tasks and then waits until
it is next scheduled. In the meantime the dplane_fpm_nl thread is
processing its work queue until completion.
The issue arises here as the dataplane thread is not explicitly
re-scheduled once dplane_fpm_nl has drained its work queue and
populated its output queue with completed work.
This completed work can sit in the output queue for an indeterminate
period of time, depending upon when the dataplane thread is next
scheduled for other work. If the RIB has reached a stable state then
this could be a significant period of time. During this period zebra
marks these routes as queued, even though they have actually been
processed by all dataplane providers.
An un-related RIB change which triggers a FIB update will result in
the dataplane thread being scheduled and this completed work then
being processed. At this point the routes will then no longer be
marked as queued by zebra. However this new FIB update might itself
then fall victim to the same scenario!
We can observe the above behaviour in these detailed dplane logs.
11:24:47 zebra[7282]: dplane: incoming new work counter: 2
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:47 zebra[7282]: dplane provider 'Kernel': processing
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: Dplane NEIGH_DISCOVER, ip 192.168.2.2, ifindex 9
11:24:47 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:47 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:47 zebra[7282]: dplane dequeues 1 completed work from provider dplane_fpm_nl
11:24:47 zebra[7282]: dplane has 1 completed, 0 errors, for zebra main
2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
1 completed context was de-queued, so there is outstanding work.
11:24:58 zebra[7282]: dplane: incoming new work counter: 2
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:24:58 zebra[7282]: dplane provider 'Kernel': processing
11:24:58 zebra[7282]: ID (193) Dplane nexthop update ctx 0x55c429b6fed0 op NH_INSTALL
11:24:58 zebra[7282]: 0:5.5.5.5/32 Dplane route update ctx 0x55c429b79690 op ROUTE_INSTALL
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:24:58 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:24:58 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:24:58 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
A further 2 contexts (all incoming work) have been queued to dplane_fpm_nl - all good.
2 completed contexts were de-queued, which sounds good as that is what we en-queued.
However, there is an outstanding context from earlier, so there is still outstanding
work.
Indeed the new 5.5.5.5/32 route is marked as queued:
O>q 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:01:19
This remains the case until we trigger a FIB update by installation of the
(e.g.) 10.10.10.10/32 route:
11:26:41 zebra[7282]: dplane: incoming new work counter: 2
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'Kernel'
11:26:41 zebra[7282]: dplane provider 'Kernel': processing
11:26:41 zebra[7282]: ID (195) Dplane nexthop update ctx 0x55c429b78ce0 op NH_INSTALL
11:26:41 zebra[7282]: 0:10.10.10.10/32 Dplane route update ctx 0x55c429b7a040 op ROUTE_INSTALL
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider Kernel
11:26:41 zebra[7282]: dplane enqueues 2 new work to provider 'dplane_fpm_nl'
11:26:41 zebra[7282]: dplane dequeues 2 completed work from provider dplane_fpm_nl
11:26:41 zebra[7282]: dplane has 2 completed, 0 errors, for zebra main
11:26:41 zebra[7282]: zebra2proto: Please add this protocol(2) to proper rt_netlink.c handling
11:26:41 zebra[7282]: Nexthop dplane ctx 0x55c429b6fed0, op NH_INSTALL, nexthop ID (193), result SUCCESS
11:26:41 zebra[7282]: default(0:254):5.5.5.5/32 Processing dplane result ctx 0x55c429b79690, op ROUTE_INSTALL result SUCCESS
We observe the same 2 enqueues and 2 dequeues as before, which again suggests
that there is outstanding work.
As expected, the 5.5.5.5/32 route is no longer marked as queued:
O>* 5.5.5.5/32 [110/10] via 192.168.2.2, dp0p1s3, weight 1, 00:02:06
But the 10.10.10.10/32 route is, as we have not yet processed the completed
context:
C>q 10.10.10.10/32 is directly connected, lo, 00:26:05
Duncan Eastoe [Fri, 11 Dec 2020 11:03:53 +0000 (11:03 +0000)]
zebra: dplane API to get provider output q length
Returns the current number of (completed) contexts in the provider's
output queue (dp_ctx_out_q), allowing access to this data from the
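A minimal sketch of such an accessor, assuming the provider maintains
an atomic count of queued output contexts (the field name is an
assumption):

#include <stdatomic.h>

int dplane_provider_out_ctx_queue_count(struct zebra_dplane_provider *prov)
{
    /* Reading an atomic counter maintained at enqueue/dequeue time
     * avoids taking the provider lock just to learn the length. */
    return atomic_load_explicit(&prov->dp_out_queued,
                                memory_order_relaxed);
}

This, for instance, lets a provider decide whether it still has
completed work waiting to be collected by the dataplane thread.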
provider itself.