| author | Donald Sharp <sharpd@nvidia.com> | 2024-09-06 10:39:41 -0400 |
|---|---|---|
| committer | Donald Sharp <sharpd@nvidia.com> | 2024-09-06 10:39:41 -0400 |
| commit | bb78f73fa624bb5c2ee8612124ae51af6f6dc91c | |
| tree | a2ff17c93f87098a5e245647a4b961a2ea2a21ef /bgpd/bgp_fsm.c | |
| parent | f3f96f95bd836c438dd549327baed334ba8d44fe | |
bgpd: Reduce # of iterations when doing llgr
The code scanned a table, identified a prefix that needed
to be modified, and then called code that reran bestpath
on the entire table.
If multiple items needed processing, the entire table
would be rescanned and reprocessed once per item.
No bueno.
a) We do not need to reprocess items that are not
being modified.
b) We do not need to walk the entire table multiple
times; we already have the data that is needed.
Modify the code to just call bgp_process on the
interesting nodes.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Diffstat (limited to 'bgpd/bgp_fsm.c')
| -rw-r--r-- | bgpd/bgp_fsm.c | 9 |
1 file changed, 3 insertions, 6 deletions
```diff
diff --git a/bgpd/bgp_fsm.c b/bgpd/bgp_fsm.c
index 42ba54ab7b..74ad65f1ec 100644
--- a/bgpd/bgp_fsm.c
+++ b/bgpd/bgp_fsm.c
@@ -696,9 +696,8 @@ static void bgp_set_llgr_stale(struct peer *peer, afi_t afi, safi_t safi)
 					attr = *pi->attr;
 					bgp_attr_add_llgr_community(&attr);
 					pi->attr = bgp_attr_intern(&attr);
-					bgp_recalculate_afi_safi_bestpaths(
-						peer->bgp, afi, safi);
-
+					bgp_process(peer->bgp, rm, pi, afi,
+						    safi);
 					break;
 				}
 			}
@@ -724,9 +723,7 @@ static void bgp_set_llgr_stale(struct peer *peer, afi_t afi, safi_t safi)
 				attr = *pi->attr;
 				bgp_attr_add_llgr_community(&attr);
 				pi->attr = bgp_attr_intern(&attr);
-				bgp_recalculate_afi_safi_bestpaths(peer->bgp,
-								   afi, safi);
-
+				bgp_process(peer->bgp, dest, pi, afi, safi);
 				break;
 			}
 		}
```
