pimd: packet processing optimization on RP change
Problem Statement:
==================
On RP change, PIM processes all the upstream entries in a loop, and for
the affected upstreams it has to send a join or a prune based on the
changed RPF. The resulting joins and prunes are not aggregated into a
single packet; each upstream generates its own join/prune message.
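For context, a single PIM Join/Prune message can carry multiple groups,
each with its own joined and pruned source lists (RFC 7761, section
4.9.5), which is what makes such aggregation possible. The structs below
are only an illustrative sketch of that layout, not the encoding pimd
uses on the wire:

    #include <netinet/in.h>
    #include <stdint.h>

    /* Illustrative shape of a PIM Join/Prune message (RFC 7761,
     * sec. 4.9.5).  A sketch for explanation only, not pimd's
     * wire encoding. */
    struct jp_group {
            struct in_addr group;    /* encoded group address (G) */
            uint16_t num_joined;     /* count of joined sources (S) */
            uint16_t num_pruned;     /* count of pruned sources (S) */
            struct in_addr *sources; /* joined sources, then pruned */
    };

    struct jp_msg {
            struct in_addr upstream_nbr; /* RPF neighbor targeted */
            uint8_t num_groups;   /* many (S,G)s fit in one packet */
            uint16_t holdtime;
            struct jp_group *groups;
    };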
Root Cause Analysis:
====================
On pim_rp_change, pim_upstream_update() gets called for each affected
upstream. It works out where a join has to be sent and where a prune
has to be sent via pim_zebra_upstream_rpf_changed(), which prepares the
per-interface upstream_switch_list and inserts the group and sources
into it.
At this point PIM is still inside the pim_upstream_update() context,
i.e. it is still processing that single upstream. At the end of this
context pim_zebra_update_all_interfaces() is called, which processes
the upstream_switch_list, sends the packets out, and clears the list.
The list is therefore built, flushed, and freed once per upstream
rather than once per RP change.
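A minimal sketch of this pre-fix flow is below. The two pim_zebra_*()
names are the real pimd functions discussed above, but their signatures
are simplified here, and the iteration helpers and the rpf_changed()
selection check are hypothetical stand-ins:

    #include <stdbool.h>

    /* Hypothetical simplified types and helpers, for illustration. */
    struct pim_instance;
    struct pim_upstream;
    struct pim_upstream *first_upstream(struct pim_instance *pim);
    struct pim_upstream *next_upstream(struct pim_upstream *up);
    bool rpf_changed(struct pim_upstream *up);
    /* Real pimd functions; signatures simplified for this sketch. */
    void pim_zebra_upstream_rpf_changed(struct pim_upstream *up);
    void pim_zebra_update_all_interfaces(struct pim_instance *pim);

    static void rp_change_before_fix(struct pim_instance *pim)
    {
            struct pim_upstream *up;

            for (up = first_upstream(pim); up; up = next_upstream(up)) {
                    if (!rpf_changed(up))
                            continue;
                    /* Queue this upstream's (S,G) on the per-interface
                     * upstream_switch_list... */
                    pim_zebra_upstream_rpf_changed(up);
                    /* ...but drain it immediately, still inside the
                     * per-upstream context: one join/prune packet and
                     * one list alloc/free cycle per upstream. */
                    pim_zebra_update_all_interfaces(pim);
            }
    }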
Fix:
====
Don't process the upstream_switch_list in the per-upstream context.
Instead, process all the upstreams, preparing the upstream_switch_list
as each one is visited, and then drain the list in one go (see the
sketch below). This clubs all the (S,G) entries for an interface into a
single join/prune packet. It also avoids the repeated memory allocation
and deallocation of setting up and cleaning up the list once per
upstream.
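Under the same simplified assumptions as the sketch above (and reusing
its hypothetical helpers), the fixed flow queues everything first and
drains once:

    static void rp_change_after_fix(struct pim_instance *pim)
    {
            struct pim_upstream *up;

            /* Pass 1: only queue.  Every affected (S,G) accumulates
             * on its interface's upstream_switch_list. */
            for (up = first_upstream(pim); up; up = next_upstream(up)) {
                    if (rpf_changed(up))
                            pim_zebra_upstream_rpf_changed(up);
            }

            /* Pass 2: drain once.  Each interface sends one aggregated
             * join/prune carrying all of its queued (S,G) entries, and
             * the list is set up and torn down a single time. */
            pim_zebra_update_all_interfaces(pim);
    }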