Diffstat (limited to 'doc/developer')

 -rw-r--r--  doc/developer/index.rst                    2
 -rw-r--r--  doc/developer/link-state.rst             314
 -rw-r--r--  doc/developer/logging.rst                  2
 -rw-r--r--  doc/developer/path-internals-daemon.rst  115
 -rw-r--r--  doc/developer/path-internals-pcep.rst    193
 -rw-r--r--  doc/developer/path-internals.rst          11
 -rw-r--r--  doc/developer/path.rst                    11
 -rw-r--r--  doc/developer/subdir.am                    6
 -rw-r--r--  doc/developer/tracing.rst                  2
 -rw-r--r--  doc/developer/workflow.rst                 3

10 files changed, 657 insertions(+), 2 deletions(-)
diff --git a/doc/developer/index.rst b/doc/developer/index.rst
index 5a7da806ff..8e7913419f 100644
--- a/doc/developer/index.rst
+++ b/doc/developer/index.rst
@@ -18,3 +18,5 @@ FRRouting Developer's Guide
ospf
zebra
vtysh
+ path
+ link-state
diff --git a/doc/developer/link-state.rst b/doc/developer/link-state.rst
new file mode 100644
index 0000000000..f1fc52966b
--- /dev/null
+++ b/doc/developer/link-state.rst
@@ -0,0 +1,314 @@
+Link State API Documentation
+============================
+
+Introduction
+------------
+
+The Link State (LS) API aims to provide a set of structures and functions to
+build and manage a Traffic Engineering Database for the various FRR daemons.
+This API has been designed for several use cases:
+
+- BGP Link State (BGP-LS): where the BGP protocol needs to collect link state
+  information from the routing daemons (IS-IS and/or OSPF) to implement
+  RFC 7752
+- Path Computation Element (PCE): where path computation algorithms are based
+  on the Traffic Engineering Database
+- ReSerVation Protocol (RSVP): where signaling needs to know the Traffic
+  Engineering topology of the network in order to determine the path of
+  RSVP tunnels
+
+Architecture
+------------
+
+The main requirements from the various use cases are as follows:
+
+- Provide a set of data models and functions to ease Link State information
+  manipulation (storage, serialization, parsing ...)
+- Ease and normalize Link State information exchange between FRR daemons
+- Provide a database structure for the Traffic Engineering Database (TED)
+
+To ease Link State understanding, FRR daemons have been classified into two
+categories:
+
+- **Consumer**: Daemons that consume Link State information, e.g. BGPd
+- **Producer**: Daemons that are able to collect Link State information and
+  send it to consumer daemons, e.g. OSPFd and IS-ISd
+
+The Zebra daemon, and more precisely the ZAPI message, is used to convey Link
+State information between *producers* and *consumers*, but Zebra acts as a
+simple pass-through and does not store any Link State information. A new ZAPI
+**Opaque** message has been designed for that purpose.
+
+Each consumer and producer daemon is free to store Link State data or not, and
+to organise the information following the Traffic Engineering Database model
+provided by the API or any other data structure, e.g. hash, RB-tree ...
+
+Link State API
+--------------
+
+This is the low level API that allows any daemon to manipulate the Link State
+elements that are stored in the Link State Database.
+
+Data structures
+^^^^^^^^^^^^^^^
+
+3 types of Link State structures have been defined:
+
+.. c:type:: struct ls_node
+
+ that groups all information related to a node
+
+.. c:type:: struct ls_attributes
+
+ that groups all information related to a link
+
+.. c:type:: struct ls_prefix
+
+ that groups all information related to a prefix
+
+These 3 types of structures are those handled by BGP-LS (see RFC 7752) and
+are suitable to describe a Traffic Engineering topology.
+
+Each structure, in addition to the specific parameters, embeds the identifier
+of the node which advertises the Link State and a bit mask used as flags to
+indicate which parameters are valid, i.e. for which the value is valid and
+corresponds to Link State information conveyed by the routing protocol.
+
+.. c:type:: struct ls_node_id
+
+ defines the Node identifier as the router ID (IPv4 address) plus the area
+ ID for OSPF, or the ISO System ID plus the IS-IS level for IS-IS.
+
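+As a hedged sketch of how such an identifier could be filled for an OSPF
+router (the header path and the field and enum names below are assumptions
+about ``ls.h``, not definitions taken from this document):
+
+.. code-block:: c
+
+   #include "ls.h" /* assumed header exposing the Link State API */
+
+   /* Hypothetical field names; consult ls.h for the real layout. */
+   struct ls_node_id make_ospf_adv_id(struct in_addr router_id,
+                                      struct in_addr area_id)
+   {
+           struct ls_node_id adv = {};
+
+           adv.origin = OSPFv2; /* assumed enum ls_origin value */
+           adv.id.ip.addr = router_id;
+           adv.id.ip.area_id = area_id;
+           return adv;
+   }
+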
+Functions
+^^^^^^^^^
+
+A set of functions is provided to create, delete and compare Link State Nodes:
+
+.. c:function:: struct ls_node *ls_node_new(struct ls_node_id adv, struct in_addr router_id, struct in6_addr router6_id)
+.. c:function:: void ls_node_del(struct ls_node *node)
+.. c:function:: int ls_node_same(struct ls_node *n1, struct ls_node *n2)
+
+and Link State Attributes:
+
+.. c:function:: struct ls_attributes *ls_attributes_new(struct ls_node_id adv, struct in_addr local, struct in6_addr local6, uint32_t local_id)
+.. c:function:: void ls_attributes_del(struct ls_attributes *attr)
+.. c:function:: int ls_attributes_same(struct ls_attributes *a1, struct ls_attributes *a2)
+
+The low level API doesn't provide any particular functions for the Link State
+Prefix structure as the latter is simpler to manipulate.
+
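+A minimal usage sketch of the Node functions (only the signatures above are
+taken from this document; ``zlog_debug`` is FRR's logging helper):
+
+.. code-block:: c
+
+   void node_example(struct ls_node_id adv, struct in_addr rid,
+                     struct in6_addr rid6)
+   {
+           struct ls_node *n1 = ls_node_new(adv, rid, rid6);
+           struct ls_node *n2 = ls_node_new(adv, rid, rid6);
+
+           if (ls_node_same(n1, n2))
+                   zlog_debug("both nodes carry the same information");
+
+           ls_node_del(n1);
+           ls_node_del(n2);
+   }
+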
+Link State TED
+--------------
+
+This is the high level API that provides functions to create, update and
+delete a Link State Database to form a Traffic Engineering Database (TED).
+
+Data Structures
+^^^^^^^^^^^^^^^
+
+The Traffic Engineering Database is modeled as a graph in order to ease Path
+Computation algorithm implementation. Denoted **G(V, E)**, a graph is composed
+of a list of **Vertices (V)**, which represent the network Nodes, and a list
+of **Edges (E)**, which represent the Links. An additional list of
+**Prefixes (P)** is also attached to the *Vertex (V)* which advertises it.
+
+A *Vertex (V)* contains the list of outgoing *Edges (E)* that connect this
+Vertex with its direct neighbors and the list of incoming *Edges (E)* that
+connect the direct neighbors to this Vertex. Since an *Edge (E)* is
+unidirectional, two Edges are necessary to model a bidirectional relation
+between two Vertices. Finally, the *Vertex (V)* contains a pointer to the
+corresponding Link State Node.
+
+An *Edge (E)* contains the source and destination Vertices that this Edge
+connects and a pointer to the corresponding Link State Attributes.
+
+A unique Key is used to identify both Vertices and Edges within the Graph.
+
+
+::
+
+ -------------- --------------------------- --------------
+ | Connected |---->| Connected Edge Va to Vb |--->| Connected |
+ --->| Vertex | --------------------------- | Vertex |---->
+ | | | |
+ | - Key (Va) | | - Key (Vb) |
+ <---| - Vertex | --------------------------- | - Vertex |<----
+ | |<----| Connected Edge Vb to Va |<---| |
+ -------------- --------------------------- --------------
+
+
+4 data structures have been defined to implement the Graph model:
+
+.. c:type:: struct ls_vertex
+.. c:type:: struct ls_edge
+.. c:type:: struct ls_prefix
+.. c:type:: struct ls_ted
+
+
+Functions
+^^^^^^^^^
+
+.. c:function:: struct ls_vertex *ls_vertex_add(struct ls_ted *ted, struct ls_node *node)
+.. c:function:: struct ls_vertex *ls_vertex_update(struct ls_ted *ted, struct ls_node *node)
+.. c:function:: void ls_vertex_del(struct ls_ted *ted, struct ls_vertex *vertex)
+.. c:function:: struct ls_vertex *ls_find_vertex_by_key(struct ls_ted *ted, const uint64_t key)
+.. c:function:: struct ls_vertex *ls_find_vertex_by_id(struct ls_ted *ted, struct ls_node_id id)
+.. c:function:: int ls_vertex_same(struct ls_vertex *v1, struct ls_vertex *v2)
+
+.. c:function:: struct ls_edge *ls_edge_add(struct ls_ted *ted, struct ls_attributes *attributes)
+.. c:function:: struct ls_edge *ls_edge_update(struct ls_ted *ted, struct ls_attributes *attributes)
+.. c:function:: void ls_edge_del(struct ls_ted *ted, struct ls_edge *edge)
+.. c:function:: struct ls_edge *ls_find_edge_by_key(struct ls_ted *ted, const uint64_t key)
+.. c:function:: struct ls_edge *ls_find_edge_by_source(struct ls_ted *ted, struct ls_attributes *attributes)
+.. c:function:: struct ls_edge *ls_find_edge_by_destination(struct ls_ted *ted, struct ls_attributes *attributes)
+
+.. c:function:: struct ls_subnet *ls_subnet_add(struct ls_ted *ted, struct ls_prefix *pref)
+.. c:function:: void ls_subnet_del(struct ls_ted *ted, struct ls_subnet *subnet)
+.. c:function:: struct ls_subnet *ls_find_subnet(struct ls_ted *ted, const struct prefix prefix)
+
+.. c:function:: struct ls_ted *ls_ted_new(const uint32_t key, char *name, uint32_t asn)
+.. c:function:: void ls_ted_del(struct ls_ted *ted)
+.. c:function:: void ls_connect_vertices(struct ls_vertex *src, struct ls_vertex *dst, struct ls_edge *edge)
+.. c:function:: void ls_connect(struct ls_vertex *vertex, struct ls_edge *edge, bool source)
+.. c:function:: void ls_disconnect(struct ls_vertex *vertex, struct ls_edge *edge, bool source)
+.. c:function:: void ls_disconnect_edge(struct ls_edge *edge)
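+
+As a hedged sketch built only from the functions listed above (the Node and
+Attributes arguments are assumed to be already created with the low level
+API):
+
+.. code-block:: c
+
+   /* Build a tiny TED: two Vertices joined by one unidirectional Edge. */
+   struct ls_ted *build_small_ted(struct ls_node *na, struct ls_node *nb,
+                                  struct ls_attributes *ab)
+   {
+           char name[] = "example";
+           struct ls_ted *ted = ls_ted_new(1, name, 65000);
+           struct ls_vertex *va = ls_vertex_add(ted, na);
+           struct ls_vertex *vb = ls_vertex_add(ted, nb);
+           struct ls_edge *ea_b = ls_edge_add(ted, ab);
+
+           /* Attach the Edge as outgoing from Va and incoming to Vb. */
+           ls_connect_vertices(va, vb, ea_b);
+
+           return ted;
+   }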
+
+
+Link State Messages
+-------------------
+
+This part of the API provides functions and data structures to ease the
+communication between the *Producer* and *Consumer* daemons.
+
+Communications principles
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The recent ZAPI **Opaque** message is used to exchange Link State data between
+daemons. For that purpose, the Link State API provides new functions to
+serialize and parse Link State information through the ZAPI Opaque message. A
+dedicated flag, named ZAPI_OPAQUE_FLAG_UNICAST, allows daemons to send a
+unicast or a multicast Opaque message and is used as follows for the Link
+State exchange:
+
+- Multicast: To send data updates to all daemons that have subscribed to the
+  Link State Update message
+- Unicast: To send initial Link State information from a particular daemon.
+  All data are sent only to the daemon that requested Link State
+  Synchronisation
+
+Figure 1 below illustrates the ZAPI Opaque message exchange between a
+*Producer* (an IGP like OSPF or IS-IS) and a *Consumer* (e.g. BGP). The
+message sequence is as follows:
+
+- First, both *Producer* and *Consumer* must register to their respective ZAPI
+  Opaque messages: **Link State Sync** for the *Producer*, in order to receive
+  Database synchronisation requests from a *Consumer*, and **Link State
+  Update** for the *Consumer*, in order to receive any Link State update from
+  a *Producer*. These register messages are stored by Zebra to determine to
+  which daemon it should redistribute the ZAPI messages it receives. A
+  consumer-side sketch of this phase is shown after Figure 1.
+- Then, the *Consumer* sends a **Link State Synchronisation** request with the
+  Multicast method in order to receive the complete Link State Database from a
+  *Producer*. The ZEBRA daemon forwards this message to any *Producer* daemons
+  that previously registered to this message. If no *Producer* has yet
+  registered, the request is lost. Thus, if the *Consumer* receives no
+  response within a given timer, it means that no *Producer* is available
+  right now. So, the *Consumer* must send the same request until it receives a
+  Link State Database Synchronisation message. This behaviour is necessary as
+  we can't control in which order daemons are started. It is up to the
+  *Consumer* daemon to fix the timeout and the number of retries.
+- When a *Producer* receives a **Link State Synchronisation** request, it
+  starts sending all elements of its own Link State Database through
+  **Link State Database Synchronisation** messages. These messages are sent
+  with the Unicast method to avoid flooding other daemons with these elements.
+  The ZEBRA layer ensures that the messages are forwarded to the right daemon.
+- When a *Producer* updates its Link State Database, it automatically sends a
+  **Link State Update** message with the Multicast method. In turn, the ZEBRA
+  daemon forwards the message to all *Consumer* daemons that previously
+  registered to this message. If no daemon is registered, the message is lost.
+- A daemon can unregister from the ZAPI Opaque message registry at any time.
+  In this case, the ZEBRA daemon stops forwarding any messages it receives to
+  this daemon, even if it was previously concerned by them.
+
+::
+
+ IGP ZEBRA Consumer
+ (OSPF/IS-IS) (ZAPI Opaque Thread) (e.g. BGP)
+ | | | \
+ | | Register LS Update | |
+ | |<----------------------------| Register Phase
+ | | | |
+ | | Request LS Sync | |
+ | |<----------------------------| |
+ : : : A |
+ | Register LS Sync | | | |
+ |----------------------------->| | | /
+ : : : |TimeOut
+ : : : |
+ | | | |
+ | | Request LS Sync | v \
+ | Request LS Sync |<----------------------------| |
+ |<-----------------------------| | Synchronisation
+ | LS DB Sync | | Phase
+ |----------------------------->| LS DB Sync | |
+ | |---------------------------->| |
+ | LS DB Sync (cont'd) | | |
+ |----------------------------->| LS DB Sync (cont'd) | |
+ | . |---------------------------->| |
+ | . | . | |
+ | . | . | |
+ | LS DB Sync (end) | . | |
+ |----------------------------->| LS DB Sync (end) | |
+ | |---------------------------->| |
+ | | | /
+ : : :
+ : : :
+ | LS Update | | \
+ |----------------------------->| LS Update | |
+ | |---------------------------->| Update Phase
+ | | | |
+ : : : /
+ : : :
+ | | | \
+ | | Unregister LS Update | |
+ | |<----------------------------| Deregister Phase
+ | | | |
+ | LS Update | | |
+ |----------------------------->| | |
+ | | | /
+ | | |
+
+ Figure 1: Link State messages exchange
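+
+As a hedged illustration of the register phase on the *Consumer* side (the
+``LINK_STATE_UPDATE`` constant and the ``zclient_register_opaque`` and
+``ls_request_sync`` helpers are assumptions about ``zclient.h`` and ``ls.h``,
+not definitions from this document):
+
+.. code-block:: c
+
+   void consumer_start(struct zclient *zclient)
+   {
+           /* Subscribe to Link State Update messages. */
+           zclient_register_opaque(zclient, LINK_STATE_UPDATE);
+
+           /* Request the complete LS DB; retried on timeout until a
+            * Producer answers, as described above. */
+           ls_request_sync(zclient);
+   }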
+
+
+Data Structures
+^^^^^^^^^^^^^^^
+
+The Link State Message is defined to convey Link State parameters from
+the routing protocol (OSPF or IS-IS) to other daemons e.g. BGP.
+
+.. c:type:: struct ls_message
+
+The structure is composed of:
+
+- Event of the message:
+
+  - Sync: Send the whole LS DB following a request
+  - Add: Send a new Link State element
+  - Update: Send an update of an existing Link State element
+  - Delete: Indicate that the given Link State element is removed
+
+- Type of Link State element: Node, Attribute or Prefix
+- Remote node id when known
+- Data: Node, Attributes or Prefix
+
+A Link State Message can carry only one Link State Element (Node, Attributes
+or Prefix) at once, and only one Link State Message is sent through the ZAPI
+Opaque Link State type at once.
+
+Functions
+^^^^^^^^^
+
+.. c:function:: struct ls_message *ls_parse_msg(struct stream *s)
+.. c:function:: int ls_send_msg(struct zclient *zclient, struct ls_message *msg, struct zapi_opaque_reg_info *dst)
+.. c:function:: struct ls_message *ls_vertex2msg(struct ls_message *msg, struct ls_vertex *vertex)
+.. c:function:: struct ls_message *ls_edge2msg(struct ls_message *msg, struct ls_edge *edge)
+.. c:function:: struct ls_message *ls_subnet2msg(struct ls_message *msg, struct ls_subnet *subnet)
+.. c:function:: int ls_sync_ted(struct ls_ted *ted, struct zclient *zclient, struct zapi_opaque_reg_info *dst)
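+
+A hedged sketch of flooding one Vertex update to all registered *Consumers*
+using the functions above (passing a NULL destination is assumed to select
+the Multicast method):
+
+.. code-block:: c
+
+   void send_vertex_update(struct zclient *zclient, struct ls_vertex *vertex)
+   {
+           struct ls_message msg = {};
+
+           ls_vertex2msg(&msg, vertex);
+           ls_send_msg(zclient, &msg, NULL);
+   }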
+
diff --git a/doc/developer/logging.rst b/doc/developer/logging.rst
index 2f2444373c..cf3aa8d17f 100644
--- a/doc/developer/logging.rst
+++ b/doc/developer/logging.rst
@@ -83,7 +83,7 @@ Extensions
+-----------+--------------------------+----------------------------------------------+
| ``%pNHs`` | ``struct nexthop *`` | ``1.2.3.4 if 15`` |
+-----------+--------------------------+----------------------------------------------+
-| ``%pFX`` + ``struct bgp_dest *`` | ``fe80::1234/64`` available in BGP only |
+| ``%pFX`` | ``struct bgp_dest *`` | ``fe80::1234/64`` (available in BGP only) |
+-----------+--------------------------+----------------------------------------------+
Printf features like field lengths can be used normally with these extensions,
diff --git a/doc/developer/path-internals-daemon.rst b/doc/developer/path-internals-daemon.rst
new file mode 100644
index 0000000000..29f017284f
--- /dev/null
+++ b/doc/developer/path-internals-daemon.rst
@@ -0,0 +1,115 @@
+PATHD Internals
+===============
+
+Architecture
+------------
+
+Overview
+........
+
+The pathd daemon manages the segment routing policies; it owns the data
+structures representing them and can load modules that manipulate them, like
+the PCEP module. Its responsibility is to select a candidate path for each
+configured policy and to install it into Zebra.
+
+Zebra
+.....
+
+Zebra manages policies that are active or pending activation because their
+next hop is not available yet. In Zebra, policy data structures and APIs are
+defined in `zebra_srte.[hc]`.
+
+The responsibilities of Zebra are:
+
+ - Store the policies' segment list.
+ - Install the policies when their next-hop is available.
+ - Notify other daemons of the status of the policies.
+
+Adding and removing policies is done using the commands `ZEBRA_SR_POLICY_SET`
+and `ZEBRA_SR_POLICY_DELETE`, passed as parameter to the function
+`zebra_send_sr_policy`, all defined in `zclient.[hc]`.
+
+If the first segment of the policy is an unknown label, the policy is kept
+until the label is notified through the MPLS hook `zebra_mpls_label_created`,
+and then it is installed.
+
+To get notified when a policy status changes, a client can implement the
+`sr_policy_notify_status` callback defined in `zclient.[hc]`.
+
+For encoding/decoding the various data structures used to communicate with
+zebra, the following functions are available from `zclient.[hc]`:
+`zapi_sr_policy_encode`, `zapi_sr_policy_decode` and
+`zapi_sr_policy_notify_status_decode`.
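+
+A hedged sketch of a client reacting to policy status changes (the
+``ZAPI_CALLBACK_ARGS`` macro and the ``color`` field are assumptions based on
+`zclient.[hc]`, not definitions from this document):
+
+.. code-block:: c
+
+   /* Called back by Zebra when a policy status changes. */
+   static int my_sr_policy_notify_status(ZAPI_CALLBACK_ARGS)
+   {
+           struct zapi_sr_policy policy;
+
+           if (zapi_sr_policy_notify_status_decode(zclient->ibuf, &policy))
+                   return -1;
+
+           zlog_debug("SR policy color %u status changed", policy.color);
+           return 0;
+   }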
+
+
+Pathd
+.....
+
+
+The pathd daemon manages all the possible candidate paths for the segment
+routing policies and selects the best one following the
+`segment routing policy draft <https://tools.ietf.org/html/draft-ietf-spring-segment-routing-policy-06#section-2.9>`_.
+It also supports loadable modules for handling dynamic candidate paths and the
+creation of new policies and candidate paths at runtime.
+
+The responsibilities of the pathd base daemon, not including any optional
+modules, are:
+
+ - Store the policies and all the possible candidate paths for them.
+ - Select the best candidate path for each policy and send it to Zebra.
+ - Provide VTYSH configuration to set up policies and candidate paths.
+ - Provide a Northbound API to manipulate **configured** policies and candidate paths.
+ - Handle loadable modules for extending the functionality.
+ - Provide an API to the loadable module to manipulate policies and candidate paths.
+
+
+Threading Model
+---------------
+
+The daemon runs completely inside the main thread using the FRR event model;
+there is no threading involved.
+
+
+Source Code
+-----------
+
+Internal Data Structures
+........................
+
+The main data structures for policies and candidate paths are defined in
+`pathd.h` and implemented in `pathd.c`.
+
+When modifying these structures, either directly or through the functions
+exported by `pathd.h`, nothing should be deleted/freed right away. The deletion
+or modification flags must be set and when all the changes are done, the
+function `srte_apply_changes` must be called. When called, a new candidate path
+may be elected and sent to Zebra, and all the structures flagged as deleted
+will be freed. In addition, a hook will be called so dynamic modules can perform
+any required action when the elected candidate path changes.
+
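+A hedged sketch of this flag-then-apply pattern (the ``F_CANDIDATE_DELETED``
+flag and the ``flags`` field are assumptions; only `srte_apply_changes` is
+named by this document, and ``SET_FLAG`` is FRR's generic helper):
+
+.. code-block:: c
+
+   void remove_candidate(struct srte_candidate *candidate)
+   {
+           /* Flag only; the actual freeing happens later. */
+           SET_FLAG(candidate->flags, F_CANDIDATE_DELETED);
+
+           /* Re-elects best paths and frees structures flagged as deleted. */
+           srte_apply_changes();
+   }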
+
+Northbound API
+..............
+
+The northbound API is defined in `path_nb.[ch]` and implemented in
+`path_nb_config.c` for configuration data and `path_nb_state.c` for operational
+data.
+
+
+Command Line Client
+...................
+
+The command-line client (VTYSH) is implemented in `path_cli.c`.
+
+
+Interface with Zebra
+....................
+
+All the functions interfacing with Zebra are defined and implemented in
+`path_zebra.[hc]`.
+
+
+Loadable Module API
+...................
+
+For the time being, the API the loadable module uses is defined by `pathd.h`,
+but in the future, it should be moved to a dedicated include file.
diff --git a/doc/developer/path-internals-pcep.rst b/doc/developer/path-internals-pcep.rst
new file mode 100644
index 0000000000..ca318314f1
--- /dev/null
+++ b/doc/developer/path-internals-pcep.rst
@@ -0,0 +1,193 @@
+PCEP Module Internals
+=====================
+
+Introduction
+------------
+
+The PCEP module for the pathd daemon implements the PCEP protocol described in
+:rfc:`5440` to update the policies and candidate paths.
+
+The protocol encoding/decoding and the basic session management is handled by
+the `pceplib external library 1.2 <https://github.com/volta-networks/pceplib/tree/devel-1.2>`_.
+
+Together with pceplib, this module at least partially supports:
+
+ - :rfc:`5440`
+
+ Most of the protocol defined in the RFC is implemented.
+ All the messages can be parsed, but this was only tested in the context
+ of segment routing. Only a very small subset of metric types can be
+ configured, and there is a known issue with some Cisco routers not
+ following the IANA numbers for metrics.
+
+ - :rfc:`8231`
+
+ Supports delegation of candidate paths after performing the initial
+ computation request. If the PCE does not respond or cannot compute
+ a path, an empty candidate path is delegated to the PCE.
+ Only tested in the context of segment routing.
+
+ - :rfc:`8408`
+
+ Only used to communicate the support for segment routing to the PCE.
+
+ - :rfc:`8664`
+
+ All the NAI types are implemented, but only the MPLS NAIs are supported.
+ If the PCE provides segments that are not MPLS labels, the PCC will
+ return an error.
+
+Note that pceplib supports more RFCs and drafts, see pceplib
+`README <https://github.com/volta-networks/pceplib/blob/master/README.md>`_
+for more details.
+
+
+Architecture
+------------
+
+Overview
+........
+
+The module is separated into multiple layers:
+
+ - pathd interface
+ - command-line console
+ - controller
+ - PCC
+ - pceplib interface
+
+The pathd interface handles all the interactions with the daemon API.
+
+The command-line console handles all the VTYSH configuration commands.
+
+The controller manages the multiple PCC connections and the interaction between
+them and the daemon interface.
+
+The PCC handles a single connection to a PCE through a pceplib session.
+
+The pceplib interface abstracts the API of the pceplib.
+
+.. figure:: ../figures/pcep_module_threading_overview.svg
+
+
+Threading Model
+---------------
+
+The module requires multiple threads to cooperate:
+
+ - The main thread used by the pathd daemon.
+ - The controller pthread used to isolate the PCC from the main thread.
+ - The possible threads started in the pceplib library.
+
+To ensure thread safety, all the controller and PCC state data structures can
+only be read and modified in the controller thread, and all the global data
+structures can only be read and modified in the main thread. Most of the
+interactions between these threads are done through FRR timers and events.
+
+The controller is the bridge between the two threads: all the functions that
+**MUST** be called from the main thread start with the prefix `pcep_ctrl_` and
+all the functions that **MUST** be called from the controller thread start
+with the prefix `pcep_thread_`. When an asynchronous action must be taken in
+a different thread, an FRR event is sent to the thread. If some synchronous
+operation is needed, the calling thread will block and run a callback in the
+other thread, where the result is **COPIED** and returned to the calling
+thread.
+
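+A generic sketch of how work can be posted to another pthread's event loop
+with FRR's thread API (``struct work`` and the ``fpt`` naming are
+hypothetical; ``thread_add_event`` and ``THREAD_ARG`` come from
+`lib/thread.h`):
+
+.. code-block:: c
+
+   static int do_work(struct thread *t)
+   {
+           struct work *w = THREAD_ARG(t);
+
+           /* ... runs later, in the thread owning the target loop ... */
+           return 0;
+   }
+
+   void post_work(struct frr_pthread *fpt, struct work *w)
+   {
+           thread_add_event(fpt->master, do_work, w, 0, NULL);
+   }
+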
+No function other than the controller functions defined for it should be
+called from the main thread, the only exception being some utility functions
+from `path_pcep_lib.[hc]`.
+
+All the calls to pathd API functions **MUST** be performed in the main thread;
+for that, the controller sends FRR events handled in the function
+`path_pcep.c:pcep_main_event_handler`.
+
+For the same reason, the console client only runs in the main thread. It can
+freely use the global variables, but **MUST** use the controller's
+`pcep_ctrl_` functions to interact with the PCCs.
+
+
+Source Code
+-----------
+
+Generic Data Structures
+.......................
+
+The data structures are defined in multiple places, and where they are defined
+dictates where they can be used.
+
+The data structures defined in `path_pcep.h` can be used anywhere in the module.
+
+Internally, throughout the module, the `struct path` data structure is used
+to describe PCEP messages. It is a simplified flattened structure that can
+represent multiple complex PCEP message types. The conversion from this
+structure to the PCEP data structures used by pceplib is done in the pceplib
+interface layer.
+
+The data structures defined in `path_pcep_controller.h` should only be used
+in `path_pcep_controller.c`. Even if a structure pointer is passed as a
+parameter to functions defined in `path_pcep_pcc.h`, those functions should
+consider it an opaque data structure only used to call back controller
+functions.
+
+The same applies to the structures defined in `path_pcep_pcc.h`: even if the
+controller owns a reference to this data structure, it should never read or
+modify it directly; it should be considered an opaque structure.
+
+The global data structure can be accessed from the pathd interface layer
+`path_pcep.c` and the command line client code `path_pcep_cli.c`.
+
+
+Interface With Pathd
+....................
+
+All the functions calling or called by the pathd daemon are implemented in
+`path_pcep.c`. These functions **MUST** run in the main FRR thread, and
+all the interactions with the controller and the PCCs **MUST** pass through
+the controller's `pcep_ctrl_` prefixed functions.
+
+To handle asynchronous events from the PCCs, a callback is passed to
+`pcep_ctrl_initialize` that is called in the FRR main thread context.
+
+
+Command Line Client
+...................
+
+All the command line configuration commands (VTYSH) are implemented in
+`path_pcep_cli.c`. All the functions there run in the main FRR thread and
+can freely access the global variables. All the interaction with the
+controller and the PCCs **MUST** pass through the controller's `pcep_ctrl_`
+prefixed functions.
+
+
+Debugging Helpers
+.................
+
+All the functions formatting data structures for debugging and logging
+purposes are implemented in `path_pcep_debug.[hc]`.
+
+
+Interface with pceplib
+......................
+
+All the functions calling the pceplib external library are defined in
+`path_pcep_lib.[hc]`. Some functions are called from the main FRR thread, like
+`pcep_lib_initialize` and `pcep_lib_finalize`; some can be called from either
+thread, like `pcep_lib_free_counters`; some must be called from the
+controller thread, like `pcep_lib_connect`. This will probably be formalized
+later on with function prefixes, as done in the controller.
+
+
+Controller
+..........
+
+The controller is defined and implemented in `path_pcep_controller.[hc]`.
+Part of the controller code runs in the FRR main thread and part runs in its
+own FRR pthread started to isolate the main thread from the PCCs' event loop.
+To communicate between the threads it uses FRR events, timers and
+`thread_execute` calls.
+
+
+PCC
+...
+
+Each PCC instance owns its state and runs in the controller thread. PCCs are
+defined and implemented in `path_pcep_pcc.[hc]`. All the interactions with
+the daemon must pass through some controller's `pcep_thread_` prefixed
+function.
diff --git a/doc/developer/path-internals.rst b/doc/developer/path-internals.rst
new file mode 100644
index 0000000000..2c2df0f378
--- /dev/null
+++ b/doc/developer/path-internals.rst
@@ -0,0 +1,11 @@
+.. _path_internals:
+
+*********
+Internals
+*********
+
+.. toctree::
+ :maxdepth: 2
+
+ path-internals-daemon
+ path-internals-pcep
diff --git a/doc/developer/path.rst b/doc/developer/path.rst
new file mode 100644
index 0000000000..b6d2438c58
--- /dev/null
+++ b/doc/developer/path.rst
@@ -0,0 +1,11 @@
+.. _path:
+
+*****
+PATHD
+*****
+
+.. toctree::
+ :maxdepth: 2
+
+ path-internals
+
diff --git a/doc/developer/subdir.am b/doc/developer/subdir.am
index 3c0d203007..0129be6bf1 100644
--- a/doc/developer/subdir.am
+++ b/doc/developer/subdir.am
@@ -33,6 +33,7 @@ dev_RSTFILES = \
doc/developer/include-compile.rst \
doc/developer/index.rst \
doc/developer/library.rst \
+ doc/developer/link-state.rst \
doc/developer/lists.rst \
doc/developer/locking.rst \
doc/developer/logging.rst \
@@ -46,8 +47,13 @@ dev_RSTFILES = \
doc/developer/packaging-debian.rst \
doc/developer/packaging-redhat.rst \
doc/developer/packaging.rst \
+ doc/developer/path-internals-daemon.rst \
+ doc/developer/path-internals-pcep.rst \
+ doc/developer/path-internals.rst \
+ doc/developer/path.rst \
doc/developer/rcu.rst \
doc/developer/static-linking.rst \
+ doc/developer/tracing.rst \
doc/developer/testing.rst \
doc/developer/topotests-snippets.rst \
doc/developer/topotests.rst \
diff --git a/doc/developer/tracing.rst b/doc/developer/tracing.rst
index ee0a6be008..d54f6c7aaa 100644
--- a/doc/developer/tracing.rst
+++ b/doc/developer/tracing.rst
@@ -57,7 +57,7 @@ run the target in non-forking mode (no ``-d``) and use LTTng as usual (refer to
LTTng user manual). When using USDT probes with LTTng, follow the example in
`this article
<https://lttng.org/blog/2019/10/15/new-dynamic-user-space-tracing-in-lttng/>`_.
-To trace with dtrace or SystemTap, compile with :option:`--enable-usdt=yes` and
+To trace with dtrace or SystemTap, compile with `--enable-usdt=yes` and
use your tracer as usual.
To see available USDT probes::
diff --git a/doc/developer/workflow.rst b/doc/developer/workflow.rst
index 4183ac6480..861d87b998 100644
--- a/doc/developer/workflow.rst
+++ b/doc/developer/workflow.rst
@@ -1262,6 +1262,9 @@ When documented this way, CLI commands can be cross referenced with the
This is very helpful for users who want to quickly remind themselves what a
particular command does.
+When documenting a CLI command that has a ``no`` form, please do not include
+the ``no`` in the ``.. index::`` line.
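+
+For example (hypothetical command, shown only to illustrate the convention)::
+
+   .. index:: bgp enforce-first-as
+   .. clicmd:: [no] bgp enforce-first-as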
+
Configuration Snippets
^^^^^^^^^^^^^^^^^^^^^^