Diffstat (limited to 'doc/developer')
-rw-r--r--  doc/developer/building-frr-for-openwrt.rst |  32
-rw-r--r--  doc/developer/grpc.rst                      | 249
-rw-r--r--  doc/developer/index.rst                     |   1
-rw-r--r--  doc/developer/subdir.am                     |   1
-rw-r--r--  doc/developer/zebra.rst                     |  72
5 files changed, 340 insertions(+), 15 deletions(-)
diff --git a/doc/developer/building-frr-for-openwrt.rst b/doc/developer/building-frr-for-openwrt.rst
index 5d8f82f27e..9bd1296dad 100644
--- a/doc/developer/building-frr-for-openwrt.rst
+++ b/doc/developer/building-frr-for-openwrt.rst
@@ -1,6 +1,8 @@
-OpenWRT
+OpenWrt
=======
+General information about the OpenWrt build system can be found in the
+`OpenWrt documentation <https://openwrt.org/docs/guide-developer/build-system/start>`_.
+
Prepare build environment
-------------------------
@@ -13,16 +15,16 @@ For Debian based distributions, run:
For other environments, instructions can be found in the
`official documentation
-<https://wiki.openwrt.org/doc/howto/buildroot.exigence#examples_of_package_installations>`_.
+<https://openwrt.org/docs/guide-developer/build-system/install-buildsystem#examples_of_package_installations>`_.
-Get OpenWRT Sources (from Git)
+Get OpenWrt Sources (from Git)
------------------------------
.. note::
- The OpenWRT build will fail if you run it as root. So take care to run it as a nonprivileged user.
+ The OpenWrt build will fail if you run it as root. So take care to run it as a nonprivileged user.
-Clone the OpenWRT sources and retrieve the package feeds
+Clone the OpenWrt sources and retrieve the package feeds
::
@@ -30,21 +32,15 @@ Clone the OpenWRT sources and retrieve the package feeds
cd openwrt
./scripts/feeds update -a
./scripts/feeds install -a
- cd feeds/routing
- git fetch origin pull/319/head
- git read-tree --prefix=frr/ -u FETCH_HEAD:frr
- cd ../../package/feeds/routing/
- ln -sv ../../../feeds/routing/frr .
- cd ../../..
-
-Configure OpenWRT for your target and select the needed FRR packages in Network -> Routing and Redirection -> frr,
+
+Configure OpenWrt for your target and select the needed FRR packages in Network -> Routing and Redirection -> frr,
exit and save
::
make menuconfig
-Then, to compile either a complete OpenWRT image, or the FRR packages, run:
+Then, to compile either a complete OpenWrt image or the FRR packages, run:
::
@@ -54,10 +50,16 @@ On the first build, ``make package/frr/compile`` may fail;
in that case, run ``make`` for the entire build
environment first. Add ``V=s`` to get more debugging output.
+More information about the OpenWrt build system can be found `here
+<https://openwrt.org/docs/guide-developer/build-system/use-buildsystem>`_.
+
Work with sources
-----------------
-To update to a newer version, or change other options, you need to edit the ``feeds/routing/frr/Makefile``.
+To update to a newer version, or change other options, you need to edit the ``feeds/packages/frr/Makefile``.
+
+More information about working with patches in the OpenWrt build system can be found `here
+<https://openwrt.org/docs/guide-developer/build-system/use-patches-with-buildsystem>`_.
Usage
-----
diff --git a/doc/developer/grpc.rst b/doc/developer/grpc.rst
new file mode 100644
index 0000000000..8029a08b73
--- /dev/null
+++ b/doc/developer/grpc.rst
@@ -0,0 +1,249 @@
+.. _grpc-dev:
+
+***************
+Northbound gRPC
+***************
+
+.. _grpc-languages-bindings:
+
+Programming Language Bindings
+=============================
+
+The supported gRPC programming language bindings are listed here:
+https://grpc.io/docs/languages/
+
+After picking a programming language that supports gRPC bindings, the
+next step is to generate the FRR northbound bindings. To generate the
+bindings you'll need that language's binding generator tools, which are
+language specific.
+
+The next sections use Ruby as an example of writing scripts that use
+the northbound interface.
+
+
+.. _grpc-ruby-generate:
+
+Generating Ruby FRR Bindings
+----------------------------
+
+Example of generating the FRR northbound bindings for Ruby:
+
+::
+
+ # Install the required gems:
+ # - grpc: the gem that will talk with FRR's gRPC plugin.
+ # - grpc-tools: the gem that provides the code generator.
+ gem install grpc
+ gem install grpc-tools
+
+ # Create your project/scripts directory:
+ mkdir /tmp/frr-ruby
+
+ # Go to FRR's grpc directory:
+ cd grpc
+
+ # Generate the ruby bindings:
+ grpc_tools_ruby_protoc \
+ --ruby_out=/tmp/frr-ruby \
+ --grpc_out=/tmp/frr-ruby \
+ frr-northbound.proto
+
+
+.. _grpc-ruby-if-sample:
+
+Using Ruby To Get Interfaces State
+----------------------------------
+
+Here is a sample script to print all interfaces FRR discovered:
+
+::
+
+ require 'frr-northbound_services_pb'
+
+ # Create the connection with FRR's gRPC:
+ stub = Frr::Northbound::Stub.new('localhost:50051', :this_channel_is_insecure)
+
+ # Create a new state request to get interface state:
+ request = Frr::GetRequest.new
+ request.type = :STATE
+ request.path.push('/frr-interface:lib')
+
+ # Ask FRR.
+ response = stub.get(request)
+
+ # Print the response.
+ response.each do |result|
+ result.data.data.each_line do |line|
+ puts line
+ end
+ end
+
+
+.. note::
+
+   The generated files assume that they are in the Ruby load path (e.g.
+   installed as a gem), so you'll need to either edit them to use
+   ``require_relative`` or tell Ruby where to look for them. For simplicity
+   we'll use ``-I .`` to tell Ruby that they are in the current directory.
+
+
+The previous script will output something like this:
+
+::
+
+ $ cd /tmp/frr-ruby
+ # Add `-I.` so ruby finds the FRR generated file locally.
+ $ ruby -I. interface.rb
+ {
+ "frr-interface:lib": {
+ "interface": [
+ {
+ "name": "eth0",
+ "vrf": "default",
+ "state": {
+ "if-index": 2,
+ "mtu": 1500,
+ "mtu6": 1500,
+ "speed": 1000,
+ "metric": 0,
+ "phy-address": "11:22:33:44:55:66"
+ },
+ "frr-zebra:zebra": {
+ "state": {
+ "up-count": 0,
+ "down-count": 0
+ }
+ }
+ },
+ {
+ "name": "lo",
+ "vrf": "default",
+ "state": {
+ "if-index": 1,
+ "mtu": 0,
+ "mtu6": 65536,
+ "speed": 0,
+ "metric": 0,
+ "phy-address": "00:00:00:00:00:00"
+ },
+ "frr-zebra:zebra": {
+ "state": {
+ "up-count": 0,
+ "down-count": 0
+ }
+ }
+ }
+ ]
+ }
+ }
+
+
+.. _grpc-ruby-bfd-profile-sample:
+
+Using Ruby To Create BFD Profiles
+---------------------------------
+
+In this example you'll learn how to edit the configuration using both the
+JSON and the programmatic (XPath) formats.
+
+::
+
+ require 'frr-northbound_services_pb'
+
+ # Create the connection with FRR's gRPC:
+ stub = Frr::Northbound::Stub.new('localhost:50051', :this_channel_is_insecure)
+
+ # Create a new candidate configuration change.
+ new_candidate = stub.create_candidate(Frr::CreateCandidateRequest.new)
+
+ # Use JSON to configure.
+ request = Frr::LoadToCandidateRequest.new
+ request.candidate_id = new_candidate.candidate_id
+ request.type = :MERGE
+ request.config = Frr::DataTree.new
+ request.config.encoding = :JSON
+ request.config.data = <<-EOJ
+ {
+ "frr-bfdd:bfdd": {
+ "bfd": {
+ "profile": [
+ {
+ "name": "test-prof",
+ "detection-multiplier": 4,
+ "required-receive-interval": 800000
+ }
+ ]
+ }
+ }
+ }
+ EOJ
+
+ # Load configuration to candidate.
+ stub.load_to_candidate(request)
+
+ # Commit candidate.
+ stub.commit(
+ Frr::CommitRequest.new(
+ candidate_id: new_candidate.candidate_id,
+ phase: :ALL,
+ comment: 'create test-prof'
+ )
+ )
+
+ #
+ # Now let's delete the previous profile and create a new one.
+ #
+
+ # Create a new candidate configuration change.
+ new_candidate = stub.create_candidate(Frr::CreateCandidateRequest.new)
+
+ # Edit the configuration candidate.
+ request = Frr::EditCandidateRequest.new
+ request.candidate_id = new_candidate.candidate_id
+
+ # Delete previously created profile.
+ request.delete.push(
+ Frr::PathValue.new(
+ path: "/frr-bfdd:bfdd/bfd/profile[name='test-prof']",
+ )
+ )
+
+ # Add new profile with two configurations.
+ request.update.push(
+ Frr::PathValue.new(
+ path: "/frr-bfdd:bfdd/bfd/profile[name='test-prof-2']/detection-multiplier",
+ value: 5.to_s
+ )
+ )
+ request.update.push(
+ Frr::PathValue.new(
+ path: "/frr-bfdd:bfdd/bfd/profile[name='test-prof-2']/desired-transmission-interval",
+ value: 900_000.to_s
+ )
+ )
+
+ # Modify the candidate.
+ stub.edit_candidate(request)
+
+ # Commit the candidate configuration.
+ stub.commit(
+ Frr::CommitRequest.new(
+ candidate_id: new_candidate.candidate_id,
+ phase: :ALL,
+ comment: 'replace test-prof with test-prof-2'
+ )
+ )
+
+
+And here is the new FRR configuration:
+
+::
+
+ $ sudo vtysh -c 'show running-config'
+ ...
+ bfd
+ profile test-prof-2
+ detect-multiplier 5
+ transmit-interval 900
+ !
+ !
diff --git a/doc/developer/index.rst b/doc/developer/index.rst
index 26b590c876..1f803b3772 100644
--- a/doc/developer/index.rst
+++ b/doc/developer/index.rst
@@ -12,6 +12,7 @@ FRRouting Developer's Guide
testing
bgpd
fpm
+ grpc
ospf
zebra
vtysh
diff --git a/doc/developer/subdir.am b/doc/developer/subdir.am
index 6dab244a03..03b4b5a3e2 100644
--- a/doc/developer/subdir.am
+++ b/doc/developer/subdir.am
@@ -28,6 +28,7 @@ dev_RSTFILES = \
doc/developer/cli.rst \
doc/developer/conf.py \
doc/developer/frr-release-procedure.rst \
+ doc/developer/grpc.rst \
doc/developer/hooks.rst \
doc/developer/include-compile.rst \
doc/developer/index.rst \
diff --git a/doc/developer/zebra.rst b/doc/developer/zebra.rst
index 6a73803d01..d51cbc9a14 100644
--- a/doc/developer/zebra.rst
+++ b/doc/developer/zebra.rst
@@ -382,3 +382,75 @@ Zebra Protocol Commands
+------------------------------------+-------+
| ZEBRA_OPAQUE_UNREGISTER | 109 |
+------------------------------------+-------+
+| ZEBRA_NEIGH_DISCOVER | 110 |
++------------------------------------+-------+
+
+Dataplane batching
+==================
+
+Dataplane batching is an optimization feature that reduces the time spent
+on user space to kernel space transitions by grouping the messages we want
+to send to the kernel into batches.
+
+Design
+------
+
+With our dataplane abstraction, we create a queue of dataplane context objects
+for the messages we want to send to the kernel. In a separate pthread, we
+loop over this queue and send the context objects to the appropriate
+dataplane. The batching enhancement integrates tightly with the dataplane
+context objects so that they can be sent as a batch to dataplanes that
+support it.
+
+There is one main change in the dataplane code. It does not call
+kernel-dependent functions one-by-one, but instead it hands a list of work down
+to the kernel level for processing.
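+
+Below is a minimal C sketch of that idea. The type and function names used
+here (``dplane_ctx``, ``kernel_send_one``, ``kernel_send_batch``,
+``dplane_thread_run``) are illustrative only and do not necessarily match
+the actual FRR sources.
+
+::
+
+   /* Illustrative sketch only; names do not match the FRR sources. */
+   struct dplane_ctx {
+           struct dplane_ctx *next; /* intrusive queue link */
+           int op;                  /* e.g. a route add or delete */
+   };
+
+   /* One-by-one model: one user-to-kernel transition per context. */
+   int kernel_send_one(struct dplane_ctx *ctx);
+
+   /* Batched model: the whole list is handed to the kernel-specific
+    * layer, which packs as many messages as possible into each send. */
+   int kernel_send_batch(struct dplane_ctx *head);
+
+   /* Provider pthread: hand the whole queue down in a single call
+    * instead of calling kernel_send_one() for each context. */
+   void dplane_thread_run(struct dplane_ctx *queue_head)
+   {
+           kernel_send_batch(queue_head);
+   }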
+
+Netlink
+^^^^^^^
+
+At the moment, this is the only dataplane that supports batch sending of
+messages.
+
+When messages must be sent to the kernel, they are consecutively added
+to the batch represented by ``struct nl_batch``. Context objects are first
+encoded to their binary representation. All the encoding functions use the
+same interface: they take a context object, a buffer and the size of the
+buffer as arguments. It is important that they handle the situation in which
+a message would not fit in the buffer and return a proper error. To achieve
+zero-copy (in user space, at least), messages are encoded directly into the
+buffer that will be passed to the kernel. Hence, we can theoretically hit
+the boundary of the buffer.
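+
+As a rough illustration of that encoder contract, here is a hedged C
+sketch. The names ``nl_encoder_fn``, ``nl_batch_sketch`` and
+``nl_batch_add``, as well as the buffer size, are made up for this example
+and do not match the actual FRR code.
+
+::
+
+   /* Illustrative sketch of the encoder contract; real encoders differ. */
+   #include <stddef.h>
+   #include <stdint.h>
+   #include <sys/types.h> /* ssize_t */
+
+   #define BATCH_BUF_LEN 16384 /* hypothetical buffer size */
+
+   struct dplane_ctx; /* opaque context object */
+
+   /* Every encoder has the same shape: write the netlink message for
+    * 'ctx' into 'buf' (at most 'buflen' bytes) and return the number
+    * of bytes written, or a negative value if the message does not fit. */
+   typedef ssize_t (*nl_encoder_fn)(struct dplane_ctx *ctx, void *buf,
+                                    size_t buflen);
+
+   struct nl_batch_sketch {
+           uint8_t buf[BATCH_BUF_LEN]; /* messages are encoded here ... */
+           size_t curlen;              /* ... directly, with no copy    */
+   };
+
+   /* Append one message to the batch; returns 0 on success and -1 when
+    * the encoder reports that the message would overflow the buffer. */
+   int nl_batch_add(struct nl_batch_sketch *bth, struct dplane_ctx *ctx,
+                    nl_encoder_fn encode)
+   {
+           ssize_t len = encode(ctx, bth->buf + bth->curlen,
+                                sizeof(bth->buf) - bth->curlen);
+
+           if (len < 0)
+                   return -1; /* caller flushes and re-adds 'ctx' */
+
+           bth->curlen += (size_t)len;
+           return 0;
+   }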
+
+Messages stored in the batch are sent when one of the following conditions
+occurs (see the sketch after this list):
+
+- When an encoding function returns the buffer overflow error. The context
+ object that caused this error is re-added to the new, empty batch.
+
+- When the size of the batch hits a certain limit.
+
+- When the namespace of the context object currently being processed is
+  different from that of the previous ones. Messages for different
+  namespaces have to be sent through distinct sockets, so they cannot
+  share the same buffer.
+
+- After the last message from the list is processed.
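+
+A hedged C sketch of that flush decision, with a hypothetical threshold
+value and struct layout (``nl_batch_state``, ``nl_batch_must_flush``),
+could look like this:
+
+::
+
+   /* Illustrative sketch of the flush decision; not the FRR code. */
+   #include <stdbool.h>
+   #include <stddef.h>
+
+   #define BATCH_BUF_LEN        16384
+   #define BATCH_SEND_THRESHOLD (BATCH_BUF_LEN - 2048) /* soft limit */
+
+   struct nl_batch_state {
+           size_t curlen; /* bytes already encoded into the buffer */
+           size_t msgcnt; /* number of messages in the batch       */
+           int ns_id;     /* namespace of the queued messages      */
+   };
+
+   /* Returns true when the accumulated messages must be sent before
+    * the next context object can be added to the batch. */
+   bool nl_batch_must_flush(const struct nl_batch_state *bth,
+                            int next_ns_id, bool encode_overflowed,
+                            bool work_list_done)
+   {
+           if (encode_overflowed)                   /* buffer overflow  */
+                   return true;
+           if (bth->curlen >= BATCH_SEND_THRESHOLD) /* size limit hit   */
+                   return true;
+           if (bth->msgcnt > 0 && next_ns_id != bth->ns_id)
+                   return true;                     /* namespace change */
+           if (work_list_done)                      /* end of the list  */
+                   return true;
+           return false;
+   }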
+
+As mentioned earlier, there is a special threshold which is smaller than
+the size of the underlying buffer. It prevents the overflow error and thus
+eliminates the case in which a message has to be encoded twice.
+
+The buffer used for batching is global, since allocating such a large
+amount of memory every time would not be efficient. However, its size can
+be changed dynamically, using the hidden vtysh command
+``zebra kernel netlink batch-tx-buf (1-1048576) (1-1048576)``. This feature
+is only used in tests and shouldn't be used anywhere else.
+
+For every failed message in the batch, the kernel responds with an error
+message. Error messages are kept in the same order as the requests were
+sent, so parsing the response is straightforward. We use the two-pointer
+technique to match requests with responses and then set the appropriate
+status on the dataplane context objects. There is also a global receive
+buffer, and it is assumed that whatever the kernel sends will fit in this
+buffer. The payload of a netlink error message consists of an error code
+and the original netlink message of the request, so the batch response
+won't be bigger than the batch request plus some space for the headers.
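+
+The following C sketch shows one possible shape of that matching loop. The
+structures, names and the assumption that requests without an error
+response succeeded are illustrative only, not the actual FRR
+implementation.
+
+::
+
+   /* Illustrative sketch of matching batched requests with responses. */
+   #include <stddef.h>
+   #include <stdint.h>
+
+   struct batched_req {
+           uint32_t seq; /* netlink sequence number of the request */
+           int status;   /* filled in from the kernel's response   */
+   };
+
+   struct kernel_resp {
+           uint32_t seq; /* sequence number echoed back by the kernel */
+           int error;    /* 0 on success, negative errno otherwise    */
+   };
+
+   /* Responses arrive in the same order as the requests were sent, so
+    * a single forward pass with one index per array is enough. */
+   void match_responses(struct batched_req *req, size_t nreq,
+                        const struct kernel_resp *resp, size_t nresp)
+   {
+           size_t i, j = 0;
+
+           for (i = 0; i < nreq; i++) {
+                   if (j < nresp && resp[j].seq == req[i].seq) {
+                           req[i].status = resp[j].error;
+                           j++;
+                   } else {
+                           req[i].status = 0; /* no error reported */
+                   }
+           }
+   }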