| author | Srujana <skanchisamud@nvidia.com> | 2024-07-30 20:39:33 +0000 |
|---|---|---|
| committer | Rajasekar Raja <rajasekarr@nvidia.com> | 2024-08-27 12:47:00 -0700 |
| commit | 9112fb367b1ae0168b4e7a81f41c2ca621979199 (patch) | |
| tree | 134a10f33a7069ca9025cf2f46726ff3bd227559 /lib/vty.h | |
| parent | fa7c77f2939b4be9649ee95a78ed3a307aeac342 (diff) | |
lib: Memory spike reduction for sh cmds at scale
The output buffer vty->obuf is a linked list in which
each element holds 4KB of data.
Currently, when a huge show command like <show ip route json>
is executed at scale, all the vty_outs are processed and the
entire output is accumulated in the buffer.
Only after the entire vty execution does vtysh_flush process
this data and write it to the socket (131KB at a time).
The problem is the memory spike caused by such heavy-duty
show commands.
The fix is to chunkify the output on the VTY shell by
flushing it at intervals, once every 128 KB of output has
accumulated, and freeing the memory allocated for the buffer data.
This way, we achieve a ~25-30% reduction in the memory spike.
Fixes: #16498
Note: This is a continuation of MR #16498
Signed-off-by: Srujana <skanchisamud@nvidia.com>
Signed-off-by: Rajasekar Raja <rajasekarr@nvidia.com>
Diffstat (limited to 'lib/vty.h')
| -rw-r--r-- | lib/vty.h | 7 |
1 file changed, 7 insertions, 0 deletions
```diff
@@ -236,6 +236,7 @@ struct vty {
 	uintptr_t mgmt_req_pending_data;
 	bool mgmt_locked_candidate_ds;
 	bool mgmt_locked_running_ds;
+	uint64_t vty_buf_size_accumulated;
 };
 
 static inline void vty_push_context(struct vty *vty, int node, uint64_t id)
@@ -338,6 +339,12 @@ struct vty_arg {
 /* Vty read buffer size. */
 #define VTY_READ_BUFSIZ 512
 
+/* Vty max send buffer size */
+#define VTY_SEND_BUF_MAX 16777216
+
+/* Vty flush intermediate size */
+#define VTY_MAX_INTERMEDIATE_FLUSH 131072
+
 /* Directory separator. */
 #ifndef DIRECTORY_SEP
 #define DIRECTORY_SEP '/'
```
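For context, here is a minimal sketch of how the new `vty_buf_size_accumulated` counter and the `VTY_MAX_INTERMEDIATE_FLUSH` threshold could fit together, assuming the byte count is bumped on each vty_out and checked against the 128 KB limit. It is not the FRR patch itself: `struct vty_sketch` and `vty_flush_to_socket()` are hypothetical stand-ins for FRR's own vty structure and vtysh flush path.

```c
/*
 * Sketch of the intermediate-flush idea from this commit.
 * Assumption: some accounting hook runs after each vty_out() appends
 * output; names below marked "placeholder" are not FRR APIs.
 */
#include <stddef.h>
#include <stdint.h>

#define VTY_MAX_INTERMEDIATE_FLUSH 131072 /* 128 KB, as added in lib/vty.h */

struct vty_sketch {
	uint64_t vty_buf_size_accumulated; /* bytes buffered since last flush */
};

/* Placeholder: drain the output buffer to the client socket and free
 * the 4KB buffer elements (done by the vtysh flush path in FRR). */
extern void vty_flush_to_socket(struct vty_sketch *vty);

/* Call after appending 'len' bytes of show-command output. */
static inline void vty_account_and_maybe_flush(struct vty_sketch *vty,
					       size_t len)
{
	vty->vty_buf_size_accumulated += len;
	if (vty->vty_buf_size_accumulated >= VTY_MAX_INTERMEDIATE_FLUSH) {
		/* Flush early instead of letting the whole result pile
		 * up in memory, then reset the running total. */
		vty_flush_to_socket(vty);
		vty->vty_buf_size_accumulated = 0;
	}
}
```

The effect, per the commit message, is that peak memory for a large show command is bounded near the flush threshold rather than growing with the full size of the output.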
