Architectural Overview


DANOS consists of several services spanning the Management, Control, and Data planes of the router, each with its own way of communicating with the others. This document provides a jumping-off point for working with this infrastructure and guidance on the various points of extension in the system. The following diagram represents the high-level relationships of the components that make up the DANOS system. Not every feature is called out here, only the ones that are part of the "core" of DANOS.

Core components

The DANOS Core components represent the set of key infrastructure and APIs for the DANOS system. These parts of the system are treated as non-negotiable for the system to be called "DANOS". These pieces of infrastructure are open for extension but have strict requirements on API and behavioral backwards compatibility. The TSC should not entertain replacing these parts of the system unless the replacements are strictly backwards compatible with the existing API and behavior. The Extension APIs exported from the "Core" provide the functionality required to implement extensions to the system. These extensions may be built in a modular fashion and exist either as DANOS project managed repositories or as projects external to the DANOS project. The following descriptions are meant as summaries of the major functionality of each component; please refer to the page for a given component for more detailed information.

Management Plane

The management plane consists of the configuration and operational infrastructure. Extensions built on these layers include the CLI, RestAPI, NETCONF daemon, all of our feature integration components, and any other management scripts.


Configd

Configd is a daemon that manages access to the system's desired configuration, operational data, and RPCs. The structure of the information stored by configd is modeled in the YANG data modeling language. Configd also provides AAA interfaces for both command and data path based AAA.

Configuration access is transactional. Transactions are based on the notion of a session: one opens a session, makes changes to the private candidate datastore, commits those changes, potentially makes further changes and commits those, and then ends the session. Candidate datastores in all sessions are automatically rebased onto the new running datastore whenever another session commits a set of changes.
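The session lifecycle can be sketched with a toy model. The class and method names below are invented for illustration and are not the configd API; the real datastore is a YANG-modeled tree, not a flat dict.

```python
import copy

class ConfigStore:
    """Toy model of configd's session-based transactions (illustrative
    only; the real configd API and datastore implementation differ)."""

    def __init__(self):
        self.running = {}      # the committed, running configuration
        self.sessions = {}     # session id -> {"base": dict, "edits": dict}

    def session_setup(self, sid):
        # A session's candidate starts as a private copy of running.
        self.sessions[sid] = {"base": copy.deepcopy(self.running), "edits": {}}

    def set(self, sid, path, value):
        # Changes land only in this session's candidate datastore.
        self.sessions[sid]["edits"][path] = value

    def candidate(self, sid):
        # The candidate is the session's base plus its local edits.
        s = self.sessions[sid]
        merged = copy.deepcopy(s["base"])
        merged.update(s["edits"])
        return merged

    def commit(self, sid):
        # Commit publishes the candidate as the new running datastore...
        self.running = self.candidate(sid)
        self.sessions[sid]["edits"] = {}
        # ...and every session is rebased onto the new running datastore.
        for s in self.sessions.values():
            s["base"] = copy.deepcopy(self.running)

    def session_teardown(self, sid):
        del self.sessions[sid]
```

In this sketch a commit in one session immediately becomes the base of every other open session, while their uncommitted edits are preserved on top, mirroring the automatic rebase described above.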

Operational data access is aggregated and filtered by configd according to the structure of the request made through configd's API.

RPCs are proxied via configd to provide a single entry point for making requests of individual components.


Opd

Opd is a daemon that provides modeled operational commands. These commands are human-friendly versions of YANG based RPCs and tree access. They typically call into configd to access information and present it to the user in a nicer form than the raw data returned by configd. Opd backs the operational mode shell and runs commands in an appropriate context based on how they are modeled.

Opd supports two modeling languages:

  1. Legacy template based modeling

  2. YANG DSL based modeling


VCI Bus

The VCI Bus is a message bus used for communication between VCI components. This bus is implemented using a 3rd party message bus for data transport (currently DBus) and imposes additional semantics in its API, such that one cannot effectively use native 3rd party clients to talk on this bus. The VCI Bus provides access to a component's Configuration, Operational State, RPCs, and Notifications as RFC 7951 formatted JSON data. VCI Components and clients may subscribe to emitted notifications via this bus; access to other information should go via configd in order to receive properly formed aggregate data.
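The subscription model can be sketched with a toy in-process bus. The class, module, and notification names below are made up, and the real VCI Bus rides on DBus rather than local callbacks.

```python
import json
from collections import defaultdict

class NotificationBus:
    """Toy sketch of VCI-style notification delivery. Payloads travel as
    RFC 7951 formatted JSON, with names qualified as 'module:element'."""

    def __init__(self):
        self._subs = defaultdict(list)   # notification name -> callbacks

    def subscribe(self, notification, callback):
        self._subs[notification].append(callback)

    def emit(self, notification, payload):
        # Serialize once and deliver the JSON text to every subscriber.
        doc = json.dumps({notification: payload})
        for cb in self._subs[notification]:
            cb(doc)

received = []
bus = NotificationBus()
# "example-interfaces-v1" is an invented module name for illustration.
bus.subscribe("example-interfaces-v1:link-state", received.append)
bus.emit("example-interfaces-v1:link-state", {"name": "dp0p1", "state": "up"})
```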

VCI Components

VCI Components are the bridge between YANG modeled data and the native service implementing some required functionality. These components translate the information that a service requires out of the modeled data into a form consumable by the native service. VCI Components and YANG definitions are how one integrates new system features into the management plane APIs.
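The translation role can be illustrated with a toy component. The YANG paths, generated file contents, and function name here are invented; a real component also registers itself on the VCI bus.

```python
import json

def ntp_component_set(rfc7951_doc):
    """Render hypothetical modeled NTP config into a native config body.

    Illustrative only: the modeled paths and output format are made up.
    """
    cfg = json.loads(rfc7951_doc)
    servers = cfg.get("system", {}).get("ntp", {}).get("server", [])
    lines = ["# generated from the YANG modeled configuration"]
    lines += ["server {}".format(s["address"]) for s in servers]
    return "\n".join(lines)

native = ntp_component_set(
    '{"system": {"ntp": {"server": [{"address": "192.0.2.10"}]}}}')
```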


Provisiond

Like many long-lived systems, DANOS has a fair amount of legacy code. Provisiond is a VCI component that allows this legacy code to continue to function and interact with other VCI Components. It implements support for our legacy YANG extensions: "commit-action" scripts, "call-rpc" scripts, and "get-state" scripts. This is called out because several of the older features in DANOS still use this legacy interaction model. We are in the process of porting these features to use VCI components natively; in the meantime, provisiond provides the required bridge. The legacy code called from these scripts interacts with other parts of the system using the various extension APIs and native Linux APIs.

Control Plane

Route broker

The route broker is a daemon that ensures eventual consistency is achieved between the control plane, the kernel, and the dataplane FIB. The daemon tracks which routing information has been given to each of the interested parties. It uses a pull model from the clients and coalesces updates to the routing information it is tracking for each client. The route broker is attached to FRR via the Zebra FIB Push Interface. The dataplane connects to this broker to synchronize routing state when it starts up.
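The pull-and-coalesce behavior can be sketched like this (all names invented; the real broker tracks netlink-scale route state per client):

```python
from collections import OrderedDict

class RouteBroker:
    """Toy sketch of the broker's client model: each client has a queue
    of pending updates, coalesced so only the latest state of each prefix
    is kept. A slow client never replays stale intermediate updates."""

    def __init__(self):
        self._pending = {}   # client name -> OrderedDict(prefix -> nexthop)

    def register(self, client):
        self._pending[client] = OrderedDict()

    def publish(self, prefix, nexthop):
        # A newer update for a prefix replaces any queued older one.
        for queue in self._pending.values():
            queue[prefix] = nexthop

    def pull(self, client):
        # Clients pull at their own pace; pulling drains the queue.
        queue = self._pending[client]
        self._pending[client] = OrderedDict()
        return list(queue.items())
```

Because updates are keyed by prefix, two rapid changes to the same route collapse into one entry, which is what lets the dataplane resynchronize cheaply after a restart.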


Vplaned

Vplaned is a daemon that manages interaction with the dataplane. The dataplane needs fast restart and hotplug support; vplaned provides an intermediate processing stage for netlink and configuration commands that makes these behaviors possible. Active netlink and configuration commands are stored in vplaned, so that on restart of the dataplane they can be replayed to it. Hotplug interfaces are supported in the same fashion.

The Vplaned process consists of a netlink message listener, to relay state programmed into the Linux kernel, and a configuration relay mechanism for configuration changes that are not mirrored in the kernel. Vplaned also has the ability to dispatch a cached version of this information to the dataplane if it restarts. ZMQ is used as the transport to send all of this information to the dataplane.

The format of configuration commands sent to vplaned is JSON containing either a base64-encoded protobuf blob or text, carried over a ZMQ socket. This was done for backwards compatibility when the protobuf based configuration protocol was created.
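A minimal sketch of that envelope follows. The field names are invented, and the protobuf payload is stand-in bytes rather than a real message.

```python
import base64
import json

def wrap_text_command(cmd):
    # Legacy text commands ride directly in the JSON envelope.
    return json.dumps({"type": "text", "cmd": cmd})

def wrap_protobuf_command(msg_bytes):
    # Protobuf messages are base64 encoded so they can be carried in JSON.
    encoded = base64.b64encode(msg_bytes).decode("ascii")
    return json.dumps({"type": "protobuf", "cmd": encoded})
```

Either envelope would then be written to the ZMQ socket as text, letting old text consumers and new protobuf consumers share one transport.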


Dataplane

The dataplane is a combination of the DPDK based software forwarding pipeline and the FAL interface to hardware forwarding pipelines. It provides the necessary mechanisms for bridging the DPDK pipeline with switching silicon based on user configuration.


Pipeline

The processing pipeline encapsulates processing stages into discrete processing blocks with standard, defined inputs and outputs. This allows both runtime and compile-time shaping of the processing path for packet flow. The current behavior is a run-to-completion model (for performance), but with reconfigurable stages.

Additional benefits include improved packet performance analysis and debugging, as well as software reuse.
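The node-graph model can be sketched as follows (an invented structure; the real pipeline is C code inside the dataplane, with the graph built from built-in and plugin-provided nodes):

```python
def run_pipeline(nodes, entry, packet):
    """Toy run-to-completion pipeline: each node processes the packet and
    names the next node, until an output node returns None."""
    current = entry
    while current is not None:
        current = nodes[current](packet)
    return packet

def ingress(pkt):
    pkt["path"].append("ingress")
    return "firewall"

def firewall(pkt):
    pkt["path"].append("firewall")
    # A reconfigurable stage: dropping ends processing early.
    return None if pkt.get("drop") else "tx"

def tx(pkt):
    pkt["path"].append("tx")
    return None            # output node: the packet is finished

nodes = {"ingress": ingress, "firewall": firewall, "tx": tx}
pkt = run_pipeline(nodes, "ingress", {"path": []})
```

Recording the path through the graph, as done here, also hints at why the design aids performance analysis and debugging: each stage is an observable, swappable unit.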


FAL

The FAL (Forwarding Abstraction Layer) is the integration point for hardware switch devices. It provides a generic set of APIs for the dataplane to program the hardware switch. Vendor-specific code is written in FAL plugins, which are loaded dynamically at runtime.

The two key design principles of the FAL are:

  1. Keep the platform dependencies in the FAL plugin as much as possible.

  2. Keep state out of the FAL plugin as much as possible, to keep the FAL plugin as simple as possible.

The FAL aims to abstract the platform as much as possible so that the application can be independent of platform specifics. A FAL plugin implementation would ideally place as much of the platform specifics as possible into data files rather than code, allowing new platforms to be integrated quickly. A new piece of platform functionality that interacts with the switch chip may, however, require new code along with parsing of new platform data.
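The "platform data over code" principle can be illustrated as follows. The file format, board name, and port names are invented for this sketch.

```python
import json

# Hypothetical per-platform description file: board details live in data,
# so supporting a similar new board often means adding a file, not code.
PLATFORM_DOC = """
{
  "platform": "example-board",
  "ports": [
    {"name": "dp0xe0", "chip-port": 1, "lanes": 4},
    {"name": "dp0xe1", "chip-port": 5, "lanes": 4}
  ]
}
"""

def chip_port_for(name, platform_doc):
    # The generic plugin code only interprets the data file; nothing
    # board-specific is hard-coded here.
    ports = json.loads(platform_doc)["ports"]
    return next(p["chip-port"] for p in ports if p["name"] == name)
```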

Base System

DANOS is built on top of the Debian Linux distribution. It follows Debian best practices for packaging libraries and applications, and uses the standard Debian build process, live-build, for selecting the packages included in a distribution. The Base System is a crucial differentiator for DANOS because of its unique image management system and installer.

Base OS

DANOS targets the Stable Debian Linux release, beginning the upgrade from Old Stable to Stable as soon as a new Stable is released. This means that DANOS may lag Debian by a release depending on how the release cycles line up. DANOS relies on the services built into Debian Linux; for instance, DANOS has fully embraced the systemd init system for integrating features and relies on the latest features included in the Stable distribution. Since DANOS is only ever one release behind Debian Stable, developers may rely on a standard, up-to-date Debian package set when developing new software for DANOS. Any package that follows Debian best practices may be used with DANOS with only minor tweaks to ensure it follows DANOS's best practices.

Image Management System

DANOS has a unique image management system. It uses squashfs images and overlayfs to boot the system, via live-boot. The image management system allows users to install multiple DANOS images on the same device, select which one to boot into, add new images, and remove unwanted images. Each image is a full filesystem packed into a squashfs. Overlayfs is used to provide persistence so that changes may be made on top of the read-only image.


Installer

The DANOS installer is responsible for setting up the system such that it will work properly with the image management system.

Extension APIs

The Extension APIs are how non-core components communicate with the core. A given feature may need to use multiple extension APIs to talk to the various core components to effectively implement its functionality.

Management Plane

Management API

The management API is made up of a few different libraries.

Vyatta::Config Perl API

This is the original Vyatta management API. A fairly large portion of the system still uses this API to access configuration data. It is built as a compatibility layer on top of the newer Configd API with the original behavior retained (suboptimal behavior and all). Changes to this layer must be highly scrutinized, as some of the behaviors have subtle implications that scripts may be relying on. These scripts are considered legacy and are currently being replaced by VCI components. This API has remained backwards compatible since 2011; only extensions in functionality have been made, in ways that were compatible with the old APIs.

Configd API

This is the newer API for interfacing with the configuration infrastructure. This API can manipulate configuration, access configuration, and manage configuration sessions. It is available in several languages: Go, C, C++, Perl 5, Python 3, Ruby, and bash. Each language binding has the same semantics as its peers, with minor additions to provide language-native helpers. Changes to Configd, our configuration management daemon, will change the semantics of this API and must be reviewed carefully to ensure we retain compatibility with these APIs.

The CLI, NETCONF, and RestAPI are all built on top of the configd API.

Newer configuration action scripts may also be using this API for configuration access. These scripts are considered legacy and are currently being replaced by VCI components.


VCI

The Vyatta Component Infrastructure (VCI) is the newest mechanism for integrating features with the management plane. We are in the process of porting most features to this mechanism. It consists of a very small API and may only be extended in a backwards compatible way. Each component exposes a data-model consisting of the YANG modules it manages. Configuration data in this model is transformed by the component into a form that will be understood by the service implementing the functionality. Likewise, operational data is retrieved from a service and transformed into the data-model by the component. Components also implement RPCs and react to notifications from other components.

VCI is the API that the majority of extensions will interact with; it allows an extension to integrate with the management infrastructure layer and, consequently, with all of the northbound interfaces.


Opd API

The opd API provides access to the operational command tree. It allows one to build operational command interfaces on top of the opd infrastructure. The DANOS operational CLI and REST API both use this API to run operational commands.

AAA Plugin API

The AAA API allows one to create a plugin to integrate with an AAA system of choice. One may implement command-based accounting, path-based accounting, or both.
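The shape of such a plugin can be sketched as follows. This is an invented interface for illustration; the real DANOS plugin API differs.

```python
class AaaPlugin:
    """Toy AAA accounting plugin interface: a plugin may implement
    command-based accounting, path-based accounting, or both."""

    def account_command(self, user, command):
        raise NotImplementedError

    def account_path(self, user, path, operation):
        raise NotImplementedError

class LoggingAaa(AaaPlugin):
    """Example plugin that simply records accounting events locally
    instead of forwarding them to an external AAA server."""

    def __init__(self):
        self.records = []

    def account_command(self, user, command):
        self.records.append(("command", user, command))

    def account_path(self, user, path, operation):
        self.records.append(("path", user, path, operation))
```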


YANG

DANOS uses the YANG data modeling language for all user-visible information: configuration data, operational information, RPCs, and notifications. We strictly follow the YANG specification's Section 10 rules to ensure releases are backwards compatible with each other. This applies even when something was modeled sub-optimally in the past.

Control Plane

Route broker library

The route broker is also available as a library which may be used to integrate into a different routing protocol suite should one wish to do so. This allows reuse of the common route broker code and allows deterministic synchronization with the dataplane instances.

Vplaned API

Vplaned has an API that allows custom configuration commands to be sent to the dataplane and restored if a dataplane crashes. This API is available in many languages, including Perl 5, Python 3, C++, and Go. Perl 5 and Python 3 support text-based commands and are used for legacy feature integration. All languages support protobuf-based configuration commands.



Pipeline plugin API

The dataplane pipeline plugin API is the API that the dataplane pipeline framework uses to talk to a plugin. Plugins are loaded at runtime before constructing the dataplane's pipeline graph. 

The feature plugin layer is designed to allow features for the dataplane to be plugged in without having to modify the core dataplane code. Plugin libraries are installed in a known location and the dataplane will search for them when it is starting up. 

The pipeline is the part of the feature plugin that interacts with packets. When a packet is received, it enters the pipeline, where it makes its way through the nodes of a graph. Each node does its specific processing before passing the packet on to the next node. Once the packet reaches an output node, it is finished. The graph is constructed at dataplane start, using all the built-in nodes plus any that are added by feature plugins.

FAL plugin API

The forwarding abstraction layer is designed to provide abstraction around a forwarding layer that is independent from the dataplane forwarding path. This is typically a hardware forwarding chip, but could be used to implement specialised software forwarding paths too.

The dataplane is referred to as "the application" and the implementation of the FAL API that talks to the specialised forwarding path/hardware forwarding chip is referred to as "the FAL plugin". The FAL plugin is a shared object library that lives within the application address space.

To fit the model in which dataplane-type interfaces are backed by a DPDK port, and to provide a high-speed punt path, switchports are correspondingly backed by a DPDK port. The FAL plugin is expected to react to operations on the DPDK port as driven by the dataplane, in preference to possibly conflicting FAL operations.

A hardware platform may choose to implement a punt path via one or more DPDK-supported host-CPU-connected interfaces in order to provide a high-speed punt path. In this case, traffic from all switch ports will be multiplexed over these interfaces and so requires a header that gives the original input interface. The format of this header is private to the switch chip and its FAL plugin. Similarly, traffic being sent out of a switchport will be multiplexed over the same DPDK-supported host-CPU-connected interface and will need a header inserted that identifies the output interface. These facilities are known as the RX/TX framers.
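A toy framer might look like this. The two-byte header layout below is invented; as noted above, real framer headers are private to each switch chip and its FAL plugin.

```python
import struct

HDR = struct.Struct("!H")   # hypothetical 2-byte port id, network order

def tx_frame(port_id, frame):
    # TX framer: prepend the header naming the intended output switchport.
    return HDR.pack(port_id) + frame

def rx_deframe(data):
    # RX framer (reversed): recover the input port and the original frame
    # from traffic multiplexed over the shared punt-path interface.
    (port_id,) = HDR.unpack_from(data)
    return port_id, data[HDR.size:]
```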

Existing Additional Projects

The DANOS distribution also contains several other features that are not considered to be part of the "core". These are the components and services for the individual features other than those described in this document. Each of these components may use multiple APIs to talk to the "core" components to implement a given feature. Many consist of multiple packages potentially containing the VCI Component, dataplane plugin, YANG definitions, and/or the Linux daemon implementing the feature. Custom DANOS images can be built with or without these features without compromising the ability to use the core features of the project. DANOS side projects can be created by building minimal images without any of these if desired. This includes but isn't limited to the following:

  • DHCP Server

  • DHCP Relay

  • DNS Forwarding

  • Dynamic DNS client

  • SSH

  • Telnet

  • Port monitoring

  • SNMP

  • IPSec VPN