CGNAT and Kafka logging

This shows an example of setting up a Kafka broker on a Debian system (preferably running DANOS) which can handle messages sent by the CGNAT Kafka protobuf logging feature. The consumer of the messages is a Python script which prints the topic and key, and decodes the protobuf data.

First you need to install Kafka on the Debian system which is acting as the Kafka broker. The following steps are based on https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-debian-9, with the versions updated.

# install java
sudo apt install default-jre
# check the version
java -version
# create a user for kafka
sudo useradd kafka -m
sudo passwd kafka
sudo adduser kafka sudo
# switch to the user
su -l kafka
# download kafka tgz
mkdir ~/Downloads
curl "https://www.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz" -o ~/Downloads/kafka.tgz
# create directory to extract to
mkdir ~/kafka && cd ~/kafka
# extract the files
tar -xvzf ~/Downloads/kafka.tgz --strip 1
# configure to allow topic deletion
echo "" >> ~/kafka/config/server.properties
echo "delete.topic.enable = true" >> ~/kafka/config/server.properties
# create zookeeper service
sudo bash
cat <<EOF >/etc/systemd/system/zookeeper.service
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF
# create kafka service file
cat <<EOF >/etc/systemd/system/kafka.service
[Unit]
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1'
ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF
# start the kafka service
systemctl start kafka
# check service started
journalctl -u kafka
# enable on boot
systemctl enable kafka
# exit from sudo bash
exit
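At this point ZooKeeper and Kafka should both be listening on their default ports (2181 and 9092 respectively, as used in the commands below). As a quick sanity check:

# confirm both daemons are listening on their TCP ports
ss -ltn | grep -E '2181|9092'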

The following does some stand-alone testing of the Kafka installation, and should be run to confirm Kafka is working before trying to consume the CGNAT Kafka log messages sent by DANOS:

# create a topic
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic
# publish to the topic
echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
# read the info from kafka
~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TutorialTopic --from-beginning
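If everything is working, the consumer should print the message that was just published:

Hello, World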

Now we do something similar to the above tests, but for logs from CGNAT. First, you will need to:

a) configure CGNAT so it produces the Kafka protobuf logs, and

b) perform an action (e.g. create CGNAT sessions) to cause the logs to be produced.

Next, create the topics that you have configured CGNAT to send on. Note that these commands are all executed under the "kafka" login:

~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-session
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-subscriber
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-port-block-allocation
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-res-constraint
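You can confirm that the topics were created by listing them:

~/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181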

Your /etc/hosts file must have an entry for the Kafka bootstrap server, as Kafka seems to check that the name and IP address match.
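For example, if the broker's hostname is kafka-broker (a hypothetical name used here for illustration) and it is reached at 192.0.2.10, the entry would look like:

192.0.2.10    kafka-broker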

For the consumer, we could use "kafka-console-consumer.sh", as follows, but that will not decode the data, which is in protobuf format:
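# undecoded console consumer - the cgnat-session topic is used here as an example
~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic cgnat-session --from-beginning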

So instead we will do it using a Python Kafka consumer, which will make use of the protobuf file used by CGNAT. If your Kafka broker is also a DANOS system, then the protobuf libraries are already installed and can be used; these are the files installed by the package "libvyatta-dataplane-proto-support". If you are using a different system, then you should copy the file /usr/share/vyatta-dataplane/protobuf/CgnatLogging.proto and use the "protoc" compiler to generate Python modules that can decode the CGNAT log messages. You must also install the generic Python protobuf package "python3-protobuf" and the package providing a Python interface to Kafka, "kafka-python". For example:
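A sketch of those steps (the apt commands assume Debian; kafka-python is the pip name and may be packaged as python3-kafka by your distribution):

# install the python protobuf support and the protoc compiler
sudo apt install python3-protobuf protobuf-compiler
# install the python interface to kafka
sudo pip3 install kafka-python
# on a non-DANOS system: generate CgnatLogging_pb2.py from the copied .proto file
protoc --python_out=. CgnatLogging.proto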

Now run the script consumer.py (shown below) with the appropriate topics passed in as parameters. Note that it contacts the Kafka broker on the local system, so that will need to be changed if running on a different system:
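The following is a minimal sketch of such a consumer script. The protobuf message type used below (CgnatLog) is an assumption for illustration; check CgnatLogging.proto for the real message names and pick the type matching each topic:

#!/usr/bin/env python3
# consumer.py - a sketch of a Kafka consumer that decodes CGNAT
# protobuf log messages.
import sys
from kafka import KafkaConsumer
# module generated by protoc from CgnatLogging.proto
import CgnatLogging_pb2

# the topics to subscribe to are passed as command-line parameters
topics = sys.argv[1:]

# contact the Kafka broker on the local system; change
# bootstrap_servers if running on a different system
consumer = KafkaConsumer(*topics,
                         bootstrap_servers='localhost:9092',
                         auto_offset_reset='earliest')

for msg in consumer:
    # print the topic and key of each message
    print("topic: {}".format(msg.topic))
    print("key: {}".format(msg.key))
    # decode and print the protobuf payload; CgnatLog is an assumed
    # message name - use the real ones from CgnatLogging.proto
    log = CgnatLogging_pb2.CgnatLog()
    log.ParseFromString(msg.value)
    print(log)

For example, to consume the session and subscriber topics created above:

python3 consumer.py cgnat-session cgnat-subscriber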

Cause CGNAT log messages to be sent, and then you should see something like the following, which shows the decoding of a "session create" and a "subscriber start" protobuf message.