CGNAT and Kafka logging

This shows an example of setting up a Kafka broker on a Debian system (preferably running DANOS) which can handle messages sent by the CGNAT Kafka protobuf logging. The consumer of the messages is a Python script which prints the topic and key, and decodes the protobuf data.

First you need to install Kafka on the Debian system which is acting as the Kafka broker. The following steps are based on a standard Kafka-on-Debian installation guide, with versions updated.

```shell
# install java
sudo apt install default-jre
# check the version
java -version

# create a user for kafka
sudo useradd kafka -m
sudo passwd kafka
sudo adduser kafka sudo

# switch to the user
su -l kafka

# download kafka tgz
mkdir ~/Downloads
curl "" -o ~/Downloads/kafka.tgz

# create directory to extract to
mkdir ~/kafka && cd ~/kafka

# extract the files
tar -xvzf ~/Downloads/kafka.tgz --strip 1

# configure to allow topic deletion
echo "" >> ~/kafka/config/server.properties
echo "delete.topic.enable = true" >> ~/kafka/config/server.properties

# create zookeeper service
sudo bash
cat <<EOF >/etc/systemd/system/zookeeper.service
[Unit]

[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF

# create kafka service file
cat <<EOF >/etc/systemd/system/kafka.service
[Unit]
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1'
ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF

# start kafka (zookeeper is pulled in as a dependency)
systemctl start kafka
# check service started
journalctl -u kafka
# enable on boot
systemctl enable kafka
# exit from sudo bash
exit
```

The following does some stand-alone testing of the Kafka installation, which should be run to confirm Kafka is working before trying to consume the log messages sent by DANOS CGNAT:

```shell
# create a topic
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic

# publish to the topic
echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null

# read the info from kafka
~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TutorialTopic --from-beginning
```

Now to do something similar to the above tests, but for logs from CGNAT. First, you will need to:

a) configure CGNAT so it produces the Kafka Protobuf Logs and

b) do an action (e.g. create CGNAT sessions) in order to cause the logs to be produced.

Start by creating the topics that you have configured CGNAT to send on - note these commands are all executed under login "kafka":

```shell
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-session
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-subscriber
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-port-block-allocation
~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic cgnat-res-constraint
```

Your /etc/hosts file must have an entry for the Kafka bootstrap server, as Kafka seems to check the name and IP address match.
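For example, assuming the broker's hostname is debian-kafka and it is reachable at 10.10.1.1 (both hypothetical values for illustration), the /etc/hosts entry would look like:

```
10.10.1.1    debian-kafka
```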

For the consumer, we could use the standard kafka-console-consumer.sh, as follows, but that will not decode the data, which is in protobuf format:

```shell
~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic cgnat-session --property print.key=true --from-beginning
```

So instead we will do it using a Python Kafka consumer, which will make use of the protobuf file used by CGNAT. If your Kafka broker is also a DANOS system, then there are protobuf libraries installed which can be used; these are the files installed by the package "libvyatta-dataplane-proto-support". If you are using a different system, then you should copy the file /usr/share/vyatta-dataplane/protobuf/CgnatLogging.proto and use the "protoc" compiler to create Python libraries that can decode the CGNAT log messages. You must also install the generic Python protobuf package "python3-protobuf" and the package with a Python interface to Kafka, "kafka-python". For example:

```shell
sudo apt-get install python3-protobuf
sudo pip3 install kafka-python
```
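On a non-DANOS system, the Python bindings mentioned above can be generated from the copied .proto file with protoc. This is a sketch, assuming CgnatLogging.proto is in the current directory and that the generated module should live under vyatta/proto/ to match the import used by the consumer script (the directory layout is an assumption, not something the package mandates):

```shell
# install the protobuf compiler
sudo apt-get install protobuf-compiler

# generate Python bindings under vyatta/proto/ so that
# "from vyatta.proto import CgnatLogging_pb2" resolves
mkdir -p vyatta/proto
protoc --python_out=vyatta/proto CgnatLogging.proto
touch vyatta/__init__.py vyatta/proto/__init__.py
```

If CgnatLogging.proto declares a package, protoc will place the generated file in a matching subdirectory of the --python_out path, so adjust the import accordingly.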

Now run the script (shown below) with the appropriate topics passed in as parameters. Note that it contacts the Kafka broker on the local system, so that will need to be changed if running on a different system:

```python
#!/usr/bin/env python3

import sys

from kafka import KafkaConsumer
from vyatta.proto import CgnatLogging_pb2 as pb

if len(sys.argv) <= 1:
    print('Error: at least one topic is needed', file=sys.stderr)
    sys.exit(2)

params = sys.argv[1:]

cgnat_log = pb.CgnatLog()

consumer = KafkaConsumer(*params,
                         bootstrap_servers=['localhost:9092'],
                         auto_offset_reset='earliest',
                         enable_auto_commit=True,
                         group_id='my-group')

for message in consumer:
    print("Topic: {}".format(message.topic))
    print("Key: {}".format(message.key.decode('UTF-8')))
    print("")
    cgnat_log.ParseFromString(message.value)
    print(cgnat_log)
    print("---------")
```
Assuming the script was saved as cgnat-consume.py (the name is arbitrary):

```shell
./cgnat-consume.py cgnat-session cgnat-subscriber cgnat-port-block-allocation cgnat-res-constraint
```

Cause CGNAT log messages to be sent, and then you should see something like the following, which shows the decoding of a "session create" and a "subscriber start" protobuf message.

```
Topic: cgnat-session
Key: vm-cgn-1

sessionLog {
  cgnInstance: "vm-cgn-1"
  eventType: EVENT_SESSION_CREATE
  sessionId: 2
  subSessionId: 1
  ifName: "dp0p1s2"
  protocol: 6
  direction: DIRECTION_OUT
  subscriberAddress: 168427779
  subscriberPort: 500
  natAllocatedAddress: 168428531
  natAllocatedPort: 1024
  destinationAddress: 167772160
  destinationPort: 80
  startTimestamp {
    seconds: 1569246812343
    nanos: 924000000
  }
  state: SESSION_OPENING
  stateHistory: 0
}
---------
Topic: cgnat-subscriber
Key: vm-cgn-1

subscriberLog {
  cgnInstance: "vm-cgn-1"
  eventType: EVENT_SUBSCRIBER_START
  subscriberAddress: 168427779
  startTimestamp {
    seconds: 1567679116
    nanos: 645000000
  }
}
```
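The address fields in these logs (subscriberAddress, natAllocatedAddress, destinationAddress) are IPv4 addresses encoded as integers. Assuming they are plain big-endian IPv4 values, which matches the sample output above, Python's standard ipaddress module can render them in dotted-quad form; a minimal sketch using the values from the session log:

```python
import ipaddress

# Convert the integer-encoded IPv4 fields from the sample session log
# into dotted-quad form.
fields = [("subscriberAddress", 168427779),
          ("natAllocatedAddress", 168428531),
          ("destinationAddress", 167772160)]

for name, value in fields:
    print("{}: {}".format(name, ipaddress.ip_address(value)))

# subscriberAddress: 10.10.1.3
# natAllocatedAddress: 10.10.3.243
# destinationAddress: 10.0.0.0
```

The same conversion could be applied inside the consumer loop before printing, if human-readable addresses are preferred.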