Logback Configuration and ELK Stack
(Ref: CCSDK-4102: Make changes to allow log streaming in the A1-PMS)
The A1 Policy Management Service (A1PMS) supports different log formats and behaviours through its Logback configuration. This is useful for readability, but also for sharing logs with other services, e.g. log aggregation services. This document outlines one way to use a Logback XML configuration together with the ELK stack (Elasticsearch, Logstash and Kibana) to format, collect, parse, visualise and display logs coming from A1PMS.
Prerequisites
Docker Compose
Maven
A1PMS code base checked out from Git
Java 17
ELK Stack
First, we will outline how to use Docker Compose to deploy the ELK stack on a local machine. The relevant docker-compose file is below:
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.3
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.3
    container_name: logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044" # TCP input for Logstash
      - "9600:9600" # Logstash monitoring API
    depends_on:
      - elasticsearch
    networks:
      - elk

  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.3
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elk

networks:
  elk:
    driver: bridge
Three services are deployed by this Docker Compose file:
Logstash – Log Collector & Processor
Ingests logs from various sources, processes them (parsing, filtering, transforming), and forwards them to a destination such as Elasticsearch. It supports multiple input sources (files, TCP, Kafka, Syslog, Beats, etc.); in this case we will use the TCP input.
Elasticsearch – Log Storage & Search Engine
Stores logs in a searchable and structured format, allowing fast queries and analytics.
Stores log data as JSON documents.
Kibana – Log Visualization & Dashboarding
Provides a UI to explore, analyze, and visualize log data stored in Elasticsearch.
Supports searching and filtering logs.
These tools perform many more functions than are described here, but the descriptions above cover what we are interested in.
Deployment of these services can be done by:
Copying and pasting the above into a file named docker-compose.yaml.
Copying and pasting the below into a file named logstash.conf in the same directory as the docker-compose.yaml file. This file configures Logstash to forward the streamed logs to Elasticsearch:
input {
  tcp {
    port => 5044
    codec => json_lines
  }
}

filter {
  mutate {
    add_field => { "service" => "a1pms-app" }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "a1-pms-spring-boot-logs-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Then, we can run the below command from the same directory as the docker-compose.yaml file and the logstash.conf file.
docker compose up -d
That should bring up all the services.
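A quick way to sanity-check the stack is shown below (the commands assume the default ports from the compose file above; security is disabled, so no credentials are needed):

docker compose ps                    # all three containers should be running
curl http://localhost:9200           # Elasticsearch should answer with cluster information
curl "http://localhost:9600/?pretty" # Logstash monitoring API should answer with node information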
A1PMS Logback Configuration
The next step is to change the Logback configuration of A1PMS to "feed into" Logstash. Using the logback-custom.xml file below does a few things:
Creates one appender that formats the logs as JSON and adds some fields.
Sends the logs to Logstash.
Creates one appender that outputs to the console in a plain log format with some additional fields.
<configuration>

  <appender name="json" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5044</destination>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <version>
          <fieldName>version</fieldName>
          <version>1.2.0</version>
        </version>
        <timestamp>
          <fieldName>timestamp</fieldName>
          <pattern>yyyy-MM-dd'T'HH:mm:ss.SSSZ</pattern>
        </timestamp>
        <pattern>
          <omitEmptyFields>true</omitEmptyFields>
          <pattern>
            {
              "service_id": "${SERVICE_ID:-a1pms}",
              "message": "%msg",
              "facility": "%X{facility}",
              "subject": "%X{subject}",
              "extra_data": {
                "logger": "%logger",
                "thread_info": {
                  "thread_name": "%thread"
                },
                "dst": {
                  "trace_id": "%mdc{traceId}"
                },
                "exception": {
                  "stack_trace": "%xEx"
                }
              },
              "metadata": {
                "application_id": "${APP_ID:-a1pms}"
              }
            }
          </pattern>
        </pattern>
      </providers>
    </encoder>
  </appender>

  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>
        %d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} [%thread] %-5level %logger - %msg [facility=%X{facility}, subject=%X{subject}, traceId=%mdc{traceId}] %n%xEx
      </pattern>
    </encoder>
  </appender>

  <root level="${ROOT_LOG_LEVEL:-INFO}">
    <appender-ref ref="json"/>
    <appender-ref ref="console"/>
  </root>

  <logger name="/" level="${ROOT_LOG_LEVEL:-INFO}"/>

</configuration>
Copy and paste this into a logback-custom.xml file in the src/main/resources directory of the a1-policy-management-service directory.
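The %X{facility}, %X{subject} and %mdc{traceId} conversion words in both appenders read values from SLF4J's Mapped Diagnostic Context (MDC); in A1PMS these fields are populated by the application itself. As a purely illustrative sketch of the mechanism (the class and values below are made up for the example), such fields become non-empty when code puts them into the MDC before logging:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {
    private static final Logger log = LoggerFactory.getLogger(MdcExample.class);

    public static void main(String[] args) {
        // Values placed in the MDC are picked up by %X{...} / %mdc{...} in the Logback patterns
        MDC.put("facility", "log audit");
        MDC.put("subject", "n/av");
        MDC.put("traceId", "491a39122e0356fa4e10928fa8758aa2");
        try {
            log.info("Example message"); // emitted by both the console and the JSON appender
        } finally {
            MDC.clear(); // always clear the per-request context afterwards
        }
    }
}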
We will then need to point to that custom file in our application.yaml file, located in a1-policy-management-service/config. The below should be added to that file (under the "logging" key):
logging:
  config: ${LOGBACK_CONFIG_FILE:classpath:logback-custom.xml}
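Note that the LogstashTcpSocketAppender and LoggingEventCompositeJsonEncoder classes used in logback-custom.xml come from the logstash-logback-encoder library. If that library is not already among the dependencies of a1-policy-management-service, something along these lines would need to be added to its pom.xml (the version shown is only an example):

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- example version; align with the project's dependency management -->
    <version>7.4</version>
</dependency>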
Once that is done, we are ready to run the build.
Running the build
In this section, we finally build and run A1PMS; the running application will generate logs and stream them to Logstash. They will then be visible in Kibana.
mvn clean install
(Note: It is possible to change logging settings & application configuration without re-compiling or re-assembling the application - but for this case it is easier to understand and experiment by rebuilding)
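For example, because application.yaml resolves ${LOGBACK_CONFIG_FILE:...}, an external Logback file could be supplied at start-up without rebuilding (the file and jar paths below are illustrative):

# Point A1PMS at an external Logback configuration via the LOGBACK_CONFIG_FILE variable
LOGBACK_CONFIG_FILE=/tmp/logback-custom.xml \
  java -jar a1-policy-management-service/target/a1-policy-management-service-*.jar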
We execute the newly built, adapted application, then invoke some operations to induce logging messages.
We then notice that logs in the console are formatted in a plain format, like the example below:
2025-03-19T15:59:22.108+0000 [reactor-http-epoll-6] INFO org.onap.ccsdk.oran.a1policymanagementservice.util.v3.ReactiveEntryExitFilter - For the request ID: 3e937e51 the Status code of the response: 201 CREATED [facility=log audit, subject=n/av, traceId=491a39122e0356fa4e10928fa8758aa2]
Now we can look at Kibana, where we expect the JSON-formatted logs to have arrived for visualisation.
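Based on the encoder configuration above, plus the service field added by the Logstash filter (Logstash also adds a few fields of its own, such as @timestamp), each indexed document should look roughly like the following; the values here are illustrative, taken from the console example above:

{
  "version": "1.2.0",
  "timestamp": "2025-03-19T15:59:22.108+0000",
  "service_id": "a1pms",
  "message": "For the request ID: 3e937e51 the Status code of the response: 201 CREATED",
  "facility": "log audit",
  "subject": "n/av",
  "extra_data": {
    "logger": "org.onap.ccsdk.oran.a1policymanagementservice.util.v3.ReactiveEntryExitFilter",
    "thread_info": { "thread_name": "reactor-http-epoll-6" },
    "dst": { "trace_id": "491a39122e0356fa4e10928fa8758aa2" }
  },
  "metadata": { "application_id": "a1pms" },
  "service": "a1pms-app"
}

Note that omitEmptyFields is set, so the exception block only appears when there is a stack trace.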
We create a data view whose index pattern matches the a1-pms-spring-boot-logs-* indices created by Logstash.
Then we can browse the logs in Kibana's Discover view.
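If no logs appear, querying Elasticsearch directly helps narrow down whether the problem is on the Logstash side or the Kibana side:

curl "http://localhost:9200/a1-pms-spring-boot-logs-*/_search?pretty&size=1"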
This guide is not intended to be a full guide to using the ELK stack, but just to enable this functionality for A1PMS.