Docker Diagram


| Docker name | Description |
| --- | --- |
| sdc-cassandra | Contains the Cassandra server and the logic for creating the schemas needed by SDC. On Docker startup, the schemas are created and the Cassandra server is started. |
| sdc-elasticsearch | Contains the Elasticsearch server and the logic for creating the mappings needed by SDC. On Docker startup, the mapping is created and the Elasticsearch server is started. |
| sdc-kibana | Contains the Kibana server and the logic for creating the SDC views there. On Docker startup, the views are configured and the Kibana server is started. |
| sdc-backend | Contains the SDC Backend Jetty server. On Docker startup, the Jetty server is started with our application. |
| sdc-frontend | Contains the SDC Frontend Jetty server. On Docker startup, the Jetty server is started with our application. |
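On a running SDC host, these containers can be listed with the Docker CLI; a minimal sketch (the container names come from the table above, everything else is assumed):

```bash
#!/bin/bash
# List the SDC containers with their status and published ports.
# Assumes the dockers are named as in the table above (sdc-*).
docker ps --filter "name=sdc-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```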


Connectivity Matrix

| Docker name | API name | API purpose | Protocol used | Port number or range | TCP/UDP |
| --- | --- | --- | --- | --- | --- |
| sdc-cassandra | | The SDC backend uses these two protocols to access Cassandra. | thrift/async | 9042/9160 | TCP |
| sdc-elasticsearch | | The SDC backend uses these ports to access Elasticsearch. | transport | 9200/9300 | TCP |
| sdc-kibana | | This API is used to access the Kibana UI. | http | 5601 | TCP |
| sdc-backend | | These APIs are used to access the SDC functionality. | http/https | 8080/8443 | TCP |
| sdc-frontend | | These APIs are used to access the SDC UI and to proxy requests to the SDC backend. | http/https | 8181/9443 | TCP |
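A quick way to spot-check the ports listed above from the Docker host, assuming nc (netcat) is available; the host name is a placeholder:

```bash
#!/bin/bash
# Spot-check the ports from the connectivity matrix above.
HOST=localhost   # placeholder; replace with the address of the SDC VM
for port in 9042 9160 9200 9300 5601 8080 8443 8181 9443; do
  if nc -z -w 2 "$HOST" "$port"; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```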


Offered APIs


| Container/VM name | API name | API purpose | Protocol used | Port number or range used | TCP/UDP |
| --- | --- | --- | --- | --- | --- |
| sdc-fe | /sdc1/feproxy/* | Proxy for all the REST calls from the SDC UI. | HTTP/HTTPS | 8181/8443 | TCP |
| sdc-be | /sdc2/* | Internal APIs used by the UI. The request is passed through the front-end proxy server. | HTTP/HTTPS | 8080/8443 | TCP |
| sdc-be | /sdc/* | External APIs offered to the different components for retrieving information from the SDC Catalog. These APIs are protected by basic authentication. | HTTP/HTTPS | 8080/8443 | TCP |
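Since the /sdc/* external APIs are protected by basic authentication, a client call could look like the following sketch; the host, credentials, and the exact path under /sdc/ are placeholders, not taken from this page:

```bash
#!/bin/bash
# Illustrative call to an external SDC catalog API through the BE HTTPS port.
# <sdc-be-host>, <user>, <password> and <api-path> are placeholders.
curl -k -u "<user>:<password>" \
     -H "Accept: application/json" \
     "https://<sdc-be-host>:8443/sdc/<api-path>"
```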


Logging/Diagnostic Information

Diagnostic:

We provide a health check script that shows the state of the application.
The script is located at /data/scripts/docker_health.sh.
The script is taken from our repository in LF when the VM is spun up.
The script calls a REST API on the FE and BE servers.
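A minimal sketch of such health check calls is shown below; the REST paths are assumptions for illustration (the actual logic lives in the docker_health.sh script mentioned above), and the ports are the FE/BE HTTP ports from the connectivity matrix:

```bash
#!/bin/bash
# Illustrative health check calls against the FE and BE servers.
# The /sdc1/rest/healthCheck and /sdc2/rest/healthCheck paths are assumptions.
FE_URL="http://localhost:8181/sdc1/rest/healthCheck"
BE_URL="http://localhost:8080/sdc2/rest/healthCheck"

for url in "$FE_URL" "$BE_URL"; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  echo "$url -> HTTP $code"
done
```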

The back-end health check provides the following info; if one of the components is down, the server will fail requests:


general SDC info
    "sdcVersion": "1.0.0-SNAPSHOT",
    "siteMode": "unknown"
  This shows the current version of the Catalog application installed. The site mode is not used in the current version.

general Catalog info
    {
      "healthCheckComponent": "BE",
      "healthCheckStatus": "UP",
      "version": "1.0.0-SNAPSHOT",
      "description": "OK"
    }
  This shows the current version of the Catalog application installed.

Catalog sub-component status

  Elasticsearch
    {
      "healthCheckComponent": "ES",
      "healthCheckStatus": "UP",
      "description": "OK"
    }
  This describes our connectivity to Elasticsearch.

  TITAN
    {
      "healthCheckComponent": "TITAN",
      "healthCheckStatus": "UP",
      "description": "OK"
    }
  This describes our connectivity to and from the Titan client and the Cassandra server.

  Cassandra
    {
      "healthCheckComponent": "CASSANDRA",
      "healthCheckStatus": "UP",
      "description": "OK"
    }
  This describes the connectivity status from the Catalog to Cassandra.

  DMaaP
    {
      "healthCheckComponent": "DE",
      "healthCheckStatus": "UP",
      "description": "OK"
    }
  This describes our connectivity to DMaaP.

  Onboarding
    {
      "healthCheckComponent": "ON_BOARDING",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    }
  This describes the state and version of the onboarding sub-component.

Onboarding sub-component status

  Zusammen
    {
      "healthCheckComponent": "ZU",
      "healthCheckStatus": "UP",
      "version": "0.2.0",
      "description": "OK"
    }
  This describes the version and status of Zusammen.

  general Onboarding info
    {
      "healthCheckComponent": "BE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    }
  This describes the state and version of the onboarding sub-component.

  Cassandra
    {
      "healthCheckComponent": "CAS",
      "healthCheckStatus": "UP",
      "version": "2.1.17",
      "description": "OK"
    }
  This describes the connectivity status from onboarding to Cassandra and the Cassandra version onboarding is connected to.



The front-end server health check places a REST call to the back-end server to check the connectivity status of the servers.

The status received from the back-end server is aggregated into the front-end health check response.

In addition to the info retrieved from the BE, the front-end server adds its own info for the Catalog and Onboarding.

general SDC info (in the main section)
    {
      "healthCheckComponent": "FE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    }
  This describes the version of the Catalog front-end server.

general Onboarding info (in the onboarding section)
    {
      "healthCheckComponent": "FE",
      "healthCheckStatus": "UP",
      "version": "1.1.0-SNAPSHOT",
      "description": "OK"
    }
  This describes the version of the Onboarding front-end server.

Logging:


BE
  /data/logs/BE/2017_03_10.stderrout.log (Jetty server log)
    Describes info regarding Jetty startup and execution. The log rolls daily.
  /data/logs/BE/SDC/SDC-BE/audit.log (application audit)
    An audit record is created for each operation in SDC. Rolls at 20 MB.
  /data/logs/BE/SDC/SDC-BE/debug.log (application logging)
    Holds the debug- and trace-level output of the application. Higher logging can be enabled on demand by editing the logback.xml inside the server Docker, located under config/catalog-be/logback.xml. Rolls at 20 MB.
  /data/logs/BE/SDC/SDC-BE/error.log (application logging)
    Holds the info- and error-level output of the application. Rolls at 20 MB.
  /data/logs/BE/SDC/SDC-BE/transaction.log (application logging)
    Not currently in use; will be used in future releases. Rolls at 20 MB.
  /data/logs/BE/SDC/SDC-BE/all.log (application logging)
    Holds all logging output of the application. On demand, log aggregation into one file can be enabled for easier debugging by editing the logback.xml inside the server Docker, located under config/catalog-be/logback.xml, and setting the property <property scope="context" name="enable-all-log" value="false" /> to true (a sketch of doing this from the Docker host follows this section). Rolls at 20 MB.

FE
  /data/logs/FE/2017_03_10.stderrout.log (Jetty server log)
    Describes info regarding Jetty startup and execution. The log rolls daily.
  /data/logs/FE/SDC/SDC-FE/debug.log (application logging)
    Holds the debug- and trace-level output of the application. Higher logging can be enabled on demand by editing the logback.xml inside the server Docker, located under config/catalog-fe/logback.xml. Rolls at 20 MB.
  /data/logs/FE/SDC/SDC-FE/error.log (application logging)
    Holds the info- and error-level output of the application. Rolls at 20 MB.
  /data/logs/FE/SDC/SDC-FE/all.log (application logging)
    Holds all logging output of the application. On demand, log aggregation into one file can be enabled for easier debugging by editing the logback.xml inside the server Docker, located under config/catalog-fe/logback.xml, and setting the property <property scope="context" name="enable-all-log" value="false" /> to true. Rolls at 20 MB.
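For reference, flipping the enable-all-log property from the Docker host could look like the sketch below; the location of logback.xml inside the container is an assumption, and the server may need a restart for the change to take effect:

```bash
#!/bin/bash
# Illustrative: enable the aggregated all.log by switching enable-all-log to true.
# The base directory of logback.xml inside the container is an assumption; adjust as needed.
LOGBACK="<path-to>/config/catalog-be/logback.xml"
docker exec sdc-backend \
  sed -i 's|name="enable-all-log" value="false"|name="enable-all-log" value="true"|' \
  "$LOGBACK"
```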


The logs are mapped from the Docker container to an outside path so that, on Docker failure, the logs are still available.
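A minimal sketch of such a mapping, assuming the containers are started with a bind mount; the container-side path and the image name are assumptions, while the host path /data/logs comes from the table above:

```bash
#!/bin/bash
# Illustrative: map the BE log directory out of the container with a bind mount.
# The container-side path (/var/lib/jetty/logs) and <sdc-backend-image> are assumptions.
docker run -d --name sdc-backend \
  -v /data/logs/BE:/var/lib/jetty/logs \
  <sdc-backend-image>
```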

