(Note: The akka.yaml file is going to be merged with service.yaml. In that case, this procedure will change slightly.)
Once the SDN-C pods are up and running (this might take 5 to 7 minutes), akka.conf should be updated with the IP addresses configured in values.yaml for ODL clustering.
- Log in to one of the SDN-C pods and check akka.conf in /opt/opendaylight/current/configuration/initial/. Example akka.conf:

```
odl-cluster-data {
  akka {
    remote {
      artery {
        enabled = off
        canonical.hostname = "10.147.114.5"
        canonical.port = 30251
      }
      netty.tcp {
        bind-hostname = "10.36.0.3"
        bind-port = 2550
        hostname = "10.147.114.5"
        port = 30251
      }
      # when under load we might trip a false positive on the failure detector
      # transport-failure-detector {
      #   heartbeat-interval = 4 s
      #   acceptable-heartbeat-pause = 16s
      # }
    }
    cluster {
      # Remove ".tcp" when using artery.
      seed-nodes = [
        "akka.tcp://opendaylight-cluster-data@10.147.114.5:30251",
        "akka.tcp://opendaylight-cluster-data@10.147.114.5:30252",
        "akka.tcp://opendaylight-cluster-data@10.147.114.5:30253",
        "akka.tcp://opendaylight-cluster-data@10.147.114.140:30251",
        "akka.tcp://opendaylight-cluster-data@10.147.114.140:30252",
        "akka.tcp://opendaylight-cluster-data@10.147.114.140:30253"
      ]
      roles = [
        "member-1"
      ]
    }
    persistence {
      # By default the snapshots/journal directories live in KARAF_HOME. You can choose to put them somewhere else by
      # modifying the following two properties. The directory location specified may be a relative or absolute path.
      # The relative path is always relative to KARAF_HOME.
      # snapshot-store.local.dir = "target/snapshots"
      # journal.leveldb.dir = "target/journal"
      journal {
        leveldb {
          # Set native = off to use a Java-only implementation of leveldb.
          # Note that the Java-only version is not currently considered by Akka to be production quality.
          # native = off
        }
      }
    }
  }
}
```
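The seed-nodes list above follows a simple pattern: every cluster node IP combined with each member port. A small helper (hypothetical, not part of the deployment scripts) could generate it from the values configured in values.yaml:

```python
# Sketch: generate the ODL seed-nodes list from node IPs and member ports.
# The IPs and ports below are the example values from the akka.conf above;
# adjust them for your own cluster.
def seed_nodes(node_ips, ports):
    """Build an akka.tcp seed-node URI for every (ip, port) combination."""
    return [
        f"akka.tcp://opendaylight-cluster-data@{ip}:{port}"
        for ip in node_ips
        for port in ports
    ]

nodes = seed_nodes(["10.147.114.5", "10.147.114.140"], [30251, 30252, 30253])
print("\n".join(nodes))
```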
To test ODL clustering:

Using the Jolokia RestConf API

To monitor the status of the cluster, enable Jolokia support in OpenDaylight (if not enabled already):
```
opendaylight-user@root>feature:install odl-jolokia
opendaylight-user@root>feature:list -i | grep odl-jolokia
odl-jolokia | 1.8.2.Carbon-redhat-3 | x | odl-extras-1.8.2.Carbon-redhat-3 | Jolokia JMX/HTTP bridge
```
Run the Jolokia RestConf read API to review ODL cluster details such as "PeerAddresses", "PeerVotingStates", "Voting", "ShardName", "FollowerInfo", and "Leader" for any shard. Use user admin, password admin to access.
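A minimal sketch of calling this API from Python, using the example node address from this page and the default admin/admin credentials mentioned above (adjust both for your deployment):

```python
import base64
import urllib.request

# Jolokia read endpoint on the SDN-C node (example address from this page).
url = ("http://10.147.114.5:30202/jolokia/read/"
       "org.opendaylight.controller:type=DistributedOperationalDatastore,"
       "Category=ShardManager,name=shard-manager-operational")

# Default ODL credentials as noted above; change if your deployment differs.
token = base64.b64encode(b"admin:admin").decode("ascii")
request = urllib.request.Request(url, headers={"Authorization": "Basic " + token})

# Uncomment to actually query a running cluster:
# with urllib.request.urlopen(request) as resp:
#     print(resp.read().decode())
```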
To list shards, use the APIs below.

Operational Data Store:

http://<site-master-ip>:30202/jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational
```
Example:
http://10.147.114.5:30202/jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational

Response:
{
  "request": {
    "mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore",
    "type": "read"
  },
  "value": {
    "LocalShards": [
      "member-1-shard-default-operational",
      "member-1-shard-prefix-configuration-shard-operational",
      "member-1-shard-topology-operational",
      "member-1-shard-entity-ownership-operational",
      "member-1-shard-inventory-operational",
      "member-1-shard-toaster-operational"
    ],
    "SyncStatus": true,
    "MemberName": "member-1"
  },
  "timestamp": 1523296054,
  "status": 200
}
```
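The fields of interest in this response can also be pulled out programmatically. A sketch, using the response above as canned data:

```python
import json

# Canned copy of the shard-manager-operational response shown above.
response = json.loads("""
{
  "value": {
    "LocalShards": [
      "member-1-shard-default-operational",
      "member-1-shard-prefix-configuration-shard-operational",
      "member-1-shard-topology-operational",
      "member-1-shard-entity-ownership-operational",
      "member-1-shard-inventory-operational",
      "member-1-shard-toaster-operational"
    ],
    "SyncStatus": true,
    "MemberName": "member-1"
  },
  "status": 200
}
""")

value = response["value"]
# A healthy member reports SyncStatus true and status 200.
in_sync = value["SyncStatus"] and response["status"] == 200
print(value["MemberName"], "in sync:", in_sync)
for shard in value["LocalShards"]:
    print(" ", shard)
```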
Config Data Store:

http://<site-master-ip>:30202/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config

```
Example:
http://10.147.114.5:30202/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config

Response:
{
  "request": {
    "mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore",
    "type": "read"
  },
  "value": {
    "LocalShards": [
      "member-1-shard-default-config",
      "member-1-shard-prefix-configuration-shard-config",
      "member-1-shard-topology-config",
      "member-1-shard-inventory-config",
      "member-1-shard-toaster-config"
    ],
    "SyncStatus": true,
    "MemberName": "member-1"
  },
  "timestamp": 1523295916,
  "status": 200
}
```
- The exact names from the “LocalShards” lists are needed for further exploration, as they will be used as part of the URI to look up detailed info on a particular shard.
- The output helps to identify shard state (leader/follower, voting/non-voting), peers, follower details if the shard is a leader, and other statistics/counters.
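Building the shard-specific URI from a "LocalShards" name is mechanical; a small helper (hypothetical, using the example host and port from this page) might look like:

```python
# Sketch: map a LocalShards entry to its Jolokia read URL.
def shard_url(host, port, shard_name):
    """Build the Jolokia read URI for a specific shard."""
    # "-config" shards live in the config datastore; the rest are operational.
    datastore = ("DistributedConfigDatastore" if shard_name.endswith("-config")
                 else "DistributedOperationalDatastore")
    return (f"http://{host}:{port}/jolokia/read/org.opendaylight.controller:"
            f"Category=Shards,name={shard_name},type={datastore}")

print(shard_url("10.147.114.5", 30202, "member-1-shard-inventory-config"))
```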
To run the Jolokia RestConf read API to review ODL cluster details such as "PeerAddresses", "PeerVotingStates", "Voting", "ShardName", "FollowerInfo", and "Leader" for shard member-1-shard-inventory-config, use:
```
Example:
http://10.147.114.5:30202/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore

Response:
{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore",
    "type": "read"
  },
  "value": {
    "ReadWriteTransactionCount": 0,
    "SnapshotIndex": -1,
    "InMemoryJournalLogSize": 0,
    "ReplicatedToAllIndex": -1,
    "Leader": "member-4-shard-inventory-config",
    "LastIndex": -1,
    "RaftState": "Follower",
    "LastCommittedTransactionTime": "1970-01-01 00:00:00.000",
    "LastApplied": -1,
    "LastLogIndex": -1,
    "LastLeadershipChangeTime": "2018-04-09 16:22:13.425",
    "PeerAddresses": "member-2-shard-inventory-config: akka.tcp://opendaylight-cluster-data@10.147.114.5:30265/user/shardmanager-config/member-2-shard-inventory-config, member-6-shard-inventory-config: akka.tcp://opendaylight-cluster-data@10.147.114.140:30266/user/shardmanager-config/member-6-shard-inventory-config, member-5-shard-inventory-config: akka.tcp://opendaylight-cluster-data@10.147.114.140:30265/user/shardmanager-config/member-5-shard-inventory-config, member-3-shard-inventory-config: akka.tcp://opendaylight-cluster-data@10.147.114.5:30266/user/shardmanager-config/member-3-shard-inventory-config, member-4-shard-inventory-config: akka.tcp://opendaylight-cluster-data@10.147.114.140:30264/user/shardmanager-config/member-4-shard-inventory-config",
    "WriteOnlyTransactionCount": 0,
    "FollowerInitialSyncStatus": true,
    "FollowerInfo": [],
    "FailedReadTransactionsCount": 0,
    "StatRetrievalTime": "190.8 μs",
    "Voting": true,
    "CurrentTerm": 15,
    "LastTerm": -1,
    "FailedTransactionsCount": 0,
    "PendingTxCommitQueueSize": 0,
    "VotedFor": "member-4-shard-inventory-config",
    "SnapshotCaptureInitiated": false,
    "CommittedTransactionsCount": 0,
    "TxCohortCacheSize": 0,
    "PeerVotingStates": "member-2-shard-inventory-config: true, member-6-shard-inventory-config: true, member-5-shard-inventory-config: true, member-3-shard-inventory-config: true, member-4-shard-inventory-config: true",
    "LastLogTerm": -1,
    "StatRetrievalError": null,
    "CommitIndex": -1,
    "SnapshotTerm": -1,
    "AbortTransactionsCount": 0,
    "ReadOnlyTransactionCount": 0,
    "ShardName": "member-1-shard-inventory-config",
    "LeadershipChangeCount": 3,
    "InMemoryJournalDataSize": 0
  },
  "timestamp": 1523295450,
  "status": 200
}
```
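A quick health summary from such a shard response might be sketched as follows, using a trimmed-down copy of the member-1-shard-inventory-config reply above as canned data:

```python
# 'value' holds the fields of interest from the shard response shown above.
value = {
    "ShardName": "member-1-shard-inventory-config",
    "RaftState": "Follower",
    "Leader": "member-4-shard-inventory-config",
    "Voting": True,
    "PeerVotingStates": ("member-2-shard-inventory-config: true, "
                         "member-6-shard-inventory-config: true, "
                         "member-5-shard-inventory-config: true, "
                         "member-3-shard-inventory-config: true, "
                         "member-4-shard-inventory-config: true"),
}

# Parse the "peer: true, peer: false" string into a dict of booleans.
peers = {}
for entry in value["PeerVotingStates"].split(", "):
    name, voting = entry.split(": ")
    peers[name] = (voting == "true")

# One simple health criterion: the shard knows a leader and all peers are voting.
healthy = bool(value["Leader"]) and all(peers.values())
print(f'{value["ShardName"]}: {value["RaftState"]}, '
      f'leader={value["Leader"]}, healthy={healthy}')
```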
Using the cluster monitoring tool:

- Download the cluster monitoring tool from GitHub.
- Update ./integration-test/tools/clustering/cluster-monitor/cluster.json with the IPs (from above) of your ODL cluster nodes.
- Install the attached monitor.py script in <integration-test repo>/tools/clustering/cluster-monitor and run it.
Something like this should appear on each master.