Test # | Comment | Message |
---|---|---
| k8 utilization Wed May 20 18:45:15 UTC 2020 | Memory: root@long-nfs:~/oom/kubernetes/robot# kubectl -n onap top pods | sort -rn -k 3 | head -25 dev-appc-0 7m 2901Mi dev-portal-cassandra-59f5cb4cf5-9phmg 159m 2777Mi dev-appc-2 10m 2705Mi dev-appc-1 19m 2681Mi dev-cassandra-0 73m 2417Mi dev-cassandra-2 48m 2394Mi dev-cassandra-1 70m 2391Mi dev-sdnc-2 71m 1868Mi dev-policy-59f48bd84b-q2fp8 7m 1820Mi dev-sdnc-0 139m 1627Mi dev-sdnc-1 26m 1574Mi dev-vid-5b7558dcdc-rx2d7 9m 1510Mi dev-clamp-dash-es-6cb85979b5-cvrcs 32m 1480Mi dev-awx-0 244m 1434Mi dev-aai-elasticsearch-55b56f855c-f5pp5 2m 1422Mi dev-sdc-be-77d55774f5-zkfrt 6m 1381Mi dev-dcae-cloudify-manager-6f854859f9-ctdcv 90m 1312Mi dep-dcae-tca-analytics-55dbd5cd9d-fsm89 511m 1262Mi dev-aaf-cass-7d55bfc874-sqcdq 6m 1244Mi dev-aai-traversal-847c4c6994-qbpst 3m 956Mi dev-so-bpmn-infra-7b58b75b76-n59sf 5m 953Mi dev-message-router-zookeeper-2 2m 946Mi dev-aai-resources-74dd6994d4-nh24m 5m 869Mi dev-aai-graphadmin-65db8cfc67-svvkd 2m 836Mi dev-music-cassandra-2 147m 801Mi |
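The `kubectl -n onap top pods | sort -rn -k 3` rows above are easier to work with programmatically. A minimal sketch for parsing one row (pure Python, no kubectl dependency; the function name is ours, not part of the tooling):

```python
def parse_top_pod(line):
    """Parse one `kubectl top pod` row, e.g. 'dev-appc-0 7m 2901Mi',
    into (pod name, CPU in millicores, memory in MiB)."""
    name, cpu, mem = line.split()
    return name, int(cpu.rstrip("m")), int(mem.rstrip("Mi"))
```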
#1 | Startup issues - modified customer uuid to shorten the string in the tooling since it looked like robot selenium was having trouble "seeing" the string in the drop down. | vDNS: NoSuchElementException: Message: Could not locate element with visible text: ETE_Customer_aaaf3926-d765-4c47-93b9-857e674d2d01 vvG: NoSuchElementException: Message: Could not locate element with visible text: ETE_Customer_08f8a099-3e2b-480f-8153-5b4173d9394a vFW: Succeeded
|
#4 | ${vnf} = vFWCLvPKG. The Robot heatbridge run after the deployment failed to find the stack in OpenStack, which usually means OpenStack was slow deploying the VNF. Heatbridge had succeeded for the vFWCLvSNK inside the same service instantiate. | Keyword 'Get Deployed Stack' failed after retrying for 10 minutes. The last error was: KeyError: 'stack' |
#13 | ${vnf} = vFWCLvPKG. Same as #4: the Robot heatbridge run after the deployment failed to find the stack in OpenStack, which usually means OpenStack was slow deploying the VNF. Heatbridge had succeeded for the vFWCLvSNK inside the same service instantiate. | Keyword 'Get Deployed Stack' failed after retrying for 10 minutes. The last error was: KeyError: 'stack' |
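The KeyError: 'stack' comes from reading the stack response before OpenStack exposes the new stack, so the retry loop keeps seeing a body with no 'stack' key until it times out. A hedged sketch of that retry shape (the function and `fetch` callable are illustrative, not the actual robot keyword):

```python
import time

def get_deployed_stack(fetch, timeout_s=600, interval_s=15, sleep=time.sleep):
    """Poll OpenStack for a heat stack, treating a response without a
    'stack' key as "not visible yet" until the timeout expires.
    `fetch` is a hypothetical callable returning the parsed JSON body."""
    deadline = time.monotonic() + timeout_s
    while True:
        body = fetch()
        if "stack" in body:
            return body["stack"]
        if time.monotonic() >= deadline:
            raise KeyError("stack")  # matches the robot failure above
        sleep(interval_s)
```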
#14 | vDNS and vVG robot scripts couldn't find elements in the GUI drop-downs. Likely transient networking issues. vFW succeeded, and all three are in the test run (vDNS, vVG, vFW, in that order). | vDNS: Keyword 'Wait For Model' failed after retrying for 3 minutes. The last error was: Element 'xpath=//tr[td/span/text() = 'vLB 2020-05-20 13-06-03']/td/button[contains(text(),'Deploy')]' not visible after 1 minute.
vVG: NoSuchElementException: Message: Could not locate element with visible text: ETE_Customer_9f739343-cbc7-4ee4-8697-ea52f06e7796
vFW Succeeded |
#15 | Virtual Volume Group - failure in robot selenium to find the customer in the search window. Timing issue. | NoSuchElementException: Message: Could not locate element with visible text: ETE_Customer_26e85655-1f44-4e7e-8cd2-e9fab290af01 |
#17 | Failure in robot selenium at the second VNF in the service package. Robot likely needs tuning to wait for the module name to appear in the drop-down under transient conditions. | Element 'xpath=//div[contains(.,'Ete_vFWCLvPKG_f716b1bd_1')]/div/button[contains(.,'Add VF-Module')]' did not appear in 1 minute. |
#18 | K8s worker node problem. kubectl top nodes listed k8s-04 as unknown. k8s-04 is on 10.12.6.0, which could be a contributing factor (.0 and .32 addresses in WindRiver have suspect behavior). The worker going down caused a set of containers to be restarted, which is the right behavior from a k8s standpoint. The test could not run while the robot container was down. | 12:00:25 Instantiate Virtual DNS GRA command terminated with exit code 137 12:22:22 + retval=137 12:22:22 ++ echo 'kubectl exec -n onap dev-robot-56c5b65dd-dkks4 -- ls -1t /share/logs | grep stability72hr | head -1' 12:22:22 ++ ssh -i /var/lib/jenkins/.ssh/onap_key ubuntu@10.12.5.205 sudo su 12:22:25 error: unable to upgrade connection: container not found ("robot") |
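Exit code 137 in the log is itself diagnostic: exit codes above 128 from `kubectl exec`/shell conventionally encode a fatal signal (128 + signum), so 137 means SIGKILL, consistent with the container being torn down when the worker went unknown rather than the test itself failing. A small sketch of that decoding:

```python
import signal

def decode_exit(code):
    """Decode a shell-style exit code: values > 128 mean the process
    died from signal (code - 128); e.g. 137 -> SIGKILL (128 + 9)."""
    if code > 128:
        return "signal:%s" % signal.Signals(code - 128).name
    return "exit:%d" % code
```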
#19 #20 | K8s restarted the robot pod. The manual fixes to vnf_orchestration_test_template for the heat3 parsing issues were lost in the restart; reapplied the manual fixes so parsing SDC artifacts to find the base_vlb resource succeeded again. | Unable to find catalog resource for vLB base_vlb' |
#32 | Robot script did not find the subscriber name in the search results. Likely a timing issue: robot looks for the JSON data in the drop-down before it is fully loaded. | Create Service Instance → vid_interface . Click On Element When Visible //select[@prompt='Select Subscriber Name' |
#35 | vDNS instantiate failed at the OpenStack stage. A slowed OpenStack potentially caused SO to resubmit a request that subsequently became a duplicate from the OpenStack perspective. Looks like a functional bug in the SO-to-OpenStack path triggered by the environment, not stability related. | CREATE failed: Conflict: resources.vlb_0_onap_private_port_0: IP address 10.0.211.24 already allocated in subnet be057760-1ffa-4827-a6df-75d355c4d45a\nNeutron server returns request_ids: ['req-ca6e5f39-7462-47c6-aaa8-9653783828cb'] (attached: 10.0.211.24.debug.log) |
|
|
#37 | vG and vFW failed on VID screen errors looking for data items. Investigation shows that the aai-traversal pod restarted. Looks like slow networking caused the pod to be redeployed, but not conclusive. Initially SO and VID failed health check until aai-traversal was up; then both passed health check. |
|
| Thu May 21 12:33:45 UTC 2020 | Memory: root@long-nfs:/home/ubuntu# kubectl -n onap top pod | sort -rn -k3 | head -20 dev-appc-0 7m 2834Mi dev-portal-cassandra-59f5cb4cf5-9phmg 152m 2780Mi dev-appc-1 19m 2700Mi dev-appc-2 10m 2694Mi dev-cassandra-2 15m 2449Mi dev-cassandra-1 21m 2434Mi dev-vid-5b7558dcdc-rx2d7 16m 1786Mi dev-sdnc-2 64m 1664Mi dev-sdnc-0 131m 1631Mi dev-sdc-be-77d55774f5-zkfrt 9m 1578Mi dev-sdnc-1 29m 1566Mi dev-awx-0 291m 1524Mi dev-clamp-dash-es-6cb85979b5-cvrcs 37m 1496Mi dep-dcae-tca-analytics-55dbd5cd9d-fsm89 664m 1318Mi dev-dcae-cloudify-manager-6f854859f9-ctdcv 76m 1302Mi dev-aaf-cass-7d55bfc874-sqcdq 5m 1250Mi dev-cds-blueprints-processor-7fd988d584-mvdkz 40m 1228Mi dev-message-router-zookeeper-1 5m 1127Mi dev-message-router-zookeeper-0 6m 1023Mi dev-so-bpmn-infra-7b58b75b76-n59sf 8m 941Mi
|
#38 | vDNS: timeout waiting for the model to be visible via the Deploy button in VID. vVG and vFW succeeded. Transient slowness, since the 2nd and 3rd VNFs succeeded. | Keyword 'Wait For Model' failed after retrying for 3 minutes. The last error was: TypeError: object of type 'NoneType' has no len() |
#47 | vDNS: Selenium error seeing the Subscriber Name. vVG and vFW worked. Transient. | vid_interface . Click On Element When Visible //select[@prompt='Select Subscriber Name'] StaleElementReferenceException: Message: stale element reference: element is not attached to the page document |
| Fri May 22 03:41:11 UTC 2020 | root@long-nfs:/home/ubuntu# kubectl -n onap top pod | sort -nr -k 3 | head -20 dev-appc-0 7m 2839Mi dev-portal-cassandra-59f5cb4cf5-9phmg 127m 2781Mi dev-appc-2 11m 2702Mi dev-appc-1 39m 2576Mi dev-cassandra-2 62m 2517Mi dev-cassandra-1 69m 2502Mi dev-cassandra-0 64m 2433Mi dev-vid-5b7558dcdc-rx2d7 10m 2050Mi dev-policy-59f48bd84b-6h4xt 23m 1892Mi dev-sdnc-0 154m 1622Mi dev-sdnc-2 89m 1586Mi dev-sdnc-1 25m 1566Mi dev-awx-0 351m 1525Mi dev-clamp-dash-es-6cb85979b5-cvrcs 52m 1504Mi dev-pdp-0 4m 1434Mi dev-aai-elasticsearch-55b56f855c-qbzfl 11m 1428Mi dep-dcae-tca-analytics-55dbd5cd9d-fsm89 452m 1380Mi dev-dcae-cloudify-manager-6f854859f9-ctdcv 88m 1345Mi dev-cds-blueprints-processor-7fd988d584-mvdkz 38m 1286Mi dev-sdc-be-77d55774f5-zkfrt 7m 1253Mi
|
#53 | vDNS instantiate failed at the OpenStack stage. A slowed OpenStack potentially caused SO to resubmit a request that subsequently became a duplicate from the OpenStack perspective. Looks like a functional bug in the SO-to-OpenStack path triggered by the environment, not stability related. vVG and vFW succeeded in the same test. | STATUS: Received vfModuleException from VnfAdapter: category='INTERNAL' message='Exception during create VF org.onap.so.openstack.utils.StackCreationException: Stack Creation Failed Openstack Status: CREATE_FAILED Status Reason: Resource CREATE failed: Conflict: resources.vlb_0_onap_private_port_0: IP address 10.0.250.24 already allocated in subnet be057760-1ffa-4827-a6df-75d355c4d45a\nNeutron server returns request_ids |
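The Conflict pattern suggests OpenStack honored the first create while SO's resubmission re-requested the same fixed IP. One defensive shape (illustrative only; real SO/Neutron retry logic is more involved, and all names here are hypothetical) is to treat "already allocated to us" as success on retry rather than a Conflict:

```python
def ensure_port(allocated, subnet_id, ip):
    """Idempotent allocation sketch: if (subnet, ip) is already held,
    a resubmitted create returns success instead of raising Conflict.
    `allocated` stands in for Neutron's per-subnet allocation state."""
    key = (subnet_id, ip)
    if key in allocated:
        return "already-allocated"  # retry of our own create, not an error
    allocated.add(key)
    return "created"
```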
| Fri May 22 09:35:28 UTC 2020 | root@long-nfs:/home/ubuntu# kubectl -n onap top pod | sort -nr -k 3 | head -20 dev-appc-0 6m 2837Mi dev-portal-cassandra-59f5cb4cf5-9phmg 125m 2792Mi dev-appc-2 10m 2704Mi dev-appc-1 26m 2568Mi dev-cassandra-1 71m 2501Mi dev-cassandra-2 62m 2499Mi dev-cassandra-0 54m 2448Mi dev-vid-5b7558dcdc-rx2d7 9m 2074Mi dev-policy-59f48bd84b-6h4xt 19m 1880Mi dev-sdnc-0 108m 1620Mi dev-sdnc-2 71m 1586Mi dev-sdnc-1 29m 1568Mi dev-awx-0 239m 1523Mi dev-clamp-dash-es-6cb85979b5-cvrcs 44m 1513Mi dev-sdc-be-77d55774f5-zkfrt 39m 1439Mi dev-pdp-0 3m 1436Mi dev-aai-elasticsearch-55b56f855c-qbzfl 6m 1423Mi dep-dcae-tca-analytics-55dbd5cd9d-fsm89 425m 1391Mi dev-cds-blueprints-processor-7fd988d584-mvdkz 27m 1375Mi dev-dcae-cloudify-manager-6f854859f9-ctdcv 86m 1311Mi
root@long-nfs:/home/ubuntu# kubectl -n onap top nodes NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% long-k8s-01 699m 8% 15077Mi 94% long-k8s-02 1688m 21% 13367Mi 83% long-k8s-03 166m 2% 6085Mi 38% long-k8s-04 919m 11% 14554Mi 91% long-k8s-05 636m 7% 12823Mi 80% long-k8s-06 905m 11% 14291Mi 89% long-k8s-07 480m 6% 8883Mi 55% long-k8s-08 842m 10% 13220Mi 82% long-k8s-09 1692m 21% 5594Mi 35% long-orch-1 228m 11% 1454Mi 37% long-orch-2 212m 10% 1350Mi 35% long-orch-3 129m 6% 1260Mi 32% |
#58 | ODL cluster communication error on vFW preload. This type of error is usually associated with network latency between nodes. The Akka configuration should be evaluated to loosen the timeout settings for public cloud or other slow environments. Discuss with Dan. A GET to https://{{sdnc_ssl_port}}/restconf/config/VNF-API:preload-vnfs/ succeeds. | Get Request using : alias=sdnc, uri=/restconf/config/VNF-API:preload-vnfs/vnf-preload-list/Vfmodule_Ete_vFWCLvFWSNK_e401f06d_0/VfwclVfwsnkA143de8bE20f..base_vfw..module-0, headers={'X-FromAppId': 'robot-ete', 'X-TransactionId': '922f999d-2444-4bcd-b5ad-60fbf553735d', 'Content-Type': 'application/json', 'Accept': 'application/json'} json=None 04:36:17.031 INFO Received response from [sdnc]: {"errors":{"error":[{"error-type":"application","error-tag":"operation-failed","error-message":"Error executeRead ReadData for path /(org:onap:sdnctl:vnf?revision=2015-07-20)preload-vnfs/vnf-preload-list/vnf-preload-list[{(org:onap:sdnctl:vnf?revision=2015-07-20)vnf-type=VfwclVfwsnkA143de8bE20f..base_vfw..module-0, (org:onap:sdnctl:vnf?revision=2015-07-20)vnf-name=Vfmodule_Ete_vFWCLvFWSNK_e401f06d_0}]","error-info":"Shard member-2-shard-default-config currently has no leader. Try again later."}]}} https://{{sdnc_ssl_port}}/jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational
Code Block |
---|
title | cluster health |
---|
collapse | true |
---|
| {
"request": {
"mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore",
"type": "read"
},
"value": {
"LocalShards": [
"member-3-shard-default-operational",
"member-3-shard-prefix-configuration-shard-operational",
"member-3-shard-topology-operational",
"member-3-shard-entity-ownership-operational",
"member-3-shard-inventory-operational",
"member-3-shard-toaster-operational"
],
"SyncStatus": true,
"MemberName": "member-3"
},
"timestamp": 1590141147,
"status": 200
} |
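The jolokia read above can be checked mechanically: pull out the member name and sync status, and build per-shard follow-up URIs to drill into leadership. A sketch (the per-shard mbean naming pattern is an assumption based on standard ODL conventions, not taken from this log):

```python
def check_shard_manager(resp):
    """Summarize a jolokia shard-manager read like the one above,
    returning (member name, SyncStatus, follow-up read URIs)."""
    value = resp["value"]
    uris = [
        "/jolokia/read/org.opendaylight.controller:"
        "Category=Shards,name={},type=DistributedOperationalDatastore".format(shard)
        for shard in value["LocalShards"]
    ]
    return value["MemberName"], value["SyncStatus"], uris
```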
|
#59 #60 #61 | Looks like the #58 environment issue affected networking or pod performance for SDC-BE as well. Deleted the SDNC-1 pod to fix the shard leader issue. Deleted the SDC-BE pod to fix the SDC issue (it had failed liveness probes and the automated k8s restart did not work). The new containers created by k8s were successful. |
|
#72 | vFWvPKG heatbridge failed to see the deployed stack after 10 minutes. Usually means an OpenStack issue. vFWvSNK, vDNS and vVG had succeeded. | Keyword 'Get Deployed Stack' failed after retrying for 10 minutes. The last error was: KeyError: 'stack' |
#73 | vFWvSNK heatbridge AAI validation failed to find the node in AAI. After the test we re-ran the query and the data was there. Most likely the tooling did not wait long enough for replication across the Cassandra nodes to occur. Should consider adding a delay in robot between the OpenStack completion in SO and the AAI query. | AAI Heatbridge Validation post response: {"requestError":{"serviceException":{"messageId":"SVC3001","text":"Resource not found for %1 using id %2 (msg=%3) (ec=%4)","variables":["POST Search","getNamedQueryResponse","Node Not Found:No Node of type vserver found for properties","ERR.5.4.6114"]}}} Code Block |
---|
| Post Request using : alias=aai, uri=/aai/search/named-query, data={
"query-parameters": {
"named-query": {
"named-query-uuid": "f199cb88-5e69-4b1f-93e0-6f257877d066"
}
},
"instance-filters": {
"instance-filter": [
{
"vserver":
{
"vserver-name": "vofwl01fwle37c"
}
}
]
}
} |
Code Block |
---|
title | post test query results |
---|
collapse | true |
---|
| {
"inventory-response-item": [
{
"vserver": {
"vserver-id": "175e2a27-436d-423b-9518-21d5c504299f",
"vserver-name": "vofwl01fwle37c",
"vserver-name2": "vofwl01fwle37c",
"prov-status": "ACTIVE",
"vserver-selflink": "http://10.12.25.2:8774/v2.1/28481f6939614cfd83e6767a0e039bcc/servers/175e2a27-436d-423b-9518-21d5c504299f",
"in-maint": false,
"is-closed-loop-disabled": false,
"resource-version": "1590194419699"
},
"extra-properties": {},
"inventory-response-items": {
"inventory-response-item": [
{
"model-name": "vFWCL_vFWSNK cd634f60-3362",
"generic-vnf": {
"vnf-id": "86a93f6d-0540-41da-8e98-5de910ec4088",
"vnf-name": "Ete_vFWCLvFWSNK_5afbe37c_0",
"vnf-type": "vFWCL 2020-05-23 00-25-/vFWCL_vFWSNK cd634f60-3362 0",
"service-id": "a105505b-bb52-4b0d-a2fd-165056e7e6ea",
"prov-status": "ACTIVE",
"orchestration-status": "Active",
"in-maint": false,
"is-closed-loop-disabled": false,
"resource-version": "1590194431230",
"model-invariant-id": "76369d2d-2797-441b-a197-764b581d7a1c",
"model-version-id": "0a064ab0-bfc1-4aee-95b6-13d8161179bd",
"model-customization-id": "da2190dc-721d-4f05-9412-3bcea987d736"
},
"extra-properties": {
"extra-property": [
{
"property-name": "model-ver.model-version-id",
"property-value": "0a064ab0-bfc1-4aee-95b6-13d8161179bd"
},
{
"property-name": "model-ver.model-name",
"property-value": "vFWCL_vFWSNK cd634f60-3362"
},
{
"property-name": "model.model-type",
"property-value": "resource"
},
{
"property-name": "model.model-invariant-id",
"property-value": "76369d2d-2797-441b-a197-764b581d7a1c"
},
{
"property-name": "model-ver.model-version",
"property-value": "1.0"
}
]
},
"inventory-response-items": {
"inventory-response-item": [
{
"model-name": "VfwclVfwsnkCd634f603362..base_vfw..module-0",
"vf-module": {
"vf-module-id": "dd63d577-7b63-4274-b346-6ad1a5f07e31",
"vf-module-name": "Vfmodule_Ete_vFWCLvFWSNK_5afbe37c_0",
"heat-stack-id": "Vfmodule_Ete_vFWCLvFWSNK_5afbe37c_0/00692c06-b498-4a4f-99f8-5d0529e40921",
"orchestration-status": "active",
"is-base-vf-module": true,
"automated-assignment": false,
"resource-version": "1590194420852",
"model-invariant-id": "74d0a469-8791-42d9-ad84-0b7f6720c00f",
"model-version-id": "177de70f-2afd-4617-81b8-01765eec8e53",
"model-customization-id": "c44d47c1-706b-4fa3-960b-57792cba809c",
"module-index": 0
},
"extra-properties": {
"extra-property": [
{
"property-name": "model-ver.model-version-id",
"property-value": "177de70f-2afd-4617-81b8-01765eec8e53"
},
{
"property-name": "model-ver.model-name",
"property-value": "VfwclVfwsnkCd634f603362..base_vfw..module-0"
},
{
"property-name": "model.model-type",
"property-value": "resource"
},
{
"property-name": "model.model-invariant-id",
"property-value": "74d0a469-8791-42d9-ad84-0b7f6720c00f"
},
{
"property-name": "model-ver.model-version",
"property-value": "1"
}
]
}
}
]
}
},
{
"tenant": {
"tenant-id": "28481f6939614cfd83e6767a0e039bcc",
"tenant-name": "Integration-Longevity",
"resource-version": "1590230418024"
},
"extra-properties": {},
"inventory-response-items": {
"inventory-response-item": [
{
"cloud-region": {
"cloud-owner": "CloudOwner",
"cloud-region-id": "RegionOne",
"cloud-type": "SharedNode",
"owner-defined-type": "OwnerType",
"cloud-region-version": "v1",
"cloud-zone": "CloudZone",
"orchestration-disabled": false,
"in-maint": false,
"resource-version": "1589992859784"
},
"extra-properties": {}
}
]
}
}
]
}
}
]
} |
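The failing response in #73 is distinguishable from a genuine miss: the ERR.5.4.6114 / "Node Not Found" service exception right after heatbridge usually just means the write has not replicated across the Cassandra nodes yet. A sketch of a retryable-error check that the suggested robot delay could key on (hypothetical helper, not existing robot code):

```python
def is_replication_miss(body):
    """True if an AAI named-query response is the 'Node Not Found'
    service exception (ERR.5.4.6114) shown above, i.e. worth retrying
    after a short delay rather than failing the test outright."""
    try:
        exc = body["requestError"]["serviceException"]
    except (KeyError, TypeError):
        return False
    return "ERR.5.4.6114" in exc.get("variables", [])
```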
|
#77 | vFWvPKG failed on AAI validation after heatbridge. Same as #73. The query succeeded after the test when run from Postman.
| post response: {"requestError":{"serviceException":{"messageId":"SVC3001","text":"Resource not found for %1 using id %2 (msg=%3) (ec=%4)","variables":["POST Search","getNamedQueryResponse","Node Not Found:No Node of type vserver found for properties","ERR.5.4.6114"]}}} |
| Sat May 23 11:01:16 UTC 2020
83 hours of testing completed (10 over the 72 hour planned duration)
|
root@long-nfs:~/oom/kubernetes/robot# kubectl -n onap top pod | sort -rn -k 3 | head -20 dev-appc-0 6m 2849Mi
dev-portal-cassandra-59f5cb4cf5-9phmg 149m 2797Mi
dev-appc-2 8m 2723Mi
dev-cassandra-0 161m 2607Mi
dev-cassandra-1 371m 2595Mi
dev-cassandra-2 237m 2584Mi
dev-appc-1 13m 2583Mi
dev-vid-5b7558dcdc-rx2d7 7m 2052Mi
dev-policy-59f48bd84b-6h4xt 10m 1884Mi
dev-sdnc-0 72m 1630Mi
dev-sdnc-2 63m 1617Mi
dev-mariadb-galera-0 9m 1592Mi
dev-clamp-dash-es-6cb85979b5-cvrcs 46m 1548Mi
dev-awx-0 186m 1519Mi
dev-sdnc-1 19m 1514Mi
dev-cds-blueprints-processor-7fd988d584-mvdkz 19m 1500Mi
dep-dcae-tca-analytics-55dbd5cd9d-fsm89 254m 1482Mi
dev-message-router-kafka-0 26m 1468Mi
dev-pdp-0 4m 1444Mi
dev-sdc-be-77d55774f5-l7lvr 169m 1443Mi
|
| Jenkins job collected top nodes
Code Block |
---|
| NAME %Memory
long-k8s-01 9%
long-k8s-02 10%
long-k8s-03 -62%
long-k8s-04 7%
long-k8s-05 12%
long-k8s-06 5%
long-k8s-07 17%
long-k8s-08 12%
long-k8s-09 9%
long-orch-1 13%
long-orch-2 23%
long-orch-3 18%
Average All Nodes 6%
Average Worker 2%
|
It's interesting that memory on the K8s control plane nodes (orch-1, orch-2, orch-3) grew more than on the ONAP worker nodes. |
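The growth comparison behind that observation can be sketched as a diff of two `kubectl top nodes` snapshots like the ones in this section (illustrative only; this may not be exactly how the Jenkins job computed its %Memory column):

```python
def mem_growth(before, after):
    """Percent-point change in MEMORY% per node between two
    `kubectl top nodes` snapshots given as {node: percent} dicts."""
    return {node: after[node] - before[node]
            for node in before if node in after}
```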
Code Block |
---|
| 05:41:14 + ssh -i /var/lib/jenkins/.ssh/onap_key ubuntu@10.12.5.205 sudo su
05:41:15 NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
05:41:15 long-k8s-01 610m 7% 14078Mi 88%
05:41:15 long-k8s-02 1194m 14% 13311Mi 83%
05:41:15 long-k8s-03 198m 2% 8137Mi 51%
05:41:15 long-k8s-04 606m 7% 14697Mi 92%
05:41:15 long-k8s-05 601m 7% 12905Mi 80%
05:41:15 long-k8s-06 832m 10% 14374Mi 90%
05:41:15 long-k8s-07 404m 5% 9236Mi 57%
05:41:15 long-k8s-08 1612m 20% 13925Mi 87%
05:41:15 long-k8s-09 1148m 14% 5593Mi 35%
05:41:15 long-orch-1 289m 14% 1454Mi 37%
05:41:15 long-orch-2 194m 9% 1352Mi 35%
05:41:15 long-orch-3 113m 5% 1306Mi 33% |
Code Block |
---|
| 20:57:41 NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
20:57:41 long-k8s-01 888m 11% 12869Mi 80%
20:57:41 long-k8s-02 2456m 30% 11998Mi 75%
20:57:41 long-k8s-03 895m 11% 13161Mi 82%
20:57:41 long-k8s-04 2223m 27% 13728Mi 86%
20:57:41 long-k8s-05 436m 5% 11307Mi 70%
20:57:41 long-k8s-06 843m 10% 13719Mi 86%
20:57:41 long-k8s-07 319m 3% 7688Mi 48%
20:57:41 long-k8s-08 534m 6% 12317Mi 77%
20:57:41 long-k8s-09 1734m 21% 5089Mi 31%
20:57:41 long-orch-1 202m 10% 1261Mi 32%
20:57:41 long-orch-2 259m 12% 1043Mi 27%
20:57:41 long-orch-3 286m 14% 1070Mi 27% |
|