
Write Test Cases

Test Iteration 1

Test Environment
CPUs: 2 vCPU
Memory: 4 Gi (~4.2 GB)
Data Size: 269 KB/DataNode
VM/K8s cluster: K8s cluster
application (CPS) instances: 1
postgres instances: 2

Bulk insert in Kubernetes cluster

# | #Nodes | Threads | Time to post a single node | Time to post all data nodes | Comments
1 | 50     | 1       | 0.8 sec                    | 42.6 sec                    |
2 | 500    | 1       | 0.9 sec                    | 9 m 36 sec                  | Connections drop for a while and then resume. Reason for the connection drops: application restarts due to an Out Of Memory error.
3 | 500    | 10      | ~1.2 sec                   | 8 m 42 sec                  | No improvement with an increased thread count.
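The thread count in the table above controls how many data-node POSTs run in parallel. A minimal sketch of such a driver, assuming a hypothetical `post_data_node` function standing in for the real CPS REST call (the stubbed latency and the harness shape are illustrative, not the actual test setup):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def post_data_node(node_id: int) -> float:
    """Stand-in for the real CPS REST POST of one ~269 KB data node.

    Returns the observed per-node latency in seconds. In the actual
    test this would be an HTTP POST to the CPS application; here the
    call is only simulated so the sketch is self-contained.
    """
    start = time.monotonic()
    time.sleep(0.001)  # placeholder for network + database time
    return time.monotonic() - start


def bulk_insert(num_nodes: int, num_threads: int):
    """Post num_nodes data nodes using num_threads parallel workers."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        latencies = list(pool.map(post_data_node, range(num_nodes)))
    total = time.monotonic() - start
    return latencies, total


latencies, total = bulk_insert(num_nodes=50, num_threads=1)
print(f"posted {len(latencies)} nodes in {total:.1f} s "
      f"(avg {sum(latencies) / len(latencies):.3f} s/node)")
```

With a real POST in place of the stub, raising `num_threads` only helps until the application becomes the bottleneck, which matches row 3 above: ten threads gave no overall improvement.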

Test Iteration 2

Test Environment
CPUs: 4 vCPU
Memory: 8 Gi (~8.5 GB)
Data Size: 269 KB/DataNode
VM/K8s cluster: K8s cluster
application (CPS) instances: 1
postgres instances: 2

Bulk insert with single thread in Kubernetes cluster

# | #Nodes | Threads | Time to post a single node | Time to post all data nodes | Comments
1 | 500    | 1       | 0.8 - 0.9 sec              | 7 m 53 sec                  | Not able to post more than 500 nodes; the application pod restarted with reason code 137 (OOMKilled).

Test Iteration 3

Test Environment
CPUs: 8 vCPU
Memory: 16 Gi (~17 GB)
Data Size: 269 KB/DataNode
VM/K8s cluster: K8s cluster
application (CPS) instances: 1
postgres instances: 2

Bulk insert with single thread in Kubernetes cluster

# | #Nodes | Threads | Time to post a single node | Time to post all data nodes | Comments
1 | 500    | 1       | ~0.9 sec                   | 7 m 44 sec                  | Not able to post more than 500 nodes; the application pod restarted with reason code 137 (OOMKilled).

Test Iteration 4

Test Environment
CPUs: 12 vCPU
Memory: 16 Gi (~17 GB)
Data Size: 269 KB/DataNode
VM/K8s cluster: K8s cluster
application (CPS) instances: 1
postgres instances: 2

Bulk insert with single thread in Kubernetes cluster

# | #Nodes | Threads | Time to post a single node | Time to post all data nodes | Comments
1 | 500    | 1       | 0.8 - 0.9 sec              | 7 m 23 sec                  | Not able to post more than 500 nodes; the application pod restarted with reason code 137 (OOMKilled).

Test 1: initial bulk insert (single thread)

# | #Nodes | Time Taken... | Notes
1 |        |               |
2 |        |               |
3 |        |               |

Test 2: bulk insert, cps temporal notifications disabled (single thread)

# | #Nodes | Time Taken... | Notes
1 |        |               |
2 |        |               |
3 |        |               |

Read Test Cases
