
Problems/Issues encountered or steps required when using ONAP Casablanca after a fresh installation


Create a complex in AAI

See How-To: Register a VIM/Cloud Instance to ONAP

PUT https://aai.api.simpledemo.openecomp.org:30233/aai/v14/cloud-infrastructure/complexes/complex/clli1 

Authorization:Basic QUFJOkFBSQ==
X-TransactionId:jimmy-postman
X-FromAppId:AAI
Content-Type:application/json
Accept:application/json
Cache-Control:no-cache

Body:
{
    "physical-location-id": "clli1",
    "data-center-code": "data-center-code-BINZ",
    "complex-name": "clli1",
    "identity-url": "identity-url-BINZ",
    "physical-location-type": "physical-location-type-val-28399",
    "street1": "example-street1-val-28399",
    "street2": "example-street2-val-28399",
    "city": "example-city-val-28399",
    "state": "example-state-val-28399",
    "postal-code": "example-postal-code-val-28399",
    "country": "example-country-val-28399",
    "region": "example-region-val-28399",
    "latitude": "1111",
    "longitude": "2222",
    "elevation": "example-elevation-val-28399",
    "lata": "example-lata-val-28399"
}
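The PUT above can be issued from the command line with curl. A sketch follows — the body is trimmed to a few of the fields shown above, `-k` accepts AAI's self-signed certificate, and the leading `echo` only prints the command (remove it to actually send the request against a live cluster):

```shell
# Sketch of the PUT above as a curl call; the leading "echo" only prints the
# command -- remove it to actually send the request against a live cluster.
AAI_URL='https://aai.api.simpledemo.openecomp.org:30233/aai/v14/cloud-infrastructure/complexes/complex/clli1'
# Body trimmed to a few of the fields shown above.
BODY='{"physical-location-id":"clli1","complex-name":"clli1","city":"example-city-val-28399","country":"example-country-val-28399","latitude":"1111","longitude":"2222"}'

echo curl -sk -X PUT "$AAI_URL" \
  -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'X-TransactionId: jimmy-postman' \
  -H 'X-FromAppId: AAI' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d "$BODY"
```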

Register a VIM via ESR GUI: http://msb.api.discovery.simpledemo.onap.org:30280/iui/aai-esr-gui/extsys/vim/vimView.html  

Policy 

See https://onap.readthedocs.io/en/casablanca/submodules/policy/engine.git/docs/platform/swarch_srm.html#before-installing-policies 

CLAMP certificate to access web portal

See Control Loop Flows and Models for Casablanca#Configure 

CLDS URL: https://clamp.api.simpledemo.onap.org:30258/designer/index.html

VID Portal: problem with browser certificate

Add security exception in browser for VID URL https://vid.api.simpledemo.onap.org:30200/vid/welcome.htm

Demo init script fails

We assume ONAP is installed using the Integration team script (Heat + OOM deployment): https://github.com/onap/integration/tree/master/deployment/heat/onap-oom/scripts


demo-k8s.sh onap init
root@oom-rancher:~/oom/kubernetes/robot# ./demo-k8s.sh onap init

Number of parameters:
2
KEY:
init
++ kubectl --namespace onap get pods
++ sed 's/ .*//'
++ grep robot
+ POD=dev-robot-robot-f97bcf797-gg788
+ ETEHOME=/var/opt/OpenECOMP_ETE
++ kubectl --namespace onap exec dev-robot-robot-f97bcf797-gg788 -- bash -c 'ls -1q /share/logs/ | wc -l'
+ export GLOBAL_BUILD_NUMBER=14
+ GLOBAL_BUILD_NUMBER=14
++ printf %04d 14
+ OUTPUT_FOLDER=0014_demo_init
+ DISPLAY_NUM=104
+ VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
+ kubectl --namespace onap exec dev-robot-robot-f97bcf797-gg788 -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -d /share/logs/0014_demo_init -i InitDemo --display 104
Starting Xvfb on display :104 with res 1280x1024x24
Executing robot tests at log level TRACE
==============================================================================
Testsuites
==============================================================================
Testsuites.Demo :: Executes the VNF Orchestration Test cases including setu...
==============================================================================
Initialize Customer And Models                                        | FAIL |
'200 <= 401 < 300' should be true.
------------------------------------------------------------------------------
Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Testsuites                                                            | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Output:  /share/logs/0014_demo_init/output.xml
Log:     /share/logs/0014_demo_init/log.html
Report:  /share/logs/0014_demo_init/report.html

Solution

Check the cloud-related configuration (e.g. OpenStack credentials) in the ~/integration-override.yaml file and redeploy the affected components by running:

helm deploy dev local/onap -f ~/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f ~/integration-override.yaml --namespace onap --verbose
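The 401 during "Initialize Customer And Models" typically means the OpenStack values in integration-override.yaml do not match the target cloud. A sketch of the relevant sections — the key names follow the Casablanca-era template generated by the onap-oom Heat scripts, so verify them against your own generated file; every value here is a placeholder:

```yaml
robot:
  # All values are placeholders -- substitute your own cloud details.
  openStackKeyStoneUrl: "http://<keystone-ip>:5000"
  openStackPublicNetId: "<public-net-uuid>"
  openStackTenantId: "<tenant-uuid>"
  openStackUserName: "<user>"
  openStackPassword: "<password>"
so:
  config:
    openStackUserName: "<user>"
    openStackKeyStoneUrl: "http://<keystone-ip>:5000/v2.0"
    openStackEncryptedPasswordHere: "<password encrypted per the SO docs>"
```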
./demo-k8s.sh onap init_robot
root@oom-rancher:~/oom/kubernetes/robot# ./demo-k8s.sh onap init_robot
Number of parameters:
2
KEY:
init_robot
WEB Site Password for user 'test': ++ kubectl --namespace onap get pods
++ sed 's/ .*//'
++ grep robot
+ POD=dev-robot-robot-f97bcf797-hbwbv
+ ETEHOME=/var/opt/OpenECOMP_ETE
++ kubectl --namespace onap exec dev-robot-robot-f97bcf797-hbwbv -- bash -c 'ls -1q /share/logs/ | wc -l'
+ export GLOBAL_BUILD_NUMBER=35
+ GLOBAL_BUILD_NUMBER=35
++ printf %04d 35
+ OUTPUT_FOLDER=0035_demo_init_robot
+ DISPLAY_NUM=125
+ VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
+ kubectl --namespace onap exec dev-robot-robot-f97bcf797-hbwbv -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v WEB_PASSWORD:test -d /share/logs/0035_demo_init_robot -i UpdateWebPage --display 125
Starting Xvfb on display :125 with res 1280x1024x24
Executing robot tests at log level TRACE
==============================================================================
Testsuites
==============================================================================
Testsuites.Update Onap Page :: Initializes ONAP Test Web Page and Password
==============================================================================
Update ONAP Page                                                      | PASS |
------------------------------------------------------------------------------
Testsuites.Update Onap Page :: Initializes ONAP Test Web Page and ... | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Testsuites                                                            | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Output:  /share/logs/0035_demo_init_robot/output.xml
Log:     /share/logs/0035_demo_init_robot/log.html
Report:  /share/logs/0035_demo_init_robot/report.html


SDC distribution fails

./ete-k8s.sh onap health
SDC Healthcheck (DMaaP:None):PASS


Check the ZooKeeper image version (should be 3.4.9):

# kubectl exec -ti dev-dmaap-message-router-zookeeper-6f68c699d7-jn9pg -- bash
root@dev-dmaap-message-router-zookeeper-6f68c699d7-jn9pg:/opt/zookeeper-3.4.9# env | grep -i version
ZOOKEEPER_VERSION=3.4.9
Edit the DR node properties configmap and switch the provisioning URL to the plain-HTTP endpoint:

kubectl -n onap edit configmap dev-dmaap-dmaap-dr-node-node-props-configmap
	ProvisioningURL=https://dmaap-dr-prov:8443/internal/prov -> ProvisioningURL=http://dmaap-dr-prov:8080/internal/prov

Restart dmaap-bus-controller and dmaap-message-router, and delete the dmaap-dr-node pod so it picks up the change:
kubectl delete po dev-dmaap-dmaap-dr-node-77454c5f45-k7p4l
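The configmap edit boils down to one substitution in the node properties. Shown here on the affected line, so it is clear what the result should look like:

```shell
# The only change needed in the DR node properties: point ProvisioningURL at
# the plain-HTTP prov endpoint instead of the HTTPS one.
LINE='ProvisioningURL=https://dmaap-dr-prov:8443/internal/prov'
NEW=$(echo "$LINE" | sed 's|https://dmaap-dr-prov:8443|http://dmaap-dr-prov:8080|')
echo "$NEW"   # ProvisioningURL=http://dmaap-dr-prov:8080/internal/prov
```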


Helm Error: trying to send message larger than max

# helm ls
Error: trying to send message larger than max (23353031 vs. 20971520)


# kubectl get configmap | wc -l
286
# kubectl --namespace=kube-system get cm | wc -l
708

Solution: delete old configmap versions and limit history

history max
helm init --history-max 10 --upgrade


https://github.com/helm/helm/issues/2332#issuecomment-336565784 

delete old configmap versions
#!/usr/bin/env bash

TARGET_NUM_REVISIONS=10
TARGET_NUM_REVISIONS=$(($TARGET_NUM_REVISIONS+0))

RELEASES=$(kubectl --namespace=kube-system get cm -l OWNER=TILLER -o go-template --template='{{range .items}}{{ .metadata.labels.NAME }}{{"\n"}}{{ end }}' | sort -u)

# create the directory to store backups
mkdir configmaps

for RELEASE in $RELEASES
do
  # get the revisions of this release
  REVISIONS=$(kubectl --namespace=kube-system get cm -l OWNER=TILLER -l NAME=$RELEASE | awk '{if(NR>1)print $1}' | sed 's/.*\.v//' | sort -n)
  NUM_REVISIONS=$(echo $REVISIONS | tr " " "\n" | wc -l)
  NUM_REVISIONS=$(($NUM_REVISIONS+0))

  echo "Release $RELEASE has $NUM_REVISIONS revisions. Target is $TARGET_NUM_REVISIONS."
  if [ $NUM_REVISIONS -gt $TARGET_NUM_REVISIONS ]; then
    NUM_TO_DELETE=$(($NUM_REVISIONS-$TARGET_NUM_REVISIONS))
    echo "Will delete $NUM_TO_DELETE revisions"

    TO_DELETE=$(echo $REVISIONS | tr " " "\n" | head -n $NUM_TO_DELETE)

    for DELETE_REVISION in $TO_DELETE
    do
      CMNAME=$RELEASE.v$DELETE_REVISION
      echo "Deleting $CMNAME"
      # Take a backup
      kubectl --namespace=kube-system get cm $CMNAME -o yaml > configmaps/$CMNAME.yaml
      # Do the delete
      kubectl --namespace=kube-system delete cm $CMNAME
    done
  fi
done
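The pruning logic above can be checked in isolation: it keeps the newest TARGET_NUM_REVISIONS and selects the oldest revisions for deletion. Demonstrated on a mock list of 12 revisions with a target of 10:

```shell
# Mock revision list standing in for the ".v<N>" suffixes of a release's configmaps.
REVISIONS="1 2 3 4 5 6 7 8 9 10 11 12"
TARGET_NUM_REVISIONS=10
NUM_REVISIONS=$(echo $REVISIONS | tr ' ' '\n' | wc -l)
NUM_TO_DELETE=$((NUM_REVISIONS - TARGET_NUM_REVISIONS))
# head -n picks the oldest revisions, since the list is sorted ascending.
TO_DELETE=$(echo $REVISIONS | tr ' ' '\n' | head -n $NUM_TO_DELETE)
echo $TO_DELETE   # -> 1 2
```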

Update Edge Rules in running AAI deployment (OOM)

Note: this is not a recommended solution, as the change does not persist after pod deletion. See https://lists.onap.org/g/onap-discuss/message/16171 and AAI-2154.

1) Add new edge rules to aai-traversal, aai-resources and aai-graphadmin containers. For instance:

kubectl exec -ti dev-aai-aai-resources-5b6c5f454c-kbmdh bash
root@aai-resources:/opt/app/aai-resources# vi /opt/app/aai-resources/resources/schema/onap/dbedgerules/v14/DbEdgeRules_newRules_v14.json

2) Run createDBSchema script in aai-graphadmin

kubectl exec -ti dev-aai-aai-graphadmin-75d6587db4-xpmt5 bash
root@aai-graphadmin:/opt/app/aai-graphadmin# su aaiadmin
aaiadmin@aai-graphadmin:/opt/app/aai-graphadmin$ /opt/app/aai-graphadmin/bin/createDBSchema.sh
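Step 2 can also be scripted without an interactive shell. A sketch — the app=aai-graphadmin label selector is an assumption (check your pod labels first), and the final command is only printed here so you can review it before running it against the live cluster:

```shell
# Look up the graphadmin pod by label (assumption: pods carry app=aai-graphadmin).
POD=$(kubectl -n onap get pods -l app=aai-graphadmin \
      -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)
# Fall back to a placeholder when no cluster is reachable.
CMD="kubectl -n onap exec ${POD:-<aai-graphadmin-pod>} -- su aaiadmin -c /opt/app/aai-graphadmin/bin/createDBSchema.sh"
echo "$CMD"   # review, then run against the live cluster
```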

3) Restart the aai-traversal, aai-resources and aai-graphadmin containers using docker restart

