As of April 30, 2019 (AAF 2.1.9-SNAPSHOT-latest, dbc-client 1.0.8-STAGING-latest, dmaap-bc 1.1.4-STAGING-latest, dmaap-mr 1.1.13)
The dependency on AAF still has not been resolved, despite some attempts at workarounds.
Known Issues:
DMAAP-1180: DMaaP healthcheck failed (Closed)
DMAAP-1178: [BC] DMaaP fails health check (Closed)
DMAAP-1177: [BC] dbc-client requests failing with 401 (Closed)
DMAAP-1154: [BC] Fix certificate problem for cadi (Closed)
DMAAP-1142: [BC] dbc-client doesn't support cert based authorization (Closed)
Resolution:
Create an override file (the install commands below assume ~/dgl_overrides.yaml) with the following contents:
#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: true
dmaap:
  enabled: true
message-router:
  enabled: true
dmaap-bc:
  enabled: true
dmaap-dr-node:
  enabled: false
dmaap-dr-prov:
  enabled: false
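To sanity-check the overrides before committing to a real install, helm can render the release without deploying anything; this is the same command line as the install steps below, plus --dry-run:
helm install --dry-run --debug local/dmaap -n central-dmaap --namespace onap -f ~/dgl_overrides.yaml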
1. Deploy AAF separately first.
helm install --debug local/aaf -n central-aaf --namespace onap -f ~/dgl_overrides.yaml --timeout 900
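Before moving on to step 2, confirm AAF is actually up. A minimal check (the app label is an assumption based on default OOM chart labels; adjust if yours differ):
kubectl get pods -n onap | grep aaf
# or block until the aaf-service pod reports Ready
kubectl -n onap wait --for=condition=ready pod -l app=aaf-service --timeout=600s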
2. In the AAF GUI, add:
role create org.onap.dmaap-bc.service
perm grant org.onap.dmaap-bc.api.access * read org.onap.dmaap-bc.service
perm grant org.onap.dmaap.mr.access * * org.onap.dmaap-bc.service
perm grant org.onap.dmaap.mr.topic * view org.onap.dmaap-bc.service
perm create org.onap.dmaap.mr.topic * * org.onap.dmaap-bc.service
perm create org.onap.dmaap-dr.feed * * org.onap.dmaap-bc.service
perm create org.onap.dmaap-dr.sub * * org.onap.dmaap-bc.service
perm create org.onap.dmaap.mr.topicFactory :org.onap.dmaap.mr.topic:org.onap.dmaap.mr create,destroy org.onap.dmaap-bc.service
role user add org.onap.dmaap-bc.service dmaap-bc@dmaap-bc.onap.org
role user add org.onap.dmaap-bc.api.Controller dmaap-bc@dmaap-bc.onap.org
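If you prefer the command line to the GUI, the same grants can be scripted. Note that aafcli below is a hypothetical stand-in for whatever AAF command-line entry point your environment exposes, not a real binary name:
# "aafcli" is hypothetical; substitute your actual AAF CLI invocation.
# Assumes the step-2 commands were saved, one per line, to ~/dmaap_grants.txt.
while read -r cmd; do
  aafcli $cmd
done < ~/dmaap_grants.txt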
3. Deploy dmaap.
helm install --debug local/dmaap -n central-dmaap --namespace onap -f ~/dgl_overrides.yaml --timeout 900
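After the dmaap install returns, the usual checks look like this (service and port names are the OOM defaults):
kubectl get pods -n onap | grep -E 'dmaap|message-router'
# Message Router lists provisioned topics at /topics
kubectl -n onap port-forward svc/message-router 3904:3904 &
curl http://localhost:3904/topics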
NOTES:
the message-router-mirrormaker pod depends on topic provisioning and AAF permissions being granted, which happens as part of the message-router post-install job. This sequence can take a while, and in the meantime the message-router-mirrormaker pod may sit in a CrashLoopBackOff state. Be patient: if all the steps above were followed, it should eventually reach a Ready state. It will never succeed, however, if topic provisioning did not fully complete.
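To see whether that provisioning actually completed, check the post-install job and the pod status (resource names are assumed from default OOM naming; adjust the grep patterns to your release):
kubectl -n onap get jobs | grep message-router
kubectl -n onap get pods | grep mirrormaker
# the post-install job's logs show whether topic provisioning succeeded
kubectl -n onap logs -l job-name=<message-router-post-install-job>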
depending on your environment, deploying all the components takes a while and can easily exceed the default helm timeout. Recommend adding --timeout 900 to your helm install command line, as in the commands above.