
This page documents the steps taken to set up the vFW Closed Loop test, as part of an investigation to identify the changes needed to support it.

The general method was to review the operation of the integration Robot 'instantiatevFWCL' and 'vfwclosedloop' tests and replicate the steps manually for an instance of vFW deployed to a K8S cloud region.

Recap of the vFW closed loop test

The following things are needed to run the closed loop test:

  1. APPC must be able to perform a netconf mount to the VPP honeycomb component of the Packet Generator, so that the number of packet streams produced by the packet generator can be configured when Policy reacts to a threshold event.  In the case of the K8S vFW, this requires that a Service be added which exposes a NodePort (at least, this is the approach taken for this investigation).
  2. DCAE must be able to receive the VES events sent by the vFirewall component.  In the K8S vFW instance this works, but the DCAE IP and port values are not passed in at deployment, so the VES-sending application needs to be restarted with the correct values.
  3. The statistics reported by the vSink component need to be exposed via a Service (already present in the K8S vFW helm charts).  This is convenient for monitoring the behavior of the vFW and its reaction to configuration changes - whether made manually or by Policy.  A quick way to confirm what the deployment exposes is sketched after this list.
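
The following is a minimal sketch for confirming what the deployed vFW instance exposes in the K8S cloud region, assuming kubectl access to the KUD cluster and the 'profile1' release name used elsewhere on this page (the <kud-node-ip> and <sink-nodeport> placeholders are assumptions to be filled in from the output):

# List the vFW pods and Services (with their NodePorts) for the 'profile1' release
kubectl get pods,svc -o wide | grep profile1
# The vSink statistics page can then be checked from outside the cluster, e.g.:
# curl -s http://<kud-node-ip>:<sink-nodeport>/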

Notes on the environment used for the investigation

This was done on a Dublin installation running in the Intel ONAP Integration Lab.  The K8S KUD cloud region was a single-node cluster running in a VM on a separate single-server Titanium Cloud system with external network connectivity to the ONAP Integration system (i.e. on the 10.12.x.x network).

The K8S vFW instance was deployed per the steps described here: Deploying vFw and EdgeXFoundry Services on Kubernetes Cluster with ONAP and as presented here:  https://wiki.lfnetworking.org/download/attachments/15630468/ONAP_Dublin_SO_Multicloud.pdf?version=1&modificationDate=1560495527000&api=v2

Per MULTICLOUD-718, multicloud was deleted, multicloud-k8s was updated to version 1.4.0, and multicloud was then redeployed (before onboarding and distributing the K8S VF and Service).  The corresponding change to the multicloud chart values was:

root@onap-rancher:~/oom# git diff
diff --git a/kubernetes/multicloud/values.yaml b/kubernetes/multicloud/values.yaml
index bff78caf..00fd8c33 100644
--- a/kubernetes/multicloud/values.yaml
+++ b/kubernetes/multicloud/values.yaml
@@ -20,7 +20,7 @@ global:
   nodePortPrefix: 302
   loggingRepository: docker.elastic.co
   loggingImage: beats/filebeat:5.5.0
-  artifactImage: onap/multicloud/framework-artifactbroker:1.3.3
+  artifactImage: onap/multicloud/framework-artifactbroker:1.4.0
   prometheus:
     enabled: false

@@ -29,7 +29,7 @@ global:
 #################################################################
 # application image
 repository: nexus3.onap.org:10001
-image: onap/multicloud/framework:1.3.3
+image: onap/multicloud/framework:1.4.0
 pullPolicy: Always

 #Istio sidecar injection policy
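
The exact delete/redeploy sequence is not recorded above; a hedged sketch of the typical OOM commands, run from the Rancher node in ~/oom/kubernetes (the 'dev' release name and the override files are assumptions copied from the policy redeploy shown later on this page):

helm delete dev-multicloud --purge
make multicloud && make onap
helm deploy dev local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap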

Setting up a Service for the Packet Generator netconf mount

In the K8S cloud region, a Service needed to be added to expose the netconf port.

For example:  kubectl create -f pgservice.yaml, where:

pgservice.yaml
apiVersion: v1
kind: Service
metadata:
  name: packetgen-service
  labels:
    app: packetgen
    chart: packetgen
    release: profile1
spec:
  selector:
    app: packetgen
    release: profile1
  ports:
  - port: 2831
    nodePort: 30831
    protocol: TCP
    targetPort: 2831
  type: NodePort

This exposes the packet generator honeycomb port 2831 to the K8S cloud region nodes as NodePort 30831.
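
As a quick check, the Service and the reachability of the NodePort from outside the cluster can be verified; a sketch, assuming kubectl access to the KUD cluster and using the KUD node IP referenced later on this page (nc is an assumption - any TCP port check works):

# Confirm the Service and its NodePort
kubectl get svc packetgen-service -o wide
# Confirm the honeycomb netconf port is reachable from outside the cluster
nc -vz 10.12.17.12 30831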

On the ONAP side, the APPC netconf mount was then created using a Postman command:

PUT http://{{AAI1_PUB_IP}}:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b

with a body of:
APPC Netconf Mount for vFW
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<node-id>ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b</node-id>
<host xmlns="urn:opendaylight:netconf-node-topology">10.12.17.12</host>
<port xmlns="urn:opendaylight:netconf-node-topology">30831</port>
<username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
<password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
<tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
<!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values-->
<reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
<connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
<max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
<between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
<sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
<!-- keepalive-delay set to 0 turns off keepalives-->
<keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>

The node-id 'ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b' is the generic-vnf-id of the deployed K8S vFW VNF.

The host IP '10.12.17.12' is the host IP of the K8S KUD cluster and the port '30831' is the exposed node port as described above.

Once this netconf mount is created, the K8S vFW packet generator should show up in APPC's list of Mounted Resources, and the packet generator stream-count can be controlled from APPC.
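
For reference, the same PUT can be issued with curl, and the mount status checked through the ODL operational datastore; a sketch, assuming the XML body above is saved as appc-mount.xml (a hypothetical filename) and that <user>:<pass> are the ODL/APPC restconf credentials for this environment:

curl -k -u '<user>:<pass>' -X PUT -H 'Content-Type: application/xml' -d @appc-mount.xml "http://{{AAI1_PUB_IP}}:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b"

# Check that the mount reaches the 'connected' state
curl -k -u '<user>:<pass>' "http://{{AAI1_PUB_IP}}:30230/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b" | grep connection-status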


Stop the automatic vFW packet generator test

Note that after deployment the packet generator automatically starts a script called run_traffic_fw_demo.sh.  This script alternates between running 1 and 10 packet streams (i.e. 100 or 1000 packets per second), which interferes with the closed loop test, so it should be terminated before running closed loop.
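
A minimal sketch for stopping it, run from a shell inside the packet generator container (pkill is an assumption; any equivalent way of stopping the script works):

# Find and stop the automatic traffic script
ps -ef | grep run_traffic_fw_demo
pkill -f run_traffic_fw_demo.sh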


Configure the Firewall to send VES events to DCAE

The vFirewall component sends VES using the 'vpp_measurement_reporter' program found in the directory '/opt/VES/evel/evel-library/code/VESreporting'.

By default this will already be running after deployment, but with the wrong parameters.  To fix this, do the following:

  • Edit the files /opt/config/dcae_collector_ip.txt and /opt/config/dcae_collector_port.txt and place in them the IP address and port of the ONAP DCAE VES collector.

For example:

vFirewall DCAE collector configuration
root@profile1-firewall-6558957c88-2rxdh:/opt/config# cat dcae_collector_ip.txt
10.12.5.63
root@profile1-firewall-6558957c88-2rxdh:/opt/config# cat dcae_collector_port.txt
30235

Here the address '10.12.5.63' is the host IP of one of the ONAP cluster nodes and port '30235' is the exposed port of the DCAE VES collector service.

Terminate the 'vpp_measurement_reporter' process if it is currently running, then restart it with the new configuration by running the 'go-client.sh' script, which is found in the same directory ('/opt/VES/evel/evel-library/code/VESreporting').

go-client.sh
#!/bin/bash

export LD_LIBRARY_PATH="/opt/VES/evel/evel-library/libs/x86_64/"
DCAE_COLLECTOR_IP=$(cat /opt/config/dcae_collector_ip.txt)
DCAE_COLLECTOR_PORT=$(cat /opt/config/dcae_collector_port.txt)
./vpp_measurement_reporter $DCAE_COLLECTOR_IP $DCAE_COLLECTOR_PORT eth1
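
A minimal restart sequence, run from a shell inside the vFirewall container (the use of pkill/nohup is an assumption; any equivalent restart works):

cd /opt/VES/evel/evel-library/code/VESreporting
# Stop the reporter that was started with the old collector IP/port...
pkill -f vpp_measurement_reporter
# ...and restart it in the background with the updated /opt/config values
nohup ./go-client.sh > /dev/null 2>&1 &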

VES Events from the vFirewall

The current K8S vFirewall sends out a VES event that looks like this:

K8S vFW VES Event
{
  "eventList": [
    {
      "commonEventHeader": {
        "domain": "measurementsForVfScaling",
        "eventId": "mvfs00000001",
        "eventName": "vFirewallBroadcastPackets",
        "lastEpochMicrosec": 1564615158937610,
        "priority": "Normal",
        "reportingEntityName": "profile1-firewall-6558957c88-vgkcb",
        "sequence": 0,
        "sourceName": "k8s-testing",
        "startEpochMicrosec": 1564615148937610,
        "version": 3,
        "reportingEntityId": "No UUID available"
      },
      "measurementsForVfScalingFields": {
        "measurementInterval": 10,
        "vNicPerformanceArray": [
          {
            "receivedOctetsDelta": 4343,
            "receivedTotalPacketsDelta": 101,
            "transmittedOctetsDelta": 0,
            "transmittedTotalPacketsDelta": 0,
            "valuesAreSuspect": "true",
            "vNicIdentifier": "eth1"
          }
        ],
        "measurementsForVfScalingVersion": 2
      }
    }
  ]
}

There are a few key values in this event.  One is the 'sourceName' field, which is used as the vserver name in AAI to correlate the event with a vserver.

In the current K8S vFW demo, the sourceName is 'k8s-testing'.  This will need to be made instance-specific in the future.

Add a vserver object to AAI

The Robot tests run 'Heatbridge' to update AAI with details about the deployed VNF.  See AAI Update after Resource Instantiation for more information about Heatbridge and the AAI update.

At this time there is no Heatbridge (or equivalent) AAI code for K8S vFW deployments, so to support the AAI enrichment process that looks up the event source via vserver, the following object was added to AAI manually.

PUT https://{{AAI1_PUB_IP}}:{{AAI1_PUB_PORT}}/aai/v11/bulkadd

with body:

K8S vFW vserver AAI example
{
  "transactions": [
    {
      "put": [
        {
          "body": {
            "vserver-name2": "k8s-testing",
            "vserver-name": "k8s-testing",
            "relationship-list": {
              "relationship": [
                {
                  "relationship-data": [
                    {
                      "relationship-key": "generic-vnf.vnf-id",
                      "relationship-value": "ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b"
                    }
                  ],
                  "related-to": "generic-vnf"
                },
                {
                  "relationship-data": [
                    {
                      "relationship-key": "vf-module.vf-module-id",
                      "relationship-value": "1a54c83e-91ff-4678-a40b-11f007a4c959"
                    },
                    {
                      "relationship-key": "generic-vnf.vnf-id",
                      "relationship-value": "ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b"
                    }
                  ],
                  "related-to": "vf-module"
                }
              ]
            },
            "volumes": [],
            "prov-status": "ACTIVE",
            "vserver-id": "k8s-testing-pg",
            "vserver-selflink": "http://10.12.11.1:8774/v2.1/709ba629fe194f8699b12f9d6ffd86a0/servers/4375b0d6-cc41-40e5-87db-16ac80fe22c1"
          },
          "uri": "/cloud-infrastructure/cloud-regions/cloud-region/k8scloudowner1/k8sregionone/tenants/tenant/709ba629fe194f8699b12f9d6ffd86a0/vservers/vserver/k8s-testing-pg"
        }
      ]
    }
  ]
}
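
The added object can be confirmed with a GET against the same vserver URI; a sketch, assuming the default AAI basic-auth credentials AAI:AAI for this lab (the X-FromAppId/X-TransactionId header values are arbitrary but required by AAI):

curl -k -u AAI:AAI -H 'Accept: application/json' -H 'X-FromAppId: test' -H 'X-TransactionId: test-1' "https://{{AAI1_PUB_IP}}:{{AAI1_PUB_PORT}}/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/k8scloudowner1/k8sregionone/tenants/tenant/709ba629fe194f8699b12f9d6ffd86a0/vservers/vserver/k8s-testing-pg"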


Configure the vFW Closed Loop Policy

Once the above is done, all that is needed is to update the vFWCL policy to match this service.

Note: this sequence was created by closely reviewing how the Robot vFWCL instantiation modifies the closed loop policy configuration and then copying that sequence with appropriate modifications for the K8S vFW deployment.

All of the following curl commands are done from inside the ONAP robot pod (probably doesn't have to be robot specifically, but that one worked).

Create vFirewall Monitoring Policy

First check the health of the Policy API:

curl -vvv -k --silent --user 'healthcheck:zb!XztG34' -X GET https://policy-api.onap:6969/policy/api/v1/healthcheck


Then create the monitoring policy:

curl -v -k -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -H 'Authorization: Basic aGVhbHRoY2hlY2s6emIhWHp0RzM0' -d @./newpolicytype.json https://policy-api.onap:6969/policy/api/v1/policytypes/onap.policies.monitoring.cdap.tca.hi.lo.app/versions/1.0.0/policies

where the body is:

newpolicytype.json
{
  "topology_template": {
    "policies": [
      {
        "onap.vfirewall.tca": {
          "metadata": {
            "policy-id": "onap.vfirewall.tca"
          },
          "properties": {
            "tca_policy": {
              "domain": "measurementsForVfScaling",
              "metricsPerEventName": [
                {
                  "controlLoopSchemaType": "VM",
                  "eventName": "vFirewallBroadcastPackets",
                  "policyName": "DCAE.Config_tca-hi-lo",
                  "policyScope": "DCAE",
                  "policyVersion": "v0.0.1",
                  "thresholds": [
                    {
                      "closedLoopControlName": "ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08",
                      "closedLoopEventStatus": "ONSET",
                      "direction": "LESS_OR_EQUAL",
                      "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
                      "severity": "MAJOR",
                      "thresholdValue": 300,
                      "version": "1.0.2"
                    },
                    {
                      "closedLoopControlName": "ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08",
                      "closedLoopEventStatus": "ONSET",
                      "direction": "GREATER_OR_EQUAL",
                      "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
                      "severity": "CRITICAL",
                      "thresholdValue": 700,
                      "version": "1.0.2"
                    }
                  ]
                }
              ]
            }
          },
          "type": "onap.policies.monitoring.cdap.tca.hi.lo.app",
          "version": "1.0.0"
        }
      }
    ]
  },
  "tosca_definitions_version": "tosca_simple_yaml_1_0_0"
}

The modification in the above body is that the two occurrences of 'closedLoopControlName' are set to "ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08", where fb6f9541-32e6-44df-a312-9a67320c0b08 is the Model Invariant ID of the VNF model for the K8S vFW.  This can be found, for example, on the "Model ID" line for the VNF in VID.
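
The same value can also be read from the generic-vnf object in AAI; a hedged sketch, using the same assumed AAI credentials and headers as the vserver example above:

curl -k -u AAI:AAI -H 'Accept: application/json' -H 'X-FromAppId: test' -H 'X-TransactionId: test-2' "https://{{AAI1_PUB_IP}}:{{AAI1_PUB_PORT}}/aai/v11/network/generic-vnfs/generic-vnf/ef4aa32c-0eb9-46c0-b6b0-c6a35184f07b" | grep model-invariant-id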


The response to the POST was:

{
  "tosca_definitions_version": "tosca_simple_yaml_1_0_0",
  "topology_template": {
    "policies": [
      {
        "onap.vfirewall.tca": {
          "type": "onap.policies.monitoring.cdap.tca.hi.lo.app",
          "type_version": "1.0.0",
          "properties": {
            "tca_policy": {
              "domain": "measurementsForVfScaling",
              "metricsPerEventName": [
                {
                  "controlLoopSchemaType": "VM",
                  "eventName": "vFirewallBroadcastPackets",
                  "policyName": "DCAE.Config_tca-hi-lo",
                  "policyScope": "DCAE",
                  "policyVersion": "v0.0.1",
                  "thresholds": [
                    {
                      "closedLoopControlName": "ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08",
                      "closedLoopEventStatus": "ONSET",
                      "direction": "LESS_OR_EQUAL",
                      "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
                      "severity": "MAJOR",
                      "thresholdValue": 300,
                      "version": "1.0.2"
                    },
                    {
                      "closedLoopControlName": "ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08",
                      "closedLoopEventStatus": "ONSET",
                      "direction": "GREATER_OR_EQUAL",
                      "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
                      "severity": "CRITICAL",
                      "thresholdValue": 700,
                      "version": "1.0.2"
                    }
                  ]
                }
              ]
            }
          },
          "name": "onap.vfirewall.tca",
          "version": "1.0.0",
          "metadata": {
            "policy-id": "onap.vfirewall.tca",
            "policy-version": "1"
          }
        }
      }
    ]
  },
  "name": "ToscaServiceTemplateSimple",
  "version": "1.0.0"
}


Create vFWCL Operational Policy

Now issue this command:

curl -v -k -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -H 'Authorization: Basic aGVhbHRoY2hlY2s6emIhWHp0RzM0' -d @./newoppolicy.json https://policy-api.onap:6969/policy/api/v1/policytypes/onap.policies.controlloop.Operational/versions/1.0.0/policies

where the body is:

newoppolicy.json
{
  "content": "controlLoop%3A%0A++++version%3A+2.0.0%0A++++controlLoopName%3A+ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08%0A++++trigger_policy%3A+unique-policy-id-1-modifyConfig%0A++++timeout%3A+1200%0A++++abatement%3A+false%0Apolicies%3A%0A++++-+id%3A+unique-policy-id-1-modifyConfig%0A++++++name%3A+modify_packet_gen_config%0A++++++description%3A%0A++++++actor%3A+APPC%0A++++++recipe%3A+ModifyConfig%0A++++++target%3A%0A++++++++++resourceID%3A+fb6f9541-32e6-44df-a312-9a67320c0b08%0A++++++++++type%3A+VNF%0A++++++payload%3A%0A++++++++++streams%3A+%27%7B%22active-streams%22%3A5%7D%27%0A++++++retry%3A+0%0A++++++timeout%3A+300%0A++++++success%3A+final_success%0A++++++failure%3A+final_failure%0A++++++failure_timeout%3A+final_failure_timeout%0A++++++failure_retries%3A+final_failure_retries%0A++++++failure_exception%3A+final_failure_exception%0A++++++failure_guard%3A+final_failure_guard%0A",
  "policy-id": "operational.modifyconfig"
}
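
The 'content' field is URL-encoded YAML.  To review it, it can be decoded locally; a sketch, assuming jq and python3 are available wherever newoppolicy.json resides:

# Decode the operational policy content for inspection
jq -r '.content' newoppolicy.json | python3 -c 'import sys,urllib.parse; print(urllib.parse.unquote_plus(sys.stdin.read()))'

The decoded YAML defines the control loop (controlLoopName ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08) and a single ModifyConfig policy with actor APPC and payload streams '{"active-streams":5}', which is the stream count the closed loop pushes back to the packet generator.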

Notice again that the VNF Model Invariant ID fb6f9541-32e6-44df-a312-9a67320c0b08 appears twice in the body: as part of the 'controlLoopName' and as the 'resourceID' value.

The response from the command is:

{
  "policy-id": "operational.modifyconfig",
  "policy-version": "4",
  "content": "controlLoop%3A%0A++++version%3A+2.0.0%0A++++controlLoopName%3A+ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08%0A++++trigger_policy%3A+unique-policy-id-1-modifyConfig%0A++++timeout%3A+1200%0A++++abatement%3A+false%0Apolicies%3A%0A++++-+id%3A+unique-policy-id-1-modifyConfig%0A++++++name%3A+modify_packet_gen_config%0A++++++description%3A%0A++++++actor%3A+APPC%0A++++++recipe%3A+ModifyConfig%0A++++++target%3A%0A++++++++++resourceID%3A+fb6f9541-32e6-44df-a312-9a67320c0b08%0A++++++++++type%3A+VNF%0A++++++payload%3A%0A++++++++++streams%3A+%27%7B%22active-streams%22%3A5%7D%27%0A++++++retry%3A+0%0A++++++timeout%3A+300%0A++++++success%3A+final_success%0A++++++failure%3A+final_failure%0A++++++failure_timeout%3A+final_failure_timeout%0A++++++failure_retries%3A+final_failure_retries%0A++++++failure_exception%3A+final_failure_exception%0A++++++failure_guard%3A+final_failure_guard%0A"
}

Push vFirewall Policies to PDP Group

Now issue the following command:

curl -v -k -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -H 'Authorization: Basic aGVhbHRoY2hlY2s6emIhWHp0RzM0' -d @./newpdppush.json https://policy-pap.onap:6969/policy/pap/v1/pdps/policies

Where the body is:

{
  "policies": [
    {
      "policy-id": "onap.vfirewall.tca",
      "policy-version": 1
    },
    {
      "policy-id": "operational.modifyconfig",
      "policy-version": "4.0.0"
    }
  ]
}

Note that the leading '4' in the 'policy-version' of "4.0.0" was taken from the "policy-version": "4" returned in the previous step, when the operational policy was posted.

The response to this command is just:  "{}"

Validate the vFWCL Policy

This is a query that the Robot test does after the above steps.

curl -v -k -X GET --header 'Content-Type: application/json' --header 'Accept: application/json' -H 'Authorization: Basic aGVhbHRoY2hlY2s6emIhWHp0RzM0' https://policy-pap.onap:6969/policy/pap/v1/pdps

Where the response in this case was:

{
  "groups": [
    {
      "name": "defaultGroup",
      "description": "The default group that registers all supported policy types and pdps.",
      "pdpGroupState": "ACTIVE",
      "properties": {},
      "pdpSubgroups": [
        {
          "pdpType": "apex",
          "supportedPolicyTypes": [
            {
              "name": "onap.policies.controlloop.operational.Apex",
              "version": "1.0.0"
            }
          ],
          "policies": [],
          "currentInstanceCount": 1,
          "desiredInstanceCount": 1,
          "properties": {},
          "pdpInstances": [
            {
              "instanceId": "apex_49",
              "pdpState": "ACTIVE",
              "healthy": "HEALTHY",
              "message": "Pdp Heartbeat"
            }
          ]
        },
        {
          "pdpType": "drools",
          "supportedPolicyTypes": [
            {
              "name": "onap.policies.controlloop.Operational",
              "version": "1.0.0"
            }
          ],
          "policies": [
            {
              "name": "operational.modifyconfig",
              "version": "1.0.0"
            },
            {
              "name": "operational.modifyconfig",
              "version": "2.0.0"
            },
            {
              "name": "operational.modifyconfig",
              "version": "3.0.0"
            },
            {
              "name": "operational.modifyconfig",
              "version": "4.0.0"
            }
          ],
          "currentInstanceCount": 1,
          "desiredInstanceCount": 1,
          "properties": {},
          "pdpInstances": [
            {
              "instanceId": "dev-policy-drools-0",
              "pdpState": "ACTIVE",
              "healthy": "HEALTHY"
            }
          ]
        },
        {
          "pdpType": "xacml",
          "supportedPolicyTypes": [
            {
              "name": "onap.policies.controlloop.guard.FrequencyLimiter",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.controlloop.guard.MinMax",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.controlloop.guard.Blacklist",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.controlloop.guard.coordination.FirstBlocksSecond",
              "version": "1.0.0"
            },
            {
              "name": "onap.Monitoring",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.monitoring.cdap.tca.hi.lo.app",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.monitoring.dcaegen2.collectors.datafile.datafile-app-server",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.monitoring.docker.sonhandler.app",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.AffinityPolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.DistancePolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.HpaPolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.OptimizationPolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.PciPolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.QueryPolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.SubscriberPolicy",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.Vim_fit",
              "version": "1.0.0"
            },
            {
              "name": "onap.policies.optimization.VnfPolicy",
              "version": "1.0.0"
            }
          ],
          "policies": [
            {
              "name": "onap.vfirewall.tca",
              "version": "1.0.0"
            }
          ],
          "currentInstanceCount": 1,
          "desiredInstanceCount": 1,
          "properties": {},
          "pdpInstances": [
            {
              "instanceId": "dev-policy-policy-xacml-pdp-65bbc9697f-q2xq2",
              "pdpState": "ACTIVE",
              "healthy": "HEALTHY"
            }
          ]
        }
      ]
    }
  ]
}

Update DCAE Consul

A setting in Consul needs to be updated as well.  Read how to do it here:  https://onap.readthedocs.io/en/latest/submodules/integration.git/docs/docs_vfw.html#preconditions

For this example, the two occurrences of 'closedLoopControlName' were changed to ControlLoop-vFirewall-fb6f9541-32e6-44df-a312-9a67320c0b08 (the same value as in the commands above).
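
A hedged sketch for locating the TCA application config in Consul's KV store (the Consul API/UI NodePort of 30270 and the exact key name are deployment specific - verify both in your environment):

curl -s "http://{{AAI1_PUB_IP}}:30270/v1/kv/?keys" | tr ',' '\n' | grep -i tca

The matching key can then be edited through the Consul UI, changing both closedLoopControlName occurrences to the value above.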

What Happened

After making the above changes to the vFW components and the policy configuration, it was time to try testing the closed loop operation.

Using APPC to change the packet generator stream count to either 1 or 10, it was observed that events were being received by the VES collector and forwarded on.

AAI enrichment by vserver query was successful.  However, the policy did not work.

Looking at the drools network.log file, there were many events being handled, even for vservers that no longer existed - i.e. standard vFWCL instances that had previously been generated by Robot on OpenStack.

The K8S vFW control loop appeared to already be in a 'locked' state or similar (the exact cause was not determined).

The Fix - workaround

The behavior was confusing, so the fix turned out to be:

  1. Go through a helm delete / redeploy sequence for the policy component, e.g. from the Rancher node in ~/oom/kubernetes:

    helm delete dev-policy --purge
    ~/integration/deployment/heat/onap-rke/scripts/cleanup.sh policy
    rm -rf /dockerdata-nfs/dev-policy
    helm deploy dev local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap

  2. After the policy pods are all back running, execute the three policy configuration steps above again.  The only difference this time was that in step 3 the 'policy-version' was '1.0.0', since '1' was returned for 'policy-version' in step 2.

After this was done, the VES events were started up again on the K8S vFW (they had been stopped).  Then, using APPC, the packet generator was set to either 1 or 10 streams (it had been manually set to 5 before testing with Policy began).

Now, the policy worked and APPC was used automatically to set the streams back to 5.

The following picture shows the stream count being changed several times over an hour, with Policy setting it back to 5 each time.


