Preparation

Install ONAP

Make sure that all components pass health check when you do the following:

ssh to the Robot VM and run '/opt/ete.sh health'

You will need to update your /etc/hosts file so that you can access the ONAP Portal in your browser. You may also want to add the IP addresses of the so, sdnc, aai, and other VMs so that you can easily ssh to them.
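A sample set of entries can be sketched as follows. The hostnames follow the simpledemo convention used by the ONAP demo; the IP addresses are placeholders that you must replace with the addresses from your own deployment.

```shell
# Sample /etc/hosts entries for an ONAP demo deployment.
# NOTE: all IP addresses below are placeholders, not real values.
# Written to a scratch file first so the entries can be reviewed.
cat > /tmp/onap-hosts-sample <<'EOF'
10.0.0.10  portal.api.simpledemo.onap.org
10.0.0.10  vid.api.simpledemo.onap.org
10.0.0.10  sdc.api.fe.simpledemo.onap.org
10.0.0.10  policy.api.simpledemo.onap.org
10.0.0.11  so
10.0.0.12  sdnc
10.0.0.13  aai
EOF
cat /tmp/onap-hosts-sample
```

After reviewing, append the entries with 'cat /tmp/onap-hosts-sample | sudo tee -a /etc/hosts'.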

Create images for vBRG, vBNG, vGMUX, and vG

To avoid unexpected mistakes, give each image a meaningful name, and be careful when mixing upper-case and lower-case characters. The Casablanca image names contain 'casa', e.g., "vbng-casa-base-ubuntu-16-04".

VNF Onboarding

Create license model in SDC

Log in to SDC portal as designer. Create a license that will be used by the subsequent steps. The detailed steps are here: Creating a Licensing Model

Prepare HEAT templates

vCPE uses five VNFs: Infra, vBRG, vBNG, vGMUX, and vG, which are described by five HEAT templates. For each HEAT template, you will need to fill in the env file with appropriate parameters. The HEAT templates can be obtained from gerrit: [demo.git] / heat / vCPE /

Note that for each VNF, the env file name and the yaml file name are associated with each other in the file MANIFEST.json. If for any reason you change the env and yaml file names, remember to change MANIFEST.json accordingly.
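As a sketch, the association looks like the following. The structure mirrors the packages in demo.git/heat/vCPE, but the file names and description below are illustrative, so check them against your actual package.

```shell
# Sketch of a MANIFEST.json tying a HEAT yaml file to its env file.
# File names below are illustrative (infra VNF); adjust to your package.
cat > /tmp/MANIFEST.json <<'EOF'
{
  "name": "vCPE_infra",
  "description": "heat template for vCPE infra",
  "data": [
    {
      "file": "base_vcpe_infra.yaml",
      "type": "HEAT",
      "data": [
        { "file": "base_vcpe_infra.env", "type": "HEAT_ENV" }
      ]
    }
  ]
}
EOF
# If you rename the env or yaml file, both "file" values above must be
# updated to match before re-zipping the onboarding package.
grep -q '"HEAT_ENV"' /tmp/MANIFEST.json && echo manifest-ok
```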

VNF onboarding in SDC

Onboard the VNFs in SDC one by one. The process is the same for all VNFs. The suggested names for the VNFs are given below (all lower case). The suffix can be a date plus a sequence letter, e.g., 1222a.

vcpevsp_infra_[suffix]

vcpevsp_vbrg_[suffix]

vcpevsp_vbng_[suffix]

vcpevsp_vgmux_[suffix]

vcpevsp_vgw_[suffix]

Below is an example for onboarding infra.

Sign into SDC as cs0008, choose ONBOARD and then click 'CREATE NEW VSP'.

Now enter the name of the VSP. For naming, I'd suggest all lower case with the format vcpevsp_[vnf]_[suffix].

After clicking 'Create', click 'missing' and then select to use the license model created previously.

Click 'Overview' on the left side panel, then drag and drop infra.zip onto the webpage to upload the HEAT.

Now click 'Proceed To Validation' to validate the HEAT template.

You may see a lot of warnings. In most cases, you can ignore those warnings.

Click 'Check in', and then 'Submit'

Go to SDC home, and then click 'Import VSP'.

In the search box, type in the suffix of the VSP you onboarded a moment ago to easily locate it. Then click 'Import VSP'.

Click 'Create' without changing anything.

Now a VF based on the HEAT is created successfully. Click 'Submit for Testing'.

Sign out and sign back in as tester: jm0007, select the VF you created a moment ago, test and accept it.

Note: in Casablanca you can simply Certify the VSP and continue on with Service Design and Creation

Service Design and Creation

The entire vCPE use case is divided into five services, as shown below. Each service is described below with suggested names.

vcpesvc_infra_[suffix]: includes two generic neutron networks named cpe_signal and cpe_public (all names are lower case) and a VNF infra.

vcpesvc_vbng_[suffix]: includes two generic neutron networks named brg_bng and bng_mux and a VNF vBNG.

vcpesvc_vgmux_[suffix]: includes a generic neutron network named mux_gw and a VNF vGMUX

vcpesvc_vbrg_[suffix]: includes a VNF vBRG.

vcpesvc_rescust_[suffix]: includes a VNF vGW and two allotted resources that will be explained shortly.

Service design and distribution for infra, vBNG, vGMUX, and vBRG

The process for creating these four services is the same; however, make sure to use the VNFs and networks as described above. Below are the steps to create vcpesvc_infra_1222a. Follow the same process to create the other three services, changing networks and VNFs accordingly. Log back in as designer, username cs0008.

In SDC, click 'Add Service' to create a new service

Enter name, category, description, product code, and click 'Create'.

Click 'Composition' from left side panel. Drag and drop VF vcpevsp_infra_1222a to the design.

Drag and drop a generic neutron network to the design, click to select the icon in the design, then click the pen in the upper right corner (next to the trash bin icon), a window will pop up as shown below. Now change the instance name to 'cpe_signal'.

Click and select the network icon in the design again. From the right side panel, click the icon and then select 'network_role'. In the pop up window, enter 'cpe_signal' as shown below.

Add another generic neutron network the same way. This time change the instance name and network role to 'cpe_public'. Now the service design is complete. Click 'Submit for Testing'.

Sign out and sign back in as tester 'jm0007'. Test and approve this service.

Sign out and sign back in as governor 'gv0001'. Approve this service.

Sign out and sign back in as operator 'op0001'. Distribute this service. Click monitor to see the results. After some time (could take 30 seconds or more), you should see the service being distributed to AAI, SO, SDNC.

Service design and distribution for customer service

First of all, make sure that all the previous four services have been created and distributed successfully.

The customer service includes a VNF vGW and two allotted resources: tunnelxconn and brg. We will need to create the two allotted resources first and then use them together with vG (which was already onboarded and imported as a VF previously) to compose the service.

Check Sub Category Tag in SDC

You may need to add an Allotted Resource Category Tag to SDC for the BRG.

Log in as the "demo" account and go to SDC.

Select "Category Management"

Select "Allotted Resource"

You should have "Tunnel XConn" and "BRG".

If you do not, and are missing the "BRG" sub category, click 'New' and add the "BRG" subcategory.

Create allotted resource tunnelxconn

This allotted resource depends on the previously created service vcpesvc_vgmux_1222a. The dependency is described by filling in the allotted resource with the UUID, invariant UUID, and service name of vcpesvc_vgmux_1222a. In preparation, first download the csar file of vcpesvc_vgmux_1222a from SDC.

Sign into SDC as designer cs0008, click to create a new VF, select 'Tunnel XConnect' as the category, and enter other information as needed. Here, vcpear_tunnelxconn_1222a is used as the name of this allotted resource.

Click 'Create', then click 'Composition' and drag an 'AllottedResource' from the left side panel to the design.

Click the VF name link between the HOME link and 'Composition' in the top menu. From there, click 'Properties Assignment' in the left-hand menu. Now open the csar file for vcpesvc_vgmux_1222a and, under 'Definitions', open the file 'service-VcpesvcVgmux1222a-template.yml'. (Note that the actual file name depends on what you named the service in the first place.) Put the yml file and the SDC window side by side, then copy and paste the invariantUUID, UUID, and node name into the corresponding fields in SDC. Save and then submit for testing.
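The fields to copy can be sketched as follows. The template excerpt below is a hypothetical stand-in for 'service-VcpesvcVgmux1222a-template.yml'; the UUID values are placeholders, not from a real distribution.

```shell
# Hypothetical excerpt of a service template from the csar's Definitions/
# directory. The three metadata fields printed are the ones to copy into
# the allotted resource's Properties Assignment in SDC.
cat > /tmp/service-template.yml <<'EOF'
tosca_definitions_version: tosca_simple_yaml_1_0
metadata:
  invariantUUID: 11111111-2222-3333-4444-555555555555
  UUID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
  name: vcpesvc_vgmux_1222a
EOF
grep -E 'invariantUUID|UUID|name' /tmp/service-template.yml
```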

Create allotted resource brg

This allotted resource depends on the previously created service vcpesvc_vbrg_1222a. The dependency is described by filling in the allotted resource with the UUID, invariant UUID, and service name of vcpesvc_vbrg_1222a. In preparation, first download the csar file of vcpesvc_vbrg_1222a from SDC.

We name this allotted resource vcpear_brg_1222a. The process to create it is the same as that for vcpear_tunnelxconn_1222a above, except use category 'BRG'. The only other differences are the UUID, invariant UUID, and service name parameters being used, so the steps and screenshots are not repeated here.

Sign out and sign back in as tester 'jm0007'. Test and approve both Allotted Resources.

Create customer service

Log back in as Designer username: cs0008

We name the service vcpesvc_rescust_1222a and follow the steps below to create it.

Sign into SDC as designer, add a new service and fill in parameters as below. Then click 'Create'.

Click 'Composition' from the left side panel. Drag and drop the following three components to the design.

vcpevsp_vgw_1222a

vcpear_tunnelxconn_1222a

vcpear_brg_1222a

Point your mouse to the arrow next to 'Composition' and then click 'Properties Assignment' (see below).

First select tunnelxconn from the right side panel, then fill nf_role and nf_type with value 'TunnelXConn'.

Next select brg from the right side panel, then fill nf_role and nf_type with value 'BRG'.

Click 'Submit for Testing'.

Now sign out and sign back in as tester 'jm0007' to complete test of vcpesvc_rescust_1222a.

Sign out and sign back in as governor 'gv0001'. Approve this service.

Distribute the customer service to AAI, SO, and SDNC

Before distributing the customer service, make sure that the other four services for infra, vBNG, vGMUX, and vBRG all have been successfully distributed.

Now distribute the customer service, sign out and sign back in as operator 'op0001'. Distribute this service and check the status to ensure the distribution succeeds. It may take tens of seconds to complete. The results should look like below.

Deploy Infrastructure

Download and modify automation code

A python program has been developed to automate the deployment. You can download the ONAP integration repo with 'git clone https://gerrit.onap.org/r/integration'; the script is under integration/test/vcpe.

Now go to the vcpe directory and modify vcpecommon.py. You will need to enter your cloud and network information into the following two dictionaries.

Preparation

Create the subdirectories csar/ and __var/, then download the service csar files from SDC and put them under the csar directory.

Install python-pip and the other required python modules (see the comment section of vcpe.py).
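The preparation steps above can be sketched as commands. The pip module list is an assumption based on the comment section of vcpe.py, so check it against your copy of the script.

```shell
# Create the working subdirectories next to vcpe.py.
mkdir -p csar __var
ls -d csar __var

# On the host that will run vcpe.py (run these manually; the module
# list is an assumption, verify it against the comments in vcpe.py):
#   apt-get install -y python-pip
#   pip install pyyaml mysql-connector-python progressbar2 \
#       python-novaclient python-openstackclient netaddr
```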

Run automation program to deploy services

Sign into SDC as designer and download five csar files for infra, vbng, vgmux, vbrg, and rescust. Copy all the csar files to directory csar.

Now you can simply run 'vcpe.py' to see the instructions.

To get ready for service deployment, first run 'vcpe.py init'. This modifies the SO and SDNC databases to add service-related information.

Once that is done, run 'vcpe.py infra'. This will deploy the following services. It may take 7-10 minutes to complete depending on the cloud infrastructure.

Infra

vBNG

vGMUX

vBRG

If the deployment succeeds, you will see a summary of the deployment from the program.

Validate deployed VNFs

By now you will be able to see 7 VMs in Horizon. However, this does not mean all the VNFs are functioning properly. In many cases we found that a VNF may need to be restarted multiple times to make it function properly. We perform validation as follows:

Run healthcheck.py. It checks for three things:

vGMUX honeycomb server is running

vBRG honeycomb server is running

vBRG has obtained an IP address and its MAC/IP data has been captured by SDNC

If this healthcheck passes, then skip the following and start to deploy customer service. Otherwise do the following and redo healthcheck.

If vGMUX check does not pass, restart vGMUX, make sure it can be connected using ssh.

If vBRG check does not pass, restart vBRG, make sure it can be connected using ssh.

(Please note that the four VPP-based VNFs (vBRG, vBNG, vGMUX, and vGW) were developed by the ONAP community on a tight schedule. We are aware that the vBRG may not be stable and sometimes needs to be restarted multiple times to get it to work. The team is investigating the problem and hopes to improve it in the near future. Your patience is appreciated.)

Deploy Customer Service and Test Data Plane

After passing healthcheck, we can deploy customer service by running 'vcpe.py customer'. This will take around 3 minutes depending on the cloud infrastructure. Once finished, the program will print the next few steps to test data plane connection from the vBRG to the web server. If you check Horizon you should be able to see a stack for vgw created a moment ago.

Tips for troubleshooting:

There could be situations where the vGW is not fully functioning and cannot be connected to using ssh. Try restarting the VM to solve this problem.

isc-dhcp-server is supposed to be installed on the vGW after it is instantiated, but it can happen that the server is not properly installed. If so, simply ssh to the vGW VM and install it manually with 'apt install isc-dhcp-server'.
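A quick check for this condition can be sketched as below; it queries dpkg for the package and prints the fix if the package is absent. Run it inside the vGW VM.

```shell
# Check whether isc-dhcp-server made it onto the vGW; print the fix if not.
if dpkg -s isc-dhcp-server >/dev/null 2>&1; then
  echo "isc-dhcp-server present"
else
  echo "isc-dhcp-server missing; run: apt install isc-dhcp-server"
fi
```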

10. Run `vcpe.py infra`.

11. Make sure the SNIRO configuration is run as part of the above step.

12. Install the curl command in the sdnc-sdnc container.

13. Run healthcheck-k8s.sh to check connectivity from SDNC to the BRG and GMUX. If healthcheck-k8s.sh fails, check /opt/config/sdnc_ip.txt to verify that it has the correct SDNC host IP. If you need to change the SDNC host IP, you need to clean up and rerun `vcpe.py infra`. Also verify that the tap interfaces tap-0 and tap-1 are up by running vppctl with the 'show int' command. If the tap interfaces are not up, delete them with 'vppctl tap delete tap-0' and 'vppctl tap delete tap-1', then run `/opt/bind_nic.sh` followed by `/opt/set_nat.sh`.

14. Run `vcpe.py customer`.

15. Verify that the tunnelxconn and brg vxlan tunnels are set up correctly.

16. Set up the vGW and BRG DHCP and routes, and ping from the BRG to the vGW. Note that the vGW public IP shown in OpenStack Horizon may be wrong; use the vGW OAM IP to log in.

21. Push the closed loop policy on PAP.

22. Run `vcpe.py loop` and verify that the vGMUX is restarted.

23. To repeat the create-infra step, first delete the infra vf-module stacks and the network stacks from the OpenStack Horizon Orchestration -> Stacks page, then clean up the record in the SDNC DHCP_MAC table before rerunning `vcpe.py infra`.

24. To repeat the create-customer step, delete the customer stack, then clean up the tunnels by running `cleanGMUX.py gmux_public_ip` and `cleanGMUX.py brg_public_ip`. After that, you can rerun the create-customer command.

Typical Errors and Solutions

SDNC DG error

If you run vcpe.py customer and see an error similar to the following:

a. Optionally, you can change the version to something like 1.3.3-SNAPSHOT-FIX and update graph.versions to match, but that is not needed if the XML failed to load.

3. run /opt/sdnc/svclogic/bin/install.sh

This will install the edited DG and make it active, as long as the version in the XML and the version in graph.versions match.

4. re-run /opt/sdnc/svclogic/bin/showActiveGraphs.sh and you should see the active DG

DHCP server doesn't work

ssh to the dhcp server

systemctl status kea-dhcp4-server.service

If the service is not installed, do 'apt install kea-dhcp4-server'

If the service is installed, most likely /usr/local/lib/kea-sdnc-notify.so is missing. Download this file from the following link and put it in /usr/local/lib. Link: kea-sdnc-notify.so

systemctl restart kea-dhcp4-server.service

vBRG not responding to configuration from SDNC

Symptom: running healthcheck.py fails to connect to the vBRG. (Note you need to edit healthcheck.py to use the correct IP address for the vBRG; the default is 10.3.0.2.)

This is caused by vpp not working properly inside the vBRG. There is no deterministic fix for this problem until we have a stable vBRG image. For now, you may try either restarting the vBRG VM, or ssh'ing to the vBRG and running 'systemctl restart vpp', and then retry healthcheck.py. Note that 'systemctl restart vpp' may work better than rebooting the VM, but there is no guarantee.

Inside the vBRG you can also check the status with 'vppctl show int'. If vpp is working properly, you should see both tap-0 and tap-1 in the 'up' state.
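As a sketch, the eyeball check on 'vppctl show int' can be scripted. The sample output below is illustrative, not captured from a real VM.

```shell
# Return success only if both tap-0 and tap-1 appear in 'up' state in the
# supplied 'vppctl show int' output.
check_taps() {
  echo "$1" | grep -Eq '^tap-0[[:space:]].*up' && \
  echo "$1" | grep -Eq '^tap-1[[:space:]].*up'
}

# Illustrative (hypothetical) output of 'vppctl show int' on a healthy vBRG:
sample='Name      Idx  State  Counter
tap-0     1    up
tap-1     2    up'

if check_taps "$sample"; then echo taps-up; else echo taps-down; fi
```

On a real vBRG you would feed it the live output, e.g. check_taps "$(vppctl show int)".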

Unable to change subnet name

When running the "vcpe.py infra" command, if you see an error message saying that a subnet can't be found, it may be because your python-openstackclient is not the latest version and doesn't support the "openstack subnet set --name" option. Upgrade the module with "pip install --upgrade python-openstackclient".
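A quick pre-flight check for this can be sketched as below; it asks the installed openstack CLI whether 'subnet set' accepts '--name', and falls through to the upgrade hint if the command is absent or too old.

```shell
# Check whether the installed openstack CLI supports 'subnet set --name'
# before running 'vcpe.py infra'.
if openstack subnet set --help 2>/dev/null | grep -q -- '--name'; then
  echo "python-openstackclient: ok"
else
  echo "upgrade: pip install --upgrade python-openstackclient"
fi
```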

When updating the SO docker container on OOM, don't stop the container. It shouldn't be needed (the SO error log indicates the mso.bpmn file was changed, so it probably doesn't need a restart), and if you do, your changes may be lost. You should update the file on the docker-nfs.

Normally, service design should happen only once, irrespective of the number of customers.

Are the steps under "Service design and distribution for customer service" expected to be done only once, even if there are multiple customers? Or are those steps expected to happen as many times as the number of customers? I guess the former is true. Can you please confirm?

If the former is true, I guess "vcpe.py customer" needs to be run for each customer. Is there any customer-specific tuning? If so, which file in vcpe.zip needs to be changed? For example, I might want to give a descriptive name every time a service is instantiated for a given customer, or I may want to provide a specific network/subnet configuration for each customer network.

Srinivasa Addepalli RE: vcpe.py script - It is a testing script (correct me if I'm wrong, Kang Xi), and it currently appends date + time to each instantiated service to differentiate them. If we wanted to expand it into an operator/user-facing script, there would be additional work (for enhanced naming, etc.). However, I believe the expectation is that operators/users of ONAP use Portal/VID and other ONAP tools, or custom tools, to instantiate.

@Marcus Williams What was the intention of developing vcpe.py if Portal/VID can be used to instantiate the service? It would be good if vCPE testing were done using Portal/VID, just like SDC is used to onboard/create the services. Or are there some missing features in Portal/VID?

vcpe.py was built for test automation. Generally, VID is for infrastructure, and the expectation is that customer-driven orders for things like the vGateway would come from a BSS/OSS front end outside of ONAP, rather than a technician going to the VID GUI to instantiate a vGW. The BSS/OSS ordering systems would handle the service provider's customer experience, ordering, fulfillment of the BRG, and billing setup, and one of the steps would be the call to an ONAP-based system to instantiate the vGW.

Thanks Brian Freeman. I understand that the OSS/BSS systems are the consumers of the API and that VID will not be used for service instantiation in real deployments. Based on Marcus' reply, I understand we have some way to instantiate the services via VID (this could have been done to simplify testing, demoing, etc.). If so, it is good to utilize the GUI in integration testing. In my view there are two advantages: one is that VID is also tested as part of this end-to-end use case testing, and the second is intuitiveness.

In the vCPE use case, I thought there would be only two services: an infrastructure service and a customer service. But there are 4 infrastructure services: infra (to instantiate AAA, DHCP, DNS, the test web server, etc.), BNG, vGMUX, and BRG. What was the reasoning for dividing the infrastructure service into 4? By putting them in 4 different services, somebody else (like an OSS/BSS system or a user) needs to worry about making these 4 services run atomically. If they had been put in one service, that complexity could have been avoided. Any reasons why it was divided into multiple services?

They are separate deployable units. If you think about edge data centers, there will be multiple locations with vGMUX and vBNG. In fact, the vBNG could be down in the metro area, connecting to 1 or more vGMUX sites. The centralized components like AAA, DHCP, and DNS could/would be centralized, not regionalized, and the BRG is really per customer; we put it in as a virtual device for testing only. So at a national service level, these items would be deployed in different groups with different life cycles.

Just brainstorming/discussing here. Now that we have OOF and in future TOSCA based orchestration, I think we can have one service using TOSCA with various VNFDs - One for centralized components such as AAA, DHCP, DNS etc.., 2nd one for VGMUX and 3rd for BNG and others for testing etc.. OOF, with its policy constraints, can select the site (on per VNF basis) during instantiation time. Does that sound okay? Or do you still see the need for having various services in Casablanca?

I followed your suggestion and checked the cloud_region (in my case regionOne) and tenant (onap) in both AAI and vcpecommon.py, and they are the same. Any other hints on how to fix this "Error writing to l3-network" issue?

When I run vcpe.py infra, I get an error like this on Create Service Instance:

"statusMessage": "<requestError><serviceException><messageId>SVC3001</messageId><text>Resource not found for %1 using id %2 (msg=%3)(ec=%4)</text><variables><variable>PUT</variable><variable>business/customers/customer/SDN-ETHERNET-INTERNET/service-subscriptions/service-subscription/vCPE/service-instances/service-instance/f48c6247-c912-4333-8877-0a4777f9e8cd</variable><variable>Node Not Found:object located at service-design-and-creation/models/model/61c83e86-82e9-4744-b931-eba579b24186/model-vers/model-ver/19d3edca-6636-482a-83c8-351647e6c030#model-version not found</variable><variable>ERR.5.4.6114</variable></variables></serviceException></requestError>\n"

But I can find the model with UUID 19d3edca-6636-482a-83c8-351647e6c030, named vcpesvc_infra, in the SDC Portal. And the request data is:

He did a good workaround; my comment was more for down the road. TARGET_NETWORK_ROLE is null in the table, so we are mapping nf_role to TARGET_NETWORK_ROLE in the code, since that was less work than adding changes to the data ingest. Our documentation wasn't setting it in SDC, but it doesn't get into the table either. It seems like a half-deprecated feature.

For vCPEResCust distribution, if the AAI model-loader gives a DEPLOY_ERROR on the csar with a message that model-vers doesn't exist, check that the gmux and brg distributions succeeded. The UUIDs in the references create edges at model onboard for the AllottedResources in the TunnelXConn and BRG VFs.