
vEPC Deployment Steps

The steps in this topic assume that you have access to the Affirmed Networks FTP site to download the software recommended by Affirmed Networks. The base deployments of the mobility management entity (MME), mobile content core (MCC), and element management system (EMS) are intended to be further configured by Juniper Networks and Affirmed Networks Professional Services based on customer environments and sizing requirements. Also, ensure that the Heat templates referenced in these steps are included in the deployment; they are made available to Juniper Professional Services on request.

Before You Begin

Before you deploy Affirmed Networks and Juniper Networks workloads on Contrail Cloud 10.0, review the following checklist:

  • Review the DPDK Guide to ensure that all settings and tunables are configured properly on all computes.

  • Determine the hosts and the following credentials needed to deploy the mobility workloads:

    • jumphost or OSPD jumphost IP, username, and password

    • Horizon administrator password

    • Contrail administrator password (usually the same as the Horizon password)

  • Ensure that the quotas on the system align with the resources available on the cluster. For example, sum the total RAM across all the computes and allocate 75% of that total as the RAM quota; repeat the process for the cores across all the computes. The quota values vary with the size of the deployment and must be revisited as you add more computes to the cluster. Here is an example of quotas set in OpenStack based on these assumptions:

    • openstack quota set --ram 512000 admin

    • openstack quota set --instances 40 admin

    • openstack quota set --cores 310 admin
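The 75% rule of thumb above can be computed directly. A minimal sketch with hypothetical cluster totals (replace TOTAL_RAM_MB and TOTAL_CORES with the sums reported for your computes, for example from openstack hypervisor stats show):

```shell
#!/usr/bin/env bash
# Derive quota values as 75% of aggregate compute resources.
# The totals below are hypothetical placeholders for this example.
TOTAL_RAM_MB=682667   # sum of RAM across all computes, in MB
TOTAL_CORES=414       # sum of cores across all computes

RAM_QUOTA=$(( TOTAL_RAM_MB * 75 / 100 ))
CORE_QUOTA=$(( TOTAL_CORES * 75 / 100 ))

# Print the resulting quota commands for review before running them.
echo "openstack quota set --ram $RAM_QUOTA admin"
echo "openstack quota set --cores $CORE_QUOTA admin"
```

With these example totals, the script prints the same --ram 512000 and --cores 310 values used above.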

Deploy MME

To deploy MME:

  1. Use the gzip utility to decompress the image file.
    sudo gzip -d epc-11.1.15.0.img.gz
  2. Upload the image into OpenStack Glance:
    glance image-create --name epc-11.1.15.0 --disk-format raw --container-format bare --min-disk 2 --min-ram 4096 --visibility public --property hw_vif_multiqueue_enabled=true --file /var/tmp/epc-11.1.15.0.img --progress
  3. Deploy the flavors:
    • nova flavor-create --ephemeral 20 --swap 0 vmme-mgmt auto 8192 2 4

    • nova flavor-create --ephemeral 20 --swap 0 vmme-rm auto 8192 2 4

    • nova flavor-create --ephemeral 20 --swap 0 vmme-callp auto 8192 2 4

    • nova flavor-create --ephemeral 20 --swap 0 vmme-sig auto 8192 2 4

    • nova flavor-create --ephemeral 20 --swap 0 vmme-data auto 8192 2 4

    • nova flavor-create --ephemeral 20 --swap 0 vmme-lb auto 8192 2 4

  4. Set all MME flavors for huge pages:
    • openstack flavor set --property hw:mem_page_size=large vmme-mgmt

    • openstack flavor set --property hw:mem_page_size=large vmme-rm

    • openstack flavor set --property hw:mem_page_size=large vmme-callp

    • openstack flavor set --property hw:mem_page_size=large vmme-sig

    • openstack flavor set --property hw:mem_page_size=large vmme-data

    • openstack flavor set --property hw:mem_page_size=large vmme-lb

  5. Set the CPU policy and multiqueue properties on all MME flavors:
    • openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-mgmt

    • openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-rm

    • openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-callp

    • openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-sig

    • openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-lb

    • openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-data
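Steps 3 through 5 repeat the same three commands for each of the six vMME roles, so they can be collapsed into one loop. A sketch that prints the commands by default so you can review them first (set EXECUTE=1 to run them for real; the two openstack flavor set calls are merged into one, which the client supports via repeated --property flags):

```shell
#!/usr/bin/env bash
# Create the six vMME flavors and apply huge pages, CPU pinning, and
# multiqueue in one pass. Prints the commands unless EXECUTE=1 is set.
run() { if [ -n "$EXECUTE" ]; then "$@"; else echo "$@"; fi; }

create_vmme_flavors() {
  local role
  for role in mgmt rm callp sig data lb; do
    run nova flavor-create --ephemeral 20 --swap 0 "vmme-$role" auto 8192 2 4
    run openstack flavor set \
        --property hw:mem_page_size=large \
        --property hw:cpu_policy=dedicated \
        --property hw:vif_multiqueue_enabled=true "vmme-$role"
  done
}

create_vmme_flavors
```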

  6. Deploy the Heat templates for the security group and the MME base, data, management, and OAM networks:
    • openstack stack create sg-permit-all -e contrail-security-group.env -t contrail-security-group.yaml

    • openstack stack create vmme-base-vn -e vmme040-base-network.env -t vn_tmpl.yaml

    • openstack stack create vmme-data-vn -e vmme040-data-network.env -t vn_tmpl.yaml

    • openstack stack create vmme-mgmt-vn -e vmme040-mgmt-network.env -t vn_tmpl-rt.yaml

    • openstack stack create vmme-oam-vn -e vmme040-oam-network.env -t vn_tmpl-rt.yaml

  7. Deploy the Heat templates for MME VMs:
    • openstack stack create vmme-mgmt -e vmme040-mgmt-0.env -t mgmt-0ns_tmpl.yaml

    • openstack stack create vmme-rm -e vmme040-rm-0.env -t nonmgmt-0ns_tmpl-nhc.yaml

    • openstack stack create vmme-sig -e vmme040-sig-0.env -t nonmgmt-0ns_tmpl-nhc.yaml

    • openstack stack create vmme-callp -e vmme040-callp-0.env -t nonmgmt-0ns_tmpl-nhc.yaml

    • openstack stack create vmme-mgmt-1 -e vmme040-mgmt-1.env -t mgmt-0ns_tmpl.yaml

    • openstack stack create vmme-rm-1 -e vmme040-rm-1.env -t nonmgmt-0ns_tmpl-nhc.yaml

    • openstack stack create vmme-sig-1 -e vmme040-sig-1.env -t nonmgmt-0ns_tmpl-nhc.yaml

    • openstack stack create vmme-callp-1 -e vmme040-callp-1.env -t

  8. Deploy the Heat templates for the MME north-south and loopback networks:
    • openstack stack create vmme-ns1-vn -e vmme040-ns1-network.env -t vn_tmpl-rt.yaml

    • openstack stack create vmme-ns2-vn -e vmme040-ns2-network.env -t vn_tmpl-rt.yaml

    • openstack stack create vmme-ns-loop-vn -e vmme040-ns-loop-network.env -t vn_tmpl-rt.yaml

  9. Deploy the Heat templates for MME data and LB VMs:
    • openstack stack create vmme-data -e vmme040-data-0.env -t data-2ns-tmpl.yaml

    • openstack stack create vmme-lb -e vmme040-lb-0.env -t lb-2ns-tmpl.yaml

    • openstack stack create vmme-data-1 -e vmme040-data-1.env -t data-2ns-tmpl.yaml

    • openstack stack create vmme-lb-1 -e vmme040-lb-1.env -t lb-2ns-tmpl.yaml
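Heat stack creation is asynchronous, so before moving on to node configuration it is worth confirming that each stack reached CREATE_COMPLETE. A minimal polling sketch (assumes the OpenStack CLI is installed and credentials are sourced; stack names match those created above):

```shell
#!/usr/bin/env bash
# Poll a Heat stack until it leaves CREATE_IN_PROGRESS, then report its
# final status. Call once per stack, e.g.: wait_for_stack vmme-data
wait_for_stack() {
  local name=$1 status
  while status=$(openstack stack show "$name" -f value -c stack_status); do
    [ "$status" != "CREATE_IN_PROGRESS" ] && break
    sleep 10
  done
  echo "$name: $status"   # CREATE_COMPLETE on success, CREATE_FAILED otherwise
}
```

For example, `for s in vmme-data vmme-lb vmme-data-1 vmme-lb-1; do wait_for_stack "$s"; done` checks the four stacks from step 9.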

Deploy MCC

To deploy MCC:

  1. Use the tar utility to extract the files:
    tar -xzvf an-7.2.6.3-16.REL.kvm
  2. Upload the image into OpenStack Glance:
    • glance image-create --name an-7.2.6.3-16.REL-payload --disk-format qcow2 --container-format bare --visibility public --property hw_disk_bus=ide --property hw_cdrom_bus=ide --file an-7.2.6.3-16.REL.payload.qcow2 --progress

    • glance image-create --name an-7.2.6.3-16.REL-controller --disk-format qcow2 --container-format bare --visibility public --property hw_disk_bus=ide --property hw_cdrom_bus=ide --file an-7.2.6.3-16.REL.qcow2 --progress

    • glance image-create --name an-ems-v1.2 --disk-format raw --container-format bare --min-disk 2 --min-ram 4096 --visibility public --file an-ems-v1.2-rhel-server-6.8-x86_64-mysql-5.6.31.qcow2 --progress

  3. Create the MCC OpenStack flavors:
    • nova flavor-create AN-MCM auto 32768 112 8

    • nova flavor-create AN-CSM auto 32768 112 8

    • nova flavor-create AN-SSM auto 32768 50 16

    • nova flavor-create AN-EMS-v1.2 auto 32768 600 16

  4. Update the flavors with huge pages:
    • openstack flavor set --property hw:mem_page_size=large AN-CSM

    • openstack flavor set --property hw:mem_page_size=large AN-MCM

    • openstack flavor set --property hw:mem_page_size=large AN-SSM

    • openstack flavor set --property hw:mem_page_size=large AN-EMS-v1.2

  5. Deploy the Heat templates for MCC:
    • openstack stack create mcc-vns -e MCC-VNs-Create.env -t MCC-VNs-Create.yaml

    • openstack stack create mcc-csm -e CSM-VM-Create.env -t CSM-VM-Create.yaml

    • openstack stack create mcc-mcm -e MCM-VM-Create.env -t MCM-VM-Create.yaml

    • openstack stack create mcc-ssm -e SSM-VM-Create.env -t SSM-VM-Create.yaml

    • openstack stack create mcc-csm2 -e CSM-VM2-Create.env -t CSM-VM-Create.yaml

    • openstack stack create mcc-mcm2 -e MCM-VM2-Create.env -t MCM-VM-Create.yaml

    • openstack stack create mcc-ssm2 -e SSM-VM2-Create.env -t SSM-VM-Create.yaml

    • openstack stack create ems -e EMS-VM-Create.env -t EMS-VM-Create.yaml

Configure the EMS, MME, and MCC Network Elements

To configure MCC, log in to the MCM, either from the compute node using the metadata IP address or by assigning a floating IP (FIP) to the management IP address:

  1. ssh admin@<IPaddress> (password: admin)
  2. Type config to enter configuration mode.
  3. (config)# cluster 17 node 1 type v-csm admin-state enabled ; top
  4. (config)# cluster 17 node 12 type v-ssm admin-state enabled ; top
  5. (config)# cluster 17 node 2 type v-csm admin-state enabled ; top
  6. (config)# cluster 17 node 13 type v-ssm admin-state enabled ; top

    (config)# commit ; exit

  7. Verify the node states by running the show cluster summary command.

To configure MME, log in to the management VM:

  1. ssh admin@<IPaddress> (password: admin)
  2. config
  3. epc system rm unit 0 vm-instance vmme040-rm-0
  4. exit
  5. epc system sig unit 0 vm-instance vmme040-sig-0
  6. exit
  7. epc system callp 0 unit 0 vm-instance vmme040-callp-0
  8. exit
  9. epc system data unit 0 vm-instance vmme040-data-0
  10. exit
  11. epc system lb unit 0 vm-instance vmme040-lb-0
  12. exit
  13. epc system mgmt unit 1 vm-instance vmme040-mgmt-1
  14. exit
  15. epc system rm unit 1 vm-instance vmme040-rm-1
  16. exit
  17. epc system sig unit 1 vm-instance vmme040-sig-1
  18. exit
  19. epc system callp 0 unit 1 vm-instance vmme040-callp-1
  20. exit
  21. epc system data unit 1 vm-instance vmme040-data-1
  22. exit
  23. epc system lb unit 1 vm-instance vmme040-lb-1
  24. commit
  25. exit
  26. Verify the VM assignments by running the show vm command.
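The unit-to-VM mappings entered above follow a regular pattern: a role, a unit number, and the vmme040 instance names from the Heat stacks. The repeating roles can therefore be generated by a small hypothetical helper script (callp takes an extra instance index of 0 before the unit keyword, and mgmt is entered separately as in step 13):

```shell
#!/usr/bin/env bash
# Generate the "epc system ... vm-instance ..." CLI lines for units 0 and 1.
# The vmme040 prefix matches the Heat stack names used earlier.
gen_epc_lines() {
  local prefix=${1:-vmme040} unit role
  for unit in 0 1; do
    for role in rm sig callp data lb; do
      if [ "$role" = "callp" ]; then
        # callp carries an extra instance index (0) before "unit"
        echo "epc system callp 0 unit $unit vm-instance $prefix-callp-$unit"
      else
        echo "epc system $role unit $unit vm-instance $prefix-$role-$unit"
      fi
    done
  done
}

gen_epc_lines
```

Paste the generated lines into the management VM's configuration mode, following each with exit as shown above, then commit.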

For EMS, the default user account credentials are:

  • Username — affirmed

  • Password — acuitas

  • Root password — affirmedEMS

To configure EMS:

  1. Install Red Hat using the ems.redhat custom image.
  2. Log in as root (password: affirmedEMS).
  3. Run the root/vm_setup.sh script and assign the IP address and DNS servers.
  4. rpm -ivh ems-7.3.7.0-32.rpm
  5. cp /tmp/License.dat /opt/Affirmed/NMS/server/ems/conf/
  6. Make sure that the NTP configuration and time zone are set correctly. Check /etc/ntp.conf and run service ntpd restart after any change.
  7. Change directory to /opt/Affirmed/NMS/bin and run ./starttems.sh.
  8. To check the status, change directory to /opt/Affirmed/NMS/bin and run ./emsstatus.

    You can now log in to the GUI (https://ip_address) as an administrator using the following login credentials:

    • Username — admin

    • Password — admin123