vEPC Deployment Steps
The steps in this topic assume that you have access to the Affirmed Networks FTP site to download the correct software as recommended by Affirmed Networks. The base deployment of the mobility management entity (MME), mobile content core (MCC), and element management system (EMS) is intended to be further configured by Juniper Networks and Affirmed Networks Professional Services based on customer environments and sizing requirements. Also, ensure that the Heat templates mentioned in these steps are included in the deployment; they are made available to Juniper Professional Services on request.
Before You Begin:
Before you deploy Affirmed Networks and Juniper Networks workloads on Contrail Cloud 10.0, review the following checklist:
Review the DPDK Guide to ensure that all configuration settings and tunables are applied properly on all compute nodes.
Determine the hosts and the following credentials needed to deploy the mobility workloads:
jumphost or OSPD jumphost IP, username, and password
Horizon administrator password
Contrail administrator password (usually the same as the Horizon password)
Ensure that the quotas on the system align with the resources available on the cluster. For example, take the total RAM across all compute nodes and allocate 75% of that number as the RAM quota; repeat the process for the total cores across all compute nodes. The quota values vary with the size of the deployment and should be revisited as you add more compute nodes to the cluster. Here is an example of quotas set in OpenStack based on these assumptions:
openstack quota set --ram 512000 admin
openstack quota set --instances 40 admin
openstack quota set --cores 310 admin
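If you prefer to derive the values rather than set them by hand, the following minimal sketch computes 75% of the aggregate hypervisor capacity; it assumes the OpenStack client and admin credentials are sourced on the jump host and that the hypervisor stats command is available in your OpenStack release:
# Query aggregate capacity across all compute nodes, then set quotas to 75% of it.
TOTAL_RAM_MB=$(openstack hypervisor stats show -f value -c memory_mb)
TOTAL_VCPUS=$(openstack hypervisor stats show -f value -c vcpus)
openstack quota set --ram $(( TOTAL_RAM_MB * 75 / 100 )) admin
openstack quota set --cores $(( TOTAL_VCPUS * 75 / 100 )) admin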
Deploy MME
To deploy MME:
- Use the gzip utility to extract the image file:
sudo gzip -d epc-11.1.15.0.img.gz
- Upload the image into OpenStack Glance:
glance image-create --name epc-11.1.15.0 --disk-format raw --container-format bare --min-disk 2 --min-ram 4096 --visibility public --property hw_vif_multiqueue_enabled=true --file /var/tmp/epc-11.1.15.0.img --progress
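To confirm that the image uploaded correctly before continuing, you can check its status and properties:
openstack image show epc-11.1.15.0
# Status should be active and hw_vif_multiqueue_enabled=true should appear under properties.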
- Deploy the flavors:
nova flavor-create --ephemeral 20 --swap 0 vmme-mgmt auto 8192 2 4
nova flavor-create --ephemeral 20 --swap 0 vmme-rm auto 8192 2 4
nova flavor-create --ephemeral 20 --swap 0 vmme-callp auto 8192 2 4
nova flavor-create --ephemeral 20 --swap 0 vmme-sig auto 8192 2 4
nova flavor-create --ephemeral 20 --swap 0 vmme-data auto 8192 2 4
nova flavor-create --ephemeral 20 --swap 0 vmme-lb auto 8192 2 4
- Set all MME flavors for huge pages:
openstack flavor set --property hw:mem_page_size=large vmme-mgmt
openstack flavor set --property hw:mem_page_size=large vmme-rm
openstack flavor set --property hw:mem_page_size=large vmme-callp
openstack flavor set --property hw:mem_page_size=large vmme-sig
openstack flavor set --property hw:mem_page_size=large vmme-data
openstack flavor set --property hw:mem_page_size=large vmme-lb
- Set the CPU policy and multiqueue (MQ) properties on all MME flavors:
openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-mgmt
openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-rm
openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-callp
openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-sig
openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-lb
openstack flavor set --property hw:cpu_policy=dedicated --property hw:vif_multiqueue_enabled=true vmme-data
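Optionally, verify that the properties were applied; for example, checking one flavor (repeat as needed):
openstack flavor show vmme-callp
# hw:mem_page_size, hw:cpu_policy, and hw:vif_multiqueue_enabled should appear in the properties field.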
- Deploy the Heat templates for the permit-all security group and the MME base, data, management, and OAM networks:
openstack stack create sg-permit-all -e contrail-security-group.env -t contrail-security-group.yaml
openstack stack create vmme-base-vn -e vmme040-base-network.env -t vn_tmpl.yaml
openstack stack create vmme-data-vn -e vmme040-data-network.env -t vn_tmpl.yaml
openstack stack create vmme-mgmt-vn -e vmme040-mgmt-network.env -t vn_tmpl-rt.yaml
openstack stack create vmme-oam-vn -e vmme040-oam-network.env -t vn_tmpl-rt.yaml
- Deploy the Heat templates for MME VMs:
openstack stack create vmme-mgmt -e vmme040-mgmt-0.env -t mgmt-0ns_tmpl.yaml
openstack stack create vmme-rm -e vmme040-rm-0.env -t nonmgmt-0ns_tmpl-nhc.yaml
openstack stack create vmme-sig -e vmme040-sig-0.env -t nonmgmt-0ns_tmpl-nhc.yaml
openstack stack create vmme-callp -e vmme040-callp-0.env -t nonmgmt-0ns_tmpl-nhc.yaml
openstack stack create vmme-mgmt-1 -e vmme040-mgmt-1.env -t mgmt-0ns_tmpl.yaml
openstack stack create vmme-rm-1 -e vmme040-rm-1.env -t nonmgmt-0ns_tmpl-nhc.yaml
openstack stack create vmme-sig-1 -e vmme040-sig-1.env -t nonmgmt-0ns_tmpl-nhc.yaml
openstack stack create vmme-callp-1 -e vmme040-callp-1.env -t nonmgmt-0ns_tmpl-nhc.yaml
- Deploy the Heat templates for the MME north-south (NS) and loopback networks:
openstack stack create vmme-ns1-vn -e vmme040-ns1-network.env -t vn_tmpl-rt.yaml
openstack stack create vmme-ns2-vn -e vmme040-ns2-network.env -t vn_tmpl-rt.yaml
openstack stack create vmme-ns-loop-vn -e vmme040-ns-loop-network.env -t vn_tmpl-rt.yaml
- Deploy the Heat templates for MME data and LB VMs:
openstack stack create vmme-data -e vmme040-data-0.env -t data-2ns-tmpl.yaml
openstack stack create vmme-lb -e vmme040-lb-0.env -t lb-2ns-tmpl.yaml
openstack stack create vmme-data-1 -e vmme040-data-1.env -t data-2ns-tmpl.yaml
openstack stack create vmme-lb-1 -e vmme040-lb-1.env -t lb-2ns-tmpl.yaml
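Before moving on to MCC, you can confirm that the MME stacks completed and the instances booted; a minimal check:
openstack stack list
openstack server list --name vmme040
# All vmme stacks should show CREATE_COMPLETE and all vmme040-* instances should be ACTIVE.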
Deploy MCC
To deploy MCC:
- Use the tar utility to extract the files:
tar -xzvf an-7.2.6.3-16.REL.kvm
- Upload the image into OpenStack Glance:
glance image-create --name an-7.2.6.3-16.REL-payload --disk-format qcow2 --container-format bare --visibility public --property hw_disk_bus=ide --property hw_cdrom_bus=ide --file an-7.2.6.3-16.REL.payload.qcow2 --progress
glance image-create --name an-7.2.6.3-16.REL-controller --disk-format qcow2 --container-format bare --visibility public --property hw_disk_bus=ide --property hw_cdrom_bus=ide --file an-7.2.6.3-16.REL.qcow2 --progress
glance image-create --name an-ems-v1.2 --disk-format raw --container-format bare --min-disk 2 --min-ram 4096 --visibility public --file an-ems-v1.2-rhel-server-6.8-x86_64-mysql-5.6.31.qcow2 --progress
- Create the MCC OpenStack flavors:
nova flavor-create AN-MCM auto 32768 112 8
nova flavor-create AN-CSM auto 32768 112 8
nova flavor-create AN-SSM auto 32768 50 16
nova flavor-create AN-EMS-v1.2 auto 32768 600 16
- Update the flavors with huge pages:
openstack flavor set --property hw:mem_page_size=large AN-CSM
openstack flavor set --property hw:mem_page_size=large AN-MCM
openstack flavor set --property hw:mem_page_size=large AN-SSM
openstack flavor set --property hw:mem_page_size=large AN-EMS-v1.2
- Deploy the Heat templates for MCC:
openstack stack create mcc-vns -e MCC-VNs-Create.env -t MCC-VNs-Create.yaml
openstack stack create mcc-csm -e CSM-VM-Create.env -t CSM-VM-Create.yaml
openstack stack create mcc-mcm -e MCM-VM-Create.env -t MCM-VM-Create.yaml
openstack stack create mcc-ssm -e SSM-VM-Create.env -t SSM-VM-Create.yaml
openstack stack create mcc-csm2 -e CSM-VM2-Create.env -t CSM-VM-Create.yaml
openstack stack create mcc-mcm2 -e MCM-VM2-Create.env -t MCM-VM-Create.yaml
openstack stack create mcc-ssm2 -e SSM-VM2-Create.env -t SSM-VM-Create.yaml
openstack stack create ems -e EMS-VM-Create.env -t EMS-VM-Create.yaml
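Likewise, confirm that each MCC and EMS stack reached CREATE_COMPLETE before configuring the network elements; for example:
openstack stack show mcc-csm -c stack_status
# Repeat for the remaining mcc-* stacks and the ems stack.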
Configure EMS, MME and MCC Network Elements
To configure MCC, log in to the MCM VM, either from the compute node by using the metadata IP address or by assigning a floating IP (FIP) to the management IP address (a FIP assignment sketch follows this procedure):
- ssh admin@<IPaddress> (password: admin)
- Type config to enter configuration mode.
- (config)# cluster 17 node 1 type v-csm admin-state enabled ; top
- (config)# cluster 17 node 12 type v-ssm admin-state enabled ; top
- (config)# cluster 17 node 2 type v-csm admin-state enabled ; top
- (config)# cluster 17 node 13 type v-ssm admin-state enabled ; top
- (config)# commit ; exit
- The output of the show cluster summary command should look like this:
an3000-mcm-slot7cpu1# show cluster summary
CLUSTER  SLOT    CPU                                           ADMIN                       CPU
ID       NUMBER  NUMBER  TYPE   PERSONALITY  MODEL   STATE     HA STATE    STATE  CPU UPTIME            RELEASE
------------------------------------------------------------------------------------------------------------------
17       1       1       v-csm  CSM          vCSM    enabled   active      up     D:002 H:00 M:11 S:02  7.2.6.3-16.REL
         2       1       v-csm  CSM          vCSM    enabled   active      up     D:001 H:23 M:54 S:09  7.2.6.3-16.REL
         7       1       v-mcm  MCM          vMCM    enabled   primary     up     D:002 H:00 M:51 S:03  7.2.6.3-16.REL
         8       1       v-mcm  MCM          vMCM    enabled   secondary   up     D:001 H:23 M:54 S:43  7.2.6.3-16.REL
         12      1       v-ssm  SSM          vSSM    enabled   active      up     D:002 H:00 M:11 S:07  7.2.6.3-16.REL
         13      1       v-ssm  SSM          vSSM    enabled   active      up     D:001 H:23 M:54 S:17  7.2.6.3-16.REL
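If you use the floating IP approach mentioned at the start of this procedure, the following sketch shows one way to assign a FIP to the MCM instance; the external network name, server name, and FIP are placeholders for your environment:
openstack floating ip create <external-fip-network>
openstack server add floating ip <mcm-server-name> <allocated-fip>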
To configure MME, log in to the management VM:
- ssh admin@<IPaddress> (password: admin)
- Type config to enter configuration mode.
- epc system rm unit 0 vm-instance vmme040-rm-0
- exit
- epc system sig unit 0 vm-instance vmme040-sig-0
- exit
- epc system callp 0 unit 0 vm-instance vmme040-callp-0
- exit
- epc system data unit 0 vm-instance vmme040-data-0
- exit
- epc system lb unit 0 vm-instance vmme040-lb-0
- exit
- epc system mgmt unit 1 vm-instance vmme040-mgmt-1
- exit
- epc system rm unit 1 vm-instance vmme040-rm-1
- exit
- epc system sig unit 1 vm-instance vmme040-sig-1
- exit
- epc system callp 0 unit 1 vm-instance vmme040-callp-1
- exit
- epc system data unit 1 vm-instance vmme040-data-1
- exit
- epc system lb unit 1 vm-instance vmme040-lb-1
- commit
- exit
- The output of the show vm command should look like this:
vmme040-0# show vm
                           UNIT
LOCATION          SERVICE  ID   ADMIN     OPER      STANDBY      CPU  MEMORY  VERSION
-----------------------------------------------------------------------------------------
vmme040-mgmt-0    mgmt     0    unlocked  enabled   active       0    3       11.1.15.0
vmme040-mgmt-1    mgmt     1    unlocked  enabled   hot standby  0    3       11.1.15.0
vmme040-lb-0      lb       0    unlocked  disabled  active       0    1       11.1.15.0
vmme040-lb-1      lb       1    unlocked  disabled  active       0    1       11.1.15.0
vmme040-rm-0      rm       0    unlocked  enabled   active       0    9       11.1.15.0
vmme040-rm-1      rm       1    unlocked  enabled   hot standby  0    10      11.1.15.0
vmme040-callp-0   callp0   0    unlocked  enabled   active       3    16      11.1.15.0
vmme040-callp-1   callp0   1    unlocked  enabled   hot standby  2    16      11.1.15.0
vmme040-sig-0     sig      0    unlocked  disabled  active       0    1       11.1.15.0
vmme040-sig-1     sig      1    unlocked  disabled  active       0    1       11.1.15.0
vmme040-data-0    data     0    unlocked  disabled  active       0    1       11.1.15.0
vmme040-data-1    data     1    unlocked  disabled  active       0    1       11.1.15.0
For EMS, the default user account credentials are:
Username — affirmed
Password — acuitas
The root password — affirmedEMS
To configure EMS:
- Install Red Hat using the ems.redhat custom image.
- Log in as root/affirmedEMS.
- Run the root/vm_setup.sh script and assign the IP address and DNS servers.
- rpm -ivh ems-7.3.7.0-32.rpm
- cp /tmp/License.dat /opt/Affirmed/NMS/server/ems/conf/
- Make sure that the NTP configuration and time zone are set correctly. Check /etc/ntp/ntp.conf and run service ntpd restart after any change (a quick verification sketch follows this procedure).
- Change directory to /opt/Affirmed/NMS/bin and run ./starttems.sh
- To check the status, change directory to /opt/Affirmed/NMS/bin and run ./emsstatus:
*********** Acuitas Server Status *************
Node           : emsjuniper - 10.31.7.113
EMS Package    : AffirmedEms-7.3.7.0-32
EMS Process    : Running [3573]
MySQL Database : Running
Server Id      : 1
***********************************************
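As a quick check of the NTP and time zone settings referenced earlier in this procedure, you can run the standard RHEL 6 commands (assuming the ntp package is installed):
service ntpd status
ntpq -p
date
# ntpq -p should list reachable NTP servers; date should show the expected time zone.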
You can now log in to the GUI (https://ip_address) as an administrator using the following login credentials:
Username — admin
Password — admin123