Release notes

These release notes provide product information for F5 VNF Manager version 1.3.1.

User documentation

You can find the user documentation on:

Platform support

F5 VNF Manager version 1.3.1 supports the following platforms and system requirements:

Platform name Platform ID System Requirements
F5 VNF Manager All versions
  • vCPUs: 4 minimum, 8 recommended
  • RAM: 8GB minimum, 16GB recommended
  • Root Disk Storage: 160GB minimum
  • 64-bit host with RHEL/CentOS 7.4
  • Private network dedicated for communicating with other VNFM components, including cluster members
BIG-IP Virtual Edition (VE) Versions 13.1.X and 14.1.X
BIG-IQ Version 6.0.1 BIG-IQ 6.0.1 Release Notes
CentOS-7-x86_64-GenericCloud-1503 GenericCloud-1503 Release Notes

F5 VNF Manager and Virtual Infrastructure Manager (VIM) compatibility matrix:

VNF Manager ID VIM Platform ID VIM System Requirements
F5 VNF Manager 1.1.X Red Hat OpenStack Newton Version 10 Environment requirements
F5 VNF Manager 1.2.0 VMware vSphere ESXi Version 6.5 Requirements and patch notices
F5 VNF Manager 1.2.1 VMware vSphere ESXi Version 6.5 and Red Hat OpenStack Newton Version 10 See previous links for requirements information.
F5 VNF Manager 1.3.0 Red Hat OpenStack Newton Version 10 and Queens Version 13
VMware vSphere ESXi Version 6.5
Newton Version 10 Environment requirements
Queens Version 13 Environment requirements
vSphere ESXi Version 6.5 Requirements and patch notices
F5 VNF Manager 1.3.1 Red Hat OpenStack Newton Version 10 and Queens Version 13
VMware vSphere ESXi Version 6.5
See previous links for compatible platform requirements.

Open source components

F5 VNF Manager is built with the following open-source components.

Component Description

Nginx is a high-performance web server. In F5 VNF Manager, it serves two purposes:

  • A proxy for the F5 VNFM REST service and F5 VNFM Console
  • A file server to host F5 VNFM-specific resources, agent packages, and blueprint resources.

File server

Although the file server is hosted by Nginx by default, it is not logically bound to Nginx. It is currently accessed frequently via disk rather than over the network, but F5 is working toward decoupling it from the management environment so that it can be deployed anywhere. The file server is available at https://{manager_ip}:53333/resources, which maps to the /opt/manager/resources/ directory. You must authenticate to access the file server. To access subdirectories that include tenant names in their path, you must have privileges on that tenant. These subdirectories include:

  • blueprints
  • uploaded-blueprints
  • deployments
  • tenant-resources

The directories that are stored in snapshots include:

  • blueprints
  • uploaded-blueprints
  • deployments
  • tenant-resources
  • plugins
  • global-resources

Note: The tenant-resources and global-resources directories are not used by F5 VNF Manager; therefore, users can create these directories for storing custom resources.
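As a sketch of the access pattern above, the following Python snippet builds the URL for a tenant-scoped resource on the file server. The host, port, and /resources path come from this document; the subdirectory/tenant path layout and all names used below are illustrative assumptions, not the documented API.

```python
from urllib.parse import quote

# Sketch: build the URL for a resource on the Nginx-served file server.
# https://{manager_ip}:53333/resources maps to /opt/manager/resources/ per
# this document; the tenant-scoped layout below is an assumption.
def resource_url(manager_ip, subdirectory, tenant, path):
    """Return a file-server URL for a tenant-scoped resource."""
    return "https://{}:53333/resources/{}/{}/{}".format(
        manager_ip, subdirectory, quote(tenant), quote(path))

url = resource_url("10.0.0.5", "blueprints", "default_tenant", "app/blueprint.yaml")
```

Requests to such URLs must carry authentication, and tenant subdirectories additionally require privileges on that tenant.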

Gunicorn and Flask Gunicorn is a Web Server Gateway Interface (WSGI) HTTP server, and Flask is a web framework. Together, they provide the F5 VNFM REST service: the service is written using Flask, Gunicorn is the server, and Nginx is the proxy to that server. The F5 VNFM REST service integrates all parts of the F5 VNFM environment.
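The Flask/Gunicorn split rests on the WSGI calling convention: the framework defines an application callable, and the server runs it. This stdlib-only sketch stands in for the real REST service (so it runs without Flask installed); the endpoint path is hypothetical.

```python
import json

# A minimal WSGI application standing in for the Flask-based REST service.
# Gunicorn would serve this callable, and Nginx would proxy to Gunicorn.
# The /api/v3.1/status endpoint is hypothetical.
def rest_service(environ, start_response):
    """Return a JSON status for the hypothetical status endpoint."""
    if environ.get("PATH_INFO") == "/api/v3.1/status":
        body = json.dumps({"status": "running"}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# A WSGI server such as Gunicorn would run this with, e.g.:
#   gunicorn module:rest_service
```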

PostgreSQL is an object-relational database that can handle workloads ranging from small single-machine applications to large Internet-facing applications. In F5 VNF Manager, PostgreSQL serves two purposes:

  • Provides the main database that stores the application’s model (for example, blueprints, deployments, runtime properties)
  • Provides indexing and storage for logs and events
Logstash Logstash is a data pipeline. It pulls messages from inputs, applies filters, and pushes the results to different outputs. F5 VNFM uses Logstash to pull log and event messages from RabbitMQ and index them in PostgreSQL.

RabbitMQ is a queue-based messaging platform. RabbitMQ is used by F5 VNFM as a message queue for different purposes:

  • Queueing deployment tasks
  • Queueing logs and events
  • Queueing metrics

Pika is a pure-Python implementation of the AMQP 0-9-1 protocol. The VNF management worker and the host agents use Pika to communicate with RabbitMQ.
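The queueing roles above can be sketched with Python's stdlib queue module standing in for RabbitMQ. The real system speaks AMQP via Pika over the network; the channel names here are illustrative only.

```python
import queue

# Stand-in for RabbitMQ's named queues: deployment tasks, logs/events, metrics.
channels = {name: queue.Queue() for name in ("tasks", "logs", "metrics")}

def publish(channel, message):
    """Enqueue a message, as a producer would via Pika."""
    channels[channel].put(message)

def consume(channel):
    """Dequeue the next message, as a consumer would via Pika."""
    return channels[channel].get_nowait()

publish("logs", {"level": "INFO", "message": "deployment started"})
```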

Management worker (or agent)

Both the Workflow Executor and the Task Broker that appear in the diagram are part of the F5 VNFM Management Worker.

  • The Workflow Executor receives workflow execution requests, creates the tasks specified by the workflow, submits the tasks for execution by host agents and the Task Broker, and manages workflow state.
  • The Task Broker executes API calls to IaaS providers to create deployment resources, and executes other tasks specified in central_deployment_agent plugins.

Note: All agents (the management worker, and agents deployed on application hosts) use the same implementation.
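A minimal sketch of the Workflow Executor / Task Broker division described above: the executor turns a workflow into ordered tasks, submits each for execution, and tracks workflow state. All names here are illustrative, not the actual VNFM implementation.

```python
# Hypothetical sketch of a workflow executor. The broker argument stands in
# for the Task Broker (or a host agent) that actually executes each task,
# e.g. by calling an IaaS provider API.
def run_workflow(tasks, broker):
    """Execute tasks in order, recording workflow state as it progresses."""
    state = {"status": "started", "completed": []}
    for task in tasks:
        broker(task)                      # submit the task for execution
        state["completed"].append(task)   # manage workflow state
    state["status"] = "terminated"
    return state

executed = []
final_state = run_workflow(["create_server", "configure_ve"], executed.append)
```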


Feature Name Description
Install/Uninstall Install deploys the target deployment, performs install lifecycle operations, and starts all instances. Uninstall removes the target deployment, frees resources allocated during install, performs uninstall lifecycle operations, and stops/deletes deployments and additional blueprints created during install.
Scale out Adds and installs BIG-IP Virtual Editions (VEs) and VNF instances on demand as your network needs resources based on configurable parameters.
Scale in Removes and uninstalls BIG-IP Virtual Editions on demand as your network reduces its need for resources based on configurable parameters.
Heal VEs and layers Creates a new copy of any BIG-IP VEs, layers, and related objects on demand as your network reports dysfunctional instances.
Purge VEs and layers Uninstalls and removes dysfunctional VEs, VNF layer instance(s), and related objects, which you start manually after heal layer workflow runs and problem investigation is complete.
Upgrade Initiates the upgrade process and sets new software reference data. Disables VEs with lower revision numbers. Scaled and healed VEs are installed using the new software reference data.
Update NSD Updates the AS3 declaration pushed to the VE as part of the NSD definition.
High Availability (HA) Provides high availability using a cluster of three F5 VNF Managers.
REST API Provides all VNFM functionality using a REST-based API.

What’s new

The following table describes new functionality added to VNF Manager in the designated version release:

Feature Description
DNS blueprint PREVIEW A PREVIEW version of the standalone F5 DNS solution blueprint that queries and translates names for client requests. This DNS solution translates top-level Internet domains, such as .com, .net, .gov, .edu, and .org.
Upgraded to AS3 extension v3.16.0 The Gi LAN, Gi Firewall, DNS-enabled, and CGNAT-enabled blueprints now use an updated F5 AS3 Extension. A sample AS3 declaration is included in the supported NFV solution inputs files on GitHub.
Support a dark environment for Nagios You can now run F5 VNF Manager with a prebuilt Nagios image NOT connected to the Internet. You must upload the prebuilt Nagios image directly into your VIM environment instead of a CentOS image. Obtain this prebuilt Nagios image with your F5 VNF Manager purchase confirmation email.
Support for BIG-IP VE 14.1.X You can now download BIG-IP VE 14.1.X for use with all F5 NFV solution blueprints.
Management network MTU value (OpenStack)

If the OpenStack/VIO API does not set the MTU value for your management network, then VNF Manager will use 1500 as the default value.

Renamed secrets All existing VIM-specific secrets were renamed with an appended _default suffix to distinguish the VNFM's secrets from similar secrets that will be added to support future deployments to multiple, mixed VIMs. Also added two new OpenStack Keystone secrets, keystone_allow_insecure_default and keystone_ca_cert_default, for future multi-VIM support.
SR-IOV capability for OpenStack This release enables you to use single root input/output virtualization (SR-IOV), which isolates PCI Express resources so that you can share physical PCI Express resources in your virtual OpenStack environment. Consult the BIG-IP VE prerequisites for more information about configuring SR-IOV on the hypervisor. This feature introduces a new, required input dictionary called vnic_binding_type for the Gi LAN and Gi Firewall blueprint solutions.
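The management-network MTU behavior noted above reduces to a simple fallback, sketched here: use the MTU reported by the OpenStack/VIO API when present, otherwise default to 1500. The attribute name is an assumption for illustration.

```python
# Sketch of the documented MTU fallback. The "mtu" key is a hypothetical
# name for the value the OpenStack/VIO API may (or may not) report.
DEFAULT_MTU = 1500

def management_mtu(network_attributes):
    """Return the management-network MTU, defaulting to 1500 when unset."""
    mtu = network_attributes.get("mtu")
    return mtu if mtu else DEFAULT_MTU
```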

Known issues

The following table lists known issues in the designated version release:

Platform name Description
F5 VNF Manager Version 1.3.1
  • If using VNFM in an OpenStack VIM, you may experience an issue using https for Keystone with an internally signed certificate. F5 currently cannot test support for an OpenStack environment using https.
  • When deploying Gi LAN/Gi Firewall blueprints, these solutions occasionally fail to deploy because the master node remains active.
  • On failover, connections are reset rather than smoothly transitioned, so application connections drop and must be re-established by the application.
  • If deploying the F5-VNF-BIG-IQ blueprint from a VMware vSphere ESXi VIM, you must NOT use IP addresses on the same network to which the F5 VNF Manager is connected until AFTER you deploy the BIG-IQ blueprint and the BIG-IQ HA pair is online. Once the BIG-IQ HA pair is online, those IP addresses become available.
  • In VMware vSphere ESXi, when using the VNFM REST API, you must set up your networks to use unique port group names, regardless of the directories in which they reside.
  • Intermittently, an incorrect BIG-IP hostname is sent to the BIG-IQ license manager, causing a mismatch between BIG-IP instances and their VE representation in VNFM. This issue affects only how the reporting plugin displays BIG-IP VE usage data within a layer; it does not affect any billing information.
  • OpenStack v10 (Newton) has an issue with privileges and connecting devices residing outside the OpenStack environment with those residing inside the OpenStack environment, including F5 VNF Managers. To work around this issue, you must add the VNF Manager to the admin project, or upgrade to OpenStack v13 (Queens).
  • Occasionally, a master or slave node in a VNF group will fail to license due to network congestion, resulting in the NTP server not synchronizing with that BIG-IP. To avoid this issue, use an NTP server that is local to your management network. Or, you can uninstall and reinstall the blueprint.
  • When using BIG-IP v14.1, VNFD deployments can randomly fail during the check_all_services_node execution.
  • One vSphere secret vsphere_template_library_name was NOT renamed using the _default suffix.
  • Currently, in vSphere the BIG-IQ blueprint does not license BIG-IP VEs. To work around this issue, you must already have a BIG-IQ deployed, or deploy a BIG-IQ manually.
BIG-IP Virtual Edition 13.1.X or 14.1.X 13.1.X Issues list or 14.1.X Issues list
BIG-IQ 6.0.1 Issues list
CentOS-7-GenericCloud-1503 Issues list
Red Hat OpenStack Newton Issues list for v10 and Issues list for v13
VMware vSphere ESXi 6.5 Issues list

Fixed issues

The following table lists issues that were fixed in the designated version release:

Platform name Fixed in version Description
F5 VNF Manager 1.3.1
  • The f5_db is now included in the VNFM snapshot.
  • The heal vnf/dns layer workflow for all blueprints is now working properly in VMware and OpenStack.
  • In Nagios, the Layer_group CPU monitor now reports the overall group CPU load correctly.
  • VNFM in an OpenStack VIM now supports SR-IOV.
  • You can now deploy Gi LAN and Gi Firewall in a dark environment (without connection to the Internet).
BIG-IP Virtual Edition 13.1.X or 14.1.X 13.1.X Issues list or 14.1.X Issues list
BIG-IQ 6.0.1 Issues list
CentOS-7-x86_64-GenericCloud-1503 Issues list
Red Hat OpenStack 10.0 and 13.0 Issues list for v10 and Issues list for v13
VMware vSphere ESXi 6.5 Issues list

Installation overview

To install F5 VNF Manager, use the link provided in the email, and use the key provided in the email as customer identification when obtaining customer support. You will also need the following F5 product license keys:

Platform name Product license
BIG-IP Virtual Edition 13.1.X or 14.1.X F5-BIG-MSP-LOADV12-LIC
CentOS-7-x86_64-GenericCloud-1503 NA

Upgrade overview

You can upgrade HA clusters in two ways:

  • Upgrade on new hosts (recommended method).

  • In-place upgrade (prevents the ability to roll back).

    This method works only if you leave the IP address, AMQP credentials, and certificates unchanged.
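The constraint on in-place upgrades can be expressed as a pre-upgrade check, sketched here under assumed field names; the real installer's configuration keys may differ.

```python
# Sketch of the in-place upgrade constraint: the IP, AMQP credentials, and
# certificates must be unchanged between the current and proposed configs.
# Field names are hypothetical.
UNCHANGEABLE = ("ip", "amqp_credentials", "certificates")

def can_upgrade_in_place(current, proposed):
    """Return True when every field that must stay fixed is unchanged."""
    return all(current.get(f) == proposed.get(f) for f in UNCHANGEABLE)
```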

What’s Next?

Set up VNFM