F5 p2v Migration


Physical to Virtual migrations are old hat. Everyone’s done them and is now focusing on cool technologies such as containers and Kubernetes. That said, there’s still tons of physical networking equipment out there that can, and arguably should, be virtualised. I worked on an F5 project and thought the physical devices could and should be moved to virtual devices.

Why Migrate?

There are a number of benefits to virtualising network equipment:

– Reduction in support costs as there is no need for RMA.
– No software or hardware dependencies regarding EOL (End of Life) or EOD (End of Development).
– VMs are protected by the virtualisation platform’s native HA features.

The project in question had 4 physical F5s which were going EOL/EOD, so a hardware refresh was required. I took this opportunity to refresh the hardware into software, leveraging the company’s existing VMware estate.

Different Migration Strategies.

I did my research before diving into the migration and there were 3 main methods of tackling F5 p2v:

– Migration script/tool.
– UCS file migration.
– Fresh build.

I’m not a fan of generic migration scripts and tools because I’ve been burnt by them in the past, so I lean away from them where possible. That left me with 2 options: UCS migration or a fresh build.

Chosen Strategy.

I chose the UCS file migration path as I thought it would be the easier and quicker of the 2 options: by copying the UCS file from one F5 to the other, the risk of human configuration error would be minimised. It didn’t work as expected, though, and with all the config changes I had to make to the UCS file, errors occurred on the VE (Virtual Edition) when rebooting. So I moved on to the ‘Fresh Build’ method.

It’s important to note that when F5s are in an HA pair, most of the configuration syncs between the devices. Check out K21259300 to review what is and isn’t synced. In my scenario, I needed to locally configure the management network, VLANs, VLAN groups and self IP addresses on the new VEs before syncing.
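For reference, those unsynced objects live in the ‘config/bigip_base.conf’ file and look roughly like the sketch below. The VLAN names, interface numbers, tags and addresses here are made-up placeholders rather than values from my project, so lift the real ones from your own UCS extract:

net vlan vlan_external {
    interfaces {
        1.1 { }
    }
    tag 101
}
net vlan vlan_internal {
    interfaces {
        1.2 { }
    }
    tag 102
}
net vlan-group vg_bridge {
    members {
        vlan_external
        vlan_internal
    }
}
net self self_bridge {
    address 10.10.10.5/24
    allow-service default
    vlan vg_bridge
}

On a VE, remember that interface numbers (1.1, 1.2 and so on) map to the VM’s vNICs rather than physical ports, so they may not line up one-to-one with the old hardware.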

Procedure

– Download the physical F5 UCS file (see the command sketch after this list).
– Extract the UCS file: ‘tar zxf filename.ucs‘.
– Check the UCS extracted ‘config/bigip_base.conf’ file for useful configuration. For instance, the VLANs, VLAN groups and Self IPs.
– Halt the physical F5.
– Configure the Mgmt IP address of the VE to replicate the physical.
– SSH to the VE and edit the ‘config/bigip_base.conf’ file.
– Add the configuration as required (see the N.B. below).
– Run the command ‘tmsh load sys config verify’ to check the configuration file has been built correctly.
– Run the command ‘tmsh load sys config’ to apply the config file.
– Check the new configuration file has loaded using either the CLI or GUI.
– Check the device can handle a reboot.
– Configure the HA parameters using K15496.
– Failover to the VE and test functionality.
– Repeat for the other physical F5 in the cluster.
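To make the list above more concrete, the command flow looks roughly like this. The UCS filename, hostname and paths are placeholders rather than the exact ones from my project, so adjust them for your own devices:

# On the physical F5: save the running configuration to a UCS archive
tmsh save sys ucs /var/local/ucs/physical_backup.ucs

# Copy it off and unpack it on a workstation (a UCS is a gzipped tarball)
scp root@physical-f5:/var/local/ucs/physical_backup.ucs .
mkdir ucs_extract && tar zxf physical_backup.ucs -C ucs_extract
less ucs_extract/config/bigip_base.conf

# On the VE: edit in the VLAN, VLAN group and self IP stanzas, then verify and load
vi /config/bigip_base.conf
tmsh load sys config verify
tmsh load sys config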

N.B. I merged the 2 migration methods to make my life easier, using the text within the UCS file (‘config/bigip_base.conf’) to create a new configuration file containing the aforementioned unsynced items, and taking only the chunks of config I needed. Building a fresh configuration manually is also fine if you feel that works best for you.
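If editing the file directly doesn’t appeal, the same unsynced objects can be built from the CLI instead. The tmsh commands would be roughly as follows, with the names, tags and address again being placeholders:

tmsh create net vlan vlan_external interfaces add { 1.1 } tag 101
tmsh create net vlan vlan_internal interfaces add { 1.2 } tag 102
tmsh create net vlan-group vg_bridge members add { vlan_external vlan_internal }
tmsh create net self self_bridge address 10.10.10.5/24 vlan vg_bridge allow-service default
tmsh save sys config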

Result(s).

My 4 physical F5s are now virtual F5s. I’ve lowered my company’s annual support costs and added additional HA by using native VMware functions. Success!

This now opens the door for further virtualisation and SDN discussions. Could some of these workloads be migrated to NSX Load Balancers or AVI Load Balancers?

I’ll admit the above procedure is pretty high level, so if you are looking to carry out an F5 p2v and want to chat, reach out on Twitter (@UltTransformer). I’m a pretty friendly guy!

