The vQFX is a virtualized version of the Juniper Networks QFX10000 line of Ethernet switches. It is a free tool; it is not sold and is therefore not officially supported by Juniper. The vQFX offers the same control-plane and data-plane features as the physical QFX10000 switches, although forwarding is done in software, so performance is limited.
We can use the vQFX to create an instant virtual lab suitable for a proof of concept, script development, configuration validation, network change simulation, training, and much more.
The purpose of this guide is to provide step-by-step instructions for deploying a Juniper vQFX qcow2 image in GNS3. We will use the QEMU hypervisor to run vQFX version 19.4R1.10 on GNS3.
First, we need to extract the archive file that contains the vQFX qcow2 disks:
$ unzip juniper-vQFX-19.4R1.10.zip
There are two disks inside the archive file:
- vqfx-19.4R1.10-re-qemu.qcow2 [644MB]
- vqfx-19.4R1-2019010209-pfe-qemu.qcow [728MB]
As the names imply, the first qcow2 disk is the vQFX Routing Engine (RE) and the second disk is the Packet Forwarding Engine (PFE).
1. Creating vQFX RE VM
Navigate to Edit -> Preferences -> QEMU VMs and click New. Enter the name vQFX-RE for your new QEMU virtual machine and assign 1024 MB of RAM to it. Select the path to the RE image - vqfx-19.4R1.10-re-qemu.qcow2. Click Finish to create the virtual machine, and then click Edit.
Go to the General settings tab and assign two vCPUs to the QEMU virtual machine. You can select the switching device from the Category option, but this is optional. Go to the Network tab and change the NIC type from the default Intel Gigabit Ethernet (e1000) to virtio-net-pci. As a final step, increase the number of NICs to 12.
Unlike with the vMX, we connect the other topology devices to the Routing Engine (RE) VM rather than to the Packet Forwarding Engine (PFE) VM. This is why we have assigned 12 interfaces to vQFX-RE.
Note: Make sure that the NIC type virtio-net-pci is selected, otherwise the PFE will not be detected.
Figure 1 summarizes the vQFX-RE VM settings.
Figure 1 - GNS3 Qemu Host Settings for vQFX Routing Engine
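For reference, the GUI settings above correspond roughly to a QEMU command line like the following. This is only an illustrative sketch; GNS3 generates the actual invocation itself, and the networking backend shown here (user mode, one -device/-netdev pair per NIC) is an assumption:

qemu-system-x86_64 -name vQFX-RE -m 1024 -smp 2 \
    -drive file=vqfx-19.4R1.10-re-qemu.qcow2,if=ide \
    -device virtio-net-pci,netdev=net0 -netdev user,id=net0
    # ...one -device virtio-net-pci/-netdev pair per NIC, 12 pairs in total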
2. Creating vQFX Packet Forwarding Engine VM
Now we will create the vPFE virtual machine. Go to Edit -> Preferences -> QEMU VMs and click New. Enter the name vQFX-PFE, assign 1024 MB of RAM to the host, and select the path to the image - vqfx-19.4R1-2019010209-pfe-qemu.qcow. Click Finish and then Edit. Go to the Network tab and increase the number of NICs to 2. Do not change the NIC type; keep the default e1000.
Figure 2 summarizes the vQFX-PFE VM settings.
Figure 2 - GNS3 Qemu Host Settings for vQFX Packet Forwarding Engine
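As with the RE host, the PFE settings map roughly to a QEMU invocation like this (again an illustrative sketch with an assumed user-mode networking backend; GNS3 builds the real command):

qemu-system-x86_64 -name vQFX-PFE -m 1024 \
    -drive file=vqfx-19.4R1-2019010209-pfe-qemu.qcow,if=ide \
    -device e1000,netdev=net0 -netdev user,id=net0
    # ...one -device e1000/-netdev pair per NIC, 2 pairs in total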
3. vQFX Interconnection
The first interface, e0, will connect to an out-of-band (OOB) management switch, and the second interface, e1, will connect to e1 of the PFE VM as an internal link. For some reason, the third interface, e2, cannot be used. Our interface scheme now looks like this (Figure 3):
e0 - management interface
e1 - internal interface between RE and PFE
e2 - unused
e3 - xe-0/0/0
e4 - xe-0/0/1
e5 - xe-0/0/2
…
e11 - xe-0/0/8
Figure 3 - vQFX Interconnection
Once the vQFX-RE VM has booted, we need to wait a few more minutes for the PFE to come online. Telnet to vQFX-RE and verify that FPC 0 and the xe interfaces are present (Figures 4 and 5).
root@vqfx-re> show interfaces xe-0/0/* terse
Figure 4 - Checking Physical Interface Cards installed in FPC
The 10-Gigabit xe interfaces also appear in the output of the show command (Figure 5).
Figure 5 - Ten Gigabit Ports
3.1 Correct vQFX Shutdown
Keep in mind that executing the request system power-off or request system halt command is necessary to shut down the device's operating system cleanly. This is also how it should be done in production environments with physical devices.
root@vqfx-re> request system power-off
Wait about a minute for the vQFX RE to power off properly. You can then stop the vRE and vPFE virtual machines using the Stop button in the GNS3 GUI.
4. vQFX Testing
We have connected two Linux VMs to the xe-0/0/0 (e3) and xe-0/0/1 (e4) interfaces to test connectivity through the switch. The hosts are located on the same subnet, 192.168.2.0/24 (Figure 6).
Figure 6 - vQFX Connecting Linux Core VMs
We need to reconfigure the L3 ports as L2 access ports. First of all, delete the configuration for all xe and et ports.
root@vqfx-re> configure
root@vqfx-re# wildcard delete interfaces xe*
root@vqfx-re# wildcard delete interfaces et*
Now, we configure interfaces xe-0/0/0 through xe-0/0/8 as access ports and assign them to the default VLAN.
root@vqfx-re# wildcard range set interfaces xe-0/0/[0-8] unit 0 family ethernet-switching interface-mode access vlan members default
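The wildcard range statement is just shorthand; for the first port, it is equivalent to the following individual set command (repeated for xe-0/0/1 through xe-0/0/8):

root@vqfx-re# set interfaces xe-0/0/0 unit 0 family ethernet-switching interface-mode access vlan members default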
And finally, we will commit configuration changes:
root@vqfx-re# commit
At this point, the hosts should appear in the Ethernet-switching table of the vQFX (Figure 7).
root@vqfx-re> show ethernet-switching table
Figure 7 - Ethernet-Switching Table of vQFX
Finally, ping should work between the Linux hosts (Figure 8).
Figure 8 - Checking Connectivity Between Linux VMs
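Assuming, for example, that the Linux hosts are addressed 192.168.2.1 and 192.168.2.2 (the exact addresses depend on how you configured the VMs), the test from the first host is simply:

$ ping -c 3 192.168.2.2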
End.