Generic Routing Encapsulation (GRE) is a tunneling protocol, originally developed by Cisco, that encapsulates a variety of network layer protocols inside a virtual point-to-point tunnel. Because GRE can carry multicast traffic, it allows routing protocols to exchange routing information between the connected networks. GRE itself provides no security, so it is very often used in conjunction with an IPsec VPN, which in turn cannot carry multicast traffic on its own.
The goal of this tutorial is to show the configuration of a GRE tunnel on a Cisco router and on a device running Linux. I have created a GNS3 lab consisting of two local networks - 192.168.1.0/24 and 192.168.2.0/24 - connected via a GRE tunnel. The GRE tunnel interface is configured on router R1 (Cisco 7206VXR) and on the Core Router (Core Linux with the Quagga routing daemon installed). Both routers have their outside interfaces connected to router R3, which represents the "Internet". To prove that the GRE tunnel is working and transporting multicast traffic, the OSPF routing protocol is started on the R1 and Core routers and configured on the tunnel interfaces and on the interfaces pointing to the local networks.
Note: The Core Linux vmdk image is available for download here. Username/password is tc/tc.
1. Initial Configuration
First we assign hostnames and IP addresses to all devices. Then we configure static routes on R1 and the Core Router to achieve full connectivity between these two routers.
1.1 R3 Configuration
R3(config)#hostname R3
R3(config)#interface gi1/0
R3(config-if)#ip address 1.1.1.1 255.255.255.0
R3(config-if)#no shutdown
R3(config-if)#interface gi0/0
R3(config-if)#ip address 2.2.2.2 255.255.255.0
R3(config-if)#no shutdown
1.2 R1 Configuration
R1(config)# hostname R1
R1(config)# interface gi0/0
R1(config-if)# ip address 1.1.1.10 255.255.255.0
R1(config-if)# no shutdown
R1(config-if)# interface gi1/0
R1(config-if)# ip address 192.168.1.1 255.255.255.0
R1(config-if)# no shutdown
R1(config-if)# ip route 2.2.2.0 255.255.255.0 1.1.1.1
1.3 Core Router Configuration
tc@box:~$ sudo hostname "Core Router"
tc@box:~$ exit
tc@Core Router:~$ sudo ip addr add dev eth0 2.2.2.10/24
tc@Core Router:~$ sudo ip addr add dev eth1 192.168.2.1/24
tc@Core Router:~$ sudo ip route add 1.1.1.0/24 via 2.2.2.2
Add the configuration above to /opt/bootlocal.sh in order to run commands during the boot of Core Linux.
tc@Core Router:~$ echo 'hostname "Core Router"' >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip addr add dev eth0 2.2.2.10/24" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip link set dev eth0 up" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip addr add dev eth1 192.168.2.1/24" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip link set dev eth1 up" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip route add 1.1.1.0/24 via 2.2.2.2" >> /opt/bootlocal.sh
Save configuration with the command below.
tc@Core Router:~$ /usr/bin/filetool.sh -b
At this point we should have connectivity between router R1 and the Core Router. Check it with the ping command.
Picture 2 - Testing Connectivity Between Routers
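For reference, the connectivity check shown in Picture 2 can be run from either side with the addresses used in this lab (outputs omitted):

```
R1# ping 2.2.2.10
tc@Core Router:~$ ping -c 4 1.1.1.10
```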
1.4 PC1 and PC2 Configuration
tc@box:~$ sudo hostname PC1
tc@box:~$ exit
tc@PC1:~$ sudo ip addr add dev eth0 192.168.1.10/24
tc@PC1:~$ sudo ip route add default via 192.168.1.1
Issue the commands below in order to add the configuration to /opt/bootlocal.sh.
tc@PC1:~$ echo "hostname PC1" >> /opt/bootlocal.sh
tc@PC1:~$ echo "ip addr add dev eth0 192.168.1.10/24" >> /opt/bootlocal.sh
tc@PC1:~$ echo "ip link set dev eth0 up" >> /opt/bootlocal.sh
tc@PC1:~$ echo "ip route add default via 192.168.1.1" >> /opt/bootlocal.sh
And finally save configuration.
tc@PC1:~$ /usr/bin/filetool.sh -b
The same steps are repeated on PC2.
tc@box:~$ sudo hostname PC2
tc@box:~$ exit
tc@PC2:~$ sudo ip addr add dev eth0 192.168.2.10/24
tc@PC2:~$ sudo ip route add default via 192.168.2.1
tc@PC2:~$ echo "hostname PC2" >> /opt/bootlocal.sh
tc@PC2:~$ echo "ip addr add dev eth0 192.168.2.10/24" >> /opt/bootlocal.sh
tc@PC2:~$ echo "ip link set dev eth0 up" >> /opt/bootlocal.sh
tc@PC2:~$ echo "ip route add default via 192.168.2.1" >> /opt/bootlocal.sh
tc@PC2:~$ /usr/bin/filetool.sh -b
2. IP GRE Tunnel Configuration on R1 and Core Router
The Maximum Transmission Unit (MTU) is the largest IP packet size that can be sent over a link; on Ethernet it is 1500 bytes. The 1500 B MTU value consists of an IP header (20 B), a TCP header (20 B) and data (payload, 1460 B). A GRE tunnel adds an additional 24 B of overhead: a new 20 B delivery IP header plus a 4 B GRE header. For this reason we must reserve 24 B for the GRE overhead in the IP packet and lower the MTU of the tunnel interface to 1476 bytes.
We also need to set the Maximum Segment Size (MSS) for TCP traffic. The maximum segment size is actually the size of the payload (user data) of a TCP segment, thus 40 bytes lower than the IP MTU (1476 B MTU minus the 20 B IP header and the 20 B TCP header). Therefore the MSS value will be set to 1436 bytes.
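This arithmetic can be double-checked with a few lines of Python; the constants simply mirror the values discussed above:

```python
# GRE tunnel MTU/MSS arithmetic for an Ethernet link.
ETHERNET_MTU = 1500   # default IP MTU on Ethernet, in bytes
GRE_OVERHEAD = 24     # new delivery IP header (20 B) + GRE header (4 B)
IP_HEADER = 20        # inner IPv4 header without options
TCP_HEADER = 20       # TCP header without options

# The tunnel interface MTU must leave room for the GRE overhead.
tunnel_mtu = ETHERNET_MTU - GRE_OVERHEAD

# MSS is the TCP payload size: tunnel MTU minus inner IP and TCP headers.
tcp_mss = tunnel_mtu - IP_HEADER - TCP_HEADER

print(tunnel_mtu, tcp_mss)  # → 1476 1436
```

These are exactly the values configured with ip mtu 1476 and ip tcp adjust-mss 1436 below.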
2.1 R1 Configuration
R1(config)# interface tunnel 0
R1(config-if)# description Tunnel to Core Router
R1(config-if)# ip address 172.16.0.1 255.255.255.0
R1(config-if)# ip mtu 1476
R1(config-if)# ip tcp adjust-mss 1436
R1(config-if)# tunnel source 1.1.1.10
R1(config-if)# tunnel destination 2.2.2.10
2.2 Core Router Configuration
First we load the gre and ip_gre modules into the Linux kernel.
tc@Core Router:~$ sudo modprobe gre && sudo modprobe ip_gre
tc@Core Router:~$ echo "modprobe gre" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "modprobe ip_gre" >> /opt/bootlocal.sh
Then we can create the GRE tunnel.
tc@Core Router:~$ sudo ip tunnel add tun0 mode gre remote 1.1.1.10 local 2.2.2.10 ttl 255
tc@Core Router:~$ sudo ip link set tun0 up
tc@Core Router:~$ sudo ip addr add 172.16.0.2/24 dev tun0
In order to bring the tunnel up after Core Linux boots, we need to add the commands to /opt/bootlocal.sh.
tc@Core Router:~$ echo "ip tunnel add tun0 mode gre remote 1.1.1.10 local 2.2.2.10 ttl 255" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip link set tun0 up" >> /opt/bootlocal.sh
tc@Core Router:~$ echo "ip addr add 172.16.0.2/24 dev tun0" >> /opt/bootlocal.sh
tc@Core Router:~$ /usr/bin/filetool.sh -b
3. OSPF Routing Protocol Configuration on R1 and Core Routers
3.1. R1 Configuration
R1(config)# router ospf 10
R1(config-router)# network 172.16.0.0 0.0.0.255 area 0
R1(config-router)# network 192.168.1.0 0.0.0.255 area 0
To help build the OSPF adjacency we set the OSPF network type with the command ip ospf network broadcast on the tunnel interface; the same network type must be configured on both ends of the tunnel.
R1(config-router)# interface tun0
R1(config-if)# ip ospf network broadcast
R1(config-if)# do write
3.2. Core Router Configuration
The Quagga routing daemon is installed on Core Linux. We will use the Quagga shell (vtysh) and configure the OSPF protocol as follows.
tc@Core Router:~$ sudo vtysh
Core Router# conf t
Core Router(config)# router ospf
Core Router(config-router)# network 172.16.0.0/24 area 0
Core Router(config-router)# network 192.168.2.0/24 area 0
Core Router(config-router)# interface tun0
Core Router(config-if)# ip ospf network broadcast
Core Router(config-if)# do write
Core Router(config-if)# ^Z
Core Router# exit
The MTU on both sides of a tunnel must match in order to establish an OSPF adjacency, so we need to set the MTU of the interface tun0 to 1476 bytes.
tc@Core Router:~$ sudo ip link set tun0 mtu 1476
tc@Core Router:~$ echo "ip link set tun0 mtu 1476" >> /opt/bootlocal.sh
tc@Core Router:~$ /usr/bin/filetool.sh -b
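Once OSPF converges, the adjacency and the routes learned over the tunnel can be verified on both ends (commands only, outputs omitted):

```
R1# show ip ospf neighbor
R1# show ip route ospf
tc@Core Router:~$ sudo vtysh -c "show ip ospf neighbor"
tc@Core Router:~$ ip route
```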
4. IP Tunnel Verification on Core Linux
The GRE tunnel on Core Linux was created with the ip command, so we use the same command to verify it. Verifying the GRE tunnel means verifying the tunnel interface tun0. The command ip addr show tun0 displays L2 and L3 information about the tun0 interface, such as:
the flags <POINTOPOINT,NOARP,UP,LOWER_UP>, the MTU (1476 B), the tunnel state (UNKNOWN), the tunnel source IP address 2.2.2.10, the tunnel destination IP address 1.1.1.10 and the IP address of the tun0 interface, 172.16.0.2/24.
Picture 3 - Characteristics of Interface Tun0 - Working GRE Tunnel
The picture below shows the output of the ip command when the tunnel interface tun0 is administratively down on Core Linux.
Picture 4 - Characteristics of Interface Tun0 - Non Working GRE Tunnel
Notice that the interface tun0 is no longer in the UP, LOWER_UP state and that the state of the tunnel changed to DOWN.
The command ip -s link show tun0 displays network statistics for interface tun0.
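In addition, the command ip tunnel show lists all configured tunnels on the box together with their mode and local and remote endpoints, which is handy on a machine whose tunnel configuration you do not know:

```
tc@Core Router:~$ ip tunnel show
tc@Core Router:~$ ip tunnel show tun0
```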
Picture 5 - Network Statistics of Interface Tun0
5. Structure of IP Packet Encapsulated Inside GRE Tunnel
Picture 6 reveals the structure of an IP packet encapsulated inside the GRE tunnel. SSH traffic is sent from host PC1 (192.168.1.10) to host PC2 (192.168.2.10). The GRE header is added to the original IP packet (IP + TCP + SSH) along with completely new delivery L2 and L3 headers (source IP 1.1.1.10, destination IP address 2.2.2.10). The protocol field inside the delivery IP header is set to 47 (GRE); this is not shown in the picture.
Picture 6 - Structure of IP Packet Encapsulated Inside GRE Tunnel
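To see where the 4 B GRE portion of the 24 B overhead comes from, the minimal GRE header defined in RFC 2784 can be packed by hand. This is only an illustration, not code the tunnel itself uses; with no checksum, key or sequence number options enabled, all flag bits are zero:

```python
import struct

# Minimal GRE header per RFC 2784:
# 2 B flags/version (all zero when no options are set) + 2 B protocol type.
GRE_PROTO_IPV4 = 0x0800                     # EtherType of the encapsulated payload
gre_header = struct.pack("!HH", 0x0000, GRE_PROTO_IPV4)

OUTER_IP_HEADER = 20                        # new delivery IP header (protocol = 47, GRE)
overhead = OUTER_IP_HEADER + len(gre_header)

print(len(gre_header), overhead)            # → 4 24
```

The 24-byte result matches the overhead subtracted from the 1500 B Ethernet MTU in section 2.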
Thanks for the great post !!!!
"sudo: vtysh: command not found" when i type tc@corerouter-1:~$ sudo vtysh
I got the same thing. It probably means the Core Linux image you're using does not have the Quagga daemon installed.
Hi Radovan,
Thanks for the great articles. I have a question about configuration verification in Linux. For example, in Cisco you have the command "sh int tun0" to check the interface, the command "sh ip int br" to see how many GRE interfaces you have, the command "sh run int tun0" to check the configuration, and so on.
But I could not find any verification commands in Linux to show: 1) the status of the tunnel, 2) the configuration of the tunnel and 3) the current GRE tunnels.
Assume there is a Linux box on which some GRE tunnels have already been configured (and you have no idea about the configuration). Now you need to verify the configuration and maybe configure a new one. :)
Thanks
Added to the tutorial. Also, if someone prefers to create a network script instead of using the ip command to create the GRE tunnel on Linux, you can check the configuration of the GRE tunnel in the file /etc/sysconfig/network-scripts/ifcfg-tun0.
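For illustration, a minimal ifcfg-tun0 for this lab might look as follows. This is a sketch for RHEL/CentOS-style initscripts; the option names follow the Red Hat networking guide and should be checked against your distribution's documentation:

```
# /etc/sysconfig/network-scripts/ifcfg-tun0  (sketch, values from this lab)
DEVICE=tun0
TYPE=GRE
ONBOOT=yes
MY_OUTER_IPADDR=2.2.2.10
PEER_OUTER_IPADDR=1.1.1.10
MY_INNER_IPADDR=172.16.0.2/24
PEER_INNER_IPADDR=172.16.0.1
```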
Hi, can you tell more about this? This tutorial covers a one-time GRE setup; if Linux restarts, the configuration is all lost. How can I make it permanent?
You need to put the commands into the startup script according to your Linux distribution. I use Core Linux in the tutorial, and Core uses the startup script /opt/bootlocal.sh. That's why I put those commands into /opt/bootlocal.sh.
Thank you so much for reply, I use Centos 7.
And 1 more thing,
In the Cisco config you have 172.16.0.1 with /30,
but in Linux it's /24.
I think both of them should be /30. It would be better if you edit it. :)
Fixed. Thx.
Man this is really great. If you could also make and share pcaps with the communication that would be great!
Why do you configure ip ospf network broadcast on the tunnel interface if the interface type is "point-to-point"?