topo
outside
|
|
|
|
|
+-----------------------------------------+---------------------------------------------------------------------+
|ls3 localNet |
| |
| |
| |
| |
| lr1-ls1 |
+------------------------------------------+--------------------------------------------------------------------+
|
|
|
|
|
|
|
|
+------------------------------------------+--------------------------------------------------------------------+
|lr1 lr1-ls3 |
| |
| |
| |
| |
| lr1-ls1 lr1-ls2 |
+-------------------+---------------------------------------------------------------+---------------------------+
| |
| |
| |
| |
| |
| |
| |
| |
+-------------------+---------------------------------+ +------------------------+----------------------------+
|ls1 ls1-lr1 | |ls2 ls2-lr1 |
| | | |
|h0_v0_vnet1, h0_v1_vnet1, h1_v0_vnet1, h1_v1_vnet1| |h0_v0_vnet2, h0_v1_vnet2, h1_v0_vnet2, h1_v1_vnet2|
+-----+-------------+-------------+-------------+-----+ +-----+-------------+-------------+-------------+-----+
| | | |
| | | |
| | | |
| | | |
| | | |
| | +----------------------+ |
| | | |
| | | +-------+
| | | |
| | | |
| | | |
| +----------+ | |
| | | |
| | | |
+-----o------------------------o----------------------+ +----------o----------------------------o--------------+
| | | | | | | |
| +--+---------+ +-+----------+ | | +-------+----+ +-+-------+--+ |
| |v0 | |v1 | | | |v0 | |v1 | |
| +------------+ +------------+ | | +------------+ +------------+ |
| | | |
|h0(server+computing) | |h1(computing) |
+-----------------------------------------------------+ +------------------------------------------------------+
(The topo diagram above is a bit messy.)
install the VMs with a script
Install two VMs, v0 and v1, on each of h0 and h1.
rhel_version=$(rpm -E %rhel)
# libvirt && kvm
yum -y install virt-install
yum -y install libvirt
yum install -y python3-lxml.x86_64
rpm -qa | grep qemu-kvm >/dev/null || yum -y install qemu-kvm
if (($rhel_version < 7)); then
service libvirtd restart
else
systemctl restart libvirtd
systemctl start virtlogd.socket
fi
# workaround for virt-install failing to access /dev/kvm
chmod 666 /dev/kvm
# define default vnet
virsh net-define /usr/share/libvirt/networks/default.xml
virsh net-start default
virsh net-autostart default
# define vm name and mac
vm_name=v1
mac4vm=a4:a4:a4:a4:a4:a1
# download image
wget http://netqe-bj.usersys.redhat.com/share/vms/rhel8.4.qcow2 -O /var/lib/libvirt/images/$vm_name.qcow2
# install vm
virt-install \
--name $vm_name \
--vcpus=2 \
--ram=2048 \
--disk path=/var/lib/libvirt/images/$vm_name.qcow2,device=disk,bus=virtio,format=qcow2 \
--network bridge=virbr0,model=virtio,mac=$mac4vm \
--boot hd \
--accelerate \
--graphics vnc,listen=0.0.0.0 \
--force \
--os-type=linux \
--noautoconsole
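The script above installs a single VM; since each host needs both v0 and v1, the same commands can be wrapped in a loop. A sketch (the per-VM MAC suffix scheme and the dry-run `echo` are my own additions; drop the `echo` to actually run it on a host with the images downloaded):

```shell
# Dry run: derive a per-VM MAC (a4:...:a0 for v0, a4:...:a1 for v1)
# and print the virt-install command for each VM instead of running it.
base_image=/var/lib/libvirt/images
for vm_name in v0 v1; do
    mac4vm="a4:a4:a4:a4:a4:a${vm_name#v}"   # strip the leading "v" to get 0/1
    echo virt-install \
        --name "$vm_name" \
        --vcpus=2 --ram=2048 \
        --disk path=$base_image/$vm_name.qcow2,device=disk,bus=virtio,format=qcow2 \
        --network bridge=virbr0,model=virtio,mac="$mac4vm" \
        --boot hd --graphics vnc,listen=0.0.0.0 \
        --noautoconsole
done
```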
setup on computing node
The steps here start OVN and attach the NICs to the VMs.
systemctl start openvswitch
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:177.1.1.1:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=177.1.1.2
systemctl start ovn-controller
mac_v0_vnet1=02:ac:10:ff:01:94
mac_v0_vnet2=02:ac:10:ff:01:95
mac_v1_vnet1=02:ac:10:ff:01:96
mac_v1_vnet2=02:ac:10:ff:01:97
cat <<-EOF > v0-vnet1.xml
<interface type='bridge'>
<target dev='h1_v0_vnet1'/>
<mac address='${mac_v0_vnet1}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
cat <<-EOF > v0-vnet2.xml
<interface type='bridge'>
<target dev='h1_v0_vnet2'/>
<mac address='${mac_v0_vnet2}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
cat <<-EOF > v1-vnet1.xml
<interface type='bridge'>
<target dev='h1_v1_vnet1'/>
<mac address='${mac_v1_vnet1}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
cat <<-EOF > v1-vnet2.xml
<interface type='bridge'>
<target dev='h1_v1_vnet2'/>
<mac address='${mac_v1_vnet2}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
virsh attach-device v0 v0-vnet1.xml
virsh attach-device v0 v0-vnet2.xml
virsh attach-device v1 v1-vnet1.xml
virsh attach-device v1 v1-vnet2.xml
ovs-vsctl set interface h1_v0_vnet2 external-ids:iface-id=h1_v0_vnet2
ovs-vsctl set interface h1_v1_vnet2 external-ids:iface-id=h1_v1_vnet2
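The four heredocs above differ only in the tap-device name and the MAC address, so they can be generated in a loop. A sketch, assuming the same tap names and MACs as above (the `spec` list format is my own convention, not from the original):

```shell
# Generate the four interface XML files for h1 from a vm:vnet:mac spec list.
host=h1
for spec in v0:vnet1:02:ac:10:ff:01:94 \
            v0:vnet2:02:ac:10:ff:01:95 \
            v1:vnet1:02:ac:10:ff:01:96 \
            v1:vnet2:02:ac:10:ff:01:97; do
    vm=${spec%%:*}; rest=${spec#*:}       # v0 / vnet1:02:ac:...
    vnet=${rest%%:*}; mac=${rest#*:}      # vnet1 / 02:ac:...
    cat > "${vm}-${vnet}.xml" <<EOF
<interface type='bridge'>
  <target dev='${host}_${vm}_${vnet}'/>
  <mac address='${mac}'/>
  <source bridge='br-int'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
EOF
done
```

The `virsh attach-device` and `ovs-vsctl set interface` calls could be folded into the same loop.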
setup on server node
The steps here start OVN and attach the NICs to the VMs.
systemctl start openvswitch
systemctl start ovn-northd
ovn-sbctl set-connection ptcp:6642
ovn-nbctl set-connection ptcp:6641
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:177.1.1.1:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=177.1.1.1
systemctl restart ovn-controller
mac_v0_vnet1=04:ac:10:ff:01:94
mac_v0_vnet2=04:ac:10:ff:01:95
mac_v1_vnet1=04:ac:10:ff:01:96
mac_v1_vnet2=04:ac:10:ff:01:97
cat <<-EOF > v0-vnet1.xml
<interface type='bridge'>
<target dev='h0_v0_vnet1'/>
<mac address='${mac_v0_vnet1}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
cat <<-EOF > v0-vnet2.xml
<interface type='bridge'>
<target dev='h0_v0_vnet2'/>
<mac address='${mac_v0_vnet2}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
cat <<-EOF > v1-vnet1.xml
<interface type='bridge'>
<target dev='h0_v1_vnet1'/>
<mac address='${mac_v1_vnet1}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
cat <<-EOF > v1-vnet2.xml
<interface type='bridge'>
<target dev='h0_v1_vnet2'/>
<mac address='${mac_v1_vnet2}'/>
<source bridge='br-int'/>
<virtualport type='openvswitch'/>
<model type='virtio'/>
</interface>
EOF
virsh attach-device v0 v0-vnet1.xml
virsh attach-device v0 v0-vnet2.xml
virsh attach-device v1 v1-vnet1.xml
virsh attach-device v1 v1-vnet2.xml
ovs-vsctl set interface h0_v0_vnet1 external-ids:iface-id=h0_v0_vnet1
ovs-vsctl set interface h0_v1_vnet1 external-ids:iface-id=h0_v1_vnet1
create the ovn topo on the server node
The MAC addresses used in this step must match the ones used in the two steps above.
mac_h0_v0_vnet1=04:ac:10:ff:01:94
mac_h0_v0_vnet2=04:ac:10:ff:01:95
mac_h0_v1_vnet1=04:ac:10:ff:01:96
mac_h0_v1_vnet2=04:ac:10:ff:01:97
mac_h1_v0_vnet1=02:ac:10:ff:01:94
mac_h1_v0_vnet2=02:ac:10:ff:01:95
mac_h1_v1_vnet1=02:ac:10:ff:01:96
mac_h1_v1_vnet2=02:ac:10:ff:01:97
# add logical switch
ovn-nbctl ls-add ls1 -- add Logical_Switch ls1 other_config subnet=172.16.1.0/24
ovn-nbctl ls-add ls2 -- add Logical_Switch ls2 other_config subnet=172.16.2.0/24
# setup ls ipv6_prefix
ovn-nbctl set Logical_Switch ls1 other_config:ipv6_prefix=2001:db8:1::0
ovn-nbctl set Logical_Switch ls2 other_config:ipv6_prefix=2001:db8:2::0
# create dhcp_options
dhcp_options1=$(ovn-nbctl create DHCP_Options cidr=172.16.1.0/24 \
options="\"server_id\"=\"172.16.1.254\" \"server_mac\"=\"00:00:00:00:01:00\" \
\"lease_time\"=\"$((36000 + RANDOM % 3600))\" \"router\"=\"172.16.1.254\" \"dns_server\"=\"172.16.1.254\"")
dhcp_options2=$(ovn-nbctl create DHCP_Options cidr=172.16.2.0/24 \
options="\"server_id\"=\"172.16.2.254\" \"server_mac\"=\"00:00:00:00:02:00\" \
\"lease_time\"=\"$((36000 + RANDOM % 3600))\" \"router\"=\"172.16.2.254\" \"dns_server\"=\"172.16.2.254\"")
dhcpv6_options1=$(ovn-nbctl create DHCP_Options cidr="2001\:db8\:1\:\:0/64" \
options="\"server_id\"=\"00:00:00:00:01:00\" \"dns_server\"=\"2001:db8:1::254\"")
dhcpv6_options2=$(ovn-nbctl create DHCP_Options cidr="2001\:db8\:2\:\:0/64" \
options="\"server_id\"=\"00:00:00:00:02:00\" \"dns_server\"=\"2001:db8:2::254\"")
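The `lease_time` expression above picks a slightly randomized lease so that the two subnets don't renew in lockstep. A quick check of its range:

```shell
# RANDOM is 0..32767, so RANDOM % 3600 is 0..3599 and the
# resulting lease_time always falls in [36000, 39599] seconds.
lease_time=$((36000 + RANDOM % 3600))
echo "$lease_time"
```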
# create logical switch port and setup dhcp_option
lsp_name=h0_v0_vnet1
mac=$mac_h0_v0_vnet1
ovn-nbctl lsp-add ls1 $lsp_name
ovn-nbctl lsp-set-addresses $lsp_name "$mac dynamic"
ovn-nbctl lsp-set-dhcpv4-options $lsp_name ${dhcp_options1}
ovn-nbctl add Logical_Switch_Port $lsp_name dhcpv6_options ${dhcpv6_options1}
lsp_name=h0_v1_vnet1
mac=$mac_h0_v1_vnet1
ovn-nbctl lsp-add ls1 $lsp_name
ovn-nbctl lsp-set-addresses $lsp_name "$mac dynamic"
ovn-nbctl lsp-set-dhcpv4-options $lsp_name ${dhcp_options1}
ovn-nbctl add Logical_Switch_Port $lsp_name dhcpv6_options ${dhcpv6_options1}
lsp_name=h1_v0_vnet2
mac=$mac_h1_v0_vnet2
ovn-nbctl lsp-add ls2 $lsp_name
ovn-nbctl lsp-set-addresses $lsp_name "$mac dynamic"
ovn-nbctl lsp-set-dhcpv4-options $lsp_name ${dhcp_options2}
ovn-nbctl add Logical_Switch_Port $lsp_name dhcpv6_options ${dhcpv6_options2}
lsp_name=h1_v1_vnet2
mac=$mac_h1_v1_vnet2
ovn-nbctl lsp-add ls2 $lsp_name
ovn-nbctl lsp-set-addresses $lsp_name "$mac dynamic"
ovn-nbctl lsp-set-dhcpv4-options $lsp_name ${dhcp_options2}
ovn-nbctl add Logical_Switch_Port $lsp_name dhcpv6_options ${dhcpv6_options2}
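The four port blocks above repeat the same four commands, so a small helper makes the pattern explicit. A sketch (the `add_lsp` helper and the dry-run `echo` prefix are my own; drop the `echo` on the real server node, where `dhcp_options1/2` and `dhcpv6_options1/2` hold the UUIDs created above):

```shell
# Dry run: print the four ovn-nbctl commands needed per logical switch port.
add_lsp() {
    ls=$1 lsp=$2 mac=$3 v4opt=$4 v6opt=$5
    echo ovn-nbctl lsp-add "$ls" "$lsp"
    echo ovn-nbctl lsp-set-addresses "$lsp" "$mac dynamic"
    echo ovn-nbctl lsp-set-dhcpv4-options "$lsp" "$v4opt"
    echo ovn-nbctl add Logical_Switch_Port "$lsp" dhcpv6_options "$v6opt"
}
add_lsp ls1 h0_v0_vnet1 04:ac:10:ff:01:94 "$dhcp_options1" "$dhcpv6_options1"
add_lsp ls1 h0_v1_vnet1 04:ac:10:ff:01:96 "$dhcp_options1" "$dhcpv6_options1"
add_lsp ls2 h1_v0_vnet2 02:ac:10:ff:01:95 "$dhcp_options2" "$dhcpv6_options2"
add_lsp ls2 h1_v1_vnet2 02:ac:10:ff:01:97 "$dhcp_options2" "$dhcpv6_options2"
```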
# create logical router lr1
ovn-nbctl lr-add lr1
ovn-nbctl lrp-add lr1 lr1-ls1 00:00:00:00:01:00 172.16.1.254/24
ovn-nbctl lrp-add lr1 lr1-ls2 00:00:00:00:02:00 172.16.2.254/24
# connect lr1 and ls1
ovn-nbctl lsp-add ls1 ls1-lr1
ovn-nbctl lsp-set-type ls1-lr1 router
ovn-nbctl lsp-set-addresses ls1-lr1 00:00:00:00:01:00
ovn-nbctl lsp-set-options ls1-lr1 router-port=lr1-ls1
# connect lr1 and ls2
ovn-nbctl lsp-add ls2 ls2-lr1
ovn-nbctl lsp-set-type ls2-lr1 router
ovn-nbctl lsp-set-addresses ls2-lr1 00:00:00:00:02:00
ovn-nbctl lsp-set-options ls2-lr1 router-port=lr1-ls2
# connect to outside
ovn-nbctl ls-add ls3
ovn-nbctl lrp-add lr1 lr1-ls3 00:00:00:00:03:00 172.16.3.254/24
ovn-nbctl lsp-add ls3 ls3-lr1
ovn-nbctl lsp-set-type ls3-lr1 router
ovn-nbctl lsp-set-addresses ls3-lr1 00:00:00:00:03:00
ovn-nbctl lsp-set-options ls3-lr1 router-port=lr1-ls3
ovn-nbctl lsp-add ls3 ls3-localnet
ovn-nbctl lsp-set-type ls3-localnet localnet
ovn-nbctl lsp-set-addresses ls3-localnet unknown
ovn-nbctl lsp-set-options ls3-localnet network_name=outNet
ovs-vsctl add-br br-out
ovs-vsctl add-port br-out ens2f1
ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=outNet:br-out
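The localnet port's `network_name=outNet` is what ties it to the physical network: on each chassis that should carry this traffic, `ovn-bridge-mappings` maps that name to a local OVS bridge holding the uplink NIC. A tiny sketch of how one `netname:bridge` entry splits:

```shell
# ovn-bridge-mappings is a comma-separated list of netname:bridge pairs;
# ls3-localnet's network_name (outNet) selects br-out on this chassis.
mapping="outNet:br-out"
net_name=${mapping%%:*}   # outNet -> must equal the localnet port's network_name
bridge=${mapping##*:}     # br-out -> local OVS bridge with the uplink port
echo "$net_name -> $bridge"
```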
# There are two ways to create an L3 gateway.
# One is an l3gateway, which is applied to the router: ovn-nbctl create Logical_Router name=lr1 options:chassis={chassis_uuid}
# The other is a distributed gateway port, which is applied to a router port:
ovn-nbctl lrp-set-gateway-chassis lr1-ls3 dfe51fb0-0c76-437c-ad95-709b98998b4f
# On the outside system, you also need to create a route to 172.16.2.0/24:
ip route add 172.16.2.0/24 via 172.16.3.254 dev ens3f1np1
# Then ping from vm to outside
[root@localhost ~]# ping 172.16.3.253
PING 172.16.3.253 (172.16.3.253) 56(84) bytes of data.
64 bytes from 172.16.3.253: icmp_seq=1 ttl=63 time=1.96 ms
64 bytes from 172.16.3.253: icmp_seq=2 ttl=63 time=0.457 ms
# And ping from outside to vm
[root@hp-dl388g8-22 ~]# ping 172.16.2.2
PING 172.16.2.2 (172.16.2.2) 56(84) bytes of data.
64 bytes from 172.16.2.2: icmp_seq=1 ttl=63 time=1.80 ms
64 bytes from 172.16.2.2: icmp_seq=2 ttl=63 time=0.671 ms
discussion
https://www.ovn.org/support/dist-docs/ovn-architecture.7.html
The primary design goal of distributed gateway ports is to allow as
much traffic as possible to be handled locally on the hypervisor where
a VM or container resides. Whenever possible, packets from the VM or
container to the outside world should be processed completely on that
VM's or container's hypervisor, eventually traversing a localnet port
instance or a tunnel to the physical network or a different OVN
deployment. Whenever possible, packets from the outside world to a VM
or container should be directed through the physical network directly
to the VM's or container's hypervisor.
The doc says each hypervisor handles its outbound traffic locally whenever possible, and traffic from the outside to a VM is likewise delivered over the physical network directly to the VM's hypervisor, without passing through the gateway chassis. How is that achieved?
I can't figure it out at all.
Only the table lookups happen on each hypervisor, right? The final forwarding still has to go through the gateway chassis, doesn't it?