OpenShift Summary

Red Hat docs

Install SNO

https://docs.google.com/document/d/1UZAAlbTU7g97vqkWhsfL2i3LpOaPhmMSus3oi6m-cXI/edit#

OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. 
The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. 
The major tradeoff with an installation on a single node is the lack of high availability.

Log in to the node (using the SSH key you specified during installation, from the machine on which you generated the key; call it the "management machine"):
# ssh core@$ip

Log in to OCP from the node or your management machine:
# oc login

Log in to the web console: https://console-openshift-console.apps.liali02.baodi123.com/operatorhub/ns/default

Performance Addon Operator

PAO for PTP

1. Install it from the web console OperatorHub.

2. Create a PerformanceProfile (this config may not be complete):
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: liali-performanceprofile
spec:
  additionalKernelArgs:
    - nmi_watchdog=0
    - audit=0
    - mce=off
    - processor.max_cstate=1
    - idle=poll
    - intel_idle.max_cstate=0
  cpu:
    isolated: '4-19,30-47' # used by workload containers
    reserved: '0-3,20-23'  # used by OCP housekeeping containers
  numa:
    topologyPolicy: restricted
  globallyDisableIrqLoadBalancing: true  # if true, IRQs are not balanced onto the isolated CPUs
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 2
        node: 0
        size: 1G
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: '' # selects a MachineConfigPool; for SNO, the master MCP has this label
  net:
    userLevelNetworking: true  # reduces the NIC port queue count to the pod-available CPU count
  nodeSelector:
    node-role.kubernetes.io/master: ''  # selects a node that has the master role
  realTimeKernel:
    enabled: true  # set to true to use the rt-kernel
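
Apply and verify (a minimal sketch, assuming the profile above is saved as performanceprofile.yaml; the node reboots when the master MCP rolls out the change):
# oc apply -f performanceprofile.yaml
# oc get performanceprofile liali-performanceprofile
# oc get mcp master -w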


useful commands:
Get the CPUs that the pod configured for IRQ dynamic load balancing runs on:
# oc exec -it dynamic-irq-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print \$2}'"

Ensure the default system CPU affinity mask does not include the dynamic-irq-pod (isolated) CPUs, for example, CPUs 2 and 3:
# cat /proc/irq/default_smp_affinity

Ensure the system IRQs are not configured to run on the dynamic-irq-pod (isolated) CPUs:
# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="$1"; mask=$(cat $i); file=$(echo $i); echo $file: $mask' _ {} \;

# cat /proc/irq/<irq-num>/effective_affinity

You can view which threads are running on the host CPUs by logging in to the cluster and running the following command:
# lscpu --all --extended

Alternatively, to view the threads that are set for a particular physical CPU core (cpu0 in the example below), open a command prompt and run the following:
# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

Tuned for PTP

[core@dell-per740-87 ~]$ cat tuned.yaml 
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: performance-patch
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - name: performance-patch
      # Please note:
      # - The 'include' line must match the associated PerformanceProfile name, following below pattern
      #   include=openshift-node-performance-${PerformanceProfile.metadata.name}
      # - The 'cmdline_crash' CPU set must match the 'isolated' set in the associated PerformanceProfile
      # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from
      #   the [sysctl] section and remove the entire section if it is empty.
      data: |
        [main]
        summary=Configuration changes to the ran-du perf profile
        include=openshift-node-performance-liali-performanceprofile
        [bootloader]
        cmdline_crash=nohz_full=4-19,30-47
        [sysctl]
        kernel.timer_migration=1
        [scheduler]
        group.ice-ptp=0:f:10:*:ice-ptp.*
        [service]
        service.stalld=start,enable
        service.chronyd=stop,disable
  recommend:
    - machineConfigLabels:
        machineconfiguration.openshift.io/role: master
      priority: 19
      profile: performance-patch


# oc label node dell-per740-87.rhts.eng.pek2.redhat.com machineconfiguration.openshift.io/role="master" 
# oc create -f tuned.yaml 
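
To confirm the patch profile was applied, the Node Tuning Operator keeps a per-node Profile object (the object name matches the node name):
# oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator
# oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator dell-per740-87.rhts.eng.pek2.redhat.com -o yaml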

Use rt-kernel (Machine Config)

https://docs.openshift.com/container-platform/4.12/post_installation_configuration/machine-configuration-tasks.html#nodes-nodes-rtkernel-arguments_post-install-machine-configuration-tasks

1. Create a machine config for the real-time kernel:
# cat << EOF > 99-master-realtime.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "master"
  name: 99-master-realtime
spec:
  kernelType: realtime
EOF

2. Add the machine config to the cluster. 
# oc create -f 99-master-realtime.yaml

3. To restore the old kernel, delete the machine config:
# oc delete -f 99-master-realtime.yaml
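
To verify which kernel the node is running after the MCP finishes (a quick check via oc debug; expect a kernel release containing "rt" when the real-time kernel is active):
# oc debug node/<node_name> -- chroot /host uname -r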

other commands:
# oc get nodes
# oc debug node/ip-10-0-143-147.us-east-2.compute.internal
# oc get mcp worker
# oc describe mcp worker
# oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4
# oc get machineconfigpool
# oc get machineconfigs
# oc describe machineconfigs 01-master-kubelet

Change Kernel Options

1. List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config:
# oc get MachineConfig

2. Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker # applies the new kernel argument only to worker nodes
  name: 05-worker-kernelarg-selinuxpermissive  # named to show where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode)
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - enforcing=0 # the exact kernel argument, enforcing=0

3. Create the new machine config:
# oc create -f 05-worker-kernelarg-selinuxpermissive.yaml

4. Check the machine configs to see that the new one was added:
# oc get MachineConfig

5. Check the nodes: 
# oc get nodes
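
To confirm the argument actually reached the node's kernel command line (run once the MachineConfigPool reports UPDATED=True and the node has rebooted):
# oc debug node/<node_name> -- chroot /host cat /proc/cmdline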

[core@dell-per740-87 ~]$ cat 05-kernelarg-rcutree-kthread-prio.yaml 
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 05-worker-kernelarg-rcutree-kthread-prio
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - rcutree.kthread_prio=1

[core@dell-per740-87 ~]$ oc delete -f 05-kernelarg-rcutree-kthread-prio.yaml 
machineconfig.machineconfiguration.openshift.io "05-worker-kernelarg-rcutree-kthread-prio" deleted

[core@dell-per740-87 ~]$ oc create -f 05-kernelarg-rcutree-kthread-prio.yaml 
machineconfig.machineconfiguration.openshift.io/05-worker-kernelarg-rcutree-kthread-prio created

[core@dell-per740-87 ~]$ oc get MachineConfigPool
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-903cc3f724210542531c9c43ac3b7cfc   False     True       False      1              0                   0                     0                      42h
worker   rendered-worker-827f7a87dbe0484d613c40cce184c66a   True      False      False      0              0                   0                     0                      42h

Disable NTP Using Machine Config

1. Create the MachineConfig CR that disables chronyd for the specified node role.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: <node_role> # node role where you want to disable chronyd, for example, master
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=NTP client/server
            Documentation=man:chronyd(8) man:chrony.conf(5)
            After=ntpdate.service sntp.service ntpd.service
            Conflicts=ntpd.service systemd-timesyncd.service
            ConditionCapability=CAP_SYS_TIME
            [Service]
            Type=forking
            PIDFile=/run/chrony/chronyd.pid
            EnvironmentFile=-/etc/sysconfig/chronyd
            ExecStart=/usr/sbin/chronyd $OPTIONS
            ExecStartPost=/usr/libexec/chrony-helper update-daemon
            PrivateTmp=yes
            ProtectHome=yes
            ProtectSystem=full
            [Install]
            WantedBy=multi-user.target
          enabled: false
          name: "chronyd.service"

2. Create the MachineConfig CR by running the following command: 
# oc create -f disable-chronyd.yaml
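
To check that chronyd really ended up disabled on the node (a hedged example; the MCO reboots the node before the change lands):
# oc debug node/<node_name> -- chroot /host systemctl is-enabled chronyd
# oc debug node/<node_name> -- chroot /host systemctl is-active chronyd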

PTP operator

https://docs.openshift.com/container-platform/4.12/networking/using-ptp.html

Install it from the web console OperatorHub: https://console-openshift-console.apps.liali02.baodi123.com/operatorhub/ns/default

You need to label the node with node-role.kubernetes.io/worker= and the SR-IOV capability label:
# oc label node dell-per740-87.rhts.eng.pek2.redhat.com node-role.kubernetes.io/worker=
# oc label node dell-per740-87.rhts.eng.pek2.redhat.com feature.node.kubernetes.io/network-sriov.capable="true"

If the PtpConfig is not loaded automatically, recreate the linuxptp-daemon pod.
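
For example, deleting the daemon pod lets the DaemonSet recreate it and re-read the config (assuming the pods carry the app=linuxptp-daemon label, which matches the DaemonSet selector in openshift-ptp):
# oc delete pod -n openshift-ptp -l app=linuxptp-daemon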

# A ptpconfig example:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ptpconfig-1
  namespace: openshift-ptp
spec:
  profile:
    - name: ptpconfig-profile-1
      phc2sysOpts: '-a -r'
      ptp4lConf: |
        [ens4f0]
        masterOnly 0
        [ens4f2]
        masterOnly 1
        [ens4f3]
        masterOnly 1
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        slaveOnly 0
        priority1 128
        priority2 128
        domainNumber 24
        clockClass 248
        clockAccuracy 0xFE
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval -4
        announceReceiptTimeout 10
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval 4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval -4
        kernel_leap 1
        check_fup_sync 0
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 0.0
        first_step_threshold 0.00002
        max_frequency 900000000
        clock_servo pi
        sanity_freq_limit 200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type BC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0xA0
      ptp4lOpts: '-m -2'
      #ptpClockThreshold:
        #holdOverTimeout: 5
        #maxOffsetThreshold: 100
        #minOffsetThreshold: -100
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
  recommend:
    - match:
        - nodeLabel: node-role.kubernetes.io/worker=
          nodeName: dell-per740-87.rhts.eng.pek2.redhat.com
      priority: 50
      profile: ptpconfig-profile-1
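
Apply and check it (assuming the YAML above is saved as ptpconfig.yaml):
# oc apply -f ptpconfig.yaml
# oc get ptpconfig -n openshift-ptp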

useful commands:
# oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container

[core@dell-per740-87 ~]$ oc get pods -n openshift-ptp -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE                                      NOMINATED NODE   READINESS GATES
linuxptp-daemon-25fbc           2/2     Running   4          42h   10.73.131.77   dell-per740-87.rhts.eng.pek2.redhat.com   <none>           <none>
ptp-operator-5b5f9797fc-rz8ps   1/1     Running   11         42h   10.128.0.81    dell-per740-87.rhts.eng.pek2.redhat.com   <none>           <none>

[core@dell-per740-87 ~]$ oc -n openshift-ptp logs linuxptp-daemon-25fbc -c linuxptp-daemon-container|grep chrt
I0311 00:39:59.108008    6898 daemon.go:286] /bin/chrt -f 10 /usr/sbin/phc2sys -a -r -m -u 1 -z /var/run/ptp4l.0.socket -t [ptp4l.0.config]
I0311 00:39:59.108192    6898 daemon.go:286] /bin/chrt -f 10 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -m -2
I0311 00:40:00.129150    6898 daemon.go:318] phc2sys cmd: /bin/chrt -f 10 /usr/sbin/phc2sys -a -r -m -u 1 -z /var/run/ptp4l.0.socket -t [ptp4l.0.config]
I0311 00:40:01.108738    6898 daemon.go:318] ptp4l cmd: /bin/chrt -f 10 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -m -2
I0311 00:43:31.141705    6898 daemon.go:286] /bin/chrt -f 10 /usr/sbin/phc2sys -a -r -m -u 1 -z /var/run/ptp4l.0.socket -t [ptp4l.0.config]
I0311 00:43:31.141779    6898 daemon.go:286] /bin/chrt -f 10 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -m -2
I0311 00:43:32.142679    6898 daemon.go:318] phc2sys cmd: /bin/chrt -f 10 /usr/sbin/phc2sys -a -r -m -u 1 -z /var/run/ptp4l.0.socket -t [ptp4l.0.config]
I0311 00:43:33.142930    6898 daemon.go:318] ptp4l cmd: /bin/chrt -f 10 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -m -2

Troubleshooting
1. Check that the Operator and operands are successfully deployed in the cluster for the configured nodes.
 $ oc get pods -n openshift-ptp -o wide

2. Check that supported hardware is found in the cluster.
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io

3. Check the available PTP network interfaces for a node:
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml

4. Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node.
$ oc get pods -n openshift-ptp -o wide
$ oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>
where <linux_daemon_container> is the linuxptp-daemon pod you want to diagnose, for example linuxptp-daemon-lmvgn.

In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client (pmc) tool to diagnose the network interface. 
Run the following pmc command to check the sync status of the PTP device, for example ptp4l.
# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'

sending: GET PORT_DATA_SET
    40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET
        portIdentity            40a6b7.fffe.166ef0-1
        portState               SLAVE
        logMinDelayReqInterval  -4
        peerMeanPathDelay       0
        logAnnounceInterval     -3
        announceReceiptTimeout  3
        logSyncInterval         -4
        delayMechanism          1
        logMinPdelayReqInterval -4
        versionNumber           2

SR-IOV Operator

https://docs.openshift.com/container-platform/4.12/networking/hardware_networks/about-sriov.html

Prepare

Install it from OperatorHub using the web console.

Label the node first:
# oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true"

For SNO, you must set the disableDrain field to true. Enter the following command (or change the "SR-IOV operator config" in the web console):
# oc patch sriovoperatorconfig default --type=merge \
  -n openshift-sriov-network-operator \
  --patch '{ "spec": { "disableDrain": true } }'

Configure the SR-IOV network device (create VFs using an SR-IOV network node policy config)

You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy; the policy tells the Operator which device to configure and how many VFs to create.

For some reason the default config did not take effect; a new config had to be created, with priority=10.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name> # the name for the custom resource object
  namespace: openshift-sriov-network-operator # the namespace where the SR-IOV Network Operator is installed
spec:
  resourceName: <sriov_resource_name> # the resource name of the SR-IOV network device plugin; you can create multiple policies for one resource name
  nodeSelector:
    # The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected
    # nodes are configured; the SR-IOV Container Network Interface (CNI) plugin and device plugin are
    # deployed on selected nodes only.
    feature.node.kubernetes.io/network-sriov.capable: "true"
  # Optional: the priority is an integer value between 0 and 99; a smaller value receives higher
  # priority (for example, 10 is a higher priority than 99). The default value is 99.
  priority: <priority>
  mtu: <mtu>
  needVhostNet: false
  numVfs: <num>
  # The NIC selector identifies the device for the Operator to configure. You do not have to specify
  # values for all the parameters, but identify the device precisely enough to avoid selecting one
  # unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID,
  # or pfNames.
  nicSelector:
    vendor: "<vendor_code>"
    deviceID: "<device_id>"
    pfNames: ["<pf_name>", ...]
    rootDevices: ["<pci_bus_id>", ...]
    netFilter: "<filter_string>"
  deviceType: <device_type>
  isRdma: false
  linkType: <link_type>
  eSwitchMode: "switchdev"

deviceType:
netdevice driver: A regular kernel network device in the netns of the container
vfio-pci driver: A character device mounted in the container

# example SR-IOV network node policy config
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: liali111
  namespace: openshift-sriov-network-operator
spec:
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  priority: 99
  nicSelector:
    deviceID: '1593'
    rootDevices:
      - '0000:ca:00.0'
    vendor: '8086'
  mtu: 1500
  deviceType: netdevice
  isRdma: false
  linkType: eth
  needVhostNet: false
  resourceName: liali111
  numVfs: 6
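
After the policy syncs (the node may reboot), the VFs appear as an allocatable extended resource named openshift.io/<resourceName> on the node; the per-node sync status is in the SriovNetworkNodeState object (a sketch using the names from the example above):
# oc get sriovnetworknodestates -n openshift-sriov-network-operator dell-per740-87.rhts.eng.pek2.redhat.com -o yaml
# oc describe node dell-per740-87.rhts.eng.pek2.redhat.com | grep openshift.io/liali111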

Configuring an SR-IOV Ethernet network attachment (SR-IOV network config)

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name>  # a name for the object; the SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name
  namespace: openshift-sriov-network-operator  # the namespace where the SR-IOV Network Operator is installed
spec:
  resourceName: <sriov_resource_name> # the value of spec.resourceName from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network
  networkNamespace: <target_namespace>  # the target namespace for the SriovNetwork object; only pods in the target namespace can attach to the additional network
  vlan: <vlan> 
  spoofChk: "<spoof_check>" 
  ipam: |- 
    {}
  linkState: <link_state> 
  maxTxRate: <max_tx_rate> 
  minTxRate: <min_tx_rate> 
  vlanQoS: <vlan_qos> 
  trust: "<trust_vf>" 
  capabilities: <capabilities> 

Note: resourceName must be the value of the spec.resourceName parameter from the SriovNetworkNodePolicy object.

# example config
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: vf1
  namespace: openshift-sriov-network-operator
spec:
  ipam: |
    {
      "type": "static",
      "addresses": [
        {
          "address": "191.168.1.1/24"
        }
      ]
    }
  networkNamespace: default # only pods in the default namespace can use this VF
  resourceName: liali111
  spoofChk: 'on'
  trust: 'on'
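
The Operator creates a matching NetworkAttachmentDefinition in the networkNamespace; confirm it exists before referencing it from a pod:
# oc get network-attachment-definitions -n default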

Add VF to pod

Create a pod and specify which VF to use via the annotation "k8s.v1.cni.cncf.io/networks: vf1".

# config
apiVersion: v1
kind: Pod
metadata:
  name: centos
  labels:
    app: centos
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: vf1 # the SR-IOV network attachment name (the SriovNetwork metadata.name)
spec:
  restartPolicy: OnFailure
  containers:
    - name: centos
      image: 'docker.io/library/centos'
      command: ["/bin/sleep", "3650d"]
      ports:
        - containerPort: 8080
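
Once the pod is running, the VF normally shows up as net1 inside the pod (net1 is the conventional name for the first additional CNI attachment; this also assumes the centos image ships iproute). The network-status annotation records the attachment as well:
# oc exec -it centos -- ip addr show net1
# oc describe pod centos | grep -A5 network-status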

Use iperf3 pod

1. Create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: iperf3
  labels:
    app: iperf3
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: vf2
spec:
  restartPolicy: OnFailure
  containers:
    - name: iperf3
      image: 'docker.io/networkstatic/iperf3'
      command: ["/bin/sleep", "3650d"]
      ports:
        - containerPort: 5201


2. Start the server with "iperf3 -s -D" from the node terminal (in the web console).
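
3. Run the client from the iperf3 pod against the server (a usage sketch; substitute the server's address on the VF network, e.g. the static 191.168.1.x address assigned above):
# oc exec -it iperf3 -- iperf3 -c <server_ip> -t 30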

iperf3 with PAO

apiVersion: v1
kind: Pod
metadata:
  name: iperf3-1
  labels:
    app: iperf3-1
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: ens4f2-vf2
    cpu-load-balancing.crio.io: "disable"
    irq-load-balancing.crio.io: "disable"
spec:
  restartPolicy: OnFailure
  containers:
    - name: iperf3-1
      image: 'docker.io/networkstatic/iperf3'
      command: ["/bin/sleep", "3650d"]
      ports:
        - containerPort: 5201
      resources:
        limits:
          memory: "200Mi"
          cpu: "14"
        requests:
          memory: "200Mi"
          cpu: "14"
  runtimeClassName: performance-liali-performanceprofile
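
Note that runtimeClassName follows the pattern performance-<PerformanceProfile name>; the PAO creates this RuntimeClass. To confirm the container is pinned to isolated CPUs (a quick check; the list should be a 14-CPU subset of the isolated set from the PerformanceProfile, and the annotations exclude those CPUs from CPU and IRQ load balancing):
# oc exec -it iperf3-1 -- grep Cpus_allowed_list /proc/self/status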