Enable Platform9 DHCP
Platform9 DHCP is the recommended DHCP solution (vs. whereabouts) for KubeVirt installations. With whereabouts, an issue occurs during live migration: the virtual machine's IP address changes when the VM is migrated to the target host.
What is Platform9 DHCP
Platform9 created an alternative to whereabouts for the KubeVirt use case by running a DHCP server inside a pod to answer DHCP requests from the virtual machine instance (not the pod, in the case of KubeVirt). The Multus NetworkAttachmentDefinition uses this DHCP server, so there is no need to specify an IPAM plugin. The client/consumer VM needs dhclient in order to send DHCP requests.
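As an illustrative sketch, the guest can request an address on its secondary (Multus) interface at boot via cloud-init; the interface name `eth1` is an assumption and depends on the guest OS:

```yaml
#cloud-config
# Sketch only: run dhclient on the VM's secondary interface at first boot.
# The interface name "eth1" is an assumption; verify the actual name
# inside the guest (e.g. with `ip link`).
runcmd:
  - dhclient eth1
```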
Prerequisites
- Kubevirt
- Advanced Networking Operator (Luigi)
- Kubemacpool
Enabling the Platform9 DHCP Addon
- Using the Advanced Networking Operator (Luigi) add-on, enable the dhcpController plugin by creating a NetworkPlugins resource as shown:
```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample-nosriov
spec:
  plugins:
    ...
    dhcpController: {}
```

- This creates the `dhcp-controller-system` namespace with the controller for dnsmasq. It also installs Kubemacpool in the `dhcp-controller-system` namespace.
- Create a `HostNetworkTemplate` and a `NetworkAttachmentDefinition`:
```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: ovs-br03-config
spec:
  # Add fields here
  ovsConfig:
    - bridgeName: "ovs-br03"
      nodeInterface: "ens5"
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-dnsmasq-test-woipam
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/ovs-br03
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "name": "ovs-dnsmasq-test-woipam",
      "bridge": "ovs-br03"
    }'
```

The `NetworkAttachmentDefinition` used with Platform9 IPAM omits the `ipam` section in the config, which is present when whereabouts is used as IPAM. Platform9 IPAM also supports OVS-DPDK networks.

- Create a `DHCPServer`:
```yaml
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1
kind: DHCPServer
metadata:
  name: dhcpserver-sample
spec:
  networks:
    - networkName: ovs-dnsmasq-test-woipam
      interfaceIp: 192.168.15.14/24
      leaseDuration: 10m
      vlanId: vlan1
      cidr:
        range: 192.168.15.0/24
        range_start: 192.168.15.30
        range_end: 192.168.15.100
        gateway: 192.168.15.1
```

- About the fields:
a. name: Name of the DHCPServer. The dnsmasq configuration is generated in a ConfigMap with the same name.
b. networks: List of all networks this pod will serve.
  - networkName: Name of the NetworkAttachmentDefinition. It should not have the DHCP plugin enabled.
  - interfaceIp: IP address allocated to the pod. It must include a prefix so that the proper routes are added.
  - leaseDuration: How long the offered leases remain valid. Provide a format valid for dnsmasq (e.g. 10m, 5h). Defaults to 1h.
  - vlanId: dnsmasq network identifier. Used as an identifier while restoring IPs.
  - cidr: range is compulsory; range_start, range_end, and gateway are optional. If range_start and range_end are provided, they are used in place of the default start and end of the range.
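To illustrate the relationship between these fields, the sketch below uses Python's standard `ipaddress` module with the sample DHCPServer's values. The assumption that the defaults are the first and last usable hosts of the range is illustrative, not confirmed by this document:

```python
import ipaddress

# The sample DHCPServer's pool definition.
net = ipaddress.ip_network("192.168.15.0/24")

# Assumption (illustrative only): with range_start/range_end omitted,
# the pool would span the usable hosts of the range.
hosts = list(net.hosts())
print(hosts[0], hosts[-1])  # 192.168.15.1 192.168.15.254

# Explicit range_start/range_end from the sample must fall inside the range.
start = ipaddress.ip_address("192.168.15.30")
end = ipaddress.ip_address("192.168.15.100")
assert start in net and end in net and start < end
```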
- A ConfigMap is generated based on the DHCPServer; it holds the conf file for dnsmasq. It can be overridden by creating a valid ConfigMap with the same name as the DHCPServer.
For any specific configuration, you can provide your own ConfigMap with valid dnsmasq.conf parameters. In addition, dhcp-range must be in one of these two formats:
- `dhcp-range=<start_ip>,<end_ip>,<netmask>,<leasetime>`
- `dhcp-range=<vlanID>,<start_ip>,<end_ip>,<netmask>,<leasetime>`
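For illustration, an override ConfigMap might look like the sketch below. Only the name (matching the DHCPServer) and the dhcp-range format come from the text above; the data key `dnsmasq.conf` and the namespace are assumptions:

```yaml
# Sketch of an override ConfigMap for the sample DHCPServer.
# Assumptions: the data key "dnsmasq.conf" and the namespace are not
# confirmed by this document; the name must match the DHCPServer.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dhcpserver-sample
  namespace: dhcp-controller-system
data:
  dnsmasq.conf: |
    dhcp-range=vlan1,192.168.15.30,192.168.15.100,255.255.255.0,10m
```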
- Sample VM YAML to apply:
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-ovs-interface-1
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - bridge: {}
              name: ovs-br03
        resources:
          requests:
            memory: 1024M
      hostname: myhostname1
      networks:
        - name: default
          pod: {}
        - name: ovs-br03
          multus:
            networkName: ovs-dnsmasq-test-woipam
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
```

- Sample StatefulSet YAML to apply:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: test2
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2
  serviceName: headless
  template:
    metadata:
      labels:
        app: test2
      annotations:
        k8s.v1.cni.cncf.io/networks: '[ { "name" : "ovs-dnsmasq-test-woipam"} ]'
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
      containers:
        - name: test2
          image: alpine
          ports:
            - containerPort: 80
              name: web
          command: ["/bin/sh"]
          args:
            - -c
            - |-
              udhcpc -i net1
              tail -f /dev/null
          securityContext:
            runAsUser: 0
            #allowPrivilegeEscalation: false
            capabilities:
              add: ["NET_ADMIN"]
```

- An IPAllocation is created for every lease stored in the server. It is used to restore leases back to the DHCPServer. Leases are only restored to the vlanId mentioned. The lease expires at leaseExpiry. A sample IPAllocation looks like:
```yaml
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1
kind: IPAllocation
metadata:
  creationTimestamp: "2022-11-09T12:18:58Z"
  generation: 1
  name: 192.168.15.90
  namespace: default
  resourceVersion: "189858"
  uid: 70ee31e1-d3b4-47f0-be92-0eb03cd33d57
spec:
  entityRef: test-ovs-interface-1
  leaseExpiry: "1667998138"
  macAddr: 1e:8d:f0:c4:6c:8e
  vlanId: vlan3
```
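The leaseExpiry value in the sample above is a Unix epoch timestamp stored as a string. A quick Python check (illustrative only) converts it to a human-readable UTC time:

```python
from datetime import datetime, timezone

# leaseExpiry from the sample IPAllocation, as stored (a string).
lease_expiry = int("1667998138")

# Convert the epoch timestamp to a UTC datetime.
expires_at = datetime.fromtimestamp(lease_expiry, tz=timezone.utc)
print(expires_at.isoformat())  # 2022-11-09T12:48:58+00:00
```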