Phew, that was a long time ago. Let me see whether I can log in to that machine at all.
$ ssh homer
That worked, but I need to be superuser as well.
[matelakat@homer ~]$ sudo -i
Ok, worked like a charm. Let me see what VMs are running on it.
[matelakat@homer ~]$ virsh list --all
 Id   Name    State
------------------------
 1    home1   running
 -    trial   shut off
I believe I have a tunnel to that. Let's check the firewall:
[matelakat@homer ~]$ sudo firewall-cmd --list-all
public (default, active)
target: DROP
ingress-priority: 0
egress-priority: 0
icmp-block-inversion: no
interfaces: wired0
sources:
services: dhcpv6-client ssh
ports:
protocols:
forward: yes
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" destination address="192.168.0.94" forward-port port="2222" protocol="tcp" to-port="22" to-addr="192.168.114.120"
Oh yeah, so I can log in to that VM directly. I wonder if that is a static IP, or one allocated by DHCP...
In my ssh config, I have:
host home01
HostName homer
Port 2222
So I can log in to that directly. Also because I have a static entry in my hosts file:
[matelakat@mw lakat.eu]$ grep homer /etc/hosts
192.168.0.94 homer
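A cleaner alternative might be a ProxyJump stanza, which would remove the need for both the forwarded port and the hosts-file entry (a sketch with the IPs from above; I have not actually switched to this):

```
Host homer
    HostName 192.168.0.94

Host home01
    HostName 192.168.114.120
    ProxyJump homer
```

With this, ssh home01 tunnels through homer to the VM's regular port-22 sshd, instead of relying on the firewalld forward-port rule.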
What a horrible setup I made! Anyways, I am not here to judge now. But one more thing: is the VM getting its IP from my router's DHCP? Ah, no, it cannot be, as it lives in the 192.168.114.x subnet. But do we have a reservation for the VM in dnsmasq?
First investigate the leases file:
[root@homer ~]# cat /var/lib/misc/dnsmasq.leases
1757876946 52:54:00:c5:b2:f5 192.168.114.120 home01 ff:56:50:4d:98:00:02:00:00:ab:11:bc:8f:1a:08:97:1a:32:3b
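For reference, each line in dnsmasq.leases holds: expiry epoch, MAC, IP, hostname, client-id. A small shell sketch pulling the fields out of the lease above:

```shell
# Lease line copied from the leases file above (client-id truncated in this sketch
# would break field counting, so it is kept whole).
lease='1757876946 52:54:00:c5:b2:f5 192.168.114.120 home01 ff:56:50:4d:98:00:02:00:00:ab:11:bc:8f:1a:08:97:1a:32:3b'

set -- $lease                    # split on whitespace into $1..$5
expiry=$1 mac=$2 ip=$3 host=$4

echo "host=$host ip=$ip mac=$mac"
date -u -d "@$expiry" 2>/dev/null || true   # readable expiry time (GNU date)
```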
Now grep for the IP in the config:
[root@homer ~]# grep 120 /etc/dnsmasq.conf
Ouch, nothing. Let's jump on the VM and understand its configuration.
The hypervisor's IP is indeed coming from the router's DHCP server. That is another issue. Ok, time to jot down what is wrong, for future reference.
Now I need to carry on, and update the VM. So let's log in:
[matelakat@mw ~]$ ssh home01
Last login: Sun Sep 14 11:42:45 2025 from 192.168.114.1
[matelakat@home01 ~]$ sudo -i
[root@home01 ~]#
All looks good. Now my home automation scripts are running in tmux, so let me jot them down here:
On first terminal this was running:
(.venv) [matelakat@home01 AdamLampa]$ python -m lakathome.kidroom adam
On second:
(.venv) [matelakat@home01 AdamLampa]$ python -m lakathome.corridor corridor
On third:
(.venv) [matelakat@home01 AdamLampa]$ python -m lakathome.kidroom hedi
Each of them runs like so:
(.venv) [matelakat@home01 AdamLampa]$ type python
python is hashed (/home/matelakat/.venv/bin/python)
(.venv) [matelakat@home01 AdamLampa]$ pwd
/home/matelakat/AdamLampa
I might want to create systemd units for them.
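A minimal unit for the first one might look like this (a sketch: the unit name is made up, the paths and command are taken from the prompts above):

```
# /etc/systemd/system/lakathome-adam.service  (hypothetical unit name)
[Unit]
Description=lakathome kidroom controller (adam)
After=network-online.target
Wants=network-online.target

[Service]
User=matelakat
WorkingDirectory=/home/matelakat/AdamLampa
ExecStart=/home/matelakat/.venv/bin/python -m lakathome.kidroom adam
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

One such unit per script, then systemctl enable --now each of them, and the tmux sessions would no longer be needed.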
Ok, now that everything is stopped, it's time to update the machine. But before doing so, I want to take a snapshot, just to be on the safe side. For that I am shutting down the VM:
[matelakat@home01 ~]$ sudo halt -p
And now I can create a snapshot.
[matelakat@homer disks]$ sudo -u libvirt-qemu qemu-img info home1.img
image: home1.img
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 3.72 GiB
cluster_size: 65536
backing file: home1-base.img
backing file format: qcow2
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
Child node '/file':
filename: home1.img
protocol type: file
file length: 3.72 GiB (3995009024 bytes)
disk size: 3.72 GiB
So I need to create another snapshot. The image already has a backing file (home1-base.img), so first I will create a new base image, home1-newbase.img, by merging the previous layers:
[matelakat@homer disks]$ sudo qemu-img convert -O qcow2 home1.img home1-newbase.img
[matelakat@homer disks]$ sudo qemu-img info home1-newbase.img
image: home1-newbase.img
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 4.4 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
Child node '/file':
filename: home1-newbase.img
protocol type: file
file length: 4.4 GiB (4719116288 bytes)
disk size: 4.4 GiB
Ok, and now move away the previous node by renaming it, and create a new snapshot:
[matelakat@homer disks]$ sudo mv home1.img home1-backup.img
[matelakat@homer disks]$ sudo qemu-img create -f qcow2 -b "home1-newbase.img" -F qcow2 "home1.img"
[matelakat@homer disks]$ sudo chown libvirt-qemu:libvirt-qemu ./*
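For future reference, the flatten-and-resnapshot dance can be captured in a small dry-run script (a sketch: run() only echoes the commands; drop the echo to actually execute, image names as above):

```shell
# Dry-run of the qcow2 flatten-and-resnapshot steps above.
run() { echo "+ $*"; }   # swap 'echo' for real execution

run qemu-img convert -O qcow2 home1.img home1-newbase.img            # flatten chain into a new base
run mv home1.img home1-backup.img                                    # keep the old top layer around
run qemu-img create -f qcow2 -b home1-newbase.img -F qcow2 home1.img # fresh empty overlay
run chown libvirt-qemu:libvirt-qemu home1.img home1-newbase.img      # libvirt must own the files
```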
Now start the VM back up to see if it comes up. But before that, I'm going to wipe the leases file and add a static entry:
[matelakat@homer disks]$ grep home01 /etc/dnsmasq.conf
dhcp-host=52:54:00:c5:b2:f5,home01,192.168.114.120
[matelakat@homer disks]$ sudo rm /var/lib/misc/dnsmasq.leases
[matelakat@homer disks]$ sudo systemctl restart dnsmasq
[matelakat@homer disks]$ virsh start --domain home1
It started successfully, so I shut it down again and went ahead updating the hypervisor.
Then I rebooted it, but the VM was not up and running, so I needed to fix that:
[matelakat@homer ~]$ virsh autostart --domain home1
I then restarted, and everything came up fine. Time to update the VM.
I got this warning:
warning: /etc/zigbee2mqtt/configuration.yaml installed as /etc/zigbee2mqtt/configuration.yaml.pacnew
Doing some journalctl maintenance:
[matelakat@home01 ~]$ sudo journalctl --vacuum-time=2d
Looks like zigbee2mqtt now needs the adapter type, so I fixed that one:
serial:
port: tcp://192.168.0.220:6638
adapter: zstack
I then moved over, installed uv, and am now using it in the home automation project. I also added a README file so that I won't forget all this next time.