As my Home Assistant setup has become increasingly complex, I’ve started to see the limitations of the Raspberry Pi platform. Graphs are slow, and the history and logbook are basically unusable to me. Everything works, but I’d like to be able to use these things with a lot more data and still have everything be snappy in the frontend. My other home server needs have grown as well, so I decided to get an Intel NUC and migrate all of my current servers to either Docker containers or virtual machines with Proxmox.
I’ve been wanting to learn more about virtualization, so this seemed like a good way to do that. I acquired an Intel NUC 5i5MYHE with these specs:
- i5-5300U 2.3GHz processor (Passmark: 3784)
- 16GB RAM (upgradeable to 32GB)
- 512GB SSD
- Gigabit ethernet / USB 3.0 & 2.0
Very overkill for Home Assistant, but I also plan to run Plex Media Server (currently on an Odroid XU4), so the CPU was important. I want to be able to run other VMs for things like development and testing, and this was one of the few models confirmed to be upgradeable to 32GB of RAM. All of my file serving is handled by a Synology NAS, so storage wasn’t really a concern here. This model supports both an M.2 SSD and a 2.5″ hard disk. Eventually I’d like to run all the VMs off the SSD and store persistent Docker data on the spinning disk.
One of the reasons I chose the NUC is that it is relatively low power. By running Proxmox as the host OS, I could eventually cluster multiple NUCs and expand my existing resources easily without having jet-engine servers in my apartment.
So the plan is to install Proxmox and have a virtual machine that runs nothing but Docker. Then I will install Hass.IO and its related services inside Docker containers.
INSTALL PROXMOX ON THE NUC
First things first, update the BIOS to get that out of the way. Here are the instructions (PDF) from Intel. I am going to use the “F7 Method” since I don’t have a Windows PC to run their utility.
Next I made a bootable Debian USB installer: download the ISO and write it to the USB stick (double-check the device name, /dev/disk4 here, before running dd):
sudo dd if=debian-9.4.0-amd64-netinst.iso of=/dev/disk4
I did a standard, headless install of Debian. After a quick install and a reboot, I was at a Linux terminal.
Following the Proxmox-over-Debian instructions from their wiki, I made sure I had a static IP on my network that resolved to my hostname. Then I added the Proxmox repositories and installed Proxmox:
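As a sketch of what that hostname check looks like (assuming the hostname nuc and the 192.168.0.2 address used below), /etc/hosts should map the hostname to the LAN IP rather than the Debian default of 127.0.1.1:

```
127.0.0.1       localhost.localdomain localhost
192.168.0.2     nuc.localdomain nuc
```

A quick sanity check is that `hostname --ip-address` returns 192.168.0.2 and not a loopback address.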
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt dist-upgrade
apt install proxmox-ve postfix open-iscsi
This replaces the Debian kernel with the Proxmox virtualization-capable one and installs everything needed to make it work. After rebooting, the login prompt now says “Welcome to the Proxmox Virtual Environment”. Now I can log in via the web interface at https://IP:8006/
CREATING THE VIRTUAL MACHINE
In order for the virtual machine to have access to the network, I need to set up a bridge. I did this manually by editing the /etc/network/interfaces file:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

allow-hotplug enp0s25
iface enp0s25 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports enp0s25
        bridge_stp off
        bridge_fd 0
This gives a static IP of 192.168.0.2 to the host NUC on my LAN and bridges the VMs to the network.
INSTALLING THE VM
To install new VMs from an ISO, copy the ISO files to /var/lib/vz/template/iso on the host NUC (I used SFTP). The images will then appear in the “Create VM” dialog:
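For example, with scp from another machine on the LAN (assuming the host address of 192.168.0.2 configured in the bridge setup above):

```shell
# Copy the installer ISO to Proxmox's ISO template directory on the host
scp debian-9.4.0-amd64-netinst.iso root@192.168.0.2:/var/lib/vz/template/iso/
```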
I selected my Debian ISO, gave it 128GB of hard disk space, and set the CPU to share all 4 cores. For RAM, I set it to use a range of 4-10GB.
Hit start on the VM and open the console – now I am installing Debian for the second time today.
INSTALLING DOCKER AND HASS.IO
Now I need to get Docker and the HA dependencies installed. The Home Assistant info for this procedure is here. Install what it needs:
apt-get install jq curl dbus socat bash avahi-daemon
apt-get install -y apt-transport-https ca-certificates wget software-properties-common
wget https://download.docker.com/linux/debian/gpg
sudo apt-key add gpg
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee -a /etc/apt/sources.list.d/docker.list
apt-get update
apt-get -y install docker-ce
Now, install Hass.IO in Docker with the intel-nuc option selected.
curl -sL https://raw.githubusercontent.com/home-assistant/hassio-build/master/install/hassio_install | bash -s -- -m intel-nuc
And it’s up:
Z-WAVE USB PASS THROUGH
I have a HUZB-1 Z-Wave/Zigbee stick. In order to pass it through to the VM, I connect to the NUC and:
brad@nuc:~$ lsusb
Bus 001 Device 002: ID 8087:8001 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 003: ID 10c4:8a2a Cygnal Integrated Products, Inc.
Bus 002 Device 002: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
I’m interested in the Cygnal device, which has an ID of 10c4:8a2a.
brad@nuc:~$ nano /etc/pve/qemu-server/100.conf
100 is the ID of my VM. Add this line at the bottom:
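The line itself appears to have been lost in formatting; based on Proxmox’s USB passthrough syntax and the device ID found above, it would be something like this (the usb0 index is an assumption — use the next free usbN slot if you already have USB devices assigned):

```
usb0: host=10c4:8a2a
```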
Reboot the VM and it will now have access to my USB stick as if it were plugged in locally.
So now I’m almost done: I have a host server running Proxmox and a virtual machine dedicated to running Docker. Home Assistant (Hass.IO) runs in Docker inside that VM. For easy management of the containers, I’m going to install Portainer as a web GUI.
docker volume create portainer_data
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
With Hass.IO in Docker, I have two containers: homeassistant itself and the Hass.IO supervisor. If I install any addons within Hass.IO, they run the same as any other Docker containers and can be managed here as well. You can see I’ve enabled Samba, MQTT, and Node-RED as addons from within Hass.IO. There is no real difference between running an addon and a standard Docker container; addons are just Docker containers tailored to Hass.IO.
Now I just need to migrate my HA installation over. Unfortunately, restoring a full Hass.IO backup did not work (different architectures?). So I simply copied my HA and Node-RED config folders over to the new Hass.IO via Samba, which got me 90% of the way there. A little tweaking of my databases and getting my other network servers up, and I will be fully migrated over.
AND THE PERFORMANCE?
I am seeing a very dramatic increase in responsiveness. Not just in the web UI (I can now use the logbook and history!) but in the responsiveness of all of my devices. Everything from MQTT triggers to scenes to Z-Wave devices is noticeably much, much faster. And here’s the load on my NUC, with 3 VMs going and all of the above Docker containers running:
Guess I need to find something for all those extra CPU cycles to do!
I’m about to tackle this exact task: moving from Hass.IO on an RPi3 to a small Skylake Core i3 box I’m building on my own. I’d also like to run pfSense on it in another VM to act as my router. I see from one of your screenshots that you downloaded that ISO. Will a future article address that?
Yeah, I will write that up eventually; I have just been tinkering with pfSense and haven’t fully switched my network over to it yet. For the NUC I am using a dual gigabit USB3 adapter so that pfSense has its own WAN/LAN connection independent of my virtualization host.
Cool! How are you giving the pfSense VM access to the adapter? VirtIO or passthrough?
For my adapter, it has to be USB passthrough.
Awesome write-up!! pfSense is amazing; I use it at home and at work. One question: Docker is new to me. I was able to get Portainer up and running, but after a reboot I can no longer access it. When trying to re-run the command with --restart=always, I get an error that the container name is already in use by another container. Kind of stuck here. I don’t really need the GUI as everything is up and running, but it would be nice to be able to see it. docker container ls shows the containers up and running. I’m sure I’m missing something small, but again, Docker is new to me 🙁
Hey Brandon, Docker containers must have unique names, and changing their basic settings requires them to be recreated. So you need to remove the container you already made, then re-run the run command. This shouldn’t lose any settings, as Portainer’s persistent data is either stored on disk (if you mapped a local volume) or within a Docker volume (the default behavior for Portainer, I believe), and neither is deleted when the container is removed.
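A minimal sketch of that, assuming the container was created with --name portainer (adjust to whatever name docker container ls shows):

```shell
# Stop and remove the existing container; the portainer_data named
# volume holding Portainer's settings is NOT deleted by this.
docker stop portainer
docker rm portainer

# Re-create it with the restart policy so it comes back after reboots.
docker run -d -p 9000:9000 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data portainer/portainer
```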
When you have Portainer up and running, you’ll see that editing a container and hitting “Recreate” does this for you (removes existing container, re-launches it with new command).
B-RAD!! you da man!!
Keep up the great work and awesome videos. Between you and DrZzs, you keep giving great ideas and awesome help with DIY home automation. Next up: mastering Node-RED a little more.
After installing hass.io, where am I supposed to see the “Preparing Hass.io” screen? In the console, all I see is “[Info] Run Hass.io”
At the web UI – http://whateverip:8123
I was good with the instructions up until “Now, install Hass.IO in Docker with the intel-nuc option selected.” I’m thinking the Hass.IO install goes into the Proxmox VM?
In the VM you wish to install Hassio in, not on the host.
Thanks for this guide. I have a Lenovo 93p (Tiny computer) with 8GB RAM. I want to run Hass.io (currently on a Pi 3+, which is slowing down) and Blue Iris software with 10 cameras (currently on a Surface Pro Gen 1 with 4GB RAM and a Gen 3 Core i5; CPU load is only 20-25%). The Lenovo 93p has 8GB RAM and a Gen 4 Core i5. I am not going to run more VMs on this unit on a permanent basis (maybe a test bench only, turned off after each test).
May I ask your opinion: should I run Proxmox VE in my case (I need Windows 7 or 10 to run Blue Iris and one Linux install for HA), or would I be better off just installing Ubuntu Server, adding Docker to run Hass.io, and installing VirtualBox or VMware Player to run Windows for Blue Iris? I am asking because my hardware is not as good as yours.
My second set of questions relates to your method, as I would like to use your guide to do it and I am not a very Linux-savvy person:
a) I have seen a few videos where people install Proxmox VE from a bootable USB image, which would be much easier for me. Would the rest of the steps work if I do that?
b) To install Hass.IO in Docker with the intel-nuc option selected, you used the following command. How do I need to modify it for my machine, since I am not running a NUC?
curl -sL https://raw.githubusercontent.com/home-assistant/hassio-build/master/install/hassio_install | bash -s -- -m intel-nuc
c) You wrote: “In order for the virtual machine to have access to the network, I need to set up a bridge. I did this manually by editing the /etc/network/interfaces file.” Can’t I just log in to Proxmox VE via a browser and do it in the GUI? I have seen this being done in the GUI; is there anything your method achieves that the GUI can’t?
Thanks for the great write-up. I have an Intel NUC and I installed Proxmox. After installation, I realized the installer assumes you would be using a wired (Ethernet) NIC. Unfortunately, I’m trying to use my wireless card as the primary NIC to manage the Proxmox host, and also to use it as a bridge interface for my guests. I understand this requires a routed configuration, but I’m completely lost on how to do that, and I don’t see anything about it in the guide either. Any guidance on this would be much appreciated. Thanks!