
Server with rails in a 2-post rack

A long time ago I bought a house, and almost immediately I put a simple 2-post rack in it to host some basic networking and home audio equipment. Then some stuff ensued, and I decided I wanted a server for my home. Well, servers are big and heavy, come with rails, and need a 4-post rack.

hmmm….

My lucky ass just happened to have positioned my 2-post rack about 34 inches away from an open framed wall. Given that standard rack depths are 31.5 inches, I had something to work with.

First I had to modify my 2-post rack to actually accept the frame rails. For strength, my rack posts are U-shaped, not the standard 90-degree corners you see on a 4-post rack. So I had to cut away a portion of metal for the rails to fit into.

looking from the back at the cutout

Once the posts were ready, I put the rails in, got a level and square out to make sure they were set to the right spot, and screwed them into strategically placed pieces of 2×4 wood on my open framed wall.

2 post is now 4

Finally, I put the server in and cracked open a beer.

racked up and ready to go

The server slides in/out of the rack with ease, even a few months later. So I guess you can call me lucky for putting the 2-post rack where I did.

Conquering DNS in a Kubernetes homelab

When it doesn’t work, we blame DNS. DNS is hard. Add Kubernetes to the mix and your eyes get watery, your heart rate increases, your palms get sweaty… and you just walk away before you risk losing any more time in your life.

Then you came across this blog post and had some hope. Hope that someone, somewhere, had solved this problem.

Well, I hope this is that blog post. The problem I solved: having DNS entries for load balanced services and ingress resources automatically made available to my homelab network, outside of Kubernetes.

I already solved the standard homelab DNS problem: any VM or new device that connects to my home network and advertises a hostname will be immediately discoverable by name. Now I wanted to take it a step further. I like Kubernetes, and it’s easier for me to push new things to my homelab K8s cluster than to create a new VM. So naturally I wanted to solve this problem so I can put things in K8s and hit them by name without having to do anything else on my network.

In comes external-dns for Kubernetes… a well-known optional component. Its role is to update DNS records in an external provider based on the available ingress and load balanced service resources. Every minute it scans your cluster for new or updated resources, checks them against an internal cache, and updates your DNS provider.

Perfect, now I just need a DNS provider supported by external-dns. I decided to use PowerDNS for this. This DNS service uses MySQL for zone setup, has an API, and has community-supported GUIs for that API. Nothing fancy, and it can all be controlled without too much pain… as Kubernetes resources. Yes, that’s right: I’m going to run my DNS provider for external-dns inside of K8s itself… because why not?

The first part was to cobble together all the Kubernetes manifests required to install PowerDNS. Helm to the rescue. I found a community-created chart. It came close to everything I needed but still missed a few things, so I cloned it and made my own Helm chart. The chart installs PowerDNS with a single-pod MariaDB and persistent storage disabled. The idea being: if my PowerDNS deployment falls apart, external-dns will re-sync it within 60s of a restart, so we don’t need any form of real persistence.

To make this all work, PowerDNS needs to know about the domain(s) it’s going to manage entries for. This domain should end up being a sub-domain of your main network’s domain. For this post, I’m going to assume your main network router is configured for a domain like mydomain.house and all your home devices use your router for DNS resolution. Knowing that, we are going to set up PowerDNS to manage a domain called k8s.mydomain.house.

The Helm chart I created takes a list of domains you want PowerDNS to manage and configures them on startup. Since there are some API keys and passwords involved in all this, you also need to set a few more details, and in the end you have a values file for the Helm chart that looks like this:

powerdns:
  api:
    key: SOMETHING_ANYTHING
  initDomains: 
    - k8s.mydomain.house 

service:
  annotations:
    metallb.universe.tf/allow-shared-ip: powerdns  
  type: LoadBalancer
  ip: 192.168.1.200

mariadb:
  rootUser:
    password: A_PASSWORD

So a couple of things to note in that block, particularly the service section. I have an annotation specified. This is for MetalLB (which I use as a LoadBalancer) to allow multiple service resources to share the same IP on the same port, which is needed to serve DNS over both TCP and UDP.
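Under the hood, the chart ends up rendering something like the following two Services pinned to the same address, one for UDP and one for TCP. This is a minimal sketch; the real names and labels come from the chart’s templates:

apiVersion: v1
kind: Service
metadata:
  name: powerdns-dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: powerdns
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.200
  selector:
    app: powerdns
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: powerdns-dns-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: powerdns
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.200
  selector:
    app: powerdns
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP

MetalLB will only co-locate Services on one IP when they carry the same allow-shared-ip value and don’t collide on port/protocol, which is exactly the case here.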

With that YAML saved in a file, you can install it all like this:

helm repo add puckpuck https://puckpuck.github.io/helm-charts
helm install powerdns puckpuck/powerdns --values my-values.yaml
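Once the pods are up, a quick sanity check from any machine on your network should show PowerDNS answering on the shared IP (the address comes from the values file above):

dig @192.168.1.200 k8s.mydomain.house soa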

PowerDNS doesn’t have a UI. I have an HTML file that sits on my hard drive; when I open it, I type in the URL for PowerDNS and my API key, and that’s my UI. I got the file from here: https://github.com/james-stevens/powerdns-webui (the actual HTML file is in the htdocs folder of that repo). One day I might actually add an NGiNX pod with this file as part of my PowerDNS Helm chart, but alas, here I am writing a blog about it instead.

Now that we have PowerDNS set up, next up is external-dns. Luckily, the fine folks at Bitnami have created such a Helm chart, and it does exactly what we need. Here’s what I used as my values yaml file.

provider: pdns
domainFilters:
  - k8s.mydomain.house
txtOwnerId: k8s

pdns:
  apiUrl: http://powerdns-api
  apiPort: 8081
  apiKey: SOMETHING_ANYTHING

Then to install the chart I ran this:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install external-dns bitnami/external-dns --values external-dns-values.yaml

Now we have external-dns going out on a periodic basis (every minute) finding all your services and ingress resources, checking to see if they have the external-dns annotation, then syncing that list with PowerDNS.
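If you want to see what it actually synced, the PowerDNS API can list the zone contents. From a pod inside the cluster (the service name and API key come from the values files above), something along these lines works:

curl -s -H "X-API-Key: SOMETHING_ANYTHING" http://powerdns-api:8081/api/v1/servers/localhost/zones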

Almost done.

Now we need to configure your primary DNS to send any request for your new subdomain off to PowerDNS for resolution. If you read this far and don’t have a Ubiquiti router… I’m sorry. If you have a Ubiquiti EdgeRouter you’re in luck, because you only need one more setting 🙂

Inside the EdgeMax UI, go to the Config Tree tab, then expand service -> dns -> forwarding. From here you will click the Add button for options and set the new option to the following. Note the IP address here should match what you setup when you configured PowerDNS.

server=/k8s.mydomain.house/192.168.1.200

Click Preview on the bottom of the screen then Apply for the changes to take effect.
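If you prefer the EdgeOS CLI over the Config Tree, the same option should boil down to this (same caveat: the IP must match your PowerDNS service):

configure
set service dns forwarding options server=/k8s.mydomain.house/192.168.1.200
commit
save
exit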

If you followed my instructions on how to properly set up home DNS, then all devices should get DNS resolution configured via DHCP, meaning your router is the only DNS server your home devices talk to. With that being the case, anything you try to resolve under the k8s.mydomain.house domain (or whatever you configured) will be sent to PowerDNS for resolution.

With all this in place you should now be able to network reach any Kubernetes ingress or service configured with the external-dns annotation, from anywhere in your network.

To test this out you can set the hostname property on an Ingress rule, or add this annotation to any load balanced service: external-dns.alpha.kubernetes.io/hostname

The hostname specified for the Ingress rule or the Service annotation should fall within your configured sub-domain. For example: foo.k8s.mydomain.house
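As a sketch, a minimal load balanced Service using that annotation would look something like this (the name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: foo.k8s.mydomain.house
spec:
  type: LoadBalancer
  selector:
    app: foo
  ports:
    - port: 80

Within a minute of applying it, external-dns should push the record into PowerDNS, and foo.k8s.mydomain.house will resolve from anywhere on your network.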

Making backups less boring

Backing up data in your home is important. What if your computer crashes, what happens to all those photos on it? Here’s a harder one: what if your home catches fire? What happens to the photos now? Mix in a homelab and this is a big ball of nasty.

If you pay for home fire insurance, you should also pay for a backup solution. If you have a homelab, you need a storage and backup solution because, well, shit will hit the fan with a homelab. Backups don’t need to be boring or annoying.

Doing a home-based backup makes a lot of sense: you get speed and reliability with frequent updates at a low operational cost. Doing a cloud-based backup makes a lot of sense in addition to this, so you have security if things really go bad. Doing both is the best of all worlds.

For home-based backup, I run a Synology setup. For the features and functions you get with Synology’s DiskStation Manager UI and plugins, nothing else beats the platform.

I have 2 volumes set up. One large volume formatted with btrfs. This format gives you lots of great reliability and backup features, including file snapshots. You can also create shared folders on btrfs with size quotas, which macOS will properly detect with Time Machine. So we have multiple systems keeping their Time Machine backups on the same volume.

The other, smaller volume is formatted with ext4. This has slightly better write throughput, which I use for persistent storage in my homelab. On this volume I have multiple Kubernetes persistent volumes using iSCSI, and a share mounted to my vSphere homelab using NFS.

All systems in the home back up regularly to the NAS. As mentioned earlier, when you use btrfs on Synology you can create file shares with size quotas, meaning we can use a single volume with multiple shares, one per Time Machine. Set up the shares on the “smallish” side at first, and expand each independently as needed. Before, this would have required a dedicated volume, which doesn’t offer the flexibility to grow/shrink on demand.

My homelab setup also gets backed up nightly to the main volume using a Synology plugin called Active Backup. This is the only not-so-great moment in my world with Synology. To do this, I had to create a connection to my server (easy), but then I had to create a separate backup task for each VM. Ideally this would just back up everything on the server, and/or query my server for available VMs and prompt me for which ones to back up in one quick operation. Active Backup runs nightly at 2am on my NAS.

So all your data is on your NAS. You have a backup of it, so if anything crashes you can recover, and do so quickly. That’s important. We are all human, and the majority of restore requests are not because something failed, but because you as a human fucked up. That’s fine, the backed-up copy is right there. And if you are using advanced backup features (ie: Time Machine) you can even go back to a specific point in time.

Then a freak accident happened, and the whole fucking thing burned to the ground.

This is why we do cloud-based backups as well. The entire contents of your local NAS should get backed up offsite on a regular (daily) basis. Ideally, and because it exists, the offsite backup is the cloud. Now I’m not talking about the consumer cloud backup products; you got this far into a blog like this, you don’t need those. I mean using an enterprise-class cloud storage service like Google Cloud Storage or Amazon Web Services S3. Going this route will save you plenty of money and might even be easier to set up.

I’m going to make it easy for you: use Google Cloud Storage. I used to use AWS S3 Glacier. It worked. My cost was about $12/month. But if I needed a file… fuck it, and wait. Want to clean something up, because you know, homelabs and humans and shit? Yeah right. Cleanup is running some script to get your “archive list”, then another arcane script to delete each entry in the list… one at a time. Fuck that noise. Use Google Cloud Storage and be happy.

Today my bill is about $6/month for GCS, and that includes storage and nightly backups from my NAS. What’s great about GCS is you can also move between storage classes at will. I use the coldline storage class in GCS for all the backups. Because backups aren’t strictly additive (they also delete/update), this had the best cost ratio of the storage classes.
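If you create the bucket yourself, you can make coldline the default storage class up front. A sketch with gsutil (the bucket name and region are placeholders):

gsutil mb -c coldline -l us-east1 gs://my-nas-backups/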

Backing up to GCS on Synology is done with the Cloud Sync plugin. Nothing fancy here, just set up a task for each shared folder you want to back up. However, I will add one caveat: because Time Machine does its backups once per hour, you are better served scheduling a narrow time slot in which the Cloud Sync plugin operates. I do a 4am-7am window. Typically the sync is done within 30 minutes, but just in case I moved lots of large files onto my NAS, the extra time comes in handy.

I also configured Cloud Sync to sync up to 10 files at a time. This helps a lot with bandwidth, especially if you have lots of small files. On the initial sync I configured 20 files at a time, and it would saturate my entire 500 Mbps fiber connection when moving larger files.

All in all, I continue to recommend the Synology platform for a NAS. It may cost a little more than others, but the time saved setting up the NAS and adding additional features is well worth it, and has paid dividends many times over.

Making home(lab) storage easy

Building a homelab, you need to store data. Own a home with kids, you need to store data.

I have a homelab, so my storage needs are a little bigger, but no matter what, if you need to store data, you need a solution. You have 3 choices: build your own, buy a QNAP, or buy a Synology.

  1. Build your own is cheap, can be done with FreeNAS, and will likely be a pain in the ass in some way. I don’t recommend this for anyone unless you know what the fuck you are doing.
  2. Buy a QNAP. This is a “good” solution. It will work. You can do whatever the fuck you want with it, but you need to click a bunch of buttons, install some shit, and configure the thing. If your phone is an Android, this is the NAS you want.
  3. Buy a Synology. This will just work for 95% of the things you need with 2-3 mouse clicks. If your phone is an iPhone, this is the NAS you want.

So knowing the above, I went Synology. Did I pay a Synology tax? Yes. But time is also money, and Synology made my family and homelab needs easy.

Picked a platform; now to figure out the size and growth. This is where I failed. I bought a DS218+ instead of a DS718+. Maybe at first I didn’t understand or realize why, but most certainly I should have plunked down the few extra bucks and gotten the 718. I say this because I will be upgrading soon.

Silver lining: my homelab aspirations have grown since I bought the NAS, and I would much prefer an RS819 now… something I wouldn’t have considered 6 months ago. So all is not a wash, and soon I will move up to a rack-mount NAS chassis.

So why don’t I care about the upgrade? More of the Synology magic. Some people call this the Synology tax, but it’s a reason why you drop a few extra dollars on your setup. In my case, my setup uses Synology SHR across the drives. This allows me to add additional capacity without needing to match drive sizes. I can grow my RAID array without having to keep my drives the same, as my budget allows. With this, I can slowly eliminate smaller drives while growing the entire footprint, without taking a shock to my budget to do a new RAID setup.

With Synology, I get a bunch of easy-to-use out-of-the-box features, but more importantly, I can go from small to whatever the fuck size I want, without needing to redo my setup. Just keep adding drives, of any size I want, and like the rest of the Synology experience… it will just work.

Easy Home DNS

At home, you add a new system, maybe a Raspberry Pi, or a new VM, or a new home PC. You give it a name. Now you want to network reach that new thing by name. Not IP, by name. So you edit /etc/hosts, add an entry, and off to the races you go. Oops, wasn’t in sudo mode, let me try that again. There, now it works.

But why does this have to be? Why can’t home networking just fucking work with hostnames?

That’s because we are doing it wrong. First you need to get yourself a good router. I tried a few in the past, and failed each time. Then I got a Ubiquiti EdgeRouter 4, and my dreams were answered. You need to get a few settings inside your router set; then, each time a new system comes online and gets an IP from the router, you will be able to hit it by hostname, without needing to configure anything special beyond standard DHCP on each host/VM/thing in your network.

Main Rule: Stop putting 1.1.1.1 or 8.8.8.8 or whatever your favorite public DNS is on every single client, and in every single alternate DNS configuration option in your network. I’ve seen a vSphere VM customization policy wreak havoc on this because it specified 1.1.1.1 as a DNS server, and that VM couldn’t ping by name internally. You might think you are helping, but all you are doing is masking something broken upstream. So fucking stop it!

Properly working DNS should delegate and forward your requests upstream where it makes sense. If you request something.com and your local router doesn’t have something.com as its domain, it should forward the request upstream. So the only spot you should configure the 1.1.1.1 entry is inside your local DNS server… also known as your router.

Finally, your router will need to know which domain it owns, so it can find entries. That domain should also match the default search domain handed out by DHCP. Once all that is set up, DNS will magically work for you, both internal and external.

So back to making this work on a Ubiquiti EdgeRouter. Do these steps (and nothing more) and things will work. Doing more may not break things now, but in the future you may want to do some more DNS magic and get tripped up.

Set your system domain

In the EdgeOS UI, click on the System tab at the bottom of the screen, then set your System domain-name. This can be anything you really want it to be, though I recommend you spend the $15 and actually buy the name too. I set mine to a funky .house top-level domain. Then I went out and bought it.

Set up the DHCP domain name

To make sure everything lines up, and that your router will actually service your requests when you try to ping by simple hostname, we need DHCP to communicate the domain name properly. To do this, go to the top-level Services tab, then the DHCP Server sub-tab. From here you should see all the DHCP servers you have configured (1 per interface). On the right side of the screen click Actions, then View Details. Set the domain name here to the same one you specified above. Repeat this for each DHCP server.

Set your DNS forwarding servers

You likely configured these settings already when you set up your router initially, so we are just going to confirm a few things this time around. On the bottom of the screen, click the System tab. On the right side of this screen you will see a Name Server configuration option. This should have only a single entry, which is your router’s IP. That’s it. Nothing fucking else!

Next we are going to expose one of the small issues with EdgeOS: there is no graphical way to configure DNS forwarding outside of the tree editor. However, before we get to the tree editor, we need to make sure all our interfaces have forwarding enabled. So go to the top-level Services tab, then select the DNS sub-tab. From here you should see all your connected interfaces listed. If they are not, add them now. I have 3 interfaces hooked up in my world, so it looks like this for me.

Now that we have the interfaces set up for DNS forwarding, we need to tell EdgeOS where to forward the requests. Like I mentioned earlier, you may have already done this when you set up your router, but let’s double check. Go to the Config Tree tab, then expand service -> dns -> forwarding. Here you will see the public DNS servers configured. If not, add them as name-servers. You can add more than 1 name-server. This, right here, is the only place you configure public DNS. Don’t do it in your VMs, don’t do it in your vSphere networking policies, don’t do it on your local systems, don’t do it on any other fucking device you have that connects to the internet via this router. Nowhere else!

Now you are set up. You will be able to ping everything by simple hostname, or with the domain name suffix you specified. Any new system or VM that comes online using DHCP will get proper DNS rules and just work. If you need to configure something with a static IP on the client (you should not do this), then make sure the only DNS entry is the router/gateway itself.
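For reference, the whole setup above should boil down to something like this in the EdgeOS CLI (the domain, subnet, shared-network name, and interface are examples; swap in your own):

configure
set system domain-name mydomain.house
set system name-server 192.168.1.1
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 domain-name mydomain.house
set service dns forwarding listen-on eth1
set service dns forwarding name-server 1.1.1.1
commit
save
exit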

Happy home networking by name.

Auto add IDs and links to headings in WordPress

Seems simple, but I just want to add a small working link icon next to each heading in my blog. I thought it was going to be a super easy thing, like install a plugin and be done. I thought wrong.

So this is multiple things in one. I want all headings to automatically get an HTML anchor, and I want each one to get a link icon that only shows up on hover to link to that HTML anchor.

Luckily, Jeroen Sormani wrote a blog post that got me going in the right direction. But his post fell short in a few places. It used older Bootstrap icons, which no longer ship with WordPress. It also only worked with headings that didn’t already have an id specified; existing headings with an id wouldn’t get the link. Finally, and this is more just a me thing, the hover icon showed up to the left of the heading instead of to the right.

But all these things are easy fixes.

Get Font Awesome

First, get yourself the Font Awesome WordPress plugin so you can get the link icon (and a bunch of other handy ones). It’s free, and only adds a few static resources to your pages from the official Font Awesome CDN, so your users likely already have them cached.

Add auto-id function

You will need to add a PHP function to your site theme to find all headings and modify their output to include the HTML anchor as well as a clickable link icon. To do this in wp-admin, go to Appearance, then Theme Editor. Next, select Theme Functions and add this to the PHP file.

/**
 * Automatically add IDs to headings such as <h2></h2>
 */
function auto_id_headings( $content ) {

    $content = preg_replace_callback( '/(\<h[1-6](.*?))\>(.*)(<\/h[1-6]>)/i', function( $matches ) {
        // $matches[1] = opening tag up to '>', $matches[2] = attributes only (nested inside [1]),
        // $matches[3] = heading text, $matches[4] = closing tag
        if ( false === stripos( $matches[0], 'id=' ) ) {
            // No existing id: generate one from the heading text
            $title        = sanitize_title( $matches[3] );
            $heading_link = '<a href="#' . $title . '" class="heading-link"><i class="fas fa-link"></i></a>';
            $matches[0]   = $matches[1] . ' id="' . $title . '">' . $matches[3] . $heading_link . $matches[4];
        } else {
            // Existing id: pull it out of the attributes and reuse it for the anchor link
            $startpos = stripos( $matches[2], 'id="' ) + 4;
            $endpos   = stripos( $matches[2], '"', $startpos );
            $title    = substr( $matches[2], $startpos, $endpos - $startpos );

            $heading_link = '<a href="#' . $title . '" class="heading-link"><i class="fas fa-link"></i></a>';
            $matches[0]   = $matches[1] . '>' . $matches[3] . $heading_link . $matches[4];
        }

        return $matches[0];

    }, $content );

    return $content;

}
add_filter( 'the_content', 'auto_id_headings' );

The function above builds on the original function by Jeroen. This one also adds the icon and link to headings that already have an ID.

Add CSS Styling

Finally, we need to add some CSS styling to all this to make it work and look proper. You may want to play with the margin settings here for your specific requirements. To add this, inside of wp-admin, go to Themes, then Edit CSS, and add the following to your CSS styles.

/* hover links on headings */
h1 a.heading-link,h2 a.heading-link,h3 a.heading-link,h4 a.heading-link,h5 a.heading-link,h6 a.heading-link {
	opacity: 0;
	position: absolute;
	margin-left: 0.25rem;
}

h1:hover a.heading-link,h2:hover a.heading-link,h3:hover a.heading-link,h4:hover a.heading-link,h5:hover a.heading-link,h6:hover a.heading-link {
	opacity: 1
}

That’s it. Now all your existing and new headings will get auto-ids with little linkable icons next to them.

Replace Bell HomeHub with Ubiquiti EdgeRouter

I have to say the Bell HomeHub 3000 is a fucking piece of shit, and I pity anyone that is forced to use it. I replaced mine, and it was easier than I thought, and I got much better speeds out of both my internal network and my internet connection.

Doing it was quite easy, and because I struggled to find the exact steps online, I decided to write them up here. Warning: I only cover Fibe Internet. I know you can also do Fibe TV this way, but I’m not clear on the exact steps to enable Fibe TV as well. If you also have phone service, unplug it and optionally get a VoIP service.

Physically, to connect the fiber line, I used the SFP module provided by Bell inside the HomeHub 3000. I took this and plugged it into the SFP port on my EdgeRouter. I did not have any compatibility issues with this.

Step 1

You need to find your Bell Internet credentials. This is an account name that starts with a “b” and a password. You may have written them down when you set up your Bell Fibe service. You can also get them by logging into your Bell account and looking at your Internet service. There you will find your account name, and you’ll get an option to change your password.

Note the account name and password; we will need them in a later step.

Step 2

Assuming your router is configured and working on your network, at least locally, get into the EdgeRouter’s main Dashboard. Here you get a list of all your ethernet devices and how they are connected. On an EdgeRouter 4 (my model) the eth3 port is the SFP port; your model may be slightly different here, but the steps should all be the same.

I already configured mine with names, and you can see I also use my other ports for reasons.

Step 3

To make internet work with Bell, we need to connect on VLAN 35 using PPPoE. So first we add a VLAN interface. Click on the Add Interface button, then Add VLAN. From here, set up your VLAN with an ID of 35 on your SFP port (eth3). I gave mine a name of Internet (PPPoE) because that’s what this is going to be used for. Leave the MTU at 1500 and No Address.

Step 4

Next we need to add the PPPoE interface. Click on the Add Interface button, then Add PPPoE. Set the PPPoE ID to 0. The interface is the VLAN interface we created in Step 3 (eth3.35). Fill in your Bell Internet credentials (starting with b) that you found in Step 1. Finally, set the MTU to 1492. This is required for large file transfers with Bell.
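For the CLI-inclined, Steps 3 and 4 should reduce to something like this (the user-id and password are placeholders for your own Bell credentials):

configure
set interfaces ethernet eth3 vif 35 description "Internet (PPPoE)"
set interfaces ethernet eth3 vif 35 pppoe 0 user-id b1xxxxxx
set interfaces ethernet eth3 vif 35 pppoe 0 password A_PASSWORD
set interfaces ethernet eth3 vif 35 pppoe 0 mtu 1492
commit
save
exit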

Step 5

Verify it all works. Your setup should look something like this, with Connected shown on the PPPoE interface.

Once this is all set up and done, you should have full speed at your router and anything connected to it. Take that HomeHub and make it a paperweight. I recommend against throwing it out; I’m sure Bell would want it back.

I have been running this setup for several months now without any issues. I have gone through multiple power outages and device restarts, as well as prolonged periods without any power interruption. We never had issues with internet connectivity.

Ubiquiti UniFi controller across subnets

If you are like me, you have some UniFi devices on a different subnet than your UniFi controller. Provisioning an access point that isn’t automatically discovered is still possible using the Ubiquiti Discovery Tool, but there is another, much easier option.

To make this happen, all you need to do is enable the L2 network discovery option in your UniFi controller. Log into your UniFi controller and go to the Controller Settings section. Depending on whether you are using the classic (old) or new UI, this will be in a slightly different spot.

In the classic (old) UI you will find it right in the middle of the Controller Settings screen.

classic (old) UI

In the new UI you will find it in Controller Settings, under the Advanced Configuration section.

new UI

Once enabled and saved, you can return to your device discovery section and give it a few minutes for your devices on different subnets to appear. That’s it!
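And if one stubborn device still refuses to show up, the usual manual fallback is to SSH into it (ubnt/ubnt on an unprovisioned device) and point it at your controller by hand, swapping in your controller’s IP:

set-inform http://CONTROLLER_IP:8080/inform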

Homelab network with Ubiquiti

A couple of months ago, when the need came to fix some home networking issues, I turned to a company I often heard a co-worker talk about: Ubiquiti. I started humble with just an EdgeRouter 4 as the entry point and main routing for my home. Then, when the urge to build out a full homelab + network setup came, I turned to Ubiquiti again.

I’m doing 2 things: 1) setting up a more reliable and wider-range wifi network for my home, and 2) setting up a more robust network for homelab connectivity.

Ubiquiti offers the UniFi line, which can do all of this. However, I found the UniFi line to be more expensive per unit of performance than EdgeMax for anything you put in a rack (switch, router, etc). For wifi mesh, UniFi does this very well and offers multiple types of access points based on what you need. The user interface is also much better with UniFi, using a single controller for all your devices. You can kinda do something similar with UNMS (EdgeMax in the cloud), but it’s not the same as what you get with UniFi (which also has cloud management).

For price reasons, and because the network should not be something you need to configure much after initial setup, I decided to use the EdgeMax line for anything in my rack, and the UniFi line for wifi. This is probably the only issue with my setup: half of my network is EdgeMax managed and the other half is UniFi.

My house is already well connected, and I have cat5 running to quite a few rooms, so going with Power over Ethernet (PoE) access points made this easier. I try to hide the access points, but they still aren’t awful to look at when mounted to a wall or ceiling. We need 3 access points to cover the house and backyard properly. So noted: I need a switch with at least 3 PoE ports (maybe a 4th).

I also have a few other “devices”, if you will, that need to connect into the home network: an NVR for security cameras, and a couple of audio sinks that we use as input sources for the whole-home audio system. So that’s an EdgeSwitch 10XP that we will use for all devices.

I have a 24-port Gigabit switch which has been my tried-and-true for several years now. That will remain to connect most of the things in the house: TVs, set-top boxes, my office, things like that.

Finally, the future homelab, and anything we call part of it (ie: storage). For this I will have 1 server with dual ethernet ports. In the future I may add a second server. Servers also have management ports; for Dell this is iDRAC. So that’s 3 ports per compute server. Looking at storage, I have just a single-ethernet-port NAS today, with a potential upgrade to one with dual. With all of this, I also need a switch that can do link aggregation to take advantage of the dual networks. I can solve this with an EdgeSwitch 10X.

Note: I could solve both devices and compute with a single 16-port switch, but the cost was more than a $100 difference, and I have plenty of rack room.

Each switch gets its own port on my EdgeRouter 4, each on its own subnet. This means I get 750+ addresses in my house divided into 3 groups. All of this is handled without having to do anything special. It’s all just default: plug everything together and turn it on.

Setting up the UniFi access points to create a wifi mesh is easy as well. Plug your devices into a network switch with PoE, then configure them using the UniFi Controller software. The software will discover your endpoints, provision them, and configure them with whatever wifi/network settings you specified in the controller setup wizard. You can tweak your wifi settings anytime, including many different radio optimizations.

Note: If everything is on the same subnet this is a piece of cake, and that’s how I did it on the initial setup. Later in life I moved my controller software to its own VM running on a different subnet. Doing this requires a few extra steps, which I mention here.

Homelab – start to finish

So I did it. Writing this post is actually me saying it’s finished. That’s my homelab: the one with the end goal of having my puckpuck.com domain pointed at a WordPress site running inside Kubernetes, set up in a vSphere environment, served up from the closet in my home over SSL for the world to consume.

Just 2 months ago I had a simple-ish home network setup: ISP modem/router -> my wifi router -> switch for wired devices. Also included was a NAS device plugged into the switch that we used for home backups and photos.

When we had this home built 10 years ago, we dreamed of having it well connected. During construction, with some friends, I pulled over a mile of cabling through the walls. Mostly ethernet, but it also included coax, speaker wire, and even a few HDMI runs. Most of the cabling terminates in a single room I call the “LAN closet”, though the rest of the family calls it “the brain”.

So with all the cable pulled through, we ended up with a fairly extensive whole-home audio setup, a bunch of splitters daisy-chained for antenna TV, and a 2-post rack to kinda hold it all together. Everything terminated directly into components, no real organization, but it worked and it was reliable. That stayed in place for almost 10 years with just a few minor swap-in updates.

it wasn’t pretty but it worked

In its last incarnation before the “redo”, you can see we had a Bell-branded router for the fiber termination in our home. This router was the initial reasoning behind doing all this. I can’t say enough bad things about it: port forwarding didn’t work, wifi was awful even at near range, devices constantly dropped. I even had Bell replace the router, and was still plagued with the same issues.

So finally, out of frustration, I set out to eliminate the Bell-supplied piece-of-shit router and terminate the fiber connection on my own device. This was my first use of a Ubiquiti device. I got an EdgeRouter to terminate the fiber, and plugged my wifi router into it. Just doing this alone was a significant boost in internet speeds, especially for wired devices. Setting the router up was easy, and though the EdgeMax UI isn’t elegant, I didn’t have an issue finding what I needed.

Then it bit me, the homelab bug. Maybe it’s because I started following /r/homelab, or maybe I just got sick and tired of looking at my tired rack, and really wanted to finish it.

So I set out to plan what I wanted this homelab of mine to do, and how I was going to achieve it. I took an inventory of what I needed, laid out what the rack would look like from top to bottom, and figured out each cable: where it would terminate on a patch panel, and ultimately which switch or rack device it landed on afterwards. I had multiple spreadsheets and diagrams to help me figure this all out.

Stickers are fun!

Nothing was spared in planning, and I can’t stress enough how important this is. Equally important is to only create a plan that you can actually commit to. This stuff isn’t cheap. Sure you want to expand to your dream setup, but don’t short yourself today for something you will do in 2+ years. Technology evolves quickly, and 2 years is a long time in this space.

I approached the setup in 3 phases: network, storage, and compute. The phases were repeated for both the home and my Kubernetes setup.

The home network is especially important. Though I want to run all the new techy stuff and configure it ad nauseam to have fun, ultimately this needs to just work, especially for my wife and kids. Since wifi was sketchy before, especially in our backyard and deck area, I was able to sell my wife on the notion that I was gonna build a better network. She obliged, and I set my plan in action, buying all the network gear needed.

My network is all redone now, and except for a 24-port switch I carried over from before, it’s all Ubiquiti now too. That experience with the EdgeRouter convinced me that Ubiquiti is enterprise grade networking that you can use in the home. Perfect for the prosumer.

On the storage front I have my trusted 2-bay Synology NAS that I carried over. Though I have dreams of needing more storage, and of dynamically flexing that storage, this all costs money, and the existing storage setup I had would sit at about 50% capacity (though it would need a re-configure). Later in life, I’ll get a bigger (and rack-mountable) Synology NAS.

Finally, for home compute, I found a Dell PowerEdge R420 on eBay with 600GB of 15k SAS storage. There might be some Dell/VMware bias here, since I used to work for VMware, but generally speaking this is a well-supported and popular homelab setup for compute. I was getting a recent-enough system, including dual-socket compute, without breaking the bank.

After putting it all together I’m very pleased with the physical results of my LAN closet.

#labporn

I go over in detail why I did what I did for each phase (network, storage, compute), for both the home/homelab as well as the Kubernetes setup inside it, so I could run WordPress and reach my final goal for this setup… well, at least my current final goal 😜

Inventory List:

  • Network
    • Ubiquiti EdgeRouter 4 (main router)
    • Ubiquiti EdgeSwitch 10XP (devices switch)
    • Ubiquiti EdgeSwitch 10X (services switch)
    • TrendNet TEG-S240g (home switch)
    • Ubiquiti AP-AC-LR (x3 for Wifi mesh)
  • Storage
    • Synology DS218+
  • Compute
    • Dell PowerEdge R420
  • Security
    • Amcrest NV4108E-HS (NVR for 5 cameras)
  • Power
    • APC Back-UPS 425
    • Rockville 9-port Power Strip
  • Audio
    • HTD MA-1235 (amplifier – main)
    • HTD MC2-86 (controller for main amp)
    • HTD MC-66 (controller for amp in back yard)
    • Denon AVR-X1000 (a/v receiver for tv and speakers in adjacent room)
    • Apple AirPort Express (AirPlay audio sink used as input source)