Ok, so I built plugmon a while ago. It worked great. I loved it – super reliable, none of the fiddly nonsense I’ve had to work through with my vibration sensor-based dryer monitoring solution. Sadly, the Etekcity smart plugs I built it around, which used the (really nice) VeSync API, no longer seem to be available for purchase – at least not easily.
So, what to do? If the code is to be useful long-term, we’ll need to move to a platform you can actually buy. There’s certainly no shortage of smart plugs available. So, what features am I after in a replacement?
Naturally, I need an easy way to talk to the plugs and read data from them. It should also go without saying that the plug has to offer power monitoring, and expose some sort of API that lets us get at that data without going through the vendor’s app.
There are a ton of options out there, but eventually I landed on the Kasa (TP-Link) plugs. Why? Two things really pushed them over the top. First, of course, was remote API-style access to the plug’s power monitoring data. But with the Kasa plugs, one doesn’t even need to go outside the home LAN to capture that data.
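To give you an idea of what that local access looks like, here’s a minimal sketch (to be clear, not the actual Washerbot code) of reading the power draw with the python-kasa module. The IP address is made up, and the exact attribute names can shift a bit between python-kasa releases:

import asyncio
from kasa import SmartPlug

async def read_power(ip: str) -> float:
    plug = SmartPlug(ip)
    await plug.update()                # talks directly to the plug on the LAN
    return plug.emeter_realtime.power  # instantaneous draw, in watts

if __name__ == "__main__":
    watts = asyncio.run(read_power("192.168.1.50"))  # hypothetical plug IP
    print(f"The washer is pulling {watts} W right now")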
At the end of things, I kept the majority of code from jcostom/plugmon, grabbed a few bits of code from other projects I’ve worked on, and in about 30 minutes, the Washerbot was born.
It’s a shame that the VeSync plugs are now so difficult (impossible?) to come by. The API was reasonably easy to work with, and they weren’t terribly expensive either. I’m hopeful that TP-Link / Kasa Smart will be around for the long haul. I really like the “no outside connectivity needed” aspect of the python-kasa module as well.
Some may point out that there was a brief dust-up a couple of years ago with TP-Link, when they announced their intention to stop allowing local access to their devices, and you’d be right to do so. Fortunately, TP-Link was smart enough to take the not-so-subtle hints from the community, and walked that change back.
Without further ado, head over to GitHub and check out the new Washerbot code & container. Obviously, you’ll need one of the TP-Link plugs that provides energy use stats, like the KP115.
Ok, not really. But think about it. Business cards kind of suck, right? You go through some sort of re-org, the company does a branding change, your role changes, whatever. But that box of cards you got? I bet you didn’t get halfway through it before something changed and the cards were rendered useless to some degree.
Maybe a phone number changed and you found yourself scribbling on the cards with a pen and writing the new number in there. Maybe the company’s logo changed and marketing has strictly embargoed use of all the old branding. Whatever the reason, you find yourself, once again, getting rid of a stack of business cards. In my case, there’s another thing I find completely annoying – the cost of shipping the things. My company has worked out some kind of spectacular deal with the company they buy business cards from. But the shipping? Yow. So, 500 cards is like $7, but UPS Ground for that same order is $25. Multiply that by how many people order cards and how often roles and branding change, and that’s a lot of money and paper.
So, what could we do instead? For the past many years, most mobile phones, whether iOS or Android have had NFC (Near Field Communication) capabilities built in. So what’s NFC do? Without going into the nuts & bolts, it’s a protocol that makes communication between 2 things as easy as bringing them near to each other. It’s how things like tap-to-pay systems work. Like the one in your American Express card, or Apple/Google Pay, etc. The great thing about NFC tech? You can use it to store tons of different types of data and share it between devices.
Ok, so now that I’ve hooked you, how do we save the environment while impressing everyone with our amazing command of technology? If you’re the DIY type (like me) maybe you just program a URL on an NFC device and let folks scan that. Maybe you want something more packaged/turnkey and are willing to cough up some cash to pay for it – there are business models for both. I’ll spend the bulk of the rest of this article talking through the DIY model. If you really want to go down the packaged route – look at something like Popl. They’ve got a bunch of stuff ranging from QR code stickers to a variety of NFC devices coupled with a service that comes in free and subscription versions. The free version frames your content and lacks flexibility, while the pay-for version offers a lot more options. I’m not a fan of sticking a third party in the middle of any interactions I’m having with people I’m sharing my contact info with, so I ruled them out immediately.
Step one – you need something to point folks at. It could be a site like Beacons or Linktree, both of which come in free versions, it could be a social media profile page, like your LinkedIn profile or maybe your Instagram, or perhaps it could be a link to a website you stand up specifically for this purpose. In my case, I went for that last one.
I’m not much of a web designer, though I can do a decent job of modifying someone else’s design to suit my needs. So, I came upon the lovely html5up.net site, where one can find a bunch of great templates to work from. I settled on the Aerial template, swapped out the background for something that had more of a “networking” vibe, ripped out the Font Awesome v5 bits, replaced them with the latest v6 bits, tweaked some bits, created a profile pic using the super cool AI-driven https://thispersondoesnotexist.com/ site, and generated a vCard using the macOS Contacts app. In a few minutes, this demo was ready to roll. Honestly, the demo has the most impact on your phone, since when you hit the link on the far right it launches the vCard in your Contacts app.
Hosting? Free, fast and easy. GitHub Pages. Get yourself a GitHub login if you don’t already have one. Read up on how to turn a simple GitHub repo into a website here. It’s so easy you’re practically done before you’re started. I’m not kidding. Your URL will look something like https://youruser.github.io/.
Ready? Program that into the NFC thingy of your choosing as a URL object. Cool. So, what’s the NFC thingy I’m choosing? Great question. You’ve got options. I’ve got a couple of things myself. My first was a metal business card. Yes, metal (who’s making metal fingers right now with me? Yeah, I know.) I got it from Tyler at TapTag. He’s a good dude, and a patient one – he’ll answer all of the stupid questions bouncing around in your head right now, just like he answered the ones I had. Send him your business. Prices are good too.
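If you’re curious about what actually lands on the tag, it’s an NDEF message with a single URI record in it. A phone app like NFC Tools will write that for you without any code, but here’s a tiny sketch using the ndeflib Python package (purely an illustration, not something you need to do) that builds those bytes for the placeholder URL above:

import ndef

# One NDEF URI record pointing at the landing page
record = ndef.UriRecord("https://youruser.github.io/")

# Encode the record into the raw bytes a writer app would burn onto the tag
octets = b"".join(ndef.message_encoder([record]))
print(len(octets), "bytes:", octets.hex())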
Another interesting option is an NFC sticker. I picked up some black NTAG 215 stickers from Amazon, and popped one on the back of my phone case. I’ve got 2 actual pages like above. One’s for business use – that one’s linked from the actual “business card” that I walk around with in my wallet and wave around at business functions, and the other is linked from the sticker on the back of my phone case. The business card tag also has a QR code printed on the back side that goes to the same URL for folks who are either NFC challenged, or just plain refuse to scan a tag.
So, do your part, stop wasting all that paper and get with the program.
Many of you already use Let’s Encrypt certificates in various capacities to provide secure connectivity to applications and devices. Most of the time, these apps and devices automatically reach out, get certs issued, installed and everything just works. That’s cases like traefik, or certbot with apache/nginx, etc.
Then there are those “other” use cases you’ve got. Like say, a custom certificate for a Plex server, or maybe even something more exotic like a certificate for an HP printer. How do you take care of those in an automated, “hands-off” sort of way? How do you make it work so that you’re not having to set reminders for yourself to get in there and swap out certs manually every 3 months? Because you know what’s going to happen, right? That reminder’s going to go off, you’re going to snooze it for a couple of days, then you’ll tick that checkbox, saying, “yeah, I’ll do it after I get back from lunch” and then something happens and it never gets done. Next thing you know, the cert expires, and it becomes a pain in the rear at the worst possible moment.
That’s where deploy-hooks come into play. If you’ve got a script that can install the certificate, you can have it run right after the cert has been issued by specifying the --deploy-hook flag on the certbot renew command. Let’s look at an example of how we might add this to an existing certbot certificate that’s already set up for automatic renewal. Remember, automatic renewal and automatic installation are different things.
First, we’ll do a dry-run, then we’ll force the renewal so the deploy-hook gets saved into the renewal configuration. It’s really that easy. Check it:
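Something along these lines, using the printer cert and hook script from the example below – certbot’s --cert-name, --deploy-hook, --dry-run, and --force-renewal flags are doing all the work (swap in your own cert name and script, obviously):

sudo certbot renew --cert-name printer.mynetwork.net --deploy-hook /usr/local/sbin/pcert.sh --dry-run
sudo certbot renew --cert-name printer.mynetwork.net --deploy-hook /usr/local/sbin/pcert.sh --force-renewal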
Once this process is completed, the automatic renewal configuration for printer.mynetwork.net will include the deploy-hook /usr/local/sbin/pcert.sh. But, what does that really mean? Upon successful renewal, that script will execute, at which point, you’re (presumably) using the script to install the newly refreshed certificate. In this case, the script is unique to that particular certificate. It’s possible to have deploy-hooks that are executed for EVERY cert as well, by dropping them in the /etc/letsencrypt/renewal-hooks/deploy directory.
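As for what a hook script looks like, it can be as simple or as gnarly as the target demands. Here’s a deliberately trivial sketch – the paths and service name are made up, and certbot sets RENEWED_LINEAGE to the live directory of whichever cert just renewed:

#!/usr/bin/env bash
# Hypothetical deploy-hook: copy the freshly renewed cert into place
# and restart the (made-up) service that uses it.
set -euo pipefail

install -m 0644 "${RENEWED_LINEAGE}/fullchain.pem" /opt/example-app/tls/cert.pem
install -m 0600 "${RENEWED_LINEAGE}/privkey.pem" /opt/example-app/tls/key.pem
systemctl restart example-app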
For some examples, check out the ones I’m using. Especially interesting (to me at least) is the HP Printer script. That one took a bit of hackery to get working. I had to run the dev tools, and record the browser session a couple of times to get all the variable names straight, and so forth, but once I had it down, it was a snap. Now when the Let’s Encrypt cert updates, within a few seconds, I’ve got the latest cert installed and running on the printer!
[Any Amazon Links below are Non-Affiliate Links that just go to Amazon Smile]
So, if you think back a bit, you may recall that I was using a Pi 4 for my IoT project that monitored the dryer, shooting out Telegram group messages to the whole family when the dryer was done with the laundry.
Times being what they are, it’s pretty difficult to come by a new Raspberry Pi these days, as I’m sure many of you know. I needed the power of the Pi 4 for something else, at least on a temporary basis. Meanwhile, back at the ranch, a couple of months prior, I’d received a ping from the Micro Center about 45 minutes away informing me that they had a handful of Pi Zero 2 W’s on hand. Those little suckers are super hard to find, so I snapped up my max of 2, along with the GPU I’d been dying to lay hands on for the longest time. For those who care, I finally got an EVGA 3080. Pandemics and supply-chain constraint conditions suck, by the way, in case you were wondering my position on that issue.
So, having my Pi Zero 2 W in the drawer ready to roll, I unscrewed the box from the wall that housed the Pi 4, fitted the sensor I had directly onto the Pi Zero 2 W, and scaled down from a 2-project-box solution to 1 box. Sadly, it sucked. But, it wasn’t the hardware’s fault. In reality it was totally a self-inflicted condition.
I modified (slightly) the pins on the old 801s sensor I had, fitted it onto that new Pi Zero 2W (since it didn’t have any GPIO pin headers soldered on), and sort of Rube-Goldberged it together using 3M VHB tape inside the project box. Total hack job. I thought about using a bunch of hot glue, but then I thought better of it. Why not solder? Honestly? I suck at soldering. One of these days I’ll get around to getting good at it. But that’s not today.
It was wildly unstable. The sensor kept moving and losing contact with the sides of the GPIO holes – it was awful. I all but gave up. I had a brief flirtation with the Aqara Smart Hub and one of their Zigbee vibration sensors, and believe me, when I say brief, I mean like 12 hours. It just wasn’t fit for the job.
My grand plan with that was to mimic what I was doing over on the washer – write some Python code, run it in a container, query an API somewhere in the cloud every X seconds to see if the thing was vibrating or not, and use that to work out whether the dryer had started or stopped and then act accordingly. But alas, since step 2 in this plan was a klunker, steps 3 through infinity? Yeah, those never happened.
So, back to the drawing board. I found that I couldn’t easily lay hands on a new 801s again, and the project for the Pi4 was now finished, so I had that back. I did find a new vibe sensor – the SW-420. 3 pins instead of 4, but it’s still a digital output that works fine with the Pi, and my existing code worked as-is, so who cares, right? Yeah, I classed the thing up quite a bit more this time too. This time, instead of shoving the Pi inside a project box that’s mounted on the wall running from the SD card, I opted to run in one of those snazzy Argon One M.2 SSD cases booting Ubuntu 22.04 from an M.2 SSD in the basement of the case. I’ve got that sitting on a lovely little shelf mounted just above and behind the dryer, with my 3 GPIO leads running out of the top of the case, directly into the small project box that’s attached to the front of the dryer, inside which is the sensor, which is stuck to the inside of the box using 3M VHB tape. The box itself is stuck to the dryer using VHB tape as well.
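On the code side, none of this is fancy. The SW-420 just drives a GPIO pin high when it feels vibration, and the program’s job is to watch how often that pin twitches and decide whether the dryer is running. Here’s a rough sketch of the idea using gpiozero – not the actual project code, and the pin number and threshold are placeholders you’d tune for your own machine:

from time import sleep
from gpiozero import DigitalInputDevice

# Hypothetical wiring: SW-420 digital out -> GPIO 17, plus 3.3V and GND
sensor = DigitalInputDevice(17)

def vibration_events(window_secs=10.0, poll_secs=0.01):
    """Count low-to-high transitions on the sensor output over a window."""
    events, last = 0, sensor.value
    for _ in range(int(window_secs / poll_secs)):
        current = sensor.value
        if current and not last:
            events += 1
        last = current
        sleep(poll_secs)
    return events

while True:
    count = vibration_events()
    # Placeholder threshold – the real number comes from staring at telemetry
    print("dryer looks", "ON" if count > 20 else "OFF", f"({count} events)")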
In the end, all’s well that ends well. I’ve had to do a good bit more tuning on the SW-420 sensor. It’s been a bit more fiddly than the old 801s was. That one was definitely a plug and play affair. This has required a bit of adjustment on the little potentiometer that’s built into the sensor. Not too bad though. I’ve invested probably a total of 15 minutes of time standing next to the dryer, staring at telemetry, while the dryer is running, or not. But in the end, it’s all working, and the notifications are happening once again.
Summer’s been absolutely nuts. Between work stuff, family stuff, running here and there, and of course, the odd project or two, I’ve been just plain stretched for time.
Stay tuned. I’ll be coming back around shortly. I’m working on some things. Preview?
Well, remember how Logitech decided that the Harmony Remote, one of the best things ever to happen to the world of universal remotes, was going to be taken out back and killed? Yeah, I was pretty mad about that too. So, I went looking for something else to solve some automation challenges with that. So, that’s coming.
What else? Tried to buy a Raspberry Pi lately? Heh. Yeah, me too. I decided to try a different fruit for a change. So far, so good. More on that later.
More still? There’s an update on that printer situation. The dryer too.
How about a Raspberry Pi-based network console server for my network equipment?
For years I’ve been volunteering at a non-profit – and for quite some time the folks working in one particular spot have been looking for a printer. It was never really a dire need, so we never ran out and bought one for this location. Recently, we were cleaning out an office and found an old HP LaserJet P1505, and a new toner cartridge, still sealed in the box. Of course, that’s a USB-only printer, and it was more than a little dirty. So, I brought it home, and put in an hour or so cleaning it up.
I wanted to park this printer in a building where a small handful of folks would be able to print to it, so sharing is of course a must. Since it’s USB-only, that means something’s got to be connected to it full-time, sharing it to clients on the network. The big question – what to connect for that?
As luck would have it, I had a spare Raspberry Pi Zero W in the drawer. It’s starting to show its age – it doesn’t run the more current 64-bit Linux releases, but it does have a pretty up-to-date Raspberry Pi OS (formerly called Raspbian) based on Debian Bullseye in the 32-bit armhf flavor. I used the standard Raspberry Pi Imager tool from their site, dropped the latest “OS Lite” image on an SD card, and I was ready to roll. Once upon a time, networking had to be configured after the fact in a text file, and there was a pre-defined user (pi) with the password raspberry on the device. These days, you can set all those parameters before you image the card, including a custom username, password, and even hostname. SO. MUCH. NICER.
So, I grabbed my roll of 3M VHB tape, some cable management ties and sticky things, and got to work. You can see the results up above. Configuration was pretty easy. Just a few commands to get things installed, and before I knew it, I had smartened up this fairly dumb printer.
Raspberry Pi OS (really Debian) installs a pretty reasonable default CUPS configuration, with only minimal changes needed to allow remote administration. Once that stuff is done, you can even flip the configs right back if you like. To get things up and going…
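On a fresh Raspberry Pi OS Lite install, it boils down to pulling in CUPS and HPLIP and putting your user in the lpadmin group (package names here assume the stock Debian packaging):

sudo apt update
sudo apt install -y cups hplip
sudo usermod -aG lpadmin $USER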
At this point, you should log out and log back in to refresh your group assignments. Once logged back in, if your printer isn’t plugged in and turned on, now’s the time. You can check that it’s seen by issuing the lsusb command. In my case, with the HP LaserJet P1505, I needed the HPLIP drivers, which in turn require a proprietary HP plugin downloaded from HP. The hplip package comes with a tool to do this, called hp-setup. I recommend the simplest process here – just invoke it interactively: sudo hp-setup -i. The tool will see your printer, reach out to HP, figure out what to grab, and offer to do the rest automatically. The defaults are sane, and you can pretty much just let it do its thing. Once the tool has downloaded everything, you can proceed to CUPS configuration.
There are only 2 lines to change, and 2 to add, in the default CUPS configuration in /etc/cups/cupsd.conf. The changes are on lines 18 and 22, and the additions go in around line 34 (those numbers are from the stock Bullseye config – yours may differ slightly, but the surrounding comments make the spots easy to find).
Change From This:

# Only listen for connections from the local machine.
Listen localhost:631

# Show shared printers on the local network.
Browsing Off

To This (the changed lines are Port 631 and Browsing On):

# Only listen for connections from the local machine.
Port 631

# Show shared printers on the local network.
Browsing On
Change/Add From This:

# Restrict access to the server...
<Location />
  Order allow,deny
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
</Location>

To This (the additions are the Allow @LOCAL lines):

# Restrict access to the server...
<Location />
  Order allow,deny
  Allow @LOCAL
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow @LOCAL
</Location>
Alternatively, you could do something like ssh-tunnel traffic to the host, but that’s a bit of a pain if you’re going to manage this longer term. If you want or need to lock this down tighter, don’t use the @LOCAL macro – be more specific in those Allow statements. Once you’ve made these changes, go ahead and restart CUPS with a sudo service cups restart.
At this point, you should be able to browse to http://ip.addr.of.pi:631/admin and set up your printer. Go ahead and add that printer. You may be presented with multiple driver options for it – make sure you pick the right one, or at least test it. For me, the HPLIP one makes the most sense, and works best (i.e. at all, in my case). What’s the big deal about having the printer configured and shared in CUPS? Well, avahi was automatically installed as a dependency when you installed everything earlier, so you’ve now got automatic setup available for Windows 10, 11, macOS, iOS, and iPadOS. That sounds like a pretty good deal to me! AirPrint works like a champ on an iPad, without any trouble either.
Armed with all this, you should be able to smarten up pretty much any USB-only printer. Just add whatever drivers you need, add the printer to CUPS, share it within CUPS, and you’re golden. Get on your PCs and Macs, add them as network printers. They should just show up because of Bonjour/Zeroconf, since avahi got auto-installed and configured with CUPS. AirPrint should also “just work” here too. Have fun!
Ok, so I opened Pandora’s box by starting to talk about DNS. I figure I should probably do a proper job of completely murdering the topic and kill it off for good. So – Unbound was the call. I don’t need to run an authoritative server at home any longer. Otherwise, I’d probably have stuck with BIND, honestly. I know it, it’s a pain at times, anything in it related to DNS over HTTPS or TLS is totally experimental, not ready for client-side, but the devil you know, right?
So, Unbound it is. I did a bit of a read-up, and between the Arch Linux wiki, the official docs, and a couple of random config snippets, I had a config. As I mentioned in the other post, I used certbot to generate my DNS over TLS cert. I’m actually using the same cert for DNS over HTTPS, but the clients don’t really get to see that cert. Why? Well, these hosts also serve up other apps via HTTPS, so traefik is installed and bound to tcp/443, and I didn’t feel like messing with multiple IPs and binding different services to different IPs. So I just tied Unbound’s DNS-over-HTTPS listener to tcp/1443 and created a traefik service to front it with HTTPS, and it all works out in the end. Clients are none the wiser. Yes, there’s a little extra config, and this definitely flies in the face of my mantra of discarding technical debt. But what’s the greater debt – moving a port and making a reverse proxy entry, or setting up a whole new IP and playing around with all sorts of service bindings and ensuring that the right ports are bound to the right IPs in the right places? Yeah, you’re seeing it now.
So, on to the config. It’s not the exact config, but it’s close enough. I’ve changed some bits to protect the sanctity of the innards of my home net. On my systems, I’m running Ubuntu Jammy (that’s 22.04 LTS), so I just installed the unbound package via apt, then dropped this in /etc/unbound/unbound.d/server.conf (the file doesn’t exist – you create it). There’s a handy syntax checker called unbound-checkconf, which helps you figure out where you’ve managed to fat-finger things in your config and mess it up. Ask me how I know how useful it is…
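For the sake of illustration, here’s roughly the shape of it – an approximation rather than my literal file, with placeholder subnets, cert paths, and listener addresses you’d swap for your own:

server:
  # Plain DNS for the LAN, DoT on 853, DoH on 1443 (traefik fronts that one)
  interface: 0.0.0.0
  interface: 0.0.0.0@853
  interface: 0.0.0.0@1443
  do-udp: yes
  do-tcp: yes

  # Only answer queries from inside the house
  access-control: 10.10.10.0/24 allow
  access-control: 127.0.0.0/8 allow

  # The Let's Encrypt cert used for the DoT and DoH listeners
  tls-service-pem: "/etc/letsencrypt/live/dns.home.example.net/fullchain.pem"
  tls-service-key: "/etc/letsencrypt/live/dns.home.example.net/privkey.pem"
  tls-port: 853
  https-port: 1443
  http-endpoint: "/dns-query"

  # Cache behavior
  prefetch: yes
  cache-min-ttl: 300
  cache-max-ttl: 86400

  # Needed so Unbound can validate the upstream resolvers' certificates
  tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 1.1.1.1@853#cloudflare-dns.com
  forward-addr: 1.0.0.1@853#cloudflare-dns.com
  forward-addr: 9.9.9.9@853#dns.quad9.net

Run unbound-checkconf against it, restart the service, and you’re in business.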
I, like many, hate DNS. I tolerate it. It’s there because, well, I need it. There’s just only so many IP addresses one can keep rattling around inside one’s head, right? So, it’s DNS.
For years, I ran the old standard, BIND, under Linux here at home. My old BIND config did a local forward to dnscrypt-proxy, which was bound to a port on localhost and in turn pushed traffic out to external DNS servers like Cloudflare’s 1.1.1.1 or IBM’s 9.9.9.9. I didn’t think my ISP was entitled to snoop on what DNS lookups I was doing. They still aren’t entitled to those, so I didn’t want to lose that regardless of what I ended up doing.
Out in the real world, my domain’s DNS was hosted by DNS Made Easy. They’ve got a great product. It’s reliable, and it’s not insanely expensive. It’s not nothing, but we’re not talking hundreds a year either. I think it’s about $50 a year for more domains and queries than I could possibly ever use. But, like many old schoolers, they’ve lagged behind the times. Yes, they’ve got things like a nice API, and do support DNSSEC, but DNSSEC is only available in their super expensive plans that start at $1700+ a year. That’s just not happening. So, I started looking around.
I landed on Cloudflare. They’ve got a free tier that fits the bill for me. Plenty of record space, a nice API, dare I say, a nicer API even. DNSSEC included in that free tier at no cost even. How do you beat free? I was using a mish-mash of internal and external DNS with delegated subdomains for internal vs external sites as well. It was (again) complicated – and a pain in the rear.
So, I registered a new domain just for private use. I did that through Cloudflare as well. As a registrar, they were nice to work with too – they pass that through at cost. Nice and smooth setup. So, internal stuff now consists of names that are [host/app].site.domain.net. Traefik is set up using the Cloudflare dns-01 letsencrypt challenge to get certs issued to secure it all, and the connectivity, as discussed in the other post, is all by Tailscale. The apps are all deployed using Docker with Portainer. The stacks (ok, they’re just docker-compose files) in Portainer are all maintained in private GitHub repos. I’ll do a post on that in more detail soon.
Ok, so what did I do with the DNS at home? Did I just ditch the resolver in the house entirely? I did not. In the end, I opted to dump BIND after all these years and replace it with Unbound. I had to do a bit of reading on it, but the configuration is quite a bit less complex, since I’m not configuring zone files any more. I was just setting up a small handful of bits: what interfaces did I want to listen on, what did I want my cache parameters to look like, and what did I want to do with DNS traffic for the outside world – which is pretty much everything? In my case, I wanted to forward it to something fast and secured. I was already crushing pretty hard on Cloudflare, so 1.1.1.1 and 1.0.0.1 were easy choices. I’m also using IBM’s 9.9.9.9 as well. All of those are forwarding out using DNS-over-TLS, aka DoT (or sometimes DOT). It worked for me on the first try.
Then I grabbed the Ubuntu certbot snap and told it to grab a cert for dns.home.$(newdomain).net, which is attached to this machine. After I got the cert issued, it was a piece of cake to turn up both DNS over HTTPS and DNS over TLS (DoH and DoT).
It was fairly easy to get DoH working on a Windows 11 PC. It was also super easy to craft an MDM-style config profile for DoT that works great on iOS and iPadOS devices. Microsoft has Apple beat cold in this department, though. In the Apple world, if you configure a profile for DoT (the only way you can get it in there), you’re stuck with it until you get rid of it – by uninstalling and reinstalling the profile.
On Windows? It was as easy as setting your DNS servers to manual, then cracking open a command prompt as Administrator and running (assuming your DNS server is 10.10.10.10)…
netsh dns add encryption server=10.10.10.10 dohtemplate=https://my.great.server/dns-query
Once you’ve done that, you’ll be able to choose from a list under where you punch in DNS settings in the network settings and turn on Encryption for your DNS connection. It’s working great!
So, since I just posted the other day about dumping my pile of Python scripts and IPsec VPNs and moving to Tailscale for my personal use case, several folks have sparked conversations with me about the topic.
In my case, it made complete sense to do something else. I was using a solution that was essentially held together with bubblegum, duct tape, and baling wire. It was fragile, it kept breaking, and let’s be real – I was bending the solution into a shape it wasn’t designed to be used in, which is why it kept breaking in the first place.
You see, IPsec tunnels are intended to work when you’ve got stable, fixed endpoints. Over time, things have been done so that endpoints can become dynamic. But typically just one endpoint. Suddenly, with two dynamic endpoints, results become… Unpredictable. I think that’s a kind way of putting it even. That right there, explains my repeated breakage problems.
So, if you’re still using a traditional firewall & VPN in a traditional use case, then yes, keep on using IPsec VPNs. It’s quite honestly the best tool in the bag for what you’re hoping to accomplish in terms of securing data in motion, provided you’re able to meet the bars of entry in terms of hardware support as well as supported feature set.
So, get rid of your firewalls? Not a chance. Get rid of my SRX firewalls and EX switches? No way, no how. You can have my Junos stuff when you pry it from my cold, dead hands. Heck, I make my living with Junos. But just like the whole story of the guy who only has a hammer and thinks everything is a nail, sometimes you’ve just got to use a different tool to do the job right.
But taking the time to think about how to break up with complexity and technical debt? Yeah, that’s totally worth your time. Sometimes that means saying goodbye to old friends, even when you forced them into places where they didn’t quite fit.
So, in the end the whole square-peg-round-hole thing? Stop doing that.
I work in networking. I’ve been doing that for a long time now. Along that journey, I’ve also had occasional detours into worlds like generic IT and data security as well. I also do volunteer work at a nonprofit. Plus, like many of you who work in tech, there’s stuff that lives at the home(s) of relatives that you maintain because you’re that sort of person.
Sometimes, you do it cheap, sometimes you do it right, and sometimes you do it somewhere in-between. Like where you’ve got DHCP-assigned WAN interfaces everywhere because everywhere has home-user type Internet services, or less-expensive business-class occasionally. Anyhow, you can’t always count on having the same IP in the same place twice. BUT, you want things to be secured, and you don’t just want wide-open port forwards with plain old Dynamic DNS.
You’ve got some Juniper SRX firewalls you bought previously for lab work & study, and you want to make use of them with IPsec VPNs, but to do it right, you really need static IPs. So, what do you do? You fake it. You just pretend you’ve got static IPs on the tunnel endpoints and configure it up. The tunnels come up, you bring up your BGP sessions between your st0.0 IFLs, announce some routes, put some reasonable security policies in place. Yes, I did have security policies in there. I was born at night, but it wasn’t last night, guys. But how did I keep it working with IPs changing all the time?
Here’s how I was solving that problem up until fairly recently. I’ve been hacking away at my DNS-o-Matic and DNS Made Easy updaters for a while now. The DME updater was much better, IMHO, as it directly updated a single, private zone that only I ever cared about, rather than relying on someone else to sit in the middle and do the updates for me. Plus, I wrote the whole thing from the ground up using DME’s API docs, so I knew exactly how it worked, inside & out. No excuses for it doing anything I didn’t understand, and honestly, I’m really happy with how well it’s been working. It’s been a great opportunity to get better at Python, in particular doing things in a more “Pythonic” way, rather than trying to “just get it done”, or worse, trying to make it work the way I used to do things in Perl or PHP years ago. Is it Pythonic? Not even close, but it does work pretty darn well.
These containers all ran on Intel NUCs under Ubuntu Linux at each site. There was one more container on each of these NUCs as part of this operation: a set of Telegram bots that talked to each other to keep the sites informed of IP changes. So, if HOME changed its IP, the bot at HOME sent a message to the group that included the bots for NONPROFIT and INLAWS. Those bots saw that the IP had changed and knew to go find the new IP of HOME, so they could update their tunnel endpoints. That, in turn, fired off a function that used the Junos PyEZ API module to update the IPsec tunnel endpoint IPs.
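To be clear, that code is staying in the drawer, but just to give a flavor of the PyEZ side of it, moving an IKE gateway address is only a few lines. Everything here – hostname, credentials, gateway name, the IP – is made up:

from jnpr.junos import Device
from jnpr.junos.utils.config import Config

def update_tunnel_endpoint(new_ip: str) -> None:
    # Hypothetical SRX hostname, credentials, and IKE gateway name
    with Device(host="srx.nonprofit.example", user="automation", passwd="secret") as dev:
        with Config(dev, mode="exclusive") as cu:
            cu.load(f"set security ike gateway GW-HOME address {new_ip}", format="set")
            cu.commit(comment="HOME endpoint changed IP")

update_tunnel_endpoint("203.0.113.27")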
Did it all work? Yes, believe it or not, this actually all worked. Was it pretty fragile and not for the faint of heart? Oh yeah, for sure. Would I recommend doing it? Not a chance. So much so that I’m not even going to share the code, apart from the DDNS updaters. The other stuff is definitely hackjob territory. So, since it was so fragile and had the tendency to break, what did I do? Well, the first few times, I drove and fixed. Which frankly, sucked. After that, I installed an OpenVPN container at each of the locations. Later, I replaced those with linuxserver/wireguard containers. But, after it all broke like twice in about a month, I’d just about had enough. I cried Uncle and decided I was going to look for some other way to do this.
And that’s when my old pal Bhupen mentioned Tailscale to me. I was already into Wireguard, so making it easier, faster, and more useful was all on my short list. Drop the tailscale client on the NUC, get it logged in, announce the local subnet into the tailnet (their name for the VPN instance) to make it a “subnet router”, approve the route announcement in the portal, and it’s going. I’ve got control over key expiry too. Security policy (naturally) moved from the SRX down to the tailscale gateways, but their ACL language wasn’t too difficult to wrangle. It’s all JSON, so it’s reasonably straightforward.
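For reference, the subnet router piece really is just a couple of commands on each NUC (the subnet below is a placeholder for whatever that site’s LAN actually is), followed by approving the route in the admin console:

curl -fsSL https://tailscale.com/install.sh | sh
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
sudo tailscale up --advertise-routes=192.168.50.0/24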
So, with all the scripts gone and the IPsec stripped away, what’s it all look like? Well, we added one more site into the mix as well – the in-laws’ vacation place. They bought a place and I stuck a Raspberry Pi up there for future IoT use. Not entirely sure about the “what” yet, but they just updated the HVAC, and it’s all smart stuff, so I expect there will be instrumentation. Maybe something that spits out time series info to InfluxDB or somesuch. Who knows? Or perhaps HomeKit/Homebridge stuff. Time will tell.
In the time since I made the diagrams and wrote this up, things have also changed slightly on the home front. I’ve deployed a 2nd subnet router at Home. In the Tailscale docs, they say all over the place not to deploy two subnet routers advertising the same IP space, and generally speaking, it’s with good reason – traffic destined for the prefixes announced by those routers will be round-robined back and forth between them. In my case, since they’re on the same physical subnet, this is essentially ECMP routing, so no big deal. I haven’t validated whether they’re really getting the hashing right, but I haven’t noticed any ill effects yet, so I haven’t shut off the 2nd subnet router.
So, by dropping all the BGP sessions, IPsec tunnels, Python scripts, Telegram bots, and Docker containers, things have become much simpler, and much more stable. I’m really happy with Tailscale. So much so that I ended up subscribing at the Personal Pro tier. Great bunch of folks – can’t help but recommend them.