One Crazy Summer

Hey automators!

Summer’s been absolutely nuts. Between work stuff, family stuff, running here and there, and of course, the odd project or two, I’ve been just plain stretched for time.

Stay tuned. I’ll be coming back around shortly. I’m working on some things. Preview?

Well, remember how Logitech decided that the Harmony Remote, one of the best things ever to happen to the world of universal remotes, was going to be taken out back and killed? Yeah, I was pretty mad about that too. So, I went looking for something else to solve some automation challenges. That's coming.

What else? Tried to buy a Raspberry Pi lately? Heh. Yeah, me too. I decided to try a different fruit for a change. So far, so good. More on that later.

More still? There’s an update on that printer situation. The dryer too.

How about a Raspberry Pi-based network console server for my network equipment?

Hang in there family, it’s coming.

Smartening Up An Old Printer

Pi Zero W on Back of the Printer

For years I’ve been volunteering at a non-profit – and for quite some time the folks working in one particular spot have been looking for a printer. It was never really a dire need, so we never ran out and bought one for this location. Recently, we were cleaning out an office and found an old HP LaserJet P1505, and a new toner cartridge, still sealed in the box. Of course, that’s a USB-only printer, and it was more than a little dirty. So, I brought it home, and put in an hour or so cleaning it up.

I wanted to park this printer in a building where a small handful of folks would be able to print to it, so sharing is of course a must. Since it’s USB-only, that means something’s got to be connected to it full-time, sharing it to clients on the network. The big question – what to connect for that?

As luck would have it, I had a spare Raspberry Pi Zero W in the drawer. It's starting to show its age – it doesn't run more current 64-bit Linux releases, but it does have a pretty up-to-date Raspberry Pi OS (formerly called Raspbian) based on Debian Bullseye in the 32-bit armhf flavor. I used the standard Raspberry Pi Imager tool from their site, dropped the latest "OS Lite" image on an SD card, and I was ready to roll. Once upon a time, networking had to be configured after the fact in a text file, and there was a pre-defined user (pi) with the password raspberry on the device. These days, you can set all those parameters before you image the card, including a custom username, password, and even hostname. SO. MUCH. NICER.

So, I grabbed my roll of 3M VHB tape, some cable management ties and sticky things, and got to work. You can see the results up above. Configuration was pretty easy. Just a few commands to get things installed, and before I knew it, I had smartened up this fairly dumb printer.

Raspberry Pi OS (really Debian) installs a pretty reasonable default CUPS configuration, with only minimal changes needed to do remote administration to it. Once that stuff is done, you can even flip the configs right back if you like. To get things up and going…

sudo apt update
sudo apt install cups hplip
sudo usermod -aG lpadmin <your username>

At this point, you should log out and log back in to refresh your group assignments. Once logged back in, if your printer isn't plugged in and turned on, now's the time. You can check that it's seen by issuing the lsusb command. In my case, the HP LaserJet P1505 requires the HPLIP drivers, which in turn require proprietary HP modules to be downloaded from HP. The hplip package comes with a tool to do this, called hp-setup. I recommend the simplest process here – just invoke it interactively with sudo hp-setup -i. The tool will see your printer, reach out to HP, figure out what to grab, and offer to do the rest automatically. The defaults are sane, and you can pretty much just let it do its thing. Once the tool has downloaded everything, you can proceed to CUPS configuration.

There are only 2 lines to change, and 2 to add in the default CUPS configuration in /etc/cups/cupsd.conf. The changes are on lines 18 and 22, and the additions are found around line 34.

Change From This:

# Only listen for connections from the local machine.
Listen localhost:631
Listen /run/cups/cups.sock

# Show shared printers on the local network.
Browsing No
BrowseLocalProtocols dnssd

To This (note the changed Port and Browsing lines):

# Only listen for connections from the local machine.
Port 631
Listen /run/cups/cups.sock

# Show shared printers on the local network.
Browsing On
BrowseLocalProtocols dnssd

Change/Add From This:

# Restrict access to the server...
<Location />
  Order allow,deny

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny

To This (note the added Allow lines):

# Restrict access to the server...
<Location />
  Order allow,deny
  Allow @LOCAL

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow @LOCAL

Alternatively, you could do something like SSH-tunnel traffic to the host, but that's a bit of a pain if you're going to manage this longer term. If you want or need to lock this down tighter, don't use the @LOCAL macro – be more specific in those Allow statements. Once you've made these changes, go ahead and restart CUPS with a sudo service cups restart.

At this point, you should be able to browse to http://ip.addr.of.pi:631/admin and set up your printer. Go ahead and add that printer. You may be presented with multiple driver options for it. Make sure you pick the right one, or at least test it. For me, the HPLIP one makes the most sense, and works best (i.e. at all, in my case). With the printer configured and shared in CUPS, there's a nice bonus: the system automatically installed avahi as a dependency while installing everything earlier. What's the big deal? Well, you've now got automatic printer setup available for Windows 10, 11, macOS, iOS, and iPadOS. That sounds like a pretty good deal to me! AirPrint works like a champ on an iPad too, without any trouble.

Armed with all this, you should be able to smarten up pretty much any USB-only printer. Just add whatever drivers you need, add the printer to CUPS, share it within CUPS, and you’re golden. Get on your PCs and Macs, add them as network printers. They should just show up because of Bonjour/Zeroconf, since avahi got auto-installed and configured with CUPS. AirPrint should also “just work” here too. Have fun!
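If a client doesn't discover the shared queue on its own, you can always add it manually by IPP URI. A tiny sketch of building that URI – the hostname and queue name here are made up, and the queue name is whatever you chose in the CUPS admin page:

```python
def ipp_uri(host: str, queue: str, port: int = 631) -> str:
    """Build the IPP URI clients use to reach a CUPS-shared queue."""
    return f"ipp://{host}:{port}/printers/{queue}"

# e.g. the Pi Zero W sharing the LaserJet (names are hypothetical):
# ipp_uri("printerpi.local", "HP_LaserJet_P1505")
#   -> "ipp://printerpi.local:631/printers/HP_LaserJet_P1505"
```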

It’s DNS. Again.

Oh, hello there…

Ok, so I opened Pandora's box by starting to talk about DNS. I figure I should probably do a proper job of completely murdering the topic and kill it off for good. So – Unbound was the call. I don't need to run an authoritative server at home any longer. Otherwise, I'd probably have stuck with BIND, honestly. I know it, it's a pain at times, and anything in it related to DNS over HTTPS or TLS is totally experimental and not ready for the client side – but the devil you know, right?

So, Unbound it is. I did a bit of a read-up, and between the Arch Linux wiki, the official docs, and a couple of random config snippets, I had a config. As I mentioned in the other post, I used certbot to generate my DNS over TLS cert. I'm actually using the same cert for DNS over HTTPS, but the clients don't really get to see that cert. Why? Well, these hosts also serve up other apps via HTTPS, so traefik is installed and bound to tcp/443, and I didn't feel like messing with multiple IPs and binding different services to different IPs. So I just tied Unbound's DNS over HTTPS to tcp/1443 and created a traefik service to front it with HTTPS, so it all works out in the end. Clients are none the wiser. Yes, there's a little extra config, and this definitely flies in the face of my mantra of discarding technical debt. But what's the greater debt – moving a port and making a reverse proxy entry, or setting up a whole new IP and playing around with all sorts of service bindings, ensuring the right ports are bound to the right IPs in the right places? Yeah, you're seeing it now.

So, on to the config. It's not the exact config, but it's close enough. I've changed some bits to protect the sanctity of the innards of my home net. On my systems, I'm running Ubuntu Jammy (that's 22.04 LTS), so I just installed the unbound package via apt, then dropped this in /etc/unbound/unbound.conf.d/server.conf (the file doesn't exist – you create it). There's a handy syntax checker called unbound-checkconf, which helps you figure out where you've managed to fat-finger things in your config. Ask me how I know how useful it is…

server:
    port: 53
    tls-port: 853
    https-port: 1443
    verbosity: 0
    num-threads: 2
    outgoing-range: 512
    num-queries-per-thread: 1024
    msg-cache-size: 32m
    rrset-cache-size: 64m
    cache-max-ttl: 86400
    infra-host-ttl: 60
    infra-lame-ttl: 120
    access-control: allow
    access-control: allow
    username: unbound
    directory: "/etc/unbound"
    use-syslog: yes
    hide-version: yes
    so-rcvbuf: 4m
    so-sndbuf: 4m
    do-ip4: yes
    do-ip6: no
    do-udp: yes
    do-tcp: yes
    log-queries: no
    log-servfail: no
    log-local-actions: no
    log-replies: no
    extended-statistics: yes
    statistics-cumulative: yes
    tls-service-key: /etc/letsencrypt/live/
    tls-service-pem: /etc/letsencrypt/live/
    http-endpoint: "/dns-query"
    http-nodelay: yes
    private-domain: ""
    do-not-query-localhost: yes
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"
    local-zone: "" transparent
    local-data: "   600 IN PTR"
    local-data: "   600 IN PTR"
    local-data: "   600 IN PTR"
    local-data: "   600 IN PTR"
    local-data: "   600 IN PTR"
    local-data: "   600 IN PTR"
    local-data: "  600 IN PTR"
    local-data: "  600 IN PTR"

remote-control:
    control-enable: yes
    control-port: 953
    control-use-cert: "yes"
    server-key-file: "/etc/unbound/unbound_server.key"
    server-cert-file: "/etc/unbound/unbound_server.pem"
    control-key-file: "/etc/unbound/unbound_control.key"
    control-cert-file: "/etc/unbound/unbound_control.pem"

forward-zone:
    name: "."
    forward-tls-upstream: yes

Embracing Simplicity. Again. This time, it’s DNS.

Public Enemy #1

I, like many, hate DNS. I tolerate it. It's there because, well, I need it. There are only so many IP addresses one can keep rattling around inside one's head, right? So, it's DNS.

For years, I ran the old standard, BIND, under Linux here at home. My old BIND config did a local forward to dnscrypt-proxy, which ran bound to a port on localhost and in turn pushed traffic out to external DNS servers like Cloudflare's or IBM's. I didn't think my ISP was entitled to snoop on what DNS lookups I was doing. They still aren't, so I didn't want to lose that regardless of what I ended up doing.

Out in the real world, my domain’s DNS was hosted by DNS Made Easy. They’ve got a great product. It’s reliable, and it’s not insanely expensive. It’s not nothing, but we’re not talking hundreds a year either. I think it’s about $50 a year for more domains and queries than I could possibly ever use. But, like many old schoolers, they’ve lagged behind the times. Yes, they’ve got things like a nice API, and do support DNSSEC, but DNSSEC is only available in their super expensive plans that start at $1700+ a year. That’s just not happening. So, I started looking around.

I landed on Cloudflare. They've got a free tier that fits the bill for me. Plenty of record space and a nice API – dare I say, a nicer API even – with DNSSEC included in that free tier. How do you beat free? I had been using a mish-mash of internal and external DNS with delegated subdomains for internal vs external sites as well. It was (again) complicated – and a pain in the rear.

So, I registered a new domain to use just for private use. I did that through Cloudflare as well. As a registrar, they were nice to work with too – they pass registration through at cost, and it was a nice and smooth setup. Internal stuff now consists of names like [host/app] under that new private domain. Traefik is set up using the Cloudflare dns-01 letsencrypt challenge to get certs issued to secure it all, and the connectivity, as discussed before in the other post, is all by Tailscale. The apps are all deployed using Docker with Portainer. The stacks (ok, they're just docker-compose files) in Portainer are all maintained in private GitHub repos. I'll do a post on that in more detail soon.

Ok, so what did I do with the DNS at home? Did I just ditch the resolver in the house entirely? I did not. In the end, I opted to dump BIND after all these years and replace it with Unbound. I had to do a bit of reading on it, but the configuration is quite a bit less complex, since I wasn't configuring zone files any more. I was just setting up a small handful of bits: which interfaces did I want to listen on, what did I want my cache parameters to look like, and what did I want to do with DNS traffic for the outside world – which is pretty much everything? In my case, I wanted to forward it to something fast and secured. I was already crushing pretty hard on Cloudflare, so their resolvers were an easy choice. I'm also using IBM's as well. All of those forward out using DNS over TLS (DoT). It worked for me on the first try.

Then I grabbed the Ubuntu certbot snap and told it to grab a cert for dns.home.$(newdomain).net, the name attached to this machine. After the cert was issued, it was a piece of cake to turn up both DNS over HTTPS (DoH) and DNS over TLS (DoT).

It was fairly easy to get DoH working on a Windows 11 PC. It was also super easy to craft an MDM-style config profile for DoT that works great on iOS and iPadOS devices. Still, Microsoft has Apple beat cold in this department: in the Apple world, if you configure a profile for DoT (the only way you can get it in there), you're stuck with it until you get rid of it – by uninstalling and reinstalling the profile.

On Windows? It was as easy as setting your DNS servers to manual, then cracking open a command prompt as Administrator and running (assuming your DNS server is…

netsh dns add encryption server=<your.dns.server.ip> dohtemplate=https://my.great.server/dns-query

Once you’ve done that, the template will show up in the list where you punch in DNS settings in the network settings, and you can turn on encryption for your DNS connection. It’s working great!

So, You Should Dump IPsec, Right?

Wrong. Probably.

So, since I just posted the other day about dumping my pile of Python scripts and IPsec VPNs and moving to Tailscale for my personal use case, several folks have sparked conversations with me about the topic.

In my case, it made complete sense to do something else. I was using a solution that was essentially held together with bubblegum, duct tape, and baling wire. It was fragile, it kept breaking, and let’s be real – I was bending the solution into a shape it wasn’t designed for – which is why it kept breaking in the first place.

You see, IPsec tunnels are intended to work when you’ve got stable, fixed endpoints. Over time, things have been done so that endpoints can become dynamic – but typically just one endpoint. With two dynamic endpoints, results become… unpredictable. I think that’s a kind way of putting it, even. That right there explains my repeated breakage problems.

So, if you’re still using traditional firewall & VPN in a more traditional use case, then yes, keep doing things more traditionally – keep on using IPsec VPNs. It’s quite honestly the best tool in the bag for what you’re hoping to accomplish in terms of securing data that’s in motion, provided you’re able to meet the bars of entry in terms of hardware support as well as supported feature set.

So, get rid of your firewalls? Not a chance. Get rid of my SRX firewalls and EX switches? No way, no how. You can have my Junos stuff when you pry it from my cold, dead hands. Heck, I make my living with Junos. But just like the whole story of the guy who only has a hammer and thinks everything is a nail, sometimes you’ve just got to use a different tool to do the job right.

But taking the time to think about how to break up with complexity and technical debt? Yeah, that’s totally worth your time. Sometimes that means saying goodbye to old friends, even when you forced them into places where they didn’t quite fit.

So, in the end the whole square-peg-round-hole thing? Stop doing that.

Ditching Technical Debt. Embracing Simplicity.

I work in networking. I’ve been doing that for a long time now. Along that journey, I’ve also had occasional detours into worlds like generic IT and data security as well. I also do volunteer work at a nonprofit. Plus, like many of you who work in tech, there’s stuff that lives at the home(s) of relatives that you maintain because you’re that sort of person.

Sometimes, you do it cheap, sometimes you do it right, and sometimes you do it somewhere in-between. Like where you’ve got DHCP-assigned WAN interfaces everywhere, because every site has home-user type Internet service, or occasionally the less-expensive business class. Anyhow, you can’t always count on having the same IP in the same place twice. BUT, you want things to be secured, and you don’t just want wide-open port forwards with plain old Dynamic DNS.

How things used to work, in the IPsec days…

You’ve got some Juniper SRX firewalls you bought previously for lab work & study, and you want to make use of them with IPsec VPNs – but to do it right, you really need static IPs. So, what do you do? You fake it. You just pretend you’ve got static IPs on the tunnel endpoints and configure it up. The tunnels come up, you post up your BGP sessions between your st0.0 IFLs, announce some routes, and put some reasonable security policies in place. Yes, I did have security policies in there. I was born at night, but it wasn’t last night, guys. But how did I keep it working with IPs changing all the time?

Here’s how I was solving that problem up until fairly recently. I’ve been hacking away at my DNS-o-Matic and DNS Made Easy updaters for a while now. The DME updater was much better, IMHO, as it directly updated a single, private zone that only I ever cared about, rather than relying on someone else to sit in the middle and do the updates for me. Plus, I wrote the whole thing from the ground up using DME’s API docs, so I knew exactly how it worked, inside & out. No excuses for it doing anything I didn’t understand, and honestly, I’m really happy with how well it’s been working. It’s been a great opportunity to get better at Python, in particular doing things in a more “Pythonic” way, rather than trying to “just get it done”, or worse, trying to make it work the way I used to do things in Perl or PHP years ago. Is it idiomatic? Not even close, but it does work pretty darn well.
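The heart of any of these updaters is the same small loop: learn the current public IP, compare it to the last one published, and only touch the provider's API when something actually changed. A stripped-down sketch of that logic – the what's-my-IP service is just one common example, and the actual record update against DNS Made Easy's API is deliberately left out:

```python
import json
import urllib.request

def get_public_ip(url="https://api.ipify.org?format=json"):
    """Ask a what's-my-IP service for our current WAN address."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["ip"]

def needs_update(last_ip, current_ip):
    """Only call the DNS provider's API when the address actually moved."""
    return current_ip != last_ip

# In the real updater, a changed IP triggers an update call against the
# provider's record API (DNS Made Easy's, in my case), then the new IP
# is stored as the "last published" value for the next pass.
```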

These containers all ran on Intel NUCs under Ubuntu Linux at each site. There was one more container on each of these NUCs as part of this operation: I had a set of Telegram bots that talked to each other to keep the sites informed of IP changes. So, if HOME changed its IP, the bot at HOME sent a message to the group that included the bots for NONPROFIT and INLAWS. Those bots saw the note that the IP had changed and knew they should go find the new IP of HOME, so they could update their tunnel endpoints. This in turn fired off a function that used the Junos PyEZ API module to update the IPsec tunnel endpoint IPs.
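The bot chatter itself doesn't need to be fancy. A sketch of the idea – the message format and any token or chat ID are made up for illustration, though sendMessage is Telegram's standard Bot API method:

```python
import urllib.parse
import urllib.request

def change_notice(site):
    """The message a site's bot posts when its WAN IP changes."""
    return f"IPCHANGE {site}"

def site_from_notice(text):
    """The other bots parse the group message to learn which peer moved."""
    parts = text.split()
    if len(parts) == 2 and parts[0] == "IPCHANGE":
        return parts[1]
    return None

def send_telegram(token, chat_id, text):
    """Post a message to the bot group via Telegram's Bot API."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    urllib.request.urlopen(url, data=data, timeout=10)
```

On the receiving side, a parsed notice is what kicks off the DNS lookup of the peer's new address and the PyEZ call to rewrite the tunnel endpoint.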

Did it all work? Yes, believe it or not, this actually all worked. Was it pretty fragile and not for the faint of heart? Oh yeah, for sure. Would I recommend doing it? Not a chance. So much so that I’m not even going to share the code, apart from the DDNS updaters. The other stuff is definitely hackjob territory. So, since it was so fragile and had the tendency to break, what did I do? Well, the first few times, I drove and fixed. Which frankly, sucked. After that, I installed an OpenVPN container at each of the locations. Later, I replaced those with linuxserver/wireguard containers. But, after it all broke like twice in about a month, I’d just about had enough. I cried Uncle and decided I was going to look for some other way to do this.

And that’s when my old pal Bhupen mentioned Tailscale to me. I was already into WireGuard, so making it easier, faster, and more useful was all on my short list. Drop the Tailscale client on the NUC, get it logged in, announce the local subnet into the tailnet (their name for the VPN instance) to make it a “subnet router”, approve the route announcement in the portal, and it’s going. I’ve got control over key expiry too. Security policy (naturally) moved from the SRX down to the Tailscale gateways, but their ACL language wasn’t too difficult to wrangle. It’s all JSON, so it’s reasonably straightforward.

The new Tailscale VPNs

So, with all the scripts gone and the IPsec stripped away, what’s it all look like? Well, we also added one more site into the mix – the in-laws’ vacation place. They bought a place, and I stuck a Raspberry Pi up there for future IoT use. Not entirely sure about the “what” yet, but they just updated the HVAC, and it’s all smart stuff, so I expect there will be instrumentation. Maybe something that spits out time series info to InfluxDB or some such. Who knows? Or perhaps HomeKit/Homebridge stuff. Time will tell.

In the time since I made the diagrams and wrote this up, things have also changed slightly on the home front. I’ve deployed a 2nd subnet router at Home. In the Tailscale docs, they say all over the place not to deploy two subnet routers announcing the same IP space, and generally speaking, it’s with good reason – traffic destined for the prefixes announced by those routers will be round-robin’d back and forth between them. In my case, since they’re on the same physical subnet, this is essentially ECMP routing, so it’s no big deal. I haven’t validated whether they’re really getting the hashing correct, but I haven’t noticed any ill effects yet, so I haven’t shut off the 2nd subnet router.

So, by dropping all the BGP sessions, IPsec tunnels, Python scripts, Telegram bots, and Docker containers, things have become much simpler, and much more stable. I’m really happy with Tailscale. So much so that I ended up subscribing at the Personal Pro tier. Great bunch of folks – can’t help but recommend them.

UPDATE: This ended up sparking a bunch of sidebar conversations. Go read what I had to say as a followup.

Data Visualization and You…

Sometimes there’s data. You’ve got a bunch of it, you need to work out how to represent it in a way that not only makes sense to you, but is also appealing in some fashion. I’m going to talk about a couple of different use cases in this post, each with their own unique data presentations. First, the sensors.

I’ve got a couple of SwitchBot Meter Plus sensors around the house. One is in my office, and the other is in the garage. There isn’t much to them – small little things, battery powered. Pretty much it’s a little monochromatic LCD screen with a temp/humidity sensor and a Bluetooth radio. That won’t do on its own, of course. So, I added SwitchBot’s Hub Mini to the party. It’s a little bridge device that plugs into the house’s AC mains, and has both BT and WiFi radios inside. While I haven’t cracked it open, the device shows up with a MAC address that suggests it’s little more than an ESP32 or ESP8266 microcontroller inside. With the hub in place, connecting the sensors to the SwitchBot cloud, a really important thing happens – the sensors become accessible via SwitchBot’s REST API. So, I’m using some custom-written Python code that runs under Docker to read the sensors. It turns out it was all surprisingly easy to put the pieces together. It was also a precursor to another project I went on to do, where I helped a friend use a similar sensor to control a smart plug that operates a space heater.

So, what does one do with a sensor like this? You read it, naturally. You keep reading it. Over and over at some sort of fixed interval. In my case, I’m reading it every 5 minutes, or 300 seconds, and storing the data in a database. This type of data isn’t particularly well-suited to living in a SQL database like MariaDB, Postgres, etc. This is a job for a time-series database. So, I called on InfluxDB here. It’s relatively small, lightweight, and very well understood. The Python modules for it are pretty mature and easy to work with even, so it was easy to implement as well. Total win. So, read sensor (convert C to F, since I’m a Fahrenheit kind of guy), store in database, sleep(300), do it again. Lather, rinse, repeat. Just keep on doing that for roughly the next, forever. Or until you run out of space or crash. That’s the code right there, in a nutshell.
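In sketch form, the loop really is that simple. The actual SwitchBot API read and InfluxDB write are stubbed out as injected functions here, since those details vary by setup:

```python
import time

def c_to_f(celsius):
    """SwitchBot reports Celsius; I'm a Fahrenheit kind of guy."""
    return celsius * 9 / 5 + 32

def poll_loop(read_sensor, write_point, interval=300):
    """Read sensor, convert, store, sleep(300), do it again. Forever."""
    while True:
        celsius, humidity = read_sensor()      # SwitchBot REST API call in real life
        write_point(temp_f=c_to_f(celsius),    # InfluxDB write in real life
                    humidity=humidity)
        time.sleep(interval)
```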

Sensors Data Visualization

So, what are we visualizing? At the right, you can actually see what I’m graphing. The InfluxData team were nice enough to include some visualization tools right there in the box with InfluxDB, so I’m happy to take advantage of them. Many folks would prefer to use something a bit more flashy and customizable like Grafana, and that’s totally cool. I’ve done it too, even with this same dataset, and the data looks just as good. Heck, probably even looks better, but for me, it was just one more container to have to maintain with little extra value returned. The visualization tools baked into InfluxDB are good enough for what I’m after.

LibreNMS WAN Metrics

Next up? Keeping an eye on what’s up with my WAN router’s Internet-facing link. Here at the homestead, I’m running LibreNMS to keep an eye on things. Nothing nearly as custom here – it’s more off-the-shelf stuff. It all runs (again) in Docker containers and, as you’d likely expect, uses SNMP to do the bulk of its monitoring duties. At the right, you can see some sample graphs I’ve got stuck to the dashboard page that give a last-6-hours view of the WAN-facing interface of my Internet router, a Juniper SRX300. You see the traffic report as well as the session table size. Within LibreNMS, I’ve got all sorts of data represented – even graphs of how much toner is left in the printer and the temperature of the forwarding ASIC in the switch upstairs in the TV cabinet. All have their own representations, each unique to the characteristics of the data.

Bottom line? Any time you’re dealing with data visualization, there is no one-size-fits-all. Spend the time with the data to figure out what makes the most sense for you and then make it so!

A Journey To A Smarter Dryer

This one was much more difficult. A lot more difficult.

If you recall from my post about the washer, I was able to pull off some fairly useful stuff without a ton of effort. Read a smart plug’s API to see how much power the washer is using to figure out when it turned on, then wait for it to turn off again, then let the fam know that the washer finished, and go take action so that the laundry doesn’t sit around for days, get funky and need to get re-washed. This was of course pretty easy simply because we were able to rely on the fact that the 120V motor in the washer draws well under 15A, the top end of the smart plugs I’m using, the Etekcity ESW15.
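That on/off detection boils down to a tiny state machine on the power readings. A sketch of one step of it – the wattage thresholds are purely illustrative, so tune them to your washer's actual draw:

```python
def next_state(state, watts, on_watts=50.0, off_watts=10.0):
    """One step of the washer watcher: returns (new_state, just_finished).

    Thresholds are assumptions, not measured values from my washer.
    """
    if state == "idle" and watts > on_watts:
        return "running", False      # motor spun up: a cycle started
    if state == "running" and watts < off_watts:
        return "idle", True          # cycle done: time to ping the family
    return state, False
```

Feed it one smart-plug reading per poll; when the second element comes back True, fire the notification.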

Sadly, when we moved into our house, we had an electric dryer. We’ve got natural gas in the house – heck, in the same room even, for the furnace and water heater. But back when the last washer sprang a leak and we needed a new washer in a hurry, it was unfortunately going to be months to get the matching gas dryer back in stock, so we just punted and stuck with the electric model. Sadly, this means we can’t take the same approach we did with the washer, since nobody makes a smart plug that works on 240V AC 30A circuits.

Unwilling to settle for relying on setting timers with Alexa, having to remind the kids to set timers, or just plain forgetting to do it, I started Googling about, looking for ways to go about monitoring the dryer. Monitoring energy use is the natural fit. When the dryer is in use, it’s consuming loads of energy, and when the clothes are dry, the energy use falls right off. This really shouldn’t be that hard to figure out, right? Right? Sadly, it was.

My next move was to play around with a split-core current transformer clamp and build a circuit with a burden resistor, reading the thing with a microcontroller. I read about the whole process in a handful of places online, and it didn’t seem too ridiculous to build the circuit, so I sourced the parts. I got a little breadboard, some jumper wires, the resistors, capacitors, the CT sensor clamp, and a sacrificial extension cord, which I’d use for my proof-of-concept test. You see, the CT clamp goes around a single conductor, not the whole cable assembly, so I needed to modify the cable slightly. Relax – the real cable was one of those “flat, side-by-side” types, so it would only mean peeling the conductors apart, not really cutting anything. Sadly, I never made it to that phase. During my POC, I was able to get readings back from the sensor, but they never made sense. I was using an ESP32 microcontroller with MicroPython, so maybe that’s related. Or maybe I had a bum CT clamp. Or something else was wrong. We’ll never know, since I gave up after several evenings of bashing my head against the desk.

Failing at the “point” solution of energy monitoring, I moved on to looking at whole-house power monitoring. Hey, if we can’t kill this fly with a newspaper, let’s try a missile, right? Sense landed at the top of the pile. It had the API I was after, though they sort of keep that on the DL. Not in love with that, since those sometimes disappear. If I’m going to drop a couple of hundred bucks on something to use the API for something, it better not just disappear on a whim someday. Plus, our panel is in my home office, recessed in the wall, and there’s not exactly a clean way to get the Sense WiFi antenna back out without it looking really weird. I could make it clean, but then there’d just be a random RP-SMA antenna sticking out of my wall. Interesting decor choice. Sure to be a selling point when we sell the house some day.

Which brings me to the vibration sensor. I was reading one day, still searching, and I came across Shmoopty, and my problems were (half) solved. Sure, I had a Pi Zero W already laying around and I could have just built exactly what he had done, but what’s the fun in that? Remember, I’m already invested. It’s overkill time. So, I ordered up a couple of those 801s vibration sensors and got to work. You know, it was surprisingly hard to get one that met my needs at the time. Why? Most of the 801s units out there are analog-only. Since I’m using a Raspberry Pi, I wanted a digital output, so I didn’t need to mess around with the need for extra ADC (Analog to Digital Conversion) circuitry, just to read a simple sensor. So, I had to order from AliExpress and wait the long wait for shipping from China.

After my sensors finally turned up, I worked out the arrangement of my project boxes and so forth in the laundry room. I landed on a wall-mounted box for the Pi with a 1m pair of wires connected to the sensor, with the sensor inside another small box, which is stuck to the top of the dryer using a little strip of 3M VHB tape. Shmoopty’s Python made it easy to figure out how to read the sensor, so I was happy to be able to draw my inspiration from that. His approach is to keep it small, run on a Pi Zero W, even make it renter-friendly, while mine is more of a “go big” approach – building a Docker container to run it inside of.

Well, at the end of it all, it shares a lot of common philosophy with the plugmon tool, in that it loops infinitely, looking for start and stop conditions. Instead of watching power consumption, it’s watching for the dryer to start vibrating a lot. When that continues for an appreciable amount of time, the dryer is declared to be “on”. Once it transitions back to “off”, it fires an event that causes a Telegram message to get sent to the family group chat – again, much like when the washer finishes!
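The debounce logic is the interesting part – a single vibration blip shouldn't flip the state. Here's a sketch of that state machine; the timing thresholds are my illustrative assumptions, not values from Shmoopty's code:

```python
class DryerMonitor:
    """Debounced on/off detection from a digital vibration sensor.

    start_secs / stop_secs are assumptions - tune for your machine.
    """
    def __init__(self, start_secs=30.0, stop_secs=120.0):
        self.start_secs = start_secs   # sustained shaking before "on"
        self.stop_secs = stop_secs     # sustained quiet before "off"
        self.running = False
        self.first_shake = None        # when the current shaking began
        self.last_quiet = None         # when the current quiet began

    def update(self, vibrating, now):
        """Feed one sensor sample; returns True when the dryer just finished."""
        if vibrating:
            self.last_quiet = None
            if self.first_shake is None:
                self.first_shake = now
            if not self.running and now - self.first_shake >= self.start_secs:
                self.running = True
        else:
            self.first_shake = None
            if self.last_quiet is None:
                self.last_quiet = now
            if self.running and now - self.last_quiet >= self.stop_secs:
                self.running = False
                return True            # fire the Telegram notification here
        return False
```

In the real container, `update()` gets called in a tight loop with the GPIO pin's state and the current time.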

Well, if you’ve made it this far, you’re ready to go check it out. So, get going and get reading. Smarten up that laundry room, report back what you did, and how you did it!

Cutting the Cord: How We Did It.

Ok, so we’re going to take a break from our usual geekery to talk about watching TV without a traditional Cable TV subscription. There’s a recurring discussion on our town Facebook group, and I keep ending up walking through parts of what I’m writing here.

Verizon FiOS Bill
A Sample FiOS Bill From Someone in My Town

My mind was absolutely blown this morning when I saw a picture of one person’s FiOS bill. It’s shown at the right. So that’s almost $2600/year.

Many folks will suggest you just go with an Internet-only package and a streaming service like YouTube TV, Hulu Live TV, Sling, or whatever, with Smart TVs or set-top devices like Apple TV, Roku, or Fire TV, and it’s tough to argue with the simplicity of that solution. It’s pretty much turnkey. Plug it all in, turn it on, log in, and you’re watching TV in a few minutes, even local channels with sports. Checking out our neighbor’s use case here and substituting in what many in town are paying, she could move to an Internet-only plan for about $75 a month and use one of the many streaming services for about $55-65/month, depending on which one. You’ve just turned $2600 a year into about $1700 a year. What could you do with an extra $900 in your budget? One thing, though: we made some assumptions here, like equipment – we’re assuming you’ve got all Smart TVs that can run whatever streaming service(s) you’re planning to use. Otherwise, you’re buying a device to hook up to each TV. That’s going to involve some up-front cost, which will eat into that $900, at least for the first year.

The solution I’ve built out at our house involves more up-front expense, but involves no recurring costs apart from our normal Internet access costs. In our case, this is the Verizon FiOS Gigabit plan. We’ve had FiOS Internet since we moved into our current house, about 15 years ago, back when it was 25/5 service. How the times have changed!

We’ve got a roof-mounted antenna that we installed a couple of years ago when we made the big switch. Not being the “climbing on the roof” type, I contracted this part out. I found a local company that did the job, including the antenna (a Channel Master CM-3016, now called the Advantage 45). Rather than connecting to a TV, the antenna is connected to a networked TV tuner device. In our case, we’re using an HD HomeRun (an HDHomeRun Connect 4K). This unit has 4 digital/ATSC tuners, 2 of which are ATSC 3.0 capable, and connected to our network using a wired Ethernet cable (not WiFi). ATSC 3.0 is the new standard that’s rolling out that supports 4K over-the-air broadcast TV.

So, that’s got local TV signal from the airwaves into the house and onto our network! How do we watch it? There are a couple of ways. SiliconDust (the folks who make the HDHomeRun box) offer apps for the major streaming devices and Smart TVs that let you access the device, decide what channel to tune, watch TV, even pause live TV. What’s not there though is all the DVR capabilities you’d want to have. No ability to record shows. So let’s talk about how I’m getting that.

To get DVR functions, I’m using the Plex Media Server. I’m running Plex on a custom-built server on our network that runs the Ubuntu Linux operating system, with our media files stored on a Synology Network Attached Storage array. We’re using the Network File System (or NFS, if you’re a Unix type) to mount the share on the Plex system from the NAS. It works really well. Plex has great support for the HD HomeRun devices too. You go into the Plex settings, tell it to take a look around your network, it finds the tuner, scans for available channels, you pick the ones you want to make available, then Plex grabs guide data, just like you’re probably used to having on a cable box.
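If you want to replicate the NFS piece, it looks roughly like this. The hostname, export path, and mount point here are placeholders for whatever your NAS actually exports, not my real paths:

```shell
# Mount a hypothetical Synology NFS export on the Plex host
sudo mkdir -p /mnt/media
sudo mount -t nfs synology.local:/volume1/media /mnt/media

# To make it survive reboots, add an /etc/fstab entry along these lines:
# synology.local:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0
```

The `_netdev` option tells the system to wait for the network before trying the mount, which matters on a server that boots faster than its switch port comes up.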

Plex Program Guide
Plex Program Guide

Once you’ve reached this point, you’ve got access to a pretty normal looking programming guide, and even familiar DVR features like recording single episodes, or all upcoming episodes. If you go the latter route, you’ve got even more options at your disposal, like the ability to record only new episodes, choose how many episodes to retain, how long to retain them, whether you prefer HD episodes, or only want HD episodes. Maybe you only want to record from a specific channel, or only at a certain time. Those options are useful when a show is syndicated and airs on multiple channels at multiple times. Then there’s extra fun stuff like built-in commercial skip. You can mark commercials for skipping on playback, or let Plex take the wheel and try its hand at chopping the commercials out totally on its own. I’ve had a mix of experiences, both good and bad, letting it go on its own. Sometimes it’s great, and other times, I’ve missed Final Jeopardy during a tight match. So, I don’t typically go beyond “Mark for Skip”. Honestly, it’s pretty accurate.

Our Home Plex Environment
Our Home Plex Environment

So, now that I’ve described how the bits fit together, let’s actually take a look at a diagram (Click to enlarge). It’s more of a logical diagram, really. I work in networking, so I prefer to keep network elements like my routers, switches and WiFi Access Points separated, so those are all individual pieces in our house. In yours, they’re probably not. You may be used to a converged device that’s a Router/Switch/WiFi Access Point all-rolled-into-one, and that’s alright. You do you! But, this is how I’ve built what we’ve got. The different colors indicate different types of connections. Black is the coax from the antenna to the HD HomeRun. Red is the fiber optic cable from the street to the Verizon Optical Network Terminal (aka ONT) on the side of the house (in real life, it’s probably yellow, but yellow on a white background? really hard to see!). Blue is plain old Ethernet cable.

So, if you were going to build something like this, what would you want to buy, what would it cost, and when would you start saving money compared to getting taken to the cleaners by Verizon? Great questions. The answers really depend on what you like to watch on TV. If all you watch are local network shows, you’ll recoup your investment in about a year. If you watch other stuff, it might take longer. If your mobile phone plan includes streaming services for free, as many do, that’s a bonus. For example, we’re a T-Mobile house. We get free Netflix thrown in with our family plan. Some Verizon customers get Disney+ for free. There are tons of deals from carriers, so figure out what works best for you and exploit it to your advantage.

If you want a simpler, more “turnkey” experience, I’d suggest looking toward the Synology NAS as a solution. Plex runs on Synology, works great, and your storage is built right in there. There are trade-offs, but honestly, if you’re savvy enough to know you’re impacted by those trade-offs, you’ll also know whether the value proposition is there for you to spend more to work around them. You could shave some $$ by building your own small server out of something like a BeeLink Mini PC, available from Amazon, installing something like Ubuntu Linux on there, and running Plex on it yourself, but there’s a much steeper learning curve on that path.

Converged Plex Environment
Converged Plex Environment

If you’re interested in pursuing this idea, check out the diagram at the right for a simplified version of what we’re running at our house. You’ll note the use of the Verizon-provided FiOS router, and the Synology NAS or the small server. But the rest is pretty much the same. Now of course, you don’t have to use the same Channel Master antenna I did, but I’m here to tell you that it’s pretty difficult to beat their performance, especially at their price point. Speaking of price points… on to what you’ve all been waiting for. The numbers. First, some assumptions I made in my analysis. I looked at my neighbor’s use case up above: 7 TVs, with a DVR requirement. I kicked the Internet speed up from 200M to Gigabit, using what I’m paying per month with the autopay discount for the cost modeling. The full comparison is a 3-Year Total Cost of Ownership (TCO) model. First up, building with the Synology NAS with 6TB of redundant storage, then the Small Server build, followed by keeping the FiOS Triple Play bundle at $214.99 a month. That last one is the easiest to calculate, of course, since it’s just 36 months of service at $214.99.

| Item | Link | Unit Price | Qty | Total |
|---|---|---|---|---|
| Hard Drives | | $129.99 | 2 | $259.98 |
| Plex Software | | $119.99 | 1 | $119.99 |
| Antenna w/Install | NJ-based Installer Company I used | $250.00 | 1 | $250.00 |
| HD HomeRun | | $199.99 | 1 | $199.99 |
| Roku Stick 4K+ | | $69.00 | 7 | $483.00 |
| FiOS Gigabit | | $75.00 | 12 | $900.00 |

Converged Plex, Synology Build, Year 1 Total Cost of Ownership

| Item | Link | Unit Price | Qty | Total |
|---|---|---|---|---|
| BeeLink U59 | | $250.00 | 1 | $250.00 |
| 2TB SSD | | $187.66 | 1 | $187.66 |
| Plex Software | | $119.99 | 1 | $119.99 |
| Antenna w/Install | NJ-based Installer Company I used | $250.00 | 1 | $250.00 |
| HD HomeRun | | $199.99 | 1 | $199.99 |
| Roku Stick 4K+ | | $69.00 | 7 | $483.00 |
| FiOS Gigabit | | $75.00 | 12 | $900.00 |

Converged Plex, Small Server Build, Year 1 Total Cost of Ownership

3-Year Total Cost of Ownership Model

As you can see in the graph shown at the right, the Small Server and the Synology NAS builds are quite close in the 3-Year TCO model, landing about $200 apart across the entire time. Given this, unless you’ve got serious reasons to pursue the Small Server route, I’d recommend going down the Synology NAS route. You get built-in drive redundancy, more storage space, and a turnkey web-driven management interface that falls out of the box ready to go.
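If you want to sanity-check the model yourself, the arithmetic is simple enough to sketch in a few lines of Python. Note that this only sums the items listed in the tables above (the Synology unit itself isn’t priced there), so treat these as illustrative totals of the listed items, not complete build costs:

```python
MONTHS = 36
INTERNET = 75.00       # FiOS Gigabit, per month
TRIPLE_PLAY = 214.99   # Triple Play bundle, per month

# Year-1 upfront costs, summed from the tables above
synology_upfront = 259.98 + 119.99 + 250.00 + 199.99 + 483.00
small_server_upfront = 250.00 + 187.66 + 119.99 + 250.00 + 199.99 + 483.00

def tco(upfront, monthly, months=MONTHS):
    """One-time costs plus recurring service over the model period."""
    return round(upfront + monthly * months, 2)

print("Triple Play :", tco(0, TRIPLE_PLAY))                 # 7739.64
print("Synology    :", tco(synology_upfront, INTERNET))     # 4012.96
print("Small server:", tco(small_server_upfront, INTERNET)) # 4190.64
```

That puts the two builds about $178 apart over three years, which squares with the “about $200” in the graph, and both come in thousands under the Triple Play.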

Please, don’t forget, there’s also nothing wrong with just simply dropping the Triple-Play and picking up a streaming TV service. You’re going to cut your monthly expenses in half here. But, if you want to go deeper, and learn a few things along the way, you can. This is the way.

The Robots are Speaking…

So many applications and processes involve notifications. As we begin to automate processes, it becomes increasingly important to know whether the tasks we’ve initiated succeeded. That’s where automated notifications come into play. In my case, these often take the form of chatbots that send messages to let me know that something happened, or that a process either succeeded or failed.

I’ve settled on Telegram as my platform of choice for this. What’s attractive about Telegram? It’s available for a wide variety of platforms – iOS/iPadOS, Android, macOS, Linux, Windows, there’s even a web client.

So, how does one create a bot? You have a conversation with the BotFather. BotFather is a Telegram bot that creates bots for you (how meta, right?). The bot walks you through the whole process. When it’s complete, you’re given a token – this is a secret value that serves as the “keys” to the bot. Once you’ve finished this process, you’re almost ready to start using the bot in your automations. Next, you start a chat with the bot, just like you would with another human. Send the /start command to the bot to start it up. Once you’ve done that, you want to figure out the chat_id for that chat. Fortunately, this is pretty straightforward to do.

To get the chat_id value for a private chat with your bot, you visit this URL: https://api.telegram.org/bot[your-bot-token]/getUpdates

Want to use the bot to notify multiple people? Create a Group Chat and add the bot to it. Send a message to the group and reload that URL up above. Check the updates for the Group Chat messages, and look for the chat_id for the Group Chat in there. How can you tell the difference between a regular chat with the bot and a Group Chat that the bot belongs to? Easy. Group Chat ID values are negative integers, while individual chat IDs are positive. Easy, right?
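To make that concrete, here’s a little Python that digs the chat_id values out of a getUpdates response. The JSON here is a fabricated sample in the shape Telegram returns (real responses carry a lot more fields); point the real request at the URL above:

```python
import json

# Hypothetical getUpdates response: one private chat, one Group Chat
sample = json.loads("""
{
  "ok": true,
  "result": [
    {"update_id": 1, "message": {"chat": {"id": 123456789, "type": "private"}}},
    {"update_id": 2, "message": {"chat": {"id": -987654321, "type": "group"}}}
  ]
}
""")

# Walk the updates and pull out (chat type, chat_id) pairs
chats = [(u["message"]["chat"]["type"], u["message"]["chat"]["id"])
         for u in sample["result"]]

for chat_type, chat_id in chats:
    # Negative IDs are Group Chats; positive IDs are individual chats
    print(chat_type, chat_id)
```

Once you’ve spotted the ID you want in the output, that’s the value you’ll hard-code (or better, stash in a config file) for your notifications.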

Ok, so you’ve got the Token and a chat_id value. It’s time to put those bits to work. Here’s an example, in Python, using the python-telegram-bot module. One heads-up: version 20 of that module switched to an async API, so this synchronous example targets the older releases – install with pip install "python-telegram-bot<20" if you want to follow along exactly.

#!/usr/bin/env python3

import telegram

# Token from BotFather, and the chat_id you dug out of getUpdates
myToken = '123456789:ABC-123456-foobarbaz9876543210'
chatID = -123456789  # negative, so this is a Group Chat

bot = telegram.Bot(token=myToken)
bot.send_message(chat_id=chatID, text="Hello World!")

This is just one of many ways possible to send notifications. Find the one that works best for you and your workflows and go with it! If you want to take a shot at using Telegram, give their docs a read before you start.