jonw's mayhem academy

Canadiana. Tech. Dogs.

I consider the internet to be one of humankind’s greatest inventions. The notion of networking the entire globe together is incredibly ambitious, fraught with technical and political issues, requires unprecedented global cooperation, and, quite frankly, isn’t working out all that well.

Image credit: [AlexasFotos](https://pixabay.com/users/Alexas_Fotos-686414/)

My career in internet technology now spans decades and in that time I’ve come to realize that there is one single truth about the internet that trumps every other characteristic of it:

The fact that the internet works at all is a flat out miracle.

Read more...

A Primer On Berkeley Packet Filters (BPF)

image courtesy of pixabay.com

I was recently tasked to investigate Berkeley Packet Filters (BPF) as a possible replacement for our iptables firewall system. I had never heard of BPF, but that has never stopped a professional sysadmin and it wasn’t going to stop me now. I dutifully started searching for BPF, what it was, and what we might be able to do with it. I found lots of information, but it was mostly geared towards someone who already knew what BPF was, which I definitely was not. It took me a while to get a grip on the subject matter because I could not find a simple primer to bootstrap my knowledge. So, I wrote one and here it is.

This will not be a deep-dive into BPF. Rather, it will be an overview of its history, some information on what it is and how to use it, and a pointer to some tools to get you started.

Who names this stuff?

BPF has been around a long time and the first confusing thing I ran across was the naming history. The original BPF was named just that: BPF. It was introduced into the kernel in version 2.1.75 in 1997. A few years later, BPF was rewritten to be more capable and started showing up in kernel versions 3.16 onwards, depending on the architecture. Being more capable, it was only natural that it was given the moniker “eBPF” for ‘extended’ BPF. The original BPF then became referred to as ‘classic’ BPF, or “cBPF”. These days, cBPF has been completely abandoned and eBPF is backwards compatible with any cBPF code out there, so the name has just gone back to simply BPF. That explanation alone will save you hours of searching and trying to make sense of the different names used in the many historical documents about BPF.

Got that?

BPF —> cBPF
eBPF —> BPF

What is it?

BPF started life as a network packet filter. It allows sysadmins to write filter code and pass it to the kernel. The kernel returns packets that match the code, effectively dropping those packets that do not. These days, BPF is capable of much, much more than just network filtering.

BPF programs are compiled into bytecode that the kernel runs. The kernel contains a JIT (Just In Time) compiler and a virtual machine to execute the code. BPF programs are simple step-by-step programs that evaluate some condition and return a value.

At first glance, allowing user code to run in the kernel sounds like a terrible idea. If a user space program crashes, that’s annoying but fixable. If the kernel crashes, you have a bigger problem. To mitigate this, the kernel’s verifier ensures that a program cannot crash the kernel by reviewing the code to make sure it cannot get stuck or run away. It does this by enforcing some simple rules, such as making sure that a BPF program can only jump forward during execution, never backwards, to avoid runaway loops.

What can I do with it?

That’s the million dollar question. It’s kind of like asking “what can I use C for?” There’s probably no functional limit to the possible uses of BPF. It can make current things you do much faster, and it can introduce new functionality that you can’t do now at all. Some examples are probably in order here and the bcc github project comes stocked with a tools directory full of python scripts to get you up and running with some quick wins. More on that later.
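To make that a little more concrete, here is the flavour of one-liner you can run once the tools covered later in this post are installed. Treat it as a sketch using bpftrace (introduced below): it attaches a tiny BPF program to the execve tracepoint and prints the PID and command name every time anything on the system calls execve().

$ sudo bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%d %s\n", pid, comm); }'

That is essentially a one-line execsnoop, a tool we will meet properly in a moment.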

A faster iptables

This is the door through which I entered the BPF arena. Many organizations that process large amounts of network traffic end up one day realizing that iptables isn’t performant enough at scale. When that happens, the next logical technology is something that can do the same job, but with less load on the system. BPF excels at this.

The iptables command runs in user space, but the actual filtering happens in the kernel’s netfilter hooks, which sit fairly deep in the networking stack. Every packet coming in on a network card has to be fully received and turned into a kernel socket buffer before the iptables rules are evaluated against it. That work takes almost no time at all, so it is unnoticeable until a certain level of scale is achieved.

BPF is faster than iptables because packets can be inspected and dropped much earlier, at hooks like XDP (eXpress Data Path) that run right in the network driver before the rest of the stack ever sees the packet. Earlier processing means less wasted work per packet. When you’re looking to drop 10 million packets a second, you need to look at BPF.
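To give a sense of what that looks like in practice, attaching a compiled BPF/XDP object to an interface is a one-liner with iproute2. This is a sketch only: drop_udp.o is a hypothetical, pre-compiled BPF object file, not something that ships with any distribution.

$ sudo ip link set dev eth0 xdpgeneric obj drop_udp.o sec xdp
$ sudo ip link set dev eth0 xdpgeneric off

The xdpgeneric hook works on pretty much any driver; native xdp is faster where the driver supports it.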

This is such a common use for BPF that iptables comes with an extension that supports BPF bytecode. For example, this iptables rule:

/sbin/iptables -A INPUT -p udp -j DROP -m comment --comment "Drop UDP packets"

Becomes this BPF rule, and we can still use iptables to inject it into the kernel if the BPF extension is enabled:

/sbin/iptables -A INPUT -m bpf --bytecode "15,48 0 0 0,84 0 0 240,21 0 5 96,48 0 0 6,21 8 0 17,21 0 8 44,48 0 0 40,21 5 6 17,48 0 0 0,84 0 0 240,21 0 3 64,48 0 0 9,21 0 1 17,6 0 0 65535,6 0 0 0" -j DROP -m comment --comment "Drop UDP packets"

Bonus tip: You will want to ensure you write excellent comments because the bytecode is much less readable than normal iptables syntax.
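You might be wondering where that blob of numbers came from, because nobody writes it by hand. If memory serves, the iptables source tree ships a small helper called nfbpf_compile that turns a regular pcap-style filter expression into exactly this comma-separated bytecode format. The invocation below is my best recollection, so double check it against your distribution before trusting it:

$ nfbpf_compile RAW 'udp'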

A better system analyzer

There are a number of user space tools like top and ss that can give a sysadmin information on what is happening on a system at any given time. However, they generally only sample periodically and it is easy to miss short-lived processes or quick block I/O issues that cause problems but don’t get picked up by these tools. BPF sees everything because it runs in the kernel.

execsnoop and opensnoop are excellent examples of BPF tools: execsnoop can tell you about every single process that executes on your system, no matter how short-lived, and opensnoop shows every file that gets opened.

# ./opensnoop.py
PID    COMM          FD ERR PATH
22771  pickup        12   0 maildrop
22905  opensnoop.py  -1   2 /usr/lib64/python2.7/encodings/ascii.so
22905  opensnoop.py  -1   2 /usr/lib64/python2.7/encodings/asciimodule.so
22905  opensnoop.py  12   0 /usr/lib64/python2.7/encodings/ascii.py
22905  opensnoop.py  13   0 /usr/lib64/python2.7/encodings/ascii.pyc
1      systemd       13   0 /proc/577/cgroup
1      systemd       13   0 /proc/802/cgroup

Another tool, biolatency, has a funny sounding name, but it looks at Block I/O Latency (see the name, now?) and it can help identify disk read and write issues.
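If you want to kick the tires on it, the bcc version of biolatency takes an optional interval and count, so something like the following should print a block I/O latency histogram every second for five seconds. Consider it a sketch of typical bcc tool usage rather than a full tutorial:

# ./biolatency.py 1 5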

A complete network tracer

BPF can be used to trace calls to kernel network functions like connect() and print those connections to the terminal. You will not miss a single TCP connect this way.

# ./tcpconnect
PID   COMM    IP SADDR            DADDR            DPORT
1479  telnet  4  127.0.0.1        127.0.0.1        23
1469  curl    4  10.201.219.236   54.245.105.25    80
1469  curl    4  10.201.219.236   54.67.101.145    80
1991  telnet  6  ::1              ::1              23

How do I get started?

There are a couple of git repos full of tools that will help you get started. They’re primarily filled with example python scripts, but those scripts are so useful they may be all you need. Or, at least, they can provide a strong springboard to customize for your needs instead of starting from scratch. There are also some tools you’re probably already familiar with that can help, and of course good old-school reading is always effective.

Sample tool repos

The two main projects you’ll likely end up using are bcc and bpftrace. They are in a github project named IOVisor which has a number of related interesting projects in it as well.

These tools are heavily used in Brendan Gregg’s recently released book entitled BPF Performance Tools. The book will almost certainly be overkill for you because Gregg goes into detail about everything BPF can be used for, which is a pretty large knowledge surface. However, the book is laid out so that you can jump around to the things you want to know right now.
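Getting the tools installed is not much work on a mainstream distribution. On Ubuntu, something like the lines below should do it; note that the Ubuntu packaging renames the bcc tools with a -bpfcc suffix (opensnoop becomes opensnoop-bpfcc, and so on), and package names may differ on other distros:

$ sudo apt install bpfcc-tools linux-headers-$(uname -r)
$ sudo apt install bpftrace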

Cloudflare tools

Cloudflare publishes a lot of their tools for others to use. The tools the Cloudflare developers have released for BPF are concerned mostly with the ability to handle DNS packets. As such, they provide a decent baseline to learn some things from, but unless you’re also in the DNS business, the tools won’t be an exact match for you. I found it was easier to use the bcc and bpftrace tools than it was to modify the Cloudflare tools.

Existing tools

Believe it or not, you can learn a lot about generating BPF-compatible bytecode from that old standby tcpdump. It has a -d option which causes tcpdump to print the generated bytecode instead of capturing packets.

# tcpdump -ddd 'port 443 and tcp'
22
40 0 0 12
21 0 7 34525
48 0 0 20
21 17 0 132
21 0 16 6
40 0 0 54
21 13 0 443
40 0 0 56
21 11 12 443
21 0 11 2048
48 0 0 23
21 9 0 132
21 0 8 6
40 0 0 20
69 6 0 8191
177 0 0 14
72 0 0 14
21 2 0 443
72 0 0 16
21 0 1 443
6 0 0 262144
6 0 0 0

Unfortunately, this bytecode can’t be jammed into BPF as-is because tcpdump starts examining the packet further along than BPF does. But you can still see the basic bytecode.

Fun fact: a single -d will get you almost readable bytecode, which is a great learning tool. This is the same filter, just written out line by line, and you can see where it “jumps” (jt for jump-if-true, jf for jump-if-false) to different lines in the code depending on how the evaluation of that line turned out. Also note that the jumps are always forward, which is part of the kernel’s safety enforcement that I mentioned earlier.

# tcpdump -d 'port 443 and tcp'
(000) ldh      [12]
(001) jeq      #0x86dd          jt 2    jf 9
(002) ldb      [20]
(003) jeq      #0x84            jt 21   jf 4
(004) jeq      #0x6             jt 5    jf 21
(005) ldh      [54]
(006) jeq      #0x1bb           jt 20   jf 7
(007) ldh      [56]
(008) jeq      #0x1bb           jt 20   jf 21
(009) jeq      #0x800           jt 10   jf 21
(010) ldb      [23]
(011) jeq      #0x84            jt 21   jf 12
(012) jeq      #0x6             jt 13   jf 21
(013) ldh      [20]
(014) jset     #0x1fff          jt 21   jf 15
(015) ldxb     4*([14]&0xf)
(016) ldh      [x + 14]
(017) jeq      #0x1bb           jt 20   jf 18
(018) ldh      [x + 16]
(019) jeq      #0x1bb           jt 20   jf 21
(020) ret      #262144
(021) ret      #0

More information

My main interest in BPF is for networking stuff. As such, I need to know how packets are constructed so that I can write code that loads the correct bits of a packet during execution to ensure that my evaluations work. Jeff Stebelton’s primer on BPF from a networking perspective (PDF) helped me a lot.

Happy filtering!

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


Sometimes I write short stories…

There wasn’t always magic. It’s hard to believe this burned out shell of a planet was actually a pretty happening place just a few years ago. About 10, or so. About 10 years ago is when the first one was born and things changed. The first child with abilities we hadn’t seen before. She heralded the beginning of something great.

Nobody really believed that magic had come to the world unless they had seen her with their own eyes. And where there was one, there were more. A few new magic babies at first in the western world, a handful more on other continents that we knew about. Who knows how many exist in places nobody ever goes. And within a few years they were almost a normal occurrence. An entire generation of babies that could bring down houses with their screams and cause beautiful flowers to sprout and grow in just a few minutes with their innocent giggles. It was a chaotic time, to say the least.

Read more...

Where we’re going, we don’t need gas.

I’m very happy we’ve reached the point where I can delete the Twitter app from my phone again. Twitter has never had the pull on me that Facebook has. I can easily leave Twitter for months, perhaps years, without missing it. Until there’s someone like Trump in office. Twitter is basically useless outside of the US. Or, at least, there are many other better ways to get non-US news than Twitter. But when that heady mix of a POTUS that can’t stop tweeting, coupled with the ludicrously idiotic nature of those tweets…ah, well…what can I say? I’m not a rock. I love a good Twitter train wreck as much as the next person. But the next 2 years should be relatively quiet in the US so seeya later Twitter and welcome back mental health.

Where’s the passion gone?

I work in technology because I love it. I literally followed that old maxim “do what you love and the money will follow”. Well, I am here to tell you that it can take a hell of a long time for the money to follow. I had some extremely lean years yelling “no, L-I-N-U-X, like Linus but with an X!!” into the phone at puzzled recruiters. But eventually the money showed up. Not everyone in tech is here because they love it. In my class at college, when formal IT was fairly nascent, there were people who chose the program because they thought it was going to be a lucrative career. They were right, but probably not for them. IT is basically black magic and if you don’t have an aptitude or passion for it, you’re going to find yourself turning into a toad.

Passion is a big part of this industry. In the early aughts, I was an angry GNU/Linux user. Angry because the Microsoft devil had taken all our freedoms away, and I openly showed it by putting the “GNU” in front of Linux. Many of you won’t recognize that anyone calling Linux “GNU/Linux” is definitely a radicalized Linux user who you should not accompany into dark spaces alone.

Every generation begets another wave of angry GNU/Linux users, as well as a bigger and better-behaved batch of regular old Linux users. It’s like Eternal September for operating systems, and as I age into my profession I find I have less time for the GNU heads. It’s not that they’re wrong – Open Source Software (OSS) is preferable. It’s just that they can’t conceive that the right tool for a particular task may not, in fact, be the OSS tool. I once watched a GNUey colleague rack up 4 extra billable hours to a client fiddling around with OpenVPN on his Linux, sorry – his GNU/Linux – laptop in order to connect to the client site to do some work. I, also having work to do on this client’s site, fired up a Windows VM to use their native VPN client and was in and out in an hour.

Experienced people will always choose the right tool for the job. It’s the rookies who flail about, desperately trying to prove that their OSS tool is the best tool and potentially waste tons of time and money in the process. But nothing is less sexy than budgets and timesheets, so at some point, you have to choose a path: do I want to be underemployed (or unemployed) because I’m tied to making emotional decisions? Or do I want to take on increasingly complex roles? Either is fine with me. You do you, and current-day me being me will always try to choose the right tool from all the available tools.

I’ll have 15,000 burritos, please.

Here’s my one-liner backgrounder on Bitcoin:

Bitcoins are “mined” by solving complex math problems that take tons of computing power to solve.

In 2020, it took 741 kilowatt-hours to mine a single Bitcoin. What’s a kilowatt-hour? Who cares. What’s more fun is to look at what those 741 kilowatt-hours could have been used for instead of silly cryptocurrency shenanigans.

According to the Northern Virginia Electrical Cooperative, you can run a ceiling fan for almost 2 years on what it takes to mine a single Bitcoin. Or:

  • Blend about 14,000 smoothies

  • Microwave at least 15,000 frozen burritos

  • Trim more than 2,000 miles of weeds

  • Make over 44,000 quarts of ice cream

  • Toast almost 12,000 slices of bread

You get the point.

Virginia power rates are about 11.08 cents per kilowatt-hour, so 741 kilowatt-hours costs a little over $80. A Bitcoin trades for about $30,000 today. So, hey, it’s still a helluva deal!

Conversely, those 741 kilowatt-hours can be used to process about half a million Visa transactions.

I have seen it all now.

The Back to the Future movies made the otherwise unknown DeLorean car company a household name. Even with the modified flux capacitor (not a thing), that car looked sweet. It was one of the first cars to hit the public consciousness with those amazing gull-wing doors.

There were 4 DMC-12’s in the movie, but who’s counting? They’re mostly in the hands of collectors these days. Although, believe it or not, I saw one in a car show in my tiny rural town of fewer than 4,000 people a few years ago. I also sat in one of the movie DeLoreans at Universal Studios park once, but damn that was a long time ago.

Well, there may be more DMC-12’s on the road soon and they might be electric.

National Highway Traffic Safety Administration (NHTSA) has completed a regulation permitting low volume motor vehicle manufacturers to begin selling replica cars that resemble vehicles produced at least 25 years ago.

That said, with EVs becoming more mainstream, we’ve been considering switching to an all-electric as the future. It certainly makes for an easier path through the emissions maze which still looms large over any internal combustion engine.

I don’t really have an opinion on gas versus electric, but I have a really strong opinion on making more DeLoreans in general and that opinion is hell yes!

Hopefully, they’ll be a little more macho than the Harley Davidson LiveWire. Honest to god, an electric Harley. That sounds like a sewing machine. I can die now. I’ve seen it all.

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


Dressing Up Your Scripts With GUI Elements

I’m working on another “weekend” project that has been bubbling for a long time. I use the Sucuri Web Application Firewall (WAF) on all my sites. The primary purpose of a WAF is to block bad requests, which is great, but it has the side effect of making traffic analysis difficult because website logs end up in two different places. The “good” requests make it to my web server and are in my access logs. The “bad” requests that are blocked by the WAF obviously don’t make it through to my server access logs. While Sucuri offers Security Information and Event Management (SIEM) integration for enterprise customers, regular customers have to make API calls to pull event data, called “audit trails”, from the WAF Application Programming Interface (API).

I am building a little script named Tiny SIEM to handle those audit trails and provide some rudimentary analysis for us regular folks.

I am not a developer. I am a sysadmin who builds small, discrete tools to achieve a single task. I always favour simplicity and ease of development over the user experience because the users of my tools are always highly technical. Except when they’re not.

Sucuri WAF customers are not always highly technical. Technical people would have no problem configuring and using Tiny SIEM, but I’m looking to make it slightly easier. I emphasize the word “slightly” – it’s still going to be a script because my mission in life is to replace everything with a small shell script – and it’s still going to assume it is running on a Linux box because that’s my world view.

I know from experience that non-technical users are going to mess up the configuration if they have to type manual entries into a config file. To address this, I am using YAD to display GUI dialog boxes in my script to collect the data Tiny SIEM needs to do its job. This allows me to prompt for the specific information I want, and to validate that input much more easily than if users write whatever they want into the config file.

What is YAD?

YAD is a “Yet Another Dialog” application written by Victor Ananjevsky in 2016 [SourceForge code repo link]. The last commit was 2017 so it’s no longer in development, but for my purposes it doesn’t need to be. It’s able to do everything I want and because my use-case is single user, local mode, I don’t see a situation where a critical security update would be needed. YAD is available in the Ubuntu software repositories and I suspect other repos as well. It requires the GTK libraries so if you’re already running Gnome it will be a small install using something like:

sudo apt install yad

The “Yet Another…” naming convention is popular with code authors who are developing their own way of doing something that has been done before. Other applications that can provide graphical dialogue boxes in scripts are Zenity, Dialog, Whiptail and probably others.

Using YAD

There are tons of YAD usage examples here, so I won’t go into too much detail in this post. What I am more interested in is how to parse and validate the input. Let’s go back to my dialogue box image above and populate it with some data.

I use this data to populate a configuration file. When Tiny SIEM launches, it looks around to see if it has been configured yet. One of the things it looks for is an existing configuration file. Tiny SIEM needs my Sucuri API credentials to grab the audit trails, and it needs to know the domain these credentials belong to in order to properly name the audit trail files that it will download. I collect that info in a YAD box and, after doing some basic validation, I write it to the conf file.

$ cat tinysiem.conf
DOMAIN=linkcho.mp
APIKEY=MYAPIKEY
APISECRET=API_SECRET

The next time Tiny SIEM runs, it will recognize that it has been configured and it will not prompt for this information. Future iterations of Tiny SIEM will likely support multiple domains, but you have to start somewhere.

YAD uses the pipe symbol | as a field delimiter which is sort of odd, but it works well enough for my purposes because I don’t expect to encounter any legitimate pipes in the data I am collecting from users. I use simple and widely available tools like awk to write this data to the configuration file.

# Display the YAD window
res=$(yad --title="" --text="Please enter your Sucuri info:" --form \
  --field="Sucuri API Key" --field="Sucuri API Secret" --field="Domain name")

# Validate data here....

# Write to configuration file
echo "$res" | awk -F'|' '{print "DOMAIN="$3}' >> tinysiem.conf
echo "$res" | awk -F'|' '{print "APIKEY="$1}' >> tinysiem.conf
echo "$res" | awk -F'|' '{print "APISECRET="$2}' >> tinysiem.conf
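For the “validate data here” step I hand-waved above, even a couple of quick checks go a long way with non-technical users. Here is a minimal sketch that assumes the same three pipe-delimited fields as the yad call above and simply refuses to write a config if anything is empty:

apikey=$(echo "$res" | awk -F'|' '{print $1}')
apisecret=$(echo "$res" | awk -F'|' '{print $2}')
domain=$(echo "$res" | awk -F'|' '{print $3}')

# Refuse to continue if any field was left blank
if [ -z "$apikey" ] || [ -z "$apisecret" ] || [ -z "$domain" ]; then
    yad --title="" --text="All three fields are required. Please run Tiny SIEM again."
    exit 1
fi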

Regardless of whether it prompts for initial configuration data or not, Tiny SIEM will then move on to prompting for the date range of audit trails to download and analyze. YAD provides a nice date dialogue box complete with picker.

yad --title="" --text="Select date range of audit trails to collect:" --form --field="Start Date":DT --field="End Date":DT

My selections are returned in the same type of pipe-delimited format when the dialogue closes.

2020-05-01|2020-05-27|

I now have everything I need to make the API call to Sucuri to get the WAF audit trails for my linkcho.mp domain. The next part of the script will be a little harder because the API does not allow me to say “give me all the audit trails from May 1st to May 27th”. I can only get one day at a time so I will need to loop through the days between the start and end dates which can get complicated if the date range spans months or even years. There is also the case where there may be no audit trails for a specified date, perhaps because the user picked a start date prior to deploying the Sucuri WAF.
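The looping part is less scary than it sounds, at least on Linux where GNU date does the heavy lifting. Here is a rough sketch of walking every day between the start and end dates; fetch_audit_trail is a hypothetical placeholder for the eventual Sucuri API call, and BSD/macOS date would need different flags:

start="2020-05-01"
end="2020-05-27"

day="$start"
while [ "$(date -d "$day" +%s)" -le "$(date -d "$end" +%s)" ]; do
    fetch_audit_trail "$day"            # hypothetical wrapper around the Sucuri API call
    day=$(date -d "$day + 1 day" +%F)   # GNU date syntax for "next day"
done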

Those are all problems for another day, however. This post was to get you started with YAD and the concept of using graphical dialogue boxes in scripts.

Happy prompting!

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


Internet Folklore: The Cult of ATDT

I always get a bit nostalgic this time of year. I think it is a combination of my impending birthday and the fact that we’re socked deep into the Canadian winter in February and my work from home lifestyle has made me utterly stir crazy by now. Whatever the reason, today I’m reflecting on how I got where I am in my career, and acknowledging that almost all of it has to do with a passion for technology rather than any kind of career plan I had for myself. I never did, and don’t now have any idea what I want to do in a few years. I’m just one of those lucky people that does what they love and the career followed. These days, $DAYJOB dictates what I learn next, but that wasn’t the case in the early years. In those days, everything was wide open and new and exciting. This post reflects on those early pre-internet years.

198…3?

I can’t remember the circumstances surrounding this, nor the actual year, but in the early 80s, my parents bought a Commodore Vic 20 computer. It was ostensibly for “the family” but nobody had any interest in it but me, and it quickly moved from a central point in the house where everyone could access it to my bedroom. As one of the first computers aimed at regular people instead of businesses, it did not come with a monitor. It was designed to hook up to an existing TV set via an RF toggle just like the gaming consoles of the era did. Some time prior to buying the Vic 20, my parents had bought us all our own small black and white TVs for our rooms so that is what I used for the Vic 20 monitor. I can’t remember if the computer had color or not, but I never saw it in any case.

Initially, my main use of the Vic 20 was gaming. And of those, the game I remember the most was Adventureland. It was a text-based adventure game where you have to perform certain actions in a certain sequence to get to the end. I played it for hours and I did reach the end a few times, but I can’t remember the details now. I do remember a bear, chiggers, and a gas bladder, perhaps not in that order.

Somewhere along the way, I discovered the BASIC programming language. A friend of mine down the road had a Commodore 64 and a disk drive. It was much faster to write and retrieve code with the disk drive instead of the Datasette (tape recorder) my Vic 20 came with (image below). Plus he had a color TV and…gasp….a 300 baud acoustical modem. I moved out of the area after a few years and have never seen this buddy in real life since, but those few years were packed with programming and BBS’ing and set the stage for a life-long love of technology.

Around the same time, my junior high school (grades 7-9) went on a major computer bender. It borrowed a Volkswagen-sized Hewlett Packard card reader from somewhere and my math class was put on hiatus for a month while we learned what to do with it. I still remember my math teacher, Mr. Wareham, asking us to instruct him how to stand up from a sitting position, step by step, as a learning tool to understand the type of discrete directions we’d have to provide to a computer in order to make it do anything. We then started penciling in the cards, shoving them into the computer, and marveling over the little ticker tape response that printed out showing the results of our work.

Shortly after that, two or three Commodore PET personal computers showed up in a slightly large-ish closet that became “the computer room” at school. It had a sign-up sheet to control the rush of students who wanted to use them. But, much like the Commodore Vic 20 in my house, these Commodores also weren’t all that popular so the reality was that I could use them whenever I wanted.

Around that time I got my hands on a BASIC programming book. Although I had already been programming rudimentary games in BASIC, I had no “formal” training. That book introduced me to PEEKs and POKEs which elevated my programming to a whole other level. I’d frequently run out of memory for my games on my Vic 20, but my buddy with the Commodore 64 was always ready to hack away, and we built some reasonably impressive games for a couple of kids. We even sent one to Thorn EMI but they rejected it. In retrospect, it was nice to get a response at all.

My family then moved across the country and the Vic 20 disappeared somewhere, and my interest in computers went with it for a few years.

199…0?

By this time I had failed to graduate high school, had spent many years running with the “wrong crowd” but somehow found my footing again and was the sous chef for a successful mid-range casual dining chain. Somewhere around this same time, my interest in computing was rekindled and I bought a used laptop, or what we called a “luggable” in those days. It had a 20MB hard drive and a monochrome blue VGA monitor. It came with Windows 3.0 installed and my mother-in-law lent me her Windows 3.1 disks so I could get all modern. I remember installing DOS 6.0 and then installing Windows over top of that. But the important thing is that it had a 1200 baud internal modem. That was a reasonably fast modem in those days, especially for a portable computer that generally did not come with internal modems at all, so I was very pleased with it. I rekindled my love for the BBS scene with that brick.

DOS 6.0 was significant in that it had disk compression built-in. Disk compression was extremely important in this era because portable storage technology was young and disk drives were very small. Compressing data was necessary to make any system usable. The technology in DOS 6.0 was named DoubleSpace but it came to light later that it was really called Stacker and Microsoft had stolen the technology from a company named Stac which successfully sued Microsoft for that. However, none of those legal wranglings changed what was on the DOS 6.0 disks so life went on.

Around this time I realized that there wasn’t a lot of career opportunity in cheffing, so while I enjoyed the work and evening shifts, it grew old after several years and I left it to become the assistant manager of a fast-food chain. My luggable VGA beastie wasn’t faring very well by now. The built-in monitor had completely failed by this point and I was using it as a desktop with an external monitor plugged into it. The time had come to buy my first new computer.

In an outlet of the only computer store in town, a young Future Shop, I found a lot of computers, but none of them were recognizable. The days of the “Commodore” and the “Amiga” were gone and the shelves were lined with identical-looking beige boxes. I had no clue what I was looking for so I just bought what I thought would work. A 486DX/30 with a 2400 baud internal modem. My defunct luggable was a 386 something, so this was a step up all around. I briefly flirted with the idea of buying a 14.4Kbps modem, but it was prohibitively expensive so I settled for the 2400. It was around this time that I learned the difference between “baud” and “bits per second (bps)” but that is too boring for even me to go into now.

I took this thing home in many boxes and set it all up. Windows 3.1 came pre-installed so I did not have to mess around with DOS or Windows, and the hard drive was big enough that I don’t recall having any space issues. But what I did have was a Super VGA (SVGA) color monitor which was blowing my mind. It was with this computer that I discovered multi-line BBSes, online MUDs, real-time chat, internet shell accounts and email and newsgroups using tools like PINE and ELM. Internet shell accounts were available on some BBSes. More expensive graphical SLIP and PPP accounts were available sparingly, but beyond my means at the time.

Several years later I went on to college for a Computer Information Systems program, did a stint in the Navy, and my career properly started. But these BBS and early internet years were the era that laid the foundation for a love of technology. A lot of that technology is still in use today, albeit hidden behind the shiny exterior of the modern internet.

The Single-line BBS Years

There were hundreds of BBSes in my local area code in these years. Partially because the world was a smaller place and there were fewer area codes, but also because it took some technical chops to connect to a BBS and understand what to do with it. Therefore, a lot of BBS users were also BBS SysOps who ran their own BBS system in addition to being a user on others. It was a fairly technical crowd.

The vast majority of BBSes were one-line hobby boards and most of us had long lists of BBSes we liked because we knew that the chances of connecting to any given one were slight, so we’d move on to the next one in the list when we got a busy signal. Almost everyone used Windows in those days. There were a few people with Apple computers, but the Linux kernel didn’t even exist yet so there were zero *nix people outside of RMS’ Free Software group; a group nobody outside of academia had heard of at all. The class of software used to connect to BBS systems was serial terminal programs, but we generally just called them terminal programs. The pre-eminent terminal program of the day was Procomm Plus and it had all the features we ever wanted – speed dialing, dialing lists, and modem volume control.

Windows came with a very basic terminal program called HyperTerminal and the standard way of setting up a new computer was to use the built-in HyperTerminal program just long enough to log into a BBS and download a better terminal program. Much like people today use Internet Explorer on a new computer just long enough to download Firefox and then never use IE again.

The single-line BBSes were basically “drop by” stops. They had short session limits, usually 20 minutes, which was enough to check if anyone had left you mail and download some software. Although, in the slow 1200 baud and less days, it was not always possible to download an entire file in that time. The universe answered our prayers with a download protocol named ZModem. ZModem had a lot of benefits that we were oblivious to, save one. It supported resumable downloads. If you were terminated from a BBS for whatever reason, when you were able to log back in and download the file again, it would resume where it left off.

Most BBSes in those days participated in FidoNet, or other Fido Technology Networks (FTNs). These were messaging networks. A BBS owner could choose to install mailer and mail tossing software on their BBS which would allow it to exchange both public messages in forums, called “echoes”, or direct and private (ish) mail, called “netmail”. A BBS SysOp who wanted to join one of these networks had to apply to that area’s coordinator through some already connected BBS and provide basic information such as the node name, the node phone number, and swear to observe Zone Mail Hour (ZMH). ZMH is the hour every night when FTN networks would call each other and collect/drop off messages. ZMH was sacred in those days – your BBS had to be available during that hour otherwise you would not get your messages that day and your users would be annoyed. Today, the internet is used to transfer messages, so ZMH is no longer needed nor enforced.

The Multi-line BBS years

Eventually, BBS software improved so that it could support multiple lines. A few enterprising people stood up multi-line BBSes. Early multi-line BBSes had a separate phone number for each node so you had to have multiple entries for those BBSes in your dialer. Only a very few, usually commercial boards had the technology and money to have a single number round-robin to an open node.

Initially, multi-line wasn’t such a big deal because all the multiple lines really did was increase your odds of being able to connect to the BBS. But soon developers realized that if you have multiple people online at the same time, why not allow them to interact? It was during those years that Multi-User Dungeons (MUDs) and real-time chatting came into being and changed the BBS scene forever, pushing it towards what the internet would eventually become.

My board of choice was a BBS named Nucleus in Canada. Nuke still exists as an ISP now, having shut down its BBS long ago. Nuke ran a very expensive and amazing piece of BBS software named MajorBBS by Galacticomm. It had capabilities no other BBS had, and even though there were other multi-line BBSes in the area, Nucleus became the gold standard and was able to collect subscriptions from us, a feat that few other BBSes had managed to accomplish. Nucleus used to co-locate with a book and table-top gaming store, The Sentry Box, but it has come a long way now.

Once we were able to interact with other users in real-time, the world opened up.

MUDs

MajorBBS had a ton of games built specifically for the platform and therefore out-performed most other games of the era. Keep in mind that these games were text-based and mostly variations of MUDs, although some had rudimentary ASCII graphics. My favorite of them all was a game called Mutants. It was during Mutants play that I learned about what modern-day gamers complain about: lag. But in a dial-up scenario, the lag is not due to internet congestion. There is no internet involved and you have a 1:1 direct connection to the BBS. The lag came from modem speed. Mutants made no attempt to homogenize user speeds so if you had a nice speedy 14.4 modem then you could literally run circles around someone with a 1200 baud modem in the forest. It had the potential to be ghastly unfair. While I distinctly remember users running by me so fast that the game only told me I heard them go by, I don’t have any memories of this being an actual problem during gameplay. I am not sure how that could be, but memories are fickle things.

Chat

I recall only three multi-line boards in my area code in those years, Nucleus, Octopode, and Chatline. I think Chatline was also a paid BBS but it was less gamey and more chatty, as the name suggests, which wasn’t really of interest to me so I had an account but did not use it much. I remember Octopode as being one of those multi-line BBSs with 8 different phone numbers – hence the Octo part of the name. I guess that made sense because it was a free BBS so the SysOps were footing the bill for those lines themselves somehow, and adding features like single phone number would push that bill even higher. Being a free multi-line BBS, Octopode was overrun with users even younger than myself at the time, and it was hard to find a free line. I did not spend much time on Octopode for those reasons.

MajorBBS chat had a lot of features that were familiar from IRC and also allowed some customization, such as custom entry messages. That was a fun feature – you’d be chatting away and “Suddenly, the fog clears and in walks Doglier, dragging his dog bowls behind him…” would happen.

A great deal of fun in chat came from sabotaging newbies that did not have a good understanding of how their modem worked. I am sure this is arcane knowledge again by now, so I will recap a bit.

Recall that modems negotiate a serial connection over the phone line and once that connection is established, everything you type is just pushed through the pipe to the other end. But you still need to maintain control of the modem to tell it to do things like hang up when you’re done. To allow users to send commands to the modem and also to the connected system, there needs to be some kind of signal so the modem knows to take some text as commands. That is the Hayes Command Set, also known as the AT command set.

Beginning a string with AT tells the modem to pay ATtention to what comes next because it is a modem command, not something to be sent through the pipe to the other end. The most common AT command is ATDTxxxxxx where x is the BBS phone number. ATDT means “ATtention, Dial using Touch-tones” which tells the modem to use tone dialing to dial the number. There is also an ATDP for Dial using Pulse, but I have never used that. Other useful commands are ATH0 which causes the modem to hang up, ATL0 which sets the modem volume Level to 0 (off), and sending a plain +++ with no AT in front, which causes the modem to go into command mode entirely and stop sending data through the pipe.

It probably becomes pretty easy to see the sport in tricking new users into typing things like ATH0 and punting themselves from the board. Another common trick was to macro-bomb a user off the board. By sending private messages at a rate faster than the receiver’s modem can handle, most modems will simply hang up. MajorBBS eventually introduced rate-limiting for private messages to prevent this, but it was a good working tactic to get rid of people who annoyed you for a long time.

Moving on

The next few years were magical all over again because graphical SLIP and PPP connections became affordable. Suddenly, we weren’t staring into a black terminal screen anymore. We were using things like web browsers and email clients. That was another wild time to live through, but I will leave that era for a different post.

I remember the modem and BBS era very fondly. I have a lifelong friend from those days that I met online and we’ve remained friends ever since, despite never having worked together or attended school together. My first contact with technology was that Vic 20 where I learned programming, and my friend’s Commodore 64 with a modem is where I learned that there was a whole world outside of my bedroom window that I knew nothing about, full of possibilities. That 300 baud acoustical modem kick-started an entire lifetime and career in technology.

* * *

Header image credit: By Lorax at English Wikipedia - Own work, Public Domain,

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


Writing On Medium Made My Writing Worse

Medium is the 800lb gorilla of writing sites. The internet is peppered with links to articles and stories that people have written on Medium. Its popularity is primarily because it has an effortless way to pay authors for their work, and a large built-in audience for writers to gain exposure. It also has a business model that encourages click-bait and topic convergence that ultimately discourages unique content. Instead of raising the bar for content on the internet, Medium keeps the bar squarely in mediocre land.

I am writing this post because of a recent exchange on social media. I use the Fediverse as my primary social media and the Fedi is disproportionately populated with early adopters at this point in its evolution. Within early adopter circles are technical people and, in the case of the Fediverse, vulnerable groups that have left mainstream social media because of the toxicity those platforms encourage. This group is largely averse to internet tracking and snake oil salespeople. So, when one of my followers asked me why I only post 3 of the 5 weekly posts of my tech newsletter the “One Time Pad” on the web and the other 2 only to free subscribers, I knew I had to spend some time explaining myself properly. I think I did that and I liked my answer so much that I want to expand on it a bit, and give it a more permanent home in my Death By Tech newsletter archive (that’s this post).

The ratio

The 3:2 ratio is somewhat arbitrary. I chose to post five items per week in the OTP because I feel that is sustainable. Unlike Death By Tech, which you’re reading now, the OTP is a quick daily newsletter with “the one thing you need to know today about internet security”. That is a rich topic and provides enough content for five posts a week. Of those posts, I decided I’d like to lean towards giving away more content publicly, so the 3:2 ratio is a good way to go.

The reason for restricting two posts at all is a deeper topic. The short answer is that I've decided I'd like to build an audience that is interested in my work rather than just throwing stuff out there into the ether.

The longer answer

The longer answer is that I've been writing articles and blogs and books and magazine pieces since 2003. I've never tried to build an audience such as a mailing list before. The lesson I've learned, and the reason I'm doing it now, is because I have finally acknowledged that I find writing to anonymous readers to be unsatisfying and it has degraded my work. When I have no clue who I'm writing to, I tend to write more clickbaity stuff, or about topics that I don't necessarily know well just to get the clicks.

This year I've decided that I'd like my audience to be comprised, at least in part, of people who have taken some small concrete, manual step to express specific interest in my work. I can't see any other widely accessible way to do that except for the good old tried and true mailing list.

How Medium makes writing worse

Here’s a quick primer on how Medium works for writers. Each time a writer posts a story (everything is referred to as a “story” on Medium) the writer can choose whether that story is free for everyone to read or if it goes behind Medium’s partial paywall. The partial paywall allows readers without a Medium account to view 2-3 posts per month and then it puts up a paywall encouraging them to subscribe to read more stories.

Writers are paid only when a paying Medium subscriber reads their story. There is no money for free readings by either logged in free Medium users or reads from the anonymous internet. Therefore, writers that want to make any money on Medium must put their work behind the paywall, but doing so does not mean a writer will make any money.

Exposure on Medium

Medium is a very busy place and it’s very hard to get your work in front of paying Medium subscribers. I have had articles on Medium with several hundred views in 24 hours, but they all came from a LinkedIn post or a Fediverse post and none of those readers were paying Medium subscribers so I made nothing on those posts. If you want to make any money, you need paying Medium subscribers to read your work and no other reader matters.

In theory, writers can build their own audience by attracting followers. However, we’ve all had that experience at some point in our life of trying to build a following on a very busy site like a Twitter or a Facebook group, and the reality is that it is hard. The busier the site, the more noise, and the harder it is to get noticed. Medium knows this and has a solution for any writer to get more exposure through a process called “curation”.

Medium has a bunch of humans that read all the posts that writers put under the Medium paywall. These humans are called curators, and if they like an article, then they will add it to a list of topics that Medium subscribers follow. When that happens, you tend to get more views because your story hits people who have expressed an interest in that topic, even if they have never heard of you and do not follow you.

There is another way to gain exposure to new audiences on Medium and that is by publishing in Medium publications. Publications are pages run by Medium members that cover certain topics. The publications build their own subscriber base and if your story is accepted, your story goes out to all the publication’s subscribers. Stories can be both curated and accepted into a publication, those things are not mutually exclusive.

While both curation and acceptance into a publication are nice, it still does not pay the author any more money. It has the potential to do so because both processes put the piece in front of a lot of users. But, those users aren’t necessarily paying Medium users and there’s no money for the writer in that case.

How much money do writers get paid?

This is the weirdest part of Medium, but I acknowledge that I don’t see a better way to pay writers with the system in place. Medium subscribers pay $5/month or $50/year. Medium must take some of that money to keep the lights on; to make this example easy, let’s say Medium takes 20%, or $1.00. That leaves $4.00 to spread out among writers.

If a subscriber reads a single post in January, that post will get all $4.00. But, if that subscriber reads two stories, then each author will only get $2.00. And so on. I don’t know about you, but I would easily read 5 stories a day over my morning coffee which, excluding weekends, is 100 stories a month which means all those authors get 4 cents from me for every read.

I had a story that had a single read and made 26 cents from that one read. On the other hand, my best read story had 500 reads and made $4.40.

The perfect storm

Let’s put this all together. Medium has developed a system that:

  • removes the control from writers as to how much exposure they can get from paying readers so writers have to hope to “get noticed” by readers.

  • has a completely opaque curation system that determines what articles get pushed into busy reader spaces so writers have to hope to “get noticed” by curators.

  • supports publications that, for the most part, have no way for writers to contact and express interest in contributing, so writers have to hope to “get noticed” by publications.

Are you seeing the trend? Writing on Medium is less about actual writing and more about trying to get noticed. That makes all but the most Buddhist of writers start to lean towards clickbait, writing about popular topics instead of ones they are knowledgeable about, and publishing too often in a rush to get curated or pulled into a publication.

Here are some examples of the stories that are on the front page of my Medium account this morning.

  • “Hey, Bookworms, This Site Will Pay You Up To $60 To Review Books — No Experience Necessary”

  • “Password cracking is easy, here’s how to do it” (complete with hoodie image)

  • “My top 5 productivity apps”

  • “Bored? 7 fun things you can build”

  • “35 things you should never say to an Uber driver”

We have two obvious clickbait titles and then three listicles. I thought the general public had caught on to listicles and they were not catchy anymore, but here we are.

As you can see, this isn’t really a compelling list of topics that I want to read. They stink of clickbait that is doing its best to get me to read it so the author can get paid. It’s not compelling content and I am guessing they don’t represent the authors’ best work. But because of the way in which Medium pays writers, this is where the playing field is. Writers aren’t rewarded for slow, thoughtful, well laid-out posts that take deep dives. Writers are rewarded for “getting noticed”.

I tired of the Medium rat race almost immediately. When I found myself writing stories on buying shovels and useless skills I still remember from my Navy years, I knew it was time to move on. I much prefer the smaller, but deliberate audience I have now and I enjoy the time I spend building my reader base by honing my craft rather than trying to get noticed.

* * *

If you are not currently a paid subscriber and like my work, please consider supporting me with a paid subscription. You will get at least one extra post per week, the ability to comment and like these posts on the web, and encourage me to keep going! I’m including a 50% off subscription button below for you to use. Thank you!

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


Parsing JSON in shell scripts using JQ

I’ve always patted myself on the back for my early decision to build a career as a Linux systems administrator rather than becoming a developer or, gasp, a Windows systems administrator. In the early aughts, there were not very many Linux sysadmin jobs anywhere, even in the major centre where I lived in Canada. It was a tough career to get started in, and I built it up by podcasting about Linux, taking any contract work I could find, and – finally – landing my first legit Linux systems administrator position with a colleague who remains a good friend today.

In the early years, I’d do anything to get work to add to my portfolio in order to build legitimacy. During those years, and indeed still today, I have learned hundreds of tools, languages, concepts, models, and swear words. In that huge pool of skills I’ve learned and forgotten over the years, one tool has stood the test of time and has been able to handle almost anything I’ve thrown at it. That tool is basic, everyday shell scripting. However, there are a few things that shell scripts don’t do well natively and one of those is parsing JSON formatted data. That wasn’t a big deal 10 years ago, but now JSON is a very standard text data format and I encounter it constantly.

The most brutish of all options for handling JSON in a shell script is to treat it like any other text and use sed/awk/grep to find what you need. A slightly harder option is to switch to something that handles JSON natively, such as pretty much any actual programming language. But the most elegant answer is to leverage what Linux tools do best: do one thing well, and pipe them together.

Enter jq, a command line JSON query/parsing tool that allows sysadmins to handle JSON formatted data using familiar shell script concepts.

jq is like sed for JSON data – you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.

It’s always best to learn by example, so let’s parse some JSON. I’m going to use Amazon Web Services (AWS) tools to grab some information about our EC2 instances for my examples. Many APIs output JSON and half the world is using AWS so it is a familiar place to start for many.

Let’s look at how we’d use jq to extract the instance IDs of each of our instances, regardless of what region they’re in. This isn’t an AWS post, but one way to do this is to use the aws command line tool, and grab your active regions using something like this:

$ aws ec2 --region us-east-1 describe-regions > regions

This will result in a file named “regions” containing JSON that looks something like the example below. I am only including three regions, but every region that is enabled in your account will be represented in the JSON.

{
  "Regions": [
    {
      "OptInStatus": "opt-in-not-required",
      "Endpoint": "ec2.eu-north-1.amazonaws.com",
      "RegionName": "eu-north-1"
    },
    {
      "OptInStatus": "opt-in-not-required",
      "Endpoint": "ec2.ap-south-1.amazonaws.com",
      "RegionName": "ap-south-1"
    },
    {
      "OptInStatus": "opt-in-not-required",
      "Endpoint": "ec2.eu-west-3.amazonaws.com",
      "RegionName": "eu-west-3"
    }
  ]
}

There’s lots of stuff we can use this file for, but if you wanted to just pull the region names out of this file, you can use jq like this:

$ jq '.Regions[].RegionName' regions
"eu-north-1"
"ap-south-1"
"eu-west-3"

If you add the -r switch to jq, the quotation marks will be removed to give you “raw” output. You can even put it all together like this in one call to do away with the intermediate step of saving the regions to a file:

$ aws ec2 --region us-east-1 describe-regions | jq -r '.Regions[].RegionName'
eu-north-1
ap-south-1
eu-west-3

We’re halfway there. Now let’s pull the instance IDs out of each region using something like this:

$ for region in $(jq -r '.Regions[].RegionName' regions); do echo $region; aws ec2 describe-instances --region $region | jq -r "try .Reservations[].Instances[].InstanceId"; done
eu-north-1
ap-south-1
i-0ace7f7696218ebce
i-02112404f13ca11a1
i-0844cafbadfe53ce9
i-06ade579a2wc5d5af
i-00378947jd66cf9a7
i-085cd87c9cc8811ef
i-053052ecwfdfee23c
i-0bf92c6ak0a7c9918
eu-west-3

From this we learn that I do not have any instances in the eu-north-1 and eu-west-3 regions, but I have 8 instances in the ap-south-1 region. This alone is a huge time saver when compared to the incessant clicking and slow refresh rate of the AWS web interface.

Note the [] after Regions. That means “all the elements in Regions”. But, like any good array, you can access elements by index if you need to:

$ jq -r '.Regions[0].RegionName' regions
eu-north-1

Those are some examples of how to select elements using jq, but it is MUCH more powerful than that. For example, jq has conditional structures like IF/ELSE. Let’s see how many instances are stopped in the ap-south-1 region:

$ for region in $(jq -r '.Regions[1].RegionName' regions); do echo $region; aws ec2 describe-instances --region $region | jq 'if .Reservations[].Instances[].State.Name == "stopped" then "Stopped" else empty end'; done

ap-south-1
"Stopped"

Currently, Amazon does not charge for stopped instances, but that could change one day and being able to simply iterate through your instances to find wasted billing may reduce your costs. I’ve used a similar method to find instances with things like “test” or people’s names in them. Those are almost always forgotten EC2 instances spinning away eating up costs and pruning them is part of a prudent billing management process.
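As an aside, if you want to know which instances are stopped rather than just that some are, jq’s select() filter reads more naturally than if/else for this kind of thing. A sketch, assuming the same describe-instances output as above:

$ aws ec2 describe-instances --region ap-south-1 | jq -r '.Reservations[].Instances[] | select(.State.Name == "stopped") | .InstanceId'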

Like any working sysadmin, I learn just enough about any given tool to scratch my current itch. Because of that, I can’t hope to touch on all the amazing things that jq can do – the sheer breadth of its capabilities is impressive. There is also a very good support community and documentation around it.

I hope that is enough info to pique your interest in jq. Shell scripts are the workhorse of any Linux farm because they have almost zero dependencies, so there’s a high degree of “write once, use many” success.

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


Pre-Cambrian squirrels are yummy

Think what you want about Apple's security, but I am here to tell you that if you don't know the PIN to get into an iDevice (iPad, iPhone, etc.) then you are completely out of luck. There is literally no way to reset a lost PIN. The only "recovery" method that works is to completely reset (erase) the device and then restore a backup of your stuff. Virtually every other company has some kind of recovery process; Apple is alone in saying "tough shit, you lose all your stuff, sorry not sorry". This is a hard position for me to be in because, as an infosec worker, I really appreciate that it would be very hard for someone to break into my Apple devices. But, in this case, I am in a situation that probably occurs several times a day within the Apple customer ecosystem, and it's hard for me to understand why Apple would design a system like this knowing that a typical Apple user is not very technical.

The Death Binder ™

The solution is what it always is: have backups. Back up your iThingy to iTunes frequently so that you have a fallback if you forget your PIN, or, as is more likely the case, so that your survivors can get into your iThing after you pass on, if you so desire.

The other solution to this is to create and maintain a Death Binder™. (Not really a trademark.) I have a binder hidden away that only one person knows about. It has information about how to access everything I own if I suddenly become…unavailable. Far and away, the easiest path for your survivors into your accounts is for you to just give them your passwords, along with information about how to boot up your stuff, where your 2FA tokens are, and so on. So make a Death Binder and include all your insurance policies, wills, and passwords in it. And if you're a cryptography enthusiast like me, include the passphrases that unlock your keys.

There are much more complete and formal Death Binder templates you may wish to use.

Note that I am not a lawyer; it's probably illegal to impersonate a dead person where you live, so this is not a long-term strategy and possibly not a good strategy at all. But it will help your survivors make sense of the pieces you've left behind and hopefully help them recover meaningful things like pictures. Even if your survivors aren't terribly technical, they may be able to enlist one of your friends to help decipher what you've left behind.

Non-animal meat

With few exceptions, the word meat means animal flesh. We've been eating meat ever since Grog ran across a smushed pre-Cambrian squirrel and thought that it looked tasty (full disclosure: I have no idea what fauna, if any, existed during the Cambrian era). His cohorts agreed and started experimenting with trapping and eating all sorts of animals and, eventually, in a lunge out of the hunter-gatherer phase of societal development, started farming animals specifically to eat them. This is the basic history of the Montana restaurant chain, and vegetarians worldwide think the whole thing is just gross.

There are a lot of reasons to avoid eating meat. Some are health-related: meat is "calorie-dense," which is science talk for "it makes you fat." Some are moral: animals are treated just terribly. I mean, let's face it, even "free-range" chickens are literally bred to be eaten, which is…well, just gross. And some of it is that livestock farming is terrible for the environment.

Livestock production accounts for 70 per cent of all agricultural land use, occupies 30 per cent of the planet’s land surface and is responsible for 18 per cent of greenhouse gases such as methane and nitrous oxide.

But don't lose hope if you're a carnivore; the playing field is about to undergo a dramatic change. For those that have moral objections, how does non-animal meat sound? Does meat grown in a lab, not on an animal, assuage those moral objections? I'm not signing off on lab-grown meat yet because it also sounds like it will be gross, for different reasons than animal meat, but at least no animals were harmed growing it. I am going to give it a try when I can, and Singapore is one of the first countries to put lab-grown meat on the supermarket shelves.

Canadian COVID tax credits

I have run many businesses over my lifetime; at one point there were three businesses running out of my house and I learned a lot about what types of things the Canada Revenue Agency (CRA) allows us to write off. Most of that knowledge isn’t very useful these days now that I am back to being a “T4 employee” (meaning, I just have a regular old job and no self-employment income) but with COVID, some of the knowledge I have about home-based write-offs is becoming useful again.

In general, the CRA allows two types of write-offs: those you keep receipts for, which let you claim the actual dollar amount of your expenses, and those that use a "quick" method, which is less accurate but doesn't require you to keep receipts. With so many people unexpectedly working from home during COVID, the CRA has introduced write-offs that can be calculated either way, whichever method you'd like to use. You can claim a credit of $2 per day you worked from home to a maximum of $400 (the cap works out to 200 days), or you can claim more if you can provide receipts for things you otherwise would not have had to pay for if you weren't forced to work from home. Here are the very simple guidelines and calculator from the CRA site. Go forth and claim!

my shorter content on the fediverse: https://the.mayhem.academy/@jdw


How Do We Retire From COVID?

Anyone who has been following my writing for a while probably realizes that I write about two types of things: strict technical topics, and tech adjacent topics. The straight tech topics, such as how to use application $FOO, are useful for introducing new ideas to other tech people. My tech adjacent topics potentially have a much broader audience because those articles are about the impact of technology, not any particular technology itself. Today’s post is one of those adjacent topics and it deals with the unintended consequences of our response to COVID.

Before we go too much further, I’d like to do some expectation setting. I am not well educated in financial matters and retirement planning. In my experience, most people are also not well educated in financial matters, so that is where I am aiming for this article. Also, whenever I put the word “COVID” into any article, there’s a risk that someone will assume I am about to go all conspiracy-theory weird about it. I’m not. It’s real. You need to take precautions. It’s not a government plot (I can’t believe I even have to say this, but I live beside the United States). OK, moving on.

Today

The “off-the-shelf” retirement plan preached in Canada looks something like this: You will need 80% of your working income each year you are retired and you will accumulate this through a combination of savings, government programs, and liquidating assets you no longer need.

Males live an average of 79 years in Canada, females 84. I'm going to split the difference and use 82 years. Generally, people retire at 65. The average income in Canada for "unattached individuals" is $61,000. So, if you expect to live 17 years after retirement, and you need to draw 80% of that salary each year ($48,800), you will consume about $830,000. Because Canada is awesome, some of that will be offset by Old Age Security to the tune of $186,000 over 17 years (the maximum for single people). In addition, if you've paid into the Canada Pension Plan (CPP) during your working life, you could get up to $1,175 a month, but I am going to use the actual average CPP payout this year, which is about $700/month, or roughly $142,000 over 17 years.

Once we crunch this all together, we see that we have to save, or be able to liquidate, assets worth around $500,000 (half a million bucks) between 65 and 82 years of age just to have a very moderate lifestyle.
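
If you want to check that napkin math, here it is as a quick shell calculation. The figures are just the rough approximations I used above, not official numbers:

$ years=17                      # retire at 65, live to 82
$ need=$(( 48800 * years ))     # 80% of $61,000, drawn each year
$ oas=186000                    # Old Age Security (max, single) over 17 years
$ cpp=$(( 700 * 12 * years ))   # average CPP of about $700/month
$ echo $(( need - oas - cpp ))
500800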

Remember that some of that money is taxed when it is withdrawn just like a salary would be.

The 80%

Back to the 80% rule: where does the idea that we can maintain our lifestyle on 20% less income stem from? Conventional wisdom says that 20% is what we “waste” working. It is the money we spend on work clothes, commuting, parking, lunches, and things we would only do if we were working. When we retire, that 20% is magically freed up because we’re not doing those things anymore.

I can’t speak to how accurate that 20% number is. But I can confidently point out that the vast armies of workers who have been sent home during COVID have already stopped spending money on those things. I’ve been working from home since 2014 and I’ve managed to find a nice balance. True, I spend exactly zero on commuting costs, but I don’t work in a towel. I still buy clothes, go out for lunch, and meet people after work, but I definitely don’t spend as much money as I did on work. Eventually, the novelty of working from home will wear off for workers who were sent home during COVID and they will also start spending a few bucks on a normal work-life balance again. But, like me, those expenditures will likely not rise back up to the amount spent while at a brick and mortar office.

Tomorrow

We’re in the first wave of COVID. The epidemiology models in Canada predict ever-lessening waves of COVID every few months until there is a widely available vaccine. Eventually, COVID will be gone; either by reducing the vulnerable victim rate to zero through immunity or by the presence of a vaccine. But things will never be the same.

They’ll never be the same because a number of industries have been irreversibly affected by COVID. Some for the better – many businesses, my company included, made dramatic expenditures to facilitate working from home because any other solution was suicide: either figure out this thorny and expensive remote work problem or shut down the company. These types of expenditures had languished on the back burner for years because there was no compelling reason to spend that money before now.

If I can editorialize for a bit…some changes may seem worse but may actually improve our world a bit. A big example is the sharing-culture "gig" jobs that have basically dried up entirely. The Ubers and Airbnbs of the world may not be coming back, which is fine with me. Those "businesses" are utterly exploitative of people and resources, and it makes me sad that humanity allowed them to thrive in the first place.

Another bunch of industries, such as entertainment, transportation, and restaurants, are undergoing massive change. These are the industries that have elevated the practice of exploiting both their workers and their customers to an art. These industries need to pack as many humans into as small a space as possible, charge them as much as they can, and pay their workers as little as possible in order to survive. While I will bemoan my $7 burger now costing $12, I won't mourn the loss of the leg-room-free, airless flights of the past, regardless of how much more my next ticket costs. Those industries have not done many favours for humanity, and they've been treating all of us so poorly for so many years that the behaviour has become normalized. Paying minimum wage to servers and just winking at their tips to stave off starvation is considered "normal". That's not normal, that's exploitation. Ok, end editorial…

I think we should also consider that COVID-19 may not be the last novel virus we see. I have no expertise in the medical field at all, so I am not making a prediction, but I honestly do not see why there is any reason to believe we won’t see other pandemics of diseases for which we have no cure or vaccine. Why can’t there be a COVID-20, or a COVID-20-1 and a COVID-20-2, and so on? Maybe there is a good answer to this, maybe not. But the far-reaching implications of these viruses have changed the landscape forever. More people will be spending more time apart in the future and that has an impact on our tried and true retirement philosophies.


Retirement

I think it is pretty unlikely that most Canadians have a viable plan to finance their retirement at the 80% level. Now we’re faced with the specter that we’re going to need 100% of our income when we retire. Our income needs will not drop at all when we retire, so now the retirement goal is to simply stop working and somehow make the same amount of money we did while working. I could not afford to do that today and I don’t see how I will be able to afford to do that when I retire.

Also, keep in mind that we’re in our first significant modern pandemic. It’s feasible now for workers to sock away that 20% they’re no longer spending on work stuff. But, businesses are sociopathic, and the next generation of workers that have never set foot in the office will likely be hired at 20% less than the previous generation because they won’t have work expenses. In capitalist countries, we reward businesses for cost-saving behaviour like that. That generation of workers is going to have an even tougher time trying to amass 100% of their pre-retirement income.

Less Bleak Thoughts

I can't leave you on that bleak note, so here's some sunlight. I work in tech and, as I mentioned, I have been working at home for years, long before COVID. That is a double-edged sword. When your location no longer matters, your potential job market becomes global and there are thousands more jobs open to you than you could consider if you had to be physically present. On the other hand, you're now competing for those jobs with literally everyone else in the world, not just the people who happen to live around you.

In my experience, the balance still swings in the workers' favour. Believe it or not, there are people who simply cannot work from home. They either deliberately eschew the idea, or they are simply unable to motivate themselves to work without heavy oversight. For this reason, the remote work job market is as dynamic as the brick and mortar market and there are always lots of positions open.

I’ve never really bought into the idea of a “career”. I’ve been lucky enough to do what I love and the jobs followed. I don’t recommend my path to anyone because it has been fraught with setbacks and sleepless nights, but my path works for me and there is one part of it that I do think everyone can benefit from. Stop thinking of your job as something you have to do and start thinking of it as something you want to do. If you don’t want to do your job, do something else. It’s very hard to follow that philosophy when you’re limited to the opportunities within commuting distance of your house. It’s much easier to do in a COVID world where businesses are desperately trying to figure out how to stay alive using remote workers located anywhere. This period of time is a fantastic and unprecedented opportunity for workers. Take advantage of it.

my shorter content on the fediverse: https://the.mayhem.academy/@jdw