This is a running record of things I work on each day where I feel like I learned something new or I'm a little bit proud of it. Inspired by Julia Evans' post Get your work recognized: write a brag document.
Setting a static IP and hostname on a Raspberry Pi:

1. Run `hostname -I` on the Pi to get its IP (or find it using `netstat -r` or similar).
2. `ssh pi@<PI_IP_ADDRESS>` (you may need to enable SSH first).
3. `sudo nano /etc/resolv.conf` and note down all nameservers.
4. `sudo nano /etc/dhcpcd.conf` and add:

   ```
   interface wlan0
   static ip_address=<PI_IP_ADDRESS>/24
   static routers=192.168.1.1
   static domain_name_servers=<NAMESERVER_1> <NAMESERVER_2> <NAMESERVER_3>
   ```

5. `sudo nano /etc/hostname` and change `raspberrypi` to the hostname you would like.
6. `sudo nano /etc/hosts` and replace `raspberrypi` with the same hostname.
7. `sudo reboot`.
8. `ssh pi@<NEW_HOSTNAME>.local` to SSH into the Pi using the new hostname.

Adding a new user (and optionally removing the default `pi` user):

```
ssh <RASPBERRY_PI_ADDRESS>
sudo adduser <NEW_USERNAME>
sudo adduser <NEW_USERNAME> sudo # Add sudo privileges
echo '<NEW_USERNAME> ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/010_<NEW_USERNAME>-nopasswd # Add passwordless sudo
sudo reboot
ssh <NEW_USERNAME>@<RASPBERRY_PI_ADDRESS>
sudo deluser --remove-home pi # Remove the default user if you want to
```
Mainly used to update a Raspberry Pi:

```
sudo apt update
sudo apt -y full-upgrade
sudo apt -y autoremove
sudo reboot
```
We ran into a bug on fishbrain.com today because on Incremental Static Regeneration (ISR) pages in our Next.js app we were returning `{ notFound: true }` for pages where we couldn't find the corresponding data in the database. This mostly works correctly (returning a 404), but the problem is the page is never checked again: any future requests are assumed to also be 404s. In our case we knew that the page might no longer be a 404 later on (for example if a new product is published in our database), so we needed to change the return value to `{ notFound: true, revalidate: SOME_NUMBER_OF_SECONDS }` to make it behave as we wanted.
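The fix looks roughly like this (the `fetchProduct` lookup and the 60-second window are illustrative, not our real code):

```javascript
// Illustrative stand-in for our real database lookup.
async function fetchProduct(id) {
  const products = { '1': { id: '1', name: 'Example product' } };
  return products[id] || null;
}

// Exported from the page file as getStaticProps in the real app.
async function getStaticProps({ params }) {
  const product = await fetchProduct(params.id);

  if (!product) {
    // Without `revalidate`, Next.js caches this 404 forever; with it,
    // the page is checked again after the given number of seconds.
    return { notFound: true, revalidate: 60 };
  }

  return { props: { product }, revalidate: 60 };
}
```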
Intelligent Tracking Prevention (ITP) is a feature in the Safari browser that I learnt about today while trying to debug an issue with some cookies we use on fishbrain.com. From our logs we could see that the cookies weren't being applied for some users on Safari, but it's not something we've been able to replicate. ITP will try to block what it believes to be tracking cookies, and our suspicion is that this is what is happening to our particular cookie. Unfortunately we couldn't replicate it, but I did learn that there is a debug mode in Safari that will print info about blocked cookies to the console.
This morning I was working on a script to convert some files from Markdown to JSON and learnt about two handy functions to aid in this (or re-learnt, I guess, as I've probably used them at some point).

```
import { homedir } from 'os';
```

This allows you to include the home directory in a path. I was working on macOS and trying to use a path starting with `~/` to represent the home directory, but apparently this isn't possible within Node, so `homedir` was a nice substitute (with the additional benefit of being cross-platform).
```
JSON.stringify(obj, null, 2)
```

I was generating some JSON and writing it to a file, but it all got written to a single line without any formatting. Thankfully there's an option built right into `JSON.stringify` to allow for pretty printing (kudos to this Stack Overflow answer).
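For example, the third argument sets the indentation width:

```javascript
const obj = { name: 'til', tags: ['node', 'json'] };

// null skips the optional replacer; 2 indents nested values by two spaces.
const pretty = JSON.stringify(obj, null, 2);
console.log(pretty);
// {
//   "name": "til",
//   "tags": [
//     "node",
//     "json"
//   ]
// }
```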
I've been writing tests for React for about 7 years now, but today was the first time I found the need to programmatically update a component's props and trigger a re-render for testing purposes. Turns out it's pretty easy using testing-library. Like the docs say, it's probably not something you want to do often, but in my situation I needed to test changes triggered by data from a third party being injected into our React app and I couldn't really identify a way of testing it without reaching for `rerender`. Turns out it worked perfectly!
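The shape of it looks something like this (the component and prop names are made up for illustration, not our real code):

```jsx
import { render } from '@testing-library/react';

test('updates when new data is injected', () => {
  const { rerender, getByText } = render(<Banner message="first" />);

  // Re-render the same component with new props, as if the
  // third-party data had just been injected.
  rerender(<Banner message="second" />);

  expect(getByText('second')).toBeInTheDocument();
});
```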
This was a pretty basic change to require 3DS on all transactions in our app. The main benefit I got from it was a reminder of the utility of good developer documentation (Stripe are the masters).
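For reference, the gist of the change (per Stripe's docs) is setting `request_three_d_secure` when creating the PaymentIntent; the amount and currency below are placeholders:

```javascript
// Params passed to stripe.paymentIntents.create(); only the
// payment_method_options part is the actual 3DS change.
const paymentIntentParams = {
  amount: 1999, // placeholder amount in cents
  currency: 'usd',
  payment_method_types: ['card'],
  payment_method_options: {
    card: { request_three_d_secure: 'any' }, // require 3DS on all transactions
  },
};
```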
At work we've been using Contentful for a couple of years now. I personally love it, but one of the downsides is that there's no out-of-the-box previewing of your content (it's headless, so it makes sense). I took a bit of time (not much really!) today to set up the preview mode in our app, more or less following this guide.
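The core of it is a small API route that turns on Next.js preview mode; the secret name and redirect logic below are my own placeholders rather than exactly what the guide uses:

```javascript
// pages/api/preview.js (exported as the default handler in the real file)
function handler(req, res) {
  // Reject requests that don't know the shared secret.
  if (req.query.secret !== process.env.CONTENTFUL_PREVIEW_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }

  // Sets the preview cookies so getStaticProps receives preview: true
  // and can fetch draft content from Contentful's Preview API.
  res.setPreviewData({});
  res.redirect(req.query.slug || '/');
}
```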
I wrote an Elasticsearch query today that leaned into using Aggregations. I'm not sure if it's the most efficient approach to solving the problem I had (updating the indices to contain some aggregated info might work better), but it was certainly interesting learning about how it worked.
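To illustrate the kind of query (the field names here are invented, not our real schema), a basic terms aggregation in the request body looks something like:

```json
{
  "size": 0,
  "aggs": {
    "catches_by_species": {
      "terms": { "field": "species.keyword", "size": 10 }
    }
  }
}
```

Setting `"size": 0` skips returning the matching documents themselves, so the response only contains the aggregated buckets.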
AWS Cloudfront allows you to extend request headers with a number of attributes, including geolocation data like the City and Country a user is making the request from. It was relatively simple to set this up on the Cloudfront side of things, but I had to do a little work to transfer the values down to our client-side app where we would actually be using them. I managed this by basically transforming the headers into cookies on the server-side.
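Roughly the idea, as an Express-style middleware sketch (the cookie names are mine; Node exposes CloudFront's geolocation headers like `CloudFront-Viewer-Country` in lowercase):

```javascript
// Copies CloudFront's geolocation headers into cookies that the
// client-side app can read. Cookie names are illustrative.
function geoHeadersToCookies(req, res, next) {
  const cookies = [];
  const country = req.headers['cloudfront-viewer-country'];
  const city = req.headers['cloudfront-viewer-city'];

  if (country) cookies.push(`viewer_country=${encodeURIComponent(country)}; Path=/`);
  if (city) cookies.push(`viewer_city=${encodeURIComponent(city)}; Path=/`);
  if (cookies.length > 0) res.setHeader('Set-Cookie', cookies);

  next();
}
```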
I used the `prefers-reduced-motion` CSS feature for the first time today. The use case was a card loading state with a pretty standard "shimmer" effect as the card loads. When the user has a preference set for reduced motion it just displays a plain gray background instead. A big shout out to Tailwind for making this super easy to incorporate.
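In plain CSS the pattern looks roughly like this (class name, colors, and the shimmer keyframes are illustrative):

```css
.card-skeleton {
  background: #e5e7eb; /* plain gray fallback */
  animation: shimmer 1.5s ease-in-out infinite;
}

/* Users who have asked their OS for reduced motion get a static card. */
@media (prefers-reduced-motion: reduce) {
  .card-skeleton {
    animation: none;
  }
}
```

In Tailwind the same thing is expressed with the `motion-safe:` / `motion-reduce:` variants, e.g. only applying the animation via `motion-safe:`.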
I use Datadog pretty heavily in my day job, but today I tried out the Notebooks feature for the first time. I'm currently using it to document a few optimisation projects I'm working on, so I can write notes about my findings, display a graph with supporting metrics, make some changes, then repeat the process based on the changes made.
I've been learning about the Contentful App Framework with the intent of enabling our Content team to pull data from an external API into their Blog posts, etc. I'll probably write a blog post on this at some point so I won't go into too much detail here :)
I wrote a Chrome extension that allows my team to easily toggle feature flags in the app we work on. I've worked on a Firefox extension recently, and I would say it was a little frustrating that the APIs are similar but not the same, but otherwise it was a pretty smooth process. Publishing was okay, but they could do with a better process for publishing private extensions.