I’m a day late again, but this post took quite a while and ended up more detailed than I intended my daily posts to be, so this counts for two. At least, it does until I get inspired and write two posts in one day (this weekend is a long weekend, so don’t be surprised).
The website that you’re reading is pretty new. It’s also the first website I’ve ever hosted (excluding via GitHub Pages), and I learned a lot about how the Internet works while getting everything online. In this post, I’ll walk through my experience with the following things (in no real order):
- Domains
- Web servers
- DNS
- HTTPS
- CDNs
Registering A Domain
This part was easy.
I bought my domain, `cdg.dev`, from Google Domains a while back, on the first or second day of General Availability for the `.dev` TLD (top-level domain). Being a really short name, it cost more than I expected it to, but I foolishly bought it anyway. The likelihood of me finding `cdg` or something similar that I like under any other TLD is pretty low, so oh well.
Initial Web Server Setup
NGINX is a popular web server that can be used for many things, including my use cases of a static file server and reverse proxy. Getting it working requires no configuration: just install with your system package manager and start with systemd or similar. At this point, requests to my server’s IP address returned NGINX’s default HTML page. Obviously, this was not yet what I wanted, but it meant that traffic was reaching my server, so it was enough to move on to the next step.
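On Ubuntu (which my DigitalOcean server ran at the time), that amounts to something like:

```sh
# Install NGINX and start it now and on every boot.
sudo apt install nginx
sudo systemctl enable --now nginx
```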
Resolving Names To An IP Address
Since I didn’t want to tell my mom to navigate to `123.456.789.123` in her web browser, I had to get my purchased domain name to point to my server. This involved changing the DNS settings for the domain. I ended up completing these steps three times, each time on a different DNS provider, as I learned more about what I wanted.
The first DNS provider that I used was DigitalOcean (DO), since my server was hosted there.
I set the nameservers in Google Domains to the DO nameservers, so that everything else going forward would be handled on their side.
Adding an A record to direct my domain to my server’s IP address proved to be really easy, since DO provides a nice little drop-down menu to choose my server instead of needing to know the actual IP.
I also added an A record for the `www` subdomain so that I could access both `cdg.dev` and `www.cdg.dev`.
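Conceptually, those records amount to something like the following zone entries (the IP here is a documentation placeholder, not my server’s real address):

```
; Hypothetical zone entries for illustration only.
cdg.dev.      300  IN  A  203.0.113.10
www.cdg.dev.  300  IN  A  203.0.113.10
```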
Eventually, I went on to repeat these steps two more times with different DNS providers as I got to know what I wanted a little better.
Once the DNS settings were in place, I could access my website via `curl cdg.dev`, but not yet through my web browser.
Configuring HTTPS
The `.dev` TLD is on the HSTS preload list, which means that browsers refuse to load domains under it over anything but HTTPS.
The tools of choice for obtaining HTTPS certificates are Certbot and Let’s Encrypt.
Certbot offers plugins for all kinds of DNS providers, and also includes a plugin for NGINX configuration.
So, after installing the correct plugins and invoking the correct `certbot` incantation, I was good to go.
I really expected this part to be harder, but the tooling is great.
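For the record, the incantation was something along these lines (a sketch; the exact flags depend on your setup):

```sh
# Obtain a certificate from Let's Encrypt and let the
# NGINX plugin edit the server configuration automatically.
sudo certbot --nginx -d cdg.dev -d www.cdg.dev
```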
I could now navigate to `cdg.dev` in my browser and see the NGINX landing page, complete with a green lock indicating an HTTPS connection.
Moving To Google Cloud
At this point, separately from all this work, I realized that instead of paying about $8 a month for my DigitalOcean server, I could have one for free from Google Compute Engine.
It wasn’t much work to get everything moved over, since I don’t have too much running on servers anymore (I’ve moved a lot of stuff to AWS Lambda via the Serverless Framework).
An added bonus was that I was able to get my new server running Arch Linux, the same operating system that I use on my personal computers (my DigitalOcean server ran Ubuntu).
Once my server was in Google’s cloud, I thought it made sense to change my DNS provider to Google Cloud DNS, so I updated my DNS settings and ran Certbot once again to generate a new HTTPS certificate.
Before updating the addresses of my A records, I made sure to reserve a constant IP address for my server so that I wouldn’t have to worry about it changing upon reboot.
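With the `gcloud` CLI, reserving the address looks roughly like this (the name, address, and region here are placeholders):

```sh
# Promote the instance's current ephemeral IP to a reserved static address.
gcloud compute addresses create site-ip \
    --addresses=<current-ephemeral-ip> \
    --region=us-central1
```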
Now I was back to where I had started, except with an extra $8 in my pocket every month.
Serving My Content
With my domain set up through DNS, I was ready to make NGINX serve my site’s content.
My blog is built with Hugo, which outputs a nice directory full of static HTML and other assets to `/opt/site/public`.
Getting NGINX to serve files from that directory wasn’t too hard:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name cdg.dev www.cdg.dev;

    include /etc/nginx/ssl.conf;

    # Hugo static files.
    root /opt/site/public/;

    location / {
        try_files $uri $uri/ /404.html;
    }
}
```
`ssl.conf` contains the following, which Certbot was kind enough to provide me with (I went on to reuse it later):
```nginx
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/cdg.dev/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/cdg.dev/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
```
After restarting NGINX, I could see my blog’s home page at `cdg.dev`. Yay!
In addition to my blog’s content, there were a few other things I wanted to be able to access via my domain.
First was a directory of PDFs that I use to teach math.
This wasn’t much different than setting up the static content from my blog:
```nginx
# Teaching resources.
location /teach {
    root /opt/;
    autoindex on;
    try_files $uri $uri/ =404;
}
```
Really, I should have set the root to `/opt/teach/` and rewritten the request path to remove the leading `/teach`, but this works just fine.
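For reference, the usual idiom for that cleaner setup is NGINX’s `alias` directive, which maps the location prefix straight onto a directory. A sketch (untested here; `try_files` is known to interact oddly with `alias`, so I leave it out):

```nginx
# Hypothetical cleaner version: /teach/foo.pdf maps to /opt/teach/foo.pdf.
location /teach/ {
    alias /opt/teach/;
    autoindex on;
}
```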
Second was a little web server that I use for keeping track of the local IP address of a Raspberry Pi I have sitting in my apartment, as an alternative to setting a static IP (which I find pretty finicky):
```nginx
# IP tracker.
location /the/path {
    proxy_pass http://localhost:the_port;
}
```
Both of these techniques (`root` + `try_files` for static content, and `proxy_pass` for local web servers) are good to know, since they cover common use cases.
Enabling Blog Comments
I used Commento to embed a comment service on my blog posts.
It has no ads or other nonsense, and is quite lightweight, so I didn’t have to think too hard about it.
It can be self-hosted for free, or you can pay for a hosted service.
Since I already have a server, I went with the self-hosting option.
It’s conveniently packaged in the AUR, so I was able to install it with a quick `yay -S commento`, and it’ll be trivial to keep updated.
Setting up SMTP support for email notifications, password-reset emails, etc. was a breeze once I created an app password to get around my Google account’s 2FA.
It was also simple to set up Google, GitHub, and GitLab as login providers (Commento definitely shows its bias towards developer crowds with its provider options).
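Commento is configured through environment variables; from memory, the relevant settings look roughly like this (all values are placeholders, and the exact variable names are worth double-checking against Commento’s docs):

```sh
# Hypothetical sketch of Commento's environment-based configuration.
COMMENTO_ORIGIN=https://commento.cdg.dev
COMMENTO_SMTP_HOST=smtp.gmail.com
COMMENTO_SMTP_PORT=587
COMMENTO_SMTP_USERNAME=<gmail-address>
COMMENTO_SMTP_PASSWORD=<app-password>
COMMENTO_GOOGLE_KEY=<oauth-client-id>
COMMENTO_GOOGLE_SECRET=<oauth-client-secret>
```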
I hosted the service at `commento.cdg.dev`, which meant I had to create more A records for the `commento` and `www.commento` subdomains, add the subdomains to my HTTPS certificate, and add another server block to my NGINX configuration:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name commento.cdg.dev www.commento.cdg.dev;

    include /etc/nginx/ssl.conf;

    location / {
        proxy_pass http://localhost:the_port;
    }
}
```
To run the comment service, I used systemd, for which Commento provides a unit file. Actually including the comments in the generated HTML only involves adding two lines to the single-post template, and Hugo takes care of the rest.
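Those two lines are just a div for the comment thread plus a script tag pointing at the Commento instance. In the template, they look something like this (a sketch, assuming the self-hosted instance serves the embed script at `/js/commento.js`):

```html
<!-- The script finds the div by its id and renders the comment thread into it. -->
<div id="commento"></div>
<script defer src="https://commento.cdg.dev/js/commento.js"></script>
```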
Optimizing Load Times
After all these steps, I was “done”.
I could access my site and it had all the features that I wanted.
But I was seeing request times of about a second for simple HTML pages, which is not great.
My server is in the US, and I am across the world in Thailand, so naturally things take a little while to move through the pipes.
Thankfully, there is a well-known solution to problems like this: CDNs.
Cloudflare is probably the most well-known CDN, and it also offers a whole bunch of other convenient services.
So, on my quest for faster page loads, I switched my DNS provider, once again, to Cloudflare.
By default, pretty much every static resource imaginable (CSS, JPG, PDF, etc.) is cached by Cloudflare.
But there were two problems with this:
- My posts were not cached
- My teaching resources were cached
To correct this, I used Cloudflare’s Page Rules. I made sure to cache all pages starting with `/post/`, and disabled caching for pages starting with `/teach/`.
I don’t mind purging the cache manually if I edit a post, but having to do so before every class would be a bit too much trouble.
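The two rules amount to something like the following (configured in Cloudflare’s dashboard rather than a config file; the cache-level names are from their UI):

```
cdg.dev/post/*   ->  Cache Level: Cache Everything
cdg.dev/teach/*  ->  Cache Level: Bypass
```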
Load times for cached pages are now in the tens of milliseconds, a huge improvement over before.
Edit: After posting this article, I checked how long it took to retrieve this HTML page for the first time (i.e. before caching), and then reloaded the page with the browser cache disabled. The first response took 935 ms, and the second took 15 ms. Those are some serious gains!
Keeping The Content Up To Date
The source of my site is in a Git repository, so if I want to update my site on my server, I just need to `git pull && hugo`.
I automated this with a simple script on a systemd timer:
```sh
#!/bin/sh
# Rebuild the site only if the pull brought in new commits.
sha1=$(git rev-parse HEAD)
git pull origin master
sha2=$(git rev-parse HEAD)
[ "$sha1" = "$sha2" ] || hugo
```
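The timer is a pair of small units; a minimal sketch (unit names, paths, and interval are my own placeholders):

```ini
# site-update.service: runs the script above once per activation.
[Unit]
Description=Pull and rebuild the site

[Service]
Type=oneshot
WorkingDirectory=/opt/site
ExecStart=/opt/site/update.sh

# site-update.timer: activates the service every 15 minutes.
[Unit]
Description=Periodically pull and rebuild the site

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```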
With this in place, I won’t need to remote into my server to get new posts to appear online.
That should cover most of the underpinnings of my website.
I learned a lot going through this process, and I’m really happy with how it turned out.
I was especially pleased with:
- Arch Linux on Google Compute Engine
- Certbot
- Commento
- Cloudflare
The ease of use of each of these systems seriously blew me away.
I am also very pleased to now have a baseline understanding of DNS and HTTPS, since these are two areas that were complete black boxes to me before about a week ago.
Hopefully you learned a thing or two, too!
Thanks for reading.