This Bloody Website
Sunday, 04 July 2021 | 3382 words | 14-17 minutes
I was enamoured by the idea of creating a website, which makes sense, as it is common for people with a computing background to have their own portfolio site. While there are many free and paid services that provide you a platform to create your own custom site, I wanted to have total control[1] over the entire structure, styling and build process.
A Bit of History
The Domain Name
Earlier this year, the domain julian-heng.id.au was free for a year as part of a promotion with VentraIP, so I decided to try it out. It was pretty cool to own a domain with my name on it where I could host whatever I wanted.
Before obtaining this domain, I had used services such as No-IP and DuckDNS to have my own “domains”, giving my public-facing private server an easy alias. The reason I put “domains” in quotes is that these services don’t give you a real domain, but instead point a subdomain of theirs at your public IP address.
There isn’t really a downside to using these services. In fact, I think they’re great. They’re free[2] to use and you do not need to remember your public IP address to access your servers. But a big part of getting my own domain was the brand factor: I get to share the URL for my site with no other names attached to it. In my opinion, this looks much more professional, especially when sharing it on social media, résumés and the like.
A Blank Canvas
After obtaining the domain, I created a Gitlab repository to store the source code for my website. With this repo, I made my first commit:
<html>
</html>
This source code can be deployed using Gitlab Pages. In order to do so, a CI/CD (Continuous Integration/Continuous Deployment) pipeline needs to be defined.
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
With this pipeline, any new changes will be deployed to Gitlab Pages and can be accessed at https://julian-heng.gitlab.io/julian-heng.id.au.
Wait, what’s up with this weird-looking URL? Why is my name duplicated? This certainly does not look very professional. Well, this URL is the repository’s project page, where the subdomain is the username (or the group name) and the path is the project’s name.
Luckily, Gitlab Pages allows you to set up custom domains. Setting one up was fairly straightforward, as it needed a TXT record for verification and an A record pointing at the Gitlab Pages servers. The only annoying bit was waiting for the DNS records to propagate, which took a few days.
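Once the records were added, checking whether they had propagated was just a matter of querying them. Below is a quick sketch using dig; the verification record name follows Gitlab’s convention but is shown here purely as an illustration.

# Check that the verification TXT record and the apex A record are
# visible from public DNS. The record names here are illustrative.
dig +short TXT _gitlab-pages-verification-code.julian-heng.id.au
dig +short A julian-heng.id.au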
So with the domains and deployment set, all that’s left is the creation of the actual website. How hard can it be?
Website Version Uno
Initial designs of the website took inspiration from business cards, where a small, card-sized area is dedicated to content rather than a full page. The content was also fairly small in scope, only containing social media links and a list of projects.
Utilising the excellent Bootstrap library made creating the website layout fairly straightforward. The benefit of using Bootstrap is that most of the common CSS class rules are already defined, which means you can apply styles by assigning the self-describing classes to DOM elements.
The early version displays projects as a collapsible list. The collapsible list is a component of Bootstrap, but it requires loading the Bootstrap JavaScript companion and its dependency, jQuery.
By this point, an email alias was set up using ImprovMX, which allows emails to contact@julian-heng.id.au to be forwarded to me.
Dark mode was soon added to the website; it automatically follows the user’s preferred colour scheme, and can also be toggled by pressing the light bulb icon in the bottom right. After that, a better mobile layout was implemented.
So it seemed that the website was complete. It displayed my social media and contact info, as well as my projects. However, I was not fully satisfied with the final product. For one, this initial version did not make it easy to add new pages, such as blogs, news and other content. Thus, a new approach needed to be adopted.
The New and Improved
Okay, enough history, let’s get into the details of the new website.
Firstly, I wanted to set out some goals for this new website.
- Pure HTML and CSS only, no JavaScript.
- Retain feature parity with the old site.
- Be able to add new pages with relative ease.
- Use Markdown to generate pages.
- Use Sass to generate the CSS stylesheets.
- Have a script to easily build the website.
- Use Gitlab CI/CD in tandem with the build scripts.
Seems like a tall order to fulfill, and it is. Basically, in order to achieve all of these goals, you would need to write a static site generator framework from scratch. And on top of that, you would need to convert the old site into the new infrastructure.
But before anything else, a new design for the website needed to be created.
Redo, Redo and Redo
To get inspiration for the new design, I looked through a couple of user-submitted sites in the excellent personal-sites repository and decided to go with a simple two-column layout. The left column would contain the navigation and social links, and the right column would contain the page’s main content.
As for the mobile version, there were some initial pains with how it was structured inside the HTML document. The main problem was that I did not know how to properly reuse the same DOM nodes for both the mobile view and the desktop view. Thus, I settled on a naive solution where I duplicated the sidebar and styled it accordingly, hiding one or the other depending on the screen size.
<aside id="sidebar-container" class="...">
<div class="d-none d-md-block ...">
<div id="name-container">
...</div>
<hr class="..."/>
<div id="contact" class="...">
...</div>
<hr class="..."/>
</div>
<div class="d-md-none">
<div class="row ...">
<div id="name-container" class="...">
...</div>
<div class="...">
...</div>
</div>
</div>
</aside>
I knew that this was a bad solution, but for a proof of concept it was good enough, and I had to move on to the other objectives.
As mentioned in the list of goals, I wanted the site to use pure HTML and CSS only, but the earlier version of the website used JavaScript. It needed JavaScript for the collapsible menu (which drags in jQuery) as well as for the dark mode toggle switch.
The solution I ended up with was to drop the idea of the collapsible menu, which I wanted to do anyway, and to drop the ability to toggle between dark and light mode. I would actually prefer to have a way to toggle between themes without using any developer tools, but unfortunately you would need JavaScript for that, so using media queries will suffice for now.
Interestingly, omitting the toggle switch and relying on media queries also prevents issues such as FOUC (Flash of Unstyled Content) when the page loads, since with a JavaScript toggle the theme selection can only be applied after the script has loaded.
Once I’d given my projects descriptive summaries, the website was pretty much how I wished it had been originally.
I liked this design quite a bit, as it is easy to place content in the right column, and the sidebar can easily be extended with link groups. The only growing pain with this design is the mobile view; it would be fairly tricky to add new elements, as the sidebar would have to grow horizontally, not vertically. With that said, however, we can cross off goals 1 and 2.
Building the Tools After the Fact
I decided to start tackling goal 6, which did not seem too bad at first. The script would need to create and output the website to a dedicated build folder, which would include all of the compiled CSS and HTML files as well as the static assets. However, there were several things I needed to consider, as they could possibly impact the other goals.
Firstly, I had to decide what to write the script in. I’ve seen several static site setups that use make to generate the site, which would be pretty ideal. Make is capable of determining if a file has changed by checking its modified date, thus saving time when regenerating the site. Make can also perform parallel execution and is able to resolve dependencies between targets. It seems make would be the perfect tool; however, there were some issues if I were to use it:

- Since we’re using Gitlab CI/CD to build the website and we’re splitting off the tasks to different docker images, I cannot guarantee that make will be installed in each image.
- The benefit of incremental building would be lost, as the pipeline would always be building from scratch regardless of previous runs.
As a result, I decided to go with POSIX Shell[3] instead of make. Yes, POSIX Shell and not Bash, mainly because every docker image would at least include sh.[4] Because I’ve decided to stick with POSIX Shell, I do not get some of the nice features that make comes with, so a compromise was made.
To emulate the feeling of invoking the make command, I’ve written the script so that the targets are functions which can be called from the command line. The script does not handle any dependency resolution, so don’t expect it to work correctly if you invoke the targets in the wrong order.
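As an illustration of that pattern (this is a minimal sketch, not the actual build script; the target names, paths and tools invoked are made up), the shape of it is something like:

#!/bin/sh
# build.sh: minimal sketch of the "targets as functions" pattern.
# Target names and paths are illustrative, not the real build script.

set -eu

BUILD_DIR="./public"

clean() {
    rm -rf "${BUILD_DIR}"
}

assets() {
    mkdir -p "${BUILD_DIR}/assets"
    cp -r ./assets/. "${BUILD_DIR}/assets"
}

styles() {
    mkdir -p "${BUILD_DIR}/css"
    sass ./scss/main.scss "${BUILD_DIR}/css/main.css"
}

pages() {
    mkdir -p "${BUILD_DIR}"
    for f in ./pages/*.md; do
        pandoc --template ./templates/default.html \
            --output "${BUILD_DIR}/$(basename "${f%.md}").html" "${f}"
    done
}

# Each argument is treated as a target and called in order, with no
# dependency resolution, e.g. ./build.sh clean assets styles pages
for target in "$@"; do
    "${target}"
done

Invoking the function named by each argument is what gives it the make-like feel, without any of make’s dependency tracking.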
Pandoc and Sass and Problems Galore
It’s somewhat important to note that I did not write the build script all at once. In order to get an idea of how the build script should operate, there needs to be input for the script to process into a final product. Thus, I needed to start thinking about incorporating both Pandoc and Sass into the website.
Let’s talk about Pandoc. Pandoc is a really good tool mainly used to convert from one document format to another; in this particular case, from Markdown to HTML. Thus, every page on this website originates from Markdown. Pandoc allows the use of a template to wrap the generated HTML, which makes it really easy to insert the website’s sidebar into all of the generated pages.
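As a rough idea of what a single page conversion looks like (the file names and template here are made up for illustration):

# Wrap the HTML generated from the Markdown source in a template that
# already contains the sidebar and navigation. File names are made up.
pandoc --from markdown --to html5 \
    --standalone \
    --template ./templates/default.html \
    --output ./public/projects.html \
    ./pages/projects.md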
But first things first, in order to use Pandoc with the site, the main page would need to be split off into multiple template files for Pandoc to inject the contents easily. Here is where the first big challenge appeared, and also one of the main gripes I’ve had working on this project.
The challenge was to remove the sidebar and separate it into its own file. However, I was not fully happy with the naive solution for the mobile view, and decided to try and solve it once and for all. But before I could do that, I would also need to implement site navigation, as there would now be multiple pages on the site. And on top of all of this, I would need to have the build script working as intended in order to check if the generated pages were right. So it just turned into a long chain of problems for me to solve.
It was possibly the hardest problem of the site and probably the most painful one to solve, but I managed to get there in the end. With the site now fully capable of being converted from a list of Markdown files to HTML, progress was then made on converting the existing CSS stylesheets to SCSS… which ended up being the second most painful experience of the project.
Most of the pain with SCSS didn’t come from the language itself, but from how it fits into the overall build process. As a side note, the website had always used the pre-compiled Bootstrap library that is included in the official Bootstrap repository. But as a learning exercise in SCSS, I decided to use the Bootstrap SCSS files to create a version of Bootstrap that is optimised for this website.
Other SCSS tidbits include separating the custom stylesheet into multiple files with different responsibilities: for example, one stylesheet to handle the custom font definitions, another for the layout of the elements, and one for the colour scheme used throughout the website. The latter is probably the most important, as it made it really easy to generate a different stylesheet for each of the user’s preferred colour schemes without including any of the irrelevant rules from the main stylesheet.
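As an example of what that separation enables (assuming the dart-sass command line compiler and made-up file names), each colour scheme compiles down to its own small stylesheet:

# Compile the shared rules once, and each colour scheme into its own
# stylesheet, so a page only pulls in the rules it actually needs.
# File names are made up for illustration.
sass --style=compressed ./scss/main.scss ./public/css/main.css
sass --style=compressed ./scss/colours-light.scss ./public/css/light.css
sass --style=compressed ./scss/colours-dark.scss ./public/css/dark.css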
With all of that done, the CI/CD was updated to use the new build scripts and, finally, I had finished writing the website and its build infrastructure.
The Optimal Optimisation
The original site would transfer approximately 770 KB of data, the largest file transferred being the image. After the rewrite, this increased to 1.4 MB. The increase is mainly attributed to a couple of culprits:
- The picture is loaded twice due to my bad implementation of an image fallback.
- The fonts are in TrueType format.
In regard to the fonts, switching to a newer font format like WOFF2 reduces the font file size by 2-3 times, resulting in less data being transferred.
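The conversion itself is a one-liner per font, assuming the woff2_compress utility from Google’s woff2 project is available and an illustrative font directory:

# Convert every TrueType font to WOFF2; woff2_compress writes font.woff2
# next to font.ttf. The font directory here is illustrative.
for f in ./assets/fonts/*.ttf; do
    woff2_compress "${f}"
done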
For the image, I decided to drop Gravatar and the image fallback entirely and just self-host the image. The reason the fallback implementation is terrible is that it uses an <object> wrapped around an <img> tag. This means that if the data inside the <object> fails to load, it is replaced by the internal <img> tag. However, it also means that both pictures are loaded regardless of whether the first image loaded successfully. In the early version of the site, JavaScript was used to swap the image using the onerror attribute, but of course, to stay compliant with goal 1, we can’t use that.
For further optimisations, purgecss and minify were used to reduce the size of each file on the site, which brought down the transfer size for the homepage from 530 KB to 364 KB. Neat!
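Roughly speaking, the two tools slot in after the site has been generated. The sketch below assumes the purgecss CLI and the Go-based minify CLI, with illustrative paths and options that differ from the real build script:

# Strip CSS rules that none of the generated pages use, then write a
# minified copy of the whole build directory. Paths are illustrative,
# and the Go-based minify CLI is assumed here.
purgecss --css "./public/css/*.css" \
    --content "./public/**/*.html" \
    --output ./public/css

minify -r -o ./public-min/ ./public/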
Other miscellaneous optimisations include compressing each image using TinyPNG.
The Blog
Right, so the blog structure of the site took quite a while to figure out. Basically, all blog posts are contained within a subdirectory called blog, organised by year and then month. Each blog entry needs to contain the metadata necessary for the build scripts to generate it correctly. The title attribute is mandatory, as it is for all pages, but each blog post also requires both date and dateiso to be defined.
The reason for having two separate dates lies in how the build scripts generate the pages. The template files passed to Pandoc have conditionals that check whether variables like date exist, and if so add them to the final document. However, the date format on the blog post and on the index page are not the same, and Pandoc does not provide a way to convert between time formats; it would need to be done externally. I decided that it would be too much of a hassle to check whether each Markdown document contained a definition for date, and just manually defined both formats.
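For reference, the external conversion I decided against would have looked something like this, assuming GNU date is available (which, as noted, cannot be guaranteed in every docker image):

# Derive the human readable date from the ISO date externally.
# This needs GNU date; plain POSIX date has no -d option like this.
dateiso="2021-07-04"
date=$(date -u -d "${dateiso}" +"%A, %d %B %Y")
printf '%s\n' "${date}"    # Sunday, 04 July 2021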
The blog index is generated at build time: a Markdown file containing all blog posts and their publishing dates gets created, and this file is then used to generate the final index page for the blogs. Features of this index page include grouping posts by publication month and listing them in chronological order. I wish there was a better solution, because this means the source tree gets modified; in addition to cleaning the build directory, you also need to clean up any generated files inside the source tree.
However, as far as my testing goes, this blog publication system seems to work just fine.
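For the curious, the idea behind the index generation is roughly the following sketch; the metadata extraction is simplified, the grouping by month is omitted, and the paths are made up:

# Collect every blog post, sort newest first by its dateiso metadata,
# and emit one Markdown list entry per post. Assumes paths without
# spaces; the layout and metadata extraction are simplified.
index="./blog/index.md"
: > "${index}"

find ./blog -name '*.md' ! -name 'index.md' | while read -r post; do
    printf '%s %s\n' "$(sed -n 's/^dateiso: *//p' "${post}")" "${post}"
done | sort -r | while read -r dateiso post; do
    title=$(sed -n 's/^title: *//p' "${post}")
    printf -- '- %s: [%s](%s)\n' "${dateiso}" "${title}" "${post%.md}.html" >> "${index}"
done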
The End?
And with that, this bloody site is finished.
There are a couple of minuscule issues that may or may not be ironed out in the future:
- A better way of generating the stylesheets, as currently some rules are duplicated across multiple files due to the inclusion of some Bootstrap components which purgecss can’t purge.
- Currently, visiting the project page URL hilariously breaks the website, as assets are loaded using absolute paths.
- All blog assets such as images are placed in a similar folder structure under the /assets folder. I’m not sure if this is the best approach, and am currently wondering if it would be better to host the images from within the same directory as the blog post.
- I’ve yet to get Gitlab CI/CD to reuse a previous pipeline job’s artifacts when the current pipeline contains no changes for a particular job. Instead, it just recreates the same artifacts, which is less efficient.
I suppose time will tell if I can get around to fixing these issues, but really they don’t bother me too much at the moment.
Takeaways
So what are my main takeaways from working on this site? For one, there is a reason website generation tools like Jekyll exist. The amount of dedication one person needs to manually create their own build system for generating a static site is immense. A lot of pain and anguish went into both the design and the implementation of the infrastructure as a whole, and without further usage, it is fairly difficult to tell whether it will run into issues in the future. Plus, you end up maintaining it in the long run.
The other takeaway was that limiting yourself to no JavaScript is perhaps too restrictive. There were several instances where JavaScript would have made it much easier to avoid hackish HTML and CSS solutions, and in some cases including JavaScript could have added features such as email protection and a toggleable dark mode.
Overall, I feel that if you want a quick website to show off a couple of your projects and nothing else, you can easily create one using pure HTML and just deploy it. If you want a website with more content, such as blog posts, use an existing static site generator; it will save you a lot of time.
Closing Words
I’m pretty happy with the end product, even if it did take a couple of days to create. Was it over-engineered? Yes. Was it necessary? No. Was it worth it? Yes, and no. Would I recommend anybody else try it? Actually, yeah.
The experience from working on this project helped reinforce my skills with POSIX Shell, and it helped me understand more about new tools such as Gitlab CI/CD and Sass. While I don’t think I would attempt something like this again, the skills and knowledge gained from working with these tools are at least worth something.
Credits
- This blog post by Dylan Araps for the main inspiration, as well as describing their process for creating the Kiss Linux website. The original post was deleted, hence the archive.org link.
- This repo by lukasschwab for serving as a reference point for the organisation of blog posts.
- This blog post by fmash16 for their excellent write-up on their custom-made static site generator.
1. Okay well, maybe not total control, as I’m not using my own web server to host this website.
2. No-IP does require you to reverify your subdomains every month, and there’s a limit of up to 3 subdomains.
3. Slightly cheating here; it’s POSIX Shell plus the standard UNIX tools, which I recognise some docker images might exclude.
4. I am not actually sure if it is possible for a Linux-based docker image to not include a POSIX shell, but from what I can tell, the pipeline has never failed because of a missing shell.