Building an Eleventy Boilerplate, Part 2

Approx. reading time: 14 min (4287 words)

Welcome back to the series! Today we are going to focus on two tasks.

  1. Break apart the templates and set up inheritance
  2. Create ancillary meta files
    • sitemap
    • robots.txt
    • opensearch
    • webmanifest
    • humans.txt
    • 404.html

First, we will look into how Nunjucks works and then break apart the index page to make it more modular. If you want to check out the boilerplate in its current state, visit Eleventy Core on GitLab.

Nunjucks is a great templating language by Mozilla, with a ton of features that make development easier. One of the first things you will notice is that we will focus on using pure Nunjucks templating. The reason is that Eleventy's templating/layout engine is not as powerful as Nunjucks. In my opinion, it is remarkable that Eleventy allows me to mix many templating engines, but I find that I lose too much power if I overuse Eleventy layouts.

Setting up the templates

Before we go any further, I just wanted to say that many people will have strong feelings about the template folder structure. I am merely showing my way; you can change things around however you want.

First, we will need to add an _includes folder inside the _src folder, right beside the _data folder. _includes is where all of our templates will go. We will also need one more folder inside _includes, named layouts.

Our first template

Now we are going to create a new file named _base.njk inside the layouts folder. This file will hold the majority of the Nunjucks page-specific logic. All other template files will inherit from it, either directly or indirectly. For now, we will just copy the index file contents without any of the frontmatter. We will be using the Nunjucks parser to do the file includes, which means that any frontmatter would not be understood and could potentially cause errors.

Once we add the HTML portion of the index file, we can start setting up the inheritance chain hooks. The first concept that we will be using quite a lot is blocks. Blocks allow for a template to override portions of another template from which it inherits. Inheritance is useful because it cuts down on the lines of code needed and enables more flexibility.

Let’s start by adding some sections to the head of the document. After the viewport meta tag and before the closing head tag, add the following code:

{%- block styles -%}
{%- endblock -%}

There are a couple of things going on here. First, this block looks like a variable but uses {% %} instead of {{ }}. The percent signs indicate a Nunjucks tag rather than an output expression. The dashes on either side instruct the Nunjucks parser to collapse whitespace on whichever side the dash is on. Later, we will be able to override this block when we want to add some CSS files.

Now we are going to add some markup into the body. Replace everything between the body tags with the following:

{%- block main -%}
    {{- content -}}
{%- endblock -%}
{%- block scripts -%}
{%- endblock -%}

So this is more of the same except for the one variable. The content variable is special. It holds all of the content that is not frontmatter and is created by Eleventy in the background. As long as a template has this special variable, Eleventy’s layout system can use it.

Our second template and page

Next, we will need to create a new file in the _includes folder. This one will be named page.njk, and its purpose is to be the base of a normal web page. This one is fairly straightforward for now. Just copy the following into it:

{% extends "layouts/_base.njk" %}

{%- block main -%}
<main>
{%- block content -%}
    {{- content -}}
{%- endblock -%}
</main>
{%- endblock -%}

So in this example, we are extending the base template and then overriding the main block. Inside the main block, we add the main tags and create a new block called content.

We will be changing our index.njk file to use the new page template. Frontmatter is still used to define the data for this simplified file.

---
title: Eleventy Core Boilerplate
description: Eleventy Core boilerplate is a starter file that is unopinionated and optimized for today's demanding web applications. We are proud to present to you the most useful and realistic use of the Eleventy Static Site Generator.
keywords: Eleventy, 11ty, Static Site, Boilerplate, Starter File
---

{% extends "page.njk" %}


{% block content %}
<h1>Home Page</h1>
<p>This is a test paragraph.</p>
{% endblock %}

Notice that this time, the extends directive is just a file name. That is because this directive is always relative to _includes. That is why we placed our page.njk in that directory.

If we do another build, we should come up with a new HTML file. We have finished this feature, so we should make a commit.

Our second page

Now, we will need to add a 404 page, just in case our visitors get directed to a portion of the site that doesn't exist. The 404 is usually a simple page, so we will set up a simple way to create it. First, we need to make a small modification to our .eleventy.js file. Near the bottom of the file, where we set up our templating engines, we need to add Markdown, mostly because I want to show off Eleventy's layout functionality.

templateFormats: ["html", "njk", "md", "11ty.js"]

After that, create a file called 404.md in the _src directory. The frontmatter this time is slightly different; we will be using some of Eleventy's internal variables.

---
layout: page.njk
permalink: 404.html
---

The layout parameter tells Eleventy to look in the _includes folder and use that file as the template for this one. Remember the content variable we saw earlier? Eleventy will fill that variable with whatever is in this file below the frontmatter. The permalink variable tells Eleventy what the name of the output file should be. The permalink variable can also include a path, but for now, we'll just store this in the root of the site.
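For example, a hypothetical frontmatter like this (the errors path here is made up, not part of our boilerplate) would write the rendered output into a nested folder instead of the site root:

```yaml
---
layout: page.njk
permalink: errors/404.html
---
```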

So the next part will be written in Markdown and appear directly under the frontmatter.

# Ooops! We can't find that page!

We are really sorry, but we can't actually find that page right now. Please try going back to the [home page]({{ config.site.url.base }}).

As you can see, there is something a bit weird. Are we using Nunjucks variables inside of a Markdown file? Well, not exactly. What is happening is that Eleventy passes the Markdown file through the Liquid parser, which shares some syntax with Nunjucks; in this case, the two are indistinguishable. After that, the file goes through the Markdown parser. Once the Markdown is parsed and the output is assigned to the content variable, the page.njk template is executed. That is a lot of parsing for a straightforward file, and this process can become extremely slow if we have many such files. So I like to use this technique only for specific cases; if a page gets more complicated than this, I will generally switch to Nunjucks. Quickly make a commit, and we can move on!

Ok, so I made a slight mistake here: if you render the 404.html, you will notice that the HTML markup is escaped and shown as text in the output. I promise I will address this near the end of the article.

Fixing some mistakes

If you tried to build the project after the last step, you would have noticed that it doesn't build. That is because we are not defining all of our data in the frontmatter. The missing data then causes the computed data in eleventyComputed.js to fail. Fixing this is easy; we will simply add a few checks.

pageDescription needs to change to this:

if (((data.description && data.description.length) || 0) > meta.page.description.length) {

pageKeywords needs to change to this:

return (data.keywords || "").split(',').slice(0, (meta.page.keywords.count || 5)).map((item) => item.trim()).join(',');
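To see what these guards buy us, here is a minimal, self-contained sketch. The meta shape (and the surrounding function names) are hypothetical, for illustration only; the point is that both expressions now evaluate safely when the frontmatter is empty:

```javascript
// Minimal sketch: the guarded expressions no longer throw on missing frontmatter.
// The meta shape below is hypothetical, for illustration only.
const meta = { page: { description: { length: 10 }, keywords: { count: 5 } } };

function descriptionLongEnough(data) {
    // Falls back to 0 instead of throwing when data.description is undefined.
    return (((data.description && data.description.length) || 0) > meta.page.description.length);
}

function pageKeywords(data) {
    // Falls back to "" instead of throwing when data.keywords is undefined.
    return (data.keywords || "")
        .split(',')
        .slice(0, (meta.page.keywords.count || 5))
        .map((item) => item.trim())
        .join(',');
}

console.log(descriptionLongEnough({}));                       // false — no crash on missing data
console.log(pageKeywords({}));                                // ""
console.log(pageKeywords({ keywords: "a, b, c, d, e, f" }));  // "a,b,c,d,e" — capped at five
```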

Now it should build again! Make a quick commit, and let’s move on.

Enter a new template language

Up until now, we have been pretty comfortable with Nunjucks and Markdown. But now we are going to push the limits of what we can do in Eleventy. There are many tutorials on making a sitemap, but I like to do it differently. I want to use the JavaScript templating language.

Setting up for the sitemap

To get ready, we will want a way to keep specific files out of the sitemap, such as the sitemap itself, our robots.txt, and even our 404 page. These extra files don't need to be in there; we just need our content pages. This will entail another change to the eleventyComputed.js file. While we are in there, we will add two other properties that we will need: the change frequency and the crawl priority.

sitemap: {
    priority: (data) => {
        return (data.sitemap && data.sitemap.priority) || 0.5;
    },
    changeFreq: (data) => {
        return (data.sitemap && data.sitemap.changeFreq) || "monthly";
    },
    ignore: (data) => {
        return (data.sitemap && data.sitemap.ignore) || false;
    }
},

This code will default the priority to 0.5 and set the change frequency to monthly. These are pretty good defaults, but we will be overriding them most of the time anyway. The essential one is the ignore property. If you forget to set it to true on a file, then it will default to false. When this returns false, the file will have an entry added to the sitemap file.
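The defaults above can be sketched in isolation. The sample data objects below are made up to show how the fallbacks resolve:

```javascript
// Sketch of the computed-data fallbacks; the sample objects are made up.
const priority = (data) => (data.sitemap && data.sitemap.priority) || 0.5;
const changeFreq = (data) => (data.sitemap && data.sitemap.changeFreq) || "monthly";

console.log(priority({}));                             // 0.5 — the default
console.log(priority({ sitemap: { priority: 0.9 } })); // 0.9 — overridden from frontmatter
console.log(changeFreq({ sitemap: {} }));              // "monthly"
```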

Create a JavaScript template

The second thing to do is create a sitemap.11ty.js file in the _src folder. This file will be the beginning of our new template. It is important to note that JavaScript template files do not have frontmatter. Instead, they export an object (or a class) with two functions: the data function takes the frontmatter's place, and the render function is the body of the template file. We can start like this:

module.exports = {
  data: function () {
    return {
    }
  },
  render: function (data) {
  }
}

We need to define the filename using the permalink property and set the sitemap object's ignore property. These tasks are easy enough; we just need to change the data function to the following:

data: function () {
    return {
        permalink: 'sitemap.xml',
        sitemap: {
            ignore: true
        },
    }
},

Now that we set the properties, we just need to create the sitemap content. I like to delegate these jobs to other experts. In this case, that expert will be an NPM package named sitemap.

npm install sitemap --save-dev

Add the following code to the top of the sitemap.11ty.js file to include both the sitemap package and Node's internal streams library. These libraries will allow us to work asynchronously.

const { SitemapStream, streamToPromise } = require('sitemap')
const { Readable } = require('stream')

Since we are working asynchronously, add the async keyword to the render function.

render: async function (data) {

Now we need to gather all of the pages that belong in the sitemap. Luckily, we can get a reference to every page through the special collection called all. All we have to do is filter out the pages that should be excluded from the sitemap. This code does just that.

const links = data.collections.all
    .filter((item) => {
        return (item.data.sitemap && !item.data.sitemap.ignore);
    })
    .map((item) => {
        return {
            url: item.url,
            changefreq: item.data.sitemap.changeFreq,
            priority: item.data.sitemap.priority
        };
    });

This code is a bit busy, so let's break it down. I grab all of the pages out of the collections, filter them on the sitemap ignore property, and then map the remaining pages into the format the sitemap library expects.
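Here is a small, self-contained illustration of that filter-and-map step, with made-up page objects shaped like Eleventy collection items:

```javascript
// Made-up page objects shaped like Eleventy collection items.
const all = [
    { url: "/", data: { sitemap: { ignore: false, changeFreq: "weekly", priority: 0.8 } } },
    { url: "/404.html", data: { sitemap: { ignore: true } } },
    { url: "/drafts/", data: {} } // no sitemap data at all — also filtered out
];

const links = all
    .filter((item) => item.data.sitemap && !item.data.sitemap.ignore)
    .map((item) => ({
        url: item.url,
        changefreq: item.data.sitemap.changeFreq,
        priority: item.data.sitemap.priority
    }));

console.log(links); // only the home page survives the filter
```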

We then need to start the stream by initializing the sitemap object.

const stream = new SitemapStream({ hostname: data.config.site.url.base });

The last step is to pipe all of the link objects through the sitemap stream and await the return.

return await streamToPromise(Readable.from(links).pipe(stream));

The 404 redux

We need to revisit the 404 page one more time. This time we need to add the sitemap ignore property. In the frontmatter, all we need to do is add this snippet:

sitemap:
  ignore: true

You can now use this snippet to keep any file you want out of the sitemap.

Robots

We use the robots.txt file to help control things like crawl rates and which pages robots should crawl. There is no guarantee that robots will respect it 100%, but it does help with the ones that look at it. The file is relatively simple, but there are a ton of nuances to it. So, just to be safe, I am delegating the creation of this file to people who have dedicated real time to the task. Time for another NPM module! Namely, the generate-robotstxt package.

Since we are building another file that needs a selected set of routes, we will need more control points. These points come in the form of, you guessed it, computed data.

More computed data

Go back to the eleventyComputed.js file in the _data directory and add these lines to the exported object:

robots: {
    allow: (data) => {
        return (data.robots && data.robots.allow) || false;
    },
    ignore: (data) => {
        return (data.robots && data.robots.ignore) || false;
    }
},

The concept is a bit pedantic, but it gives me a lot of flexibility. The allow property simply puts the file into the robots allow list if true; if it is false, it puts the file into the disallow list. The ignore property removes the file from consideration in robots.txt altogether.

Adding more logic to the sitemap

Now that we have the robots data added, we have a chance to add some extra logic to the sitemap. We can change the sitemap ignore property to consider the robots properties.

return (data.sitemap && data.sitemap.ignore) || !(data.robots && !data.robots.ignore && data.robots.allow) || false;

Notice that we now check that the robots properties are not ignoring the page and that the allow property is true. If a page is in the robots allow list, it is probably suitable for the sitemap as well. This logic is up for debate, but I haven't run into any problems with it.
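A quick sketch of how that combined check behaves, with a few made-up data shapes (guarded against missing sitemap data, as elsewhere in eleventyComputed.js):

```javascript
// Made-up data shapes; guarded against missing sitemap data.
const sitemapIgnore = (data) =>
    (data.sitemap && data.sitemap.ignore) ||
    !(data.robots && !data.robots.ignore && data.robots.allow) ||
    false;

// Allowed by robots and not explicitly ignored: stays in the sitemap.
console.log(sitemapIgnore({ robots: { ignore: false, allow: true } }));  // false
// Disallowed by robots: dropped from the sitemap too.
console.log(sitemapIgnore({ robots: { ignore: false, allow: false } })); // true
// Explicitly ignored in the sitemap: dropped regardless of robots.
console.log(sitemapIgnore({ sitemap: { ignore: true }, robots: { allow: true } })); // true
```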

Back to our robots

Ok, so now we need to build out the robots file. If you haven’t already, then please use the following:

npm install generate-robotstxt --save-dev

Once this is installed, we can create a file named robots.11ty.js in the _src folder. Next is the base object for export:

const robotstxt = require("generate-robotstxt");

module.exports = {
  data: function () {
    return {
    }
  },
  render: function (data) {
  }
}

Next we need to set the data method:

data: function () {
    return {
        permalink: 'robots.txt',
        sitemap: {
            ignore: true
        },
        robots: {
            ignore: true
        }
    }
},

Set the filename with the permalink, and set both the sitemap and robots ignore properties to true. These ancillary files should not appear in the sitemap or in robots.txt.

Next is the render method. It is pretty simple. We collect the allowed and disallowed pages into two arrays.

const allowedPages = data.collections.all
    .filter((item) => {
        return (item.data.robots && !item.data.robots.ignore && item.data.robots.allow);
    })
    .map((item) => { return item.url; });

const disallowedPages = data.collections.all
    .filter((item) => {
        return (item.data.robots && !item.data.robots.ignore && !item.data.robots.allow);
    })
    .map((item) => { return item.url; });

The logic isn't too crazy. It just removes ignored files and then sorts the rest into allowed or disallowed piles based on the allow property. We could optimize this code a lot more, but we are going to stick with this.
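If you do want to optimize, one option is a single pass with reduce instead of two filter-and-map passes over the collection. The item shapes below are illustrative:

```javascript
// One possible optimization: a single pass with reduce instead of two
// filter-and-map passes. The item shapes below are illustrative.
const all = [
    { url: "/", data: { robots: { ignore: false, allow: true } } },
    { url: "/404.html", data: { robots: { ignore: false, allow: false } } },
    { url: "/sitemap.xml", data: { robots: { ignore: true } } }
];

const { allowedPages, disallowedPages } = all.reduce((acc, item) => {
    const robots = item.data.robots;
    if (robots && !robots.ignore) {
        // Push into the allowed or disallowed pile based on the allow flag.
        (robots.allow ? acc.allowedPages : acc.disallowedPages).push(item.url);
    }
    return acc;
}, { allowedPages: [], disallowedPages: [] });

console.log(allowedPages);    // [ '/' ]
console.log(disallowedPages); // [ '/404.html' ]
```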

The next step is to create a policy object. It is a simple setup, but in a later tutorial, we will be adding a lot more features.

const robots = {
    policy: [{
        userAgent: "*",
        allow: allowedPages,
        disallow: disallowedPages
    }],
    sitemap: new URL("/sitemap.xml", data.config.site.url.base).toString(),
    host: data.config.site.url.base
};

Right now, I am setting the user agent to all (*), but I could get a lot more fancy and selective. I point at the sitemap file for auto-discovery, and I set the host property for the file; the host property makes relative URLs work consistently. This last part will render the file out to the file system.

return await robotstxt(robots);

Last but not least, set the render function as async. This modification will allow the whole thing to render asynchronously. And now your robots.txt implementation is done! That was easy!

Not quite! Now go back to the sitemap template and add the robots ignore property. Then go back to the 404 page and add the robots allow property, but set it to false. In index.njk, you can either leave things as they are or set robots allow to true.

Time for the manifest

Again, we have another ancillary file to create, and again, we will look to outside sources. This time we are going to use the NPM module @pwa/manifest. This task will require many new data properties to keep these files as configurable as possible. From now on, I will be moving quite a bit faster in my explanations.

In _src/_data/config/site.js add:

shortname: "Eleventy Core",
theme: {
    color: "#fafafa",
    backgroundColor: "#fafafa",
}

These data properties just add some more detail to the site.

In _src/_data/config/meta.js add:

icons: {
    webmanifest: [
        {
            src: "icon.png", // these values are for testing, change these for your own site.
            type: "image/png",
            sizes: "192x192"
        }
    ]
},
webmanifest: {
        filename: "site.webmanifest",
        startUrl: "/?utm_source=homescreen",
        display: ["fullscreen", "standalone", "minimal-ui", "browser"][3]
    }

These entries are for some icons and a few other details. I decided to make the name of this file configurable, as there are a few different preferences. The other files have names that are either so common that they could be codified or specified by the appropriate RFC.

Create _src/webmanifest.11ty.js and add:

const webman = require("@pwa/manifest");
const meta = require("./_data/config/meta");


module.exports = {
  data: function () {
    return {
      permalink: meta.webmanifest.filename || false,
      sitemap: {
        ignore: true
      },
      robots: {
        ignore: true
      }
    }
  },
  render: async function (data) {
    const manifest = await webman({
      name: data.config.site.name,
      short_name: data.config.site.shortname,
      icons: data.config.meta.icons.webmanifest,
      start_url: data.config.meta.webmanifest.startUrl,
      display: data.config.meta.webmanifest.display,
      background_color: data.config.site.theme.backgroundColor,
      theme_color: data.config.site.theme.color,
    });

    return JSON.stringify(manifest);
  }
}

This code should create the manifest and send it to the file system.

Let the searches commence

We are getting to the end of our ancillary files, but we still have work to do. The opensearch.xml file is probably the most manageable file to implement. We only need one NPM module, xml. The OpenSearch file is the only one for which I couldn't find a purpose-built module, so I had to roll my own; the good thing is that it was easy.

const xml = require("xml");

module.exports = {
  data: function () {
    return {
      permalink: 'opensearch.xml',
      sitemap: {
        ignore: true
      },
      robots: {
        ignore: true
      }
    }
  },
  render: function (data) {
    const search = [
      {
        OpenSearchDescription: [
          { _attr: { xmlns: "http://a9.com/-/spec/opensearch/1.1/" } },
          { ShortName: data.config.site.shortname },
          { Description: `Use Google to search ${data.config.site.name}` },
          {
            Url:
            {
              _attr: {
                type: "application/rss+xml",
                template: `http://www.google.com/search?q=site:${data.config.site.url.base} {searchTerms}`
              }
            }

          }
        ]
      }
    ];

    return xml(search, { declaration: true });
  }
}

That's it! I just needed to create a JSON representation of the OpenSearch XML schema and then convert it. This file allows browsers to discover the site's search engine; in this case, I am just pointing them to Google with a site: operator.

And now we should thank the humans

I am a big fan of the humans.txt project, and I think all sites should have one. So, why not make it easy and add it to our boilerplate!

We will need the NPM package humans-generator; it is nifty and easy to use. We also need an additional configuration file. This file is slightly different, though, so we will put it in another directory: _src/_data/other/humans.js. Once you have created this file, follow the basic template I have outlined below:

module.exports = {
    team: {
        "Eleventy Core Developer": "Brent Ritchie",
        "twitter": "@thefrugaldeveloper",
        "website": "thefrugaldeveloper.life",
        "email": "brent@thefrugaldeveloper.life"
    },
    thanks: [
        "Zach Leatherman (@zachleat)"
    ],
    site: {
        Standards: "CSS, HTML, robots.txt, humans.txt, SVG, Sitemap, opensearch, NPM, node",
        Software: "Eleventy, VS Code, Eleventy Core"
    },
    note: "Eleventy Core is made in Canada!"
}

Next is to create the file at _src/humans.11ty.js and fill it with the following code:

const humans = require('humans-generator');

module.exports = {
  data: function () {
    return {
      permalink: 'humans.txt',
      sitemap: {
        ignore: true
      },
      robots: {
        ignore: true
      }
    }
  },
  render: async function (data) {
    return await new Promise((resolve, reject) => {
      humans(data.other.humans, (error, humans) => {
        if (error) {
          reject(error);
        } else {
          resolve(humans.join('\n'));
        }
      })
    });
  }
}

And that is it; we have a fully functional humans.txt.

Revisiting the templates

There is one last thing that I want to do. Right now, the 404 page will work, but you may have noticed that the HTML is escaped and doesn't look nice. There is an easy fix for that: a Nunjucks filter that allows the HTML to pass through from the Markdown into the Nunjucks template.

In the page.njk and _base.njk templates, change the line that has the content variable to:

{{- content | safe -}}

Automatic for the robots

Ok, so now we have a ton of ancillary files, and we need to notify the robots of their existence. There are a couple of ways of doing this. Remember, we added the sitemap to the robots.txt file. This concept is called auto-discovery; it allows the robots to find other essential files by merely parsing files they would read anyway.

We are going to go back to the _base.njk file and start there. We will need to add a few more lines of code to the head. First of all, we will add a new block called discovery. This block will house any declarations that are there purely for auto-discovery.

{%- block discovery -%}
    <link rel="search" type="application/opensearchdescription+xml" title="Search {{ title }}" href="{{- config.site.url.base -}}opensearch.xml"/>
    <link rel="sitemap" type="application/xml" title="Sitemap" href="{{- config.site.url.base -}}sitemap.xml"/>
    <link type="text/plain" rel="author" href="{{- config.site.url.base -}}humans.txt"/>
{%- endblock -%}

I like to make sure this stuff is just below the viewport declaration and before any other imports. Again this is just personal preference, so you can organize it any way you want.

Now we are going to give our website some application-specific code. This code will help with a few extra ergonomic features and make additional information available to browsers. This is also where we will notify the robots about the webmanifest, if it has been given a filename.

{%- block appHead -%}
    {%- if config.site.theme.color -%}
    <meta name="theme-color" content="{{- config.site.theme.color -}}">
    {%- endif -%}
    {%- if config.meta.webmanifest.filename -%}
    <link rel="manifest" href="{{- config.meta.webmanifest.filename -}}">
    {%- endif -%}
{%- endblock -%}

The app head block goes directly under the auto-discovery block. It is simple but will open up a considerable number of possibilities later.

The last thing we need to do is to move the base tag to just under the meta charset tag. The base tag will ensure that all relative URLs work consistently.

Now that brings us to the end of this part of our series. There are a few things to note: I did add a filename config parameter for the webmanifest, and if it returns a falsy value, Eleventy will skip writing the file. I could have done that for all of the ancillary files but chose to keep it simple for this part. If you want to do that, it shouldn't be too difficult to add those configuration parameters. I will leave that as an exercise for the reader, though.
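As a hedged sketch of that exercise: the pattern is just a data function whose permalink falls back to false when no filename is configured, since Eleventy skips writing a file whose permalink is false. The helper name and config shape here are made up for illustration:

```javascript
// Hypothetical helper: builds the data-function payload for an ancillary file.
// When no filename is configured, permalink is false and Eleventy skips the file.
function ancillaryData(filename) {
    return {
        permalink: filename || false,
        sitemap: { ignore: true },
        robots: { ignore: true }
    };
}

console.log(ancillaryData("site.webmanifest").permalink); // "site.webmanifest"
console.log(ancillaryData(undefined).permalink);          // false — nothing is written
```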

Stay tuned because next time, we will be adding to the templates and setting up JavaScript and CSS pipelines!