To reduce the chance of human error affecting a production site (especially in a team environment) it is good
practice to implement a CI/CD pipeline. This post covers the implementation of a CI/CD pipeline for the AlphaGeek
site. In future I will write a more generic post about CI/CD pipelines.
There is a large number of CI/CD pipeline providers. Some offer self-hosted solutions, some charge for even
the most basic account, some are free for open-source projects, and some provide limited free accounts.
For my purposes I selected SemaphoreCI as it integrates with GitHub,
is free (with significant usage limitations) and appears to provide a high level of configuration.
Because the point of the CI/CD pipeline is to improve the reliability of my blog, I added a number of new packages to
my blog's requirements as part of implementing it.
If any task in the pipeline fails, no subsequent actions should run. The final pipeline design will operate as
follows:
The first task was to sign up to SemaphoreCI. This was as simple as clicking the
large Sign up with GitHub button and selecting the repository I wanted to integrate.
To ensure the markdown for all the posts is formatted consistently, `markdownlint-cli`
was installed.

```shell
npm i --save-dev markdownlint-cli
```
A custom configuration was created for this dependency so it operates how I want it to. This configuration enforces the
top-level heading as level 2 (level 1 headings are used automatically for the post title in my chosen template),
disables the maximum line length check because it is not compatible with Hexo's default Markdown interpreter, and
removes the ? character from the heading punctuation validation.
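The exact `.mdconfig` isn't shown in the post, but under those decisions it would look something like this, using markdownlint's rule IDs (MD002 for first-heading level, MD013 for line length, MD026 for trailing punctuation in headings):

```json
{
  "default": true,
  "MD002": { "level": 2 },
  "MD013": false,
  "MD026": { "punctuation": ".,;:!" }
}
```

Removing `?` from MD026's default punctuation list (`.,;:!?`) allows headings that end in a question mark.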
To ensure there is no invalid HTML on the site, I implemented node-w3c-validator.
This tool depends on Java, so that may need to be installed as well.
Pa11y-CI is a wrapper for Pa11y that makes
it easier to integrate into a CI/CD pipeline.
Pa11y scans an HTML file for accessibility issues. As my blog has a large
number of existing issues, I have configured the scripts that run it to be allowed to fail.
To simplify running the existing NPM commands and the new dependencies I added a number of elements to the scripts
section of the package.json file, modified some of the definitions and re-ordered them to make more sense to me.
```json
"scripts": {
  "jest": "jest",
  "mdlint-drafts": "markdownlint --config .mdconfig ./source/_drafts",
  "mdlint": "markdownlint --config .mdconfig ./source/_posts",
  "precheck": "npm run jest && npm run mdlint",
  "clean": "hexo clean",
  "build": "hexo generate",
  "cleanbuild": "npm run clean && npm run build",
  "linkcheck": "blcl --filter-level 3 --get --recursive --exclude /atom.xml --exclude /favicon.png --exclude http://2019-01-28-securing-s3.demo.alphageek.com.au.s3-website-us-east-1.amazonaws.com --exclude http://localhost:4000 --exclude http://dev./%3Cyour_domain%3E/ public",
  "htmlcheck": "node-w3c-validator -v -s -i public/",
  "a11ycheck": "pa11y-ci public/*.html public/*/*.html public/*/*/*.html public/*/*/*/*.html public/*/*/*/*/*.html || true",
  "validate": "npm run linkcheck && npm run htmlcheck && npm run a11ycheck",
  "buildtest-local": "npm run precheck && npm run cleanbuild && npm run validate",
  "precommit": "npm run buildtest-local"
},
```
If you are running the default Hexo theme (landscape), it is not W3C compliant, so you will need to change the
`htmlcheck` script in your `package.json` to `node-w3c-validator -v -s -i public/ || true`. This will display the
output when you run the precommit hook, but will not enforce the HTML validation.
Implementing a Git pre-commit hook is simply a matter of creating an executable file at `.git/hooks/pre-commit` and
populating it with a valid shell script; the commit only proceeds if the script exits with code 0.
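A minimal hook, assuming the `precommit` NPM script defined above, might look like the following (remember to make the file executable with `chmod +x .git/hooks/pre-commit`):

```shell
#!/bin/sh
# Minimal pre-commit hook: run the full local build/test chain and
# abort the commit if any step fails (non-zero exit status).
npm run precommit
```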
The build pipeline is defined in `.semaphore/semaphore.yml`:

```yaml
version: v1.0
name: Hexo Serverless Build Pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  # Prepare the build environment
  - name: Prepare
    task:
      jobs:
        # Make sure we've got the right Java version configured
        - name: Set Java Version
          commands:
            - change-java-version 8
        # Run NPM install, using Semaphore's cache where possible
        - name: NPM Install
          commands:
            # Update NPM because it's so old
            - cache restore npm
            - npm i -g npm
            - cache store npm .nvm/versions/node/v8.11.3/lib/node_modules/npm
            - checkout
            # Reuse dependencies from cache and avoid installing them from scratch:
            - cache restore node-modules-$(checksum package-lock.json)
            - npm ci
            - cache store node-modules-$(checksum package-lock.json) node_modules
  # Run the validation routines that don't require a build
  - name: Validate
    task:
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
      jobs:
        # Run the jest test suite
        - name: Run Jest Tests
          commands:
            - npm run jest
        # Run the Markdown linter
        - name: MD Lint
          commands:
            - npm run mdlint
  # Build the deployment files
  - name: Build
    task:
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
      jobs:
        # Ensure we have a clean build directory, generate the files and add asset versioning
        - name: Build Site
          commands:
            - npm run clean
            - npm run build
            - cache store public-$(find source -type f -exec cat {} + | checksum) public
  # Run tests on the deployment files
  - name: Test Locally
    task:
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
          - cache restore public-$(find source -type f -exec cat {} + | checksum)
      jobs:
        # Check that all links are valid
        - name: Test link validity
          commands:
            - npm run linkcheck
        # Check that the HTML is valid
        - name: Test W3C compatibility
          commands:
            - npm run htmlcheck
        # Check if we meet a11y standards
        - name: Test Accessibility
          commands:
            - npm run a11ycheck
```
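The cache keys in the pipeline are content-derived: `checksum package-lock.json` keys the `node_modules` cache, while hashing every file under `source` keys the generated `public` directory, so a cache entry is only reused when the inputs are identical. A rough local equivalent, with `sha256sum` standing in for Semaphore's `checksum` utility (an assumption about its behaviour; any stable content hash works the same way):

```shell
# Create a tiny stand-in source tree, then derive the cache key from
# the concatenated contents of everything under source/.
mkdir -p source && echo "hello world" > source/post.md
key="public-$(find source -type f -exec cat {} + | sha256sum | cut -d' ' -f1)"
echo "$key"
```

Editing any file under `source/` changes the hash and therefore forces a fresh build rather than a cache restore.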
As a pre-commit hook has been added, your next commit may be rejected until all existing errors have been fixed. In my
case I had to fix errors in the markdown files for each of my previous posts.
At this point the CI/CD pipeline should successfully build and test the site.
To enable the automated deployment of content changes, and to provide functionality to deploy infrastructure and code
changes, some additional scripts need to be defined in the package.json file. These scripts will allow for deployment
and testing of both the UAT and Production environments.
```json
"deploy-uat-infra": "npx serverless deploy -s dev",
"deploy-uat-site": "npx serverless s3deploy -s dev -v",
"deploy-uat": "npm run deploy-uat-infra && npm run deploy-uat-site",
"linkcheck-uat": "npx blc --filter-level 3 --get --recursive --exclude /atom.xml --exclude /favicon.png --exclude http://2019-01-28-securing-s3.demo.alphageek.com.au.s3-website-us-east-1.amazonaws.com --exclude http://localhost:4000 --exclude http://dev./%3Cyour_domain%3E/ --user-agent '**PASSWORD_DEFINED_IN_SERVERLESS_CONFIGURATION** Tester' http://dev.alphageek.com.au",
"test-uat": "npm run deploy-uat && npm run linkcheck-uat",
"buildtest-uat": "npm run buildtest-local && npm run test-uat",
"deploy-prod-infra": "npx serverless deploy -s prod",
"deploy-prod-site": "npx serverless s3deploy -s prod -v",
"deploy-prod": "npm run deploy-prod-infra && npm run deploy-prod-site",
"linkcheck-prod": "npx blc --filter-level 3 --get --recursive --exclude /atom.xml --exclude /favicon.png --exclude http://2019-01-28-securing-s3.demo.alphageek.com.au.s3-website-us-east-1.amazonaws.com --exclude http://localhost:4000 --exclude http://dev./%3Cyour_domain%3E/ http://alphageek.com.au",
"test-prod": "npm run deploy-prod && npm run linkcheck-prod",
"buildtest-prod": "npm run buildtest-uat && npm run test-prod"
```
# Remove Deployment Functionality from Serverless Framework
As the deployment process has been migrated to NPM commands, the build and deployment functionality needs to be
removed from the `serverless.yml` configuration file. Delete the following lines from the file:
```yaml
scripts:
  hooks:
    # Run these commands when creating the deployment artifacts
    package:createDeploymentArtifacts: >
      hexo clean &&
      hexo generate
    # Run these commands after infrastructure changes have been completed
    deploy:finalize: >
      sls s3deploy -s ${self:custom.stage}
```
To enable deployment to AWS, SemaphoreCI will need access to our AWS credentials. Credentials and other secrets
should never be stored in a code repository, so we need a method to securely store the credentials on SemaphoreCI.
This can be done using the SemaphoreCI command line utility. You will also need to know your SemaphoreCI organization
name and your SemaphoreCI API token (which can be found on the SemaphoreCI account page).
```shell
curl https://storage.googleapis.com/sem-cli-releases/get.sh | bash
sem connect **ORGANIZATION**.semaphoreci.com **API_TOKEN**
```
To provide credentials and secrets to SemaphoreCI, a file needs to be created. So that it doesn't accidentally get
committed to the code repository, we begin by adding an entry to the `.gitignore` file.
```
# Semaphore secret files
.semaphore/secrets/*
```
It's now safe to create a file containing your AWS credentials. Create a new file at `.semaphore/secrets/aws.yml`
(updated with your AWS details).
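The exact secret file isn't reproduced here; a sketch of what it might look like, assuming Semaphore's v1beta secret format and matching the `alphageek-aws` secret name used by the deployment pipelines below:

```yaml
# Sketch only — substitute your own credential values.
apiVersion: v1beta
kind: Secret
metadata:
  name: alphageek-aws
data:
  env_vars:
    - name: AWS_ACCESS_KEY_ID
      value: "**YOUR_ACCESS_KEY_ID**"
    - name: AWS_SECRET_ACCESS_KEY
      value: "**YOUR_SECRET_ACCESS_KEY**"
```

The secret can then be registered on SemaphoreCI with `sem create -f .semaphore/secrets/aws.yml`.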
With the secret in place, define the production content deployment pipeline at `.semaphore/prod-content.yml`:

```yaml
version: v1.0
name: AlphaGeek Production Content Deployment Pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  # Use Serverless to deploy to Production
  - name: Publish to Prod
    task:
      # Import the secret environment variables
      secrets:
        - name: alphageek-aws
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
          - cache restore public-$(find source -type f -exec cat {} + | checksum)
      jobs:
        - name: Deploy Content
          commands:
            - npm run deploy-prod-site
  # Run tests on the Production site
  - name: Test Production
    task:
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
      jobs:
        # Check that all links are valid
        - name: Test link validity
          commands:
            - npm run linkcheck-prod
```
Now we create files for both UAT and Production infrastructure and Lambda function changes at .semaphore/uat-infra.yml
and .semaphore/prod-infra.yml respectively.
Here is the production version:

```yaml
version: v1.0
name: AlphaGeek Production Infrastructure Deployment Pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  # Use serverless to deploy to Production
  - name: Publish to Production
    task:
      # Import the secret environment variables
      secrets:
        - name: alphageek-aws
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
      jobs:
        - name: Deploy Infrastructure
          commands:
            - npm run deploy-prod-infra
            - cache store serverless-$SEMAPHORE_GIT_BRANCH .serverless
  # Run tests on the Production site
  - name: Test Production
    task:
      prologue:
        commands:
          - checkout
          - cache restore npm
          - cache restore node-modules-$(checksum package-lock.json)
      jobs:
        # Check that all links are valid
        - name: Test link validity
          commands:
            - npm run linkcheck-prod
```
# Add UAT and Production Deployment Configuration to Semaphore
Now that the deployment processes have been defined, we need to add triggers for them to the primary SemaphoreCI
configuration file.
```yaml
promotions:
  - name: Deploy Content to UAT
    pipeline_file: uat-content.yml
    auto_promote_on:
      - result: passed
        branch:
          - ^develop$
  - name: Deploy Infra to UAT
    pipeline_file: uat-infra.yml
  - name: Deploy Content to Production
    pipeline_file: prod-content.yml
    auto_promote_on:
      - result: passed
        branch:
          - ^master$
  - name: Deploy Infra to Production
    pipeline_file: prod-infra.yml
```
Because the link checking tool we're using doesn't support a custom authentication header, we need another
method to gain access. For this we will use a custom user-agent string, which is defined in `config/resources.yml`.
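The post doesn't show the resource definition itself. One way such user-agent gating can be implemented for an S3 website (purely an assumption about the approach, not the author's actual resource) is a bucket policy that only allows `s3:GetObject` when the `aws:UserAgent` contains the shared token; the `uatPassword` variable and resource names below are hypothetical:

```yaml
# Hypothetical sketch of a user-agent-gated bucket policy for the dev site.
# `uatPassword`, `WebsiteS3Bucket` usage and the statement layout are assumptions.
WebsiteS3BucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket:
      Ref: WebsiteS3Bucket
    PolicyDocument:
      Statement:
        - Sid: AllowTesterUserAgent
          Effect: Allow
          Principal: '*'
          Action: s3:GetObject
          Resource: arn:aws:s3:::${self:custom.domain.domainname}/*
          Condition:
            StringLike:
              aws:UserAgent: '*${self:custom.uatPassword}*'
```

This matches the `--user-agent '**PASSWORD_DEFINED_IN_SERVERLESS_CONFIGURATION** Tester'` flag used by the `linkcheck-uat` script.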
We need to manually deploy the new infrastructure before pushing the code to GitHub, as some of the functionality in
the CI/CD pipeline will fail with the current configuration.
Publishing new content is now as easy as committing the changes and pushing the develop branch to GitHub and waiting
for the deployment to the development site to complete. Once you’ve completed any user acceptance testing (UAT) you can
merge the develop branch into master and push that to GitHub. Once all testing has completed the content will
automatically be published to your production site.
# How to Deploy New Infrastructure and Lambda Functions
Deploying new infrastructure is similar to the process for new content, but once the tests and build have completed on
SemaphoreCI you will need to press a button to deploy. Just follow these simple steps:
1. Log in to SemaphoreCI
2. Locate the build you wish to deploy the infrastructure from
3. Open the build
4. Click the Promote button under the deployment you wish to run
For reference, here are the relevant sections of the final `serverless.yml`:

```yaml
# Plugins for additional Serverless functionality
plugins:
  - serverless-s3-deploy
  - serverless-plugin-scripts

# Configuration for AWS
provider:
  name: aws
  runtime: nodejs8.10
  profile: serverless
  # Some future functionality requires us to use us-east-1 at this time
  region: us-east-1

custom:
  # This enables us to use the default stage definition, but override it from the command line
  stage: ${opt:stage, self:provider.stage}
  # This enables us to prepend the stage name for non-production environments
  domain:
    fulldomain:
      prod: ${self:custom.domain.domain}
      other: ${self:custom.stage}.${self:custom.domain.domain}
    # This value has been customised so I can maintain multiple demonstration sites
    domain: ${self:custom.postname}.${self:custom.domain.zonename}
    domainname: ${self:custom.domain.fulldomain.${self:custom.stage}, self:custom.domain.fulldomain.other}
    # DNS Zone name (this is only required so I can maintain multiple demonstration sites)
    zonename: alphageek.com.au
    cacheControlMaxAgeHTMLByStage:
      # HTML Cache time for production environment
      prod: 3600
      # HTML Cache time for other environments
      other: 0
    cacheControlMaxAgeHTML: ${self:custom.domain.cacheControlMaxAgeHTMLByStage.${self:custom.stage}, self:custom.domain.cacheControlMaxAgeHTMLByStage.other}
  sslCertificateARN: arn:aws:acm:us-east-1:165657443288:certificate/61d202ea-12f2-4282-b602-9c3b83183c7a
  assets:
    targets:
      # Configuration for HTML files (overriding the default cache control age)
      - bucket:
          Ref: WebsiteS3Bucket
        files:
          - source: ./public/
            headers:
              CacheControl: max-age=${self:custom.domain.cacheControlMaxAgeHTML}
            empty: true
            globs:
              - '**/*.html'
      # Configuration for all assets
      - bucket:
          Ref: WebsiteS3Bucket
        files:
          - source: ./public/
            empty: true
            globs:
              - '**/*.js'
              - '**/*.css'
              - '**/*.jpg'
              - '**/*.png'
              - '**/*.gif'
  # AWS Region to S3 website hostname mapping
  s3DNSName:
    us-east-2: s3-website.us-east-2.amazonaws.com
    us-east-1: s3-website-us-east-1.amazonaws.com
    us-west-1: s3-website-us-west-1.amazonaws.com
    us-west-2: s3-website-us-west-2.amazonaws.com
    ap-south-1: s3-website.ap-south-1.amazonaws.com
    ap-northeast-3: s3-website.ap-northeast-3.amazonaws.com
    ap-northeast-2: s3-website.ap-northeast-2.amazonaws.com
    ap-southeast-1: s3-website-ap-southeast-1.amazonaws.com
    ap-southeast-2: s3-website-ap-southeast-2.amazonaws.com
    ap-northeast-1: s3-website-ap-northeast-1.amazonaws.com
    ca-central-1: s3-website.ca-central-1.amazonaws.com
    eu-central-1: s3-website.eu-central-1.amazonaws.com
    eu-west-1: s3-website-eu-west-1.amazonaws.com
    eu-west-2: s3-website.eu-west-2.amazonaws.com
    eu-west-3: s3-website.eu-west-3.amazonaws.com
    eu-north-1: s3-website.eu-north-1.amazonaws.com
    sa-east-1: s3-website-sa-east-1.amazonaws.com
  # Determine what resources file to include based on the current stage
  customConfigFile: ${self:custom.customConfigFiles.${self:custom.stage}, self:custom.customConfigFiles.other}
  customConfigFiles:
    prod: prod
    other: other

# Define the resources we will need to host the site
resources:
  # Include the resources file
  - ${file(config/resources.yml)}
  # Include the outputs file
  - ${file(config/outputs.yml)}
  # Include a custom configuration file based on the environment
  - ${file(config/resources/environment/${self:custom.customConfigFile}.yml)}
```
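The repeated `${map.${self:custom.stage}, map.other}` pattern in the configuration is Serverless' variable fallback: use the stage-specific value when one exists, otherwise fall back to the `other` entry. In plain JavaScript the lookup is roughly as follows (the domain values are illustrative, not the resolved production values):

```javascript
// Rough JavaScript equivalent of the Serverless variable fallback, e.g.
// ${self:custom.domain.fulldomain.${self:custom.stage}, self:custom.domain.fulldomain.other}
const fulldomain = { prod: 'alphageek.com.au', other: 'dev.alphageek.com.au' }; // example values

function resolveByStage(map, stage) {
  // Use the stage-specific entry when present, else fall back to 'other'.
  return map[stage] !== undefined ? map[stage] : map.other;
}

console.log(resolveByStage(fulldomain, 'prod')); // 'alphageek.com.au'
console.log(resolveByStage(fulldomain, 'dev'));  // 'dev.alphageek.com.au'
```

This is how a single configuration file serves both the production site and any number of stage-prefixed UAT sites.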
Finally, the Jest test for the Lambda@Edge URL-rewrite function starts by loading the function under test and a
reusable fixture:

```javascript
// Load the file to test
const urlRewriteTest = require('../../functions/urlRewrite');
// Load some data that can be reused for other Lambda@Edge functions
const lambdaAtEdgeFixture = require('../fixtures/lambdaAtEdge');
```
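The rewrite logic itself isn't shown in the post; as a self-contained illustration of what such a Lambda@Edge URL rewrite typically does (my sketch, not the actual `urlRewrite` code), directory-style URIs are mapped to the `index.html` object S3 actually stores:

```javascript
// Illustrative sketch only: append index.html to directory-style URIs
// so CloudFront/S3 can serve "pretty" URLs like /2019/01/28/securing-s3/.
function rewriteUri(uri) {
  return uri.endsWith('/') ? uri + 'index.html' : uri;
}

console.log(rewriteUri('/2019/01/28/securing-s3/')); // '/2019/01/28/securing-s3/index.html'
console.log(rewriteUri('/main.css'));                // '/main.css' (unchanged)
```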