Adding realtime functionality to a blog using Kubeless
You will need a functioning Kubernetes system (1.8+). This tutorial was created and tested using Kubernetes 1.10 inside Docker 18.05 on macOS 10. It assumes you can debug Kubernetes problems on your own system.
Introduction
In this article, we are going to examine how to create a simple blog using a serverless architecture - specifically, Kubeless. This will show how we can connect simple handlers together to make everything work, and how we can easily add new functionality to the system without any major upheaval.
What is a serverless architecture?
Serverless development is a relatively recent architectural pattern that separates the business logic from the connectivity and deployment concerns. As a developer, you need only be interested in writing the actual core functionality of your application - for example, the code that will persist a blog post to the data store. You then deploy this small piece of code into the serverless framework and configure it to be triggered by appropriate events - for example, incoming HTTP requests. The framework then takes care of all the orchestration needed to make everything work together correctly.
Prerequisites
This article focuses on the use of Kubeless as a serverless architecture. This needs a functioning Kubernetes system (at least version 1.8) in order for it to work, and it is assumed that this is already available. If not then it can be set up on a local system using Minikube, or the Edge channel of Docker for Desktop. You will also need to install the kubeless CLI as described on the Kubeless Quick Start page.
Note: it is assumed that Kubernetes is already installed and working on your system, and that you are able to work with it to diagnose any system-specific issues that might come up.
Note: this article was tested using Kubernetes 1.10 running inside Docker 18.05 on macOS 10.13.5.
Note: you don’t need to set up Kubeless inside your Kubernetes cluster just yet. We will cover that later in this article.
We will be using Node.js to develop the serverless functions and Create React App for the user interface. Whilst there is no need to actually run the functions locally, npm is needed to configure their dependencies, and a full Node.js stack is needed for Create React App to be used, so ensure that these are available for use.
Create a Pusher account
In order to follow along, you will need to create a free Pusher account. This is done by visiting the Pusher dashboard and logging in, creating a new account if needed. Then create a new Pusher Channels app and save the keys for later on.
Creating the blog backend
Our backend architecture will be created using a series of small functions wired up in the Kubeless system. Our overall architecture will eventually look like this:
This looks a little daunting at first, but each of the five functions that we are going to write is very simple, and the rest of the system is handled for us by Kubeless.
Setting up Kubeless
Before we can do anything, we need to set up the underlying Kubeless architecture. This includes Kubeless itself, Kafka, MongoDB and Nginx for ingress.
Note: at the time of writing, the latest version of Kubeless was v1.0.0-alpha.7.
Note: ingress is the setup allowing HTTP calls to come in to the Kubeless infrastructure from outside on clean URLs. There are other alternatives available, but Nginx is easy to work with and does everything we need.
In order to set up Kubeless itself, we need to execute the following:
$ kubectl create ns kubeless
namespace "kubeless" created
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-non-rbac-v1.0.0-alpha.7.yaml
serviceaccount "controller-acct" created
customresourcedefinition.apiextensions.k8s.io "functions.kubeless.io" created
customresourcedefinition.apiextensions.k8s.io "httptriggers.kubeless.io" created
customresourcedefinition.apiextensions.k8s.io "cronjobtriggers.kubeless.io" created
configmap "kubeless-config" created
deployment.apps "kubeless-controller-manager" created
This creates a Kubernetes namespace in which Kubeless will live, and creates the Kubeless resources from the specified resource definition.
We then can set up Kafka in the cluster in a very similar manner:
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kafka-zookeeper-v1.0.0-alpha.7.yaml
customresourcedefinition.apiextensions.k8s.io "kafkatriggers.kubeless.io" created
service "broker" created
statefulset.apps "kafka" created
service "kafka" created
service "zoo" created
statefulset.apps "zoo" created
clusterrole.rbac.authorization.k8s.io "kafka-controller-deployer" created
clusterrolebinding.rbac.authorization.k8s.io "kafka-controller-deployer" created
service "zookeeper" created
deployment.apps "kafka-trigger-controller" created
And the Nginx ingress resources in the same way:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/bc59b7ddeee6e252974853f167c299005c600781/deploy/mandatory.yaml
namespace "ingress-nginx" created
deployment.extensions "default-http-backend" created
service "default-http-backend" created
configmap "nginx-configuration" created
configmap "tcp-services" created
configmap "udp-services" created
serviceaccount "nginx-ingress-serviceaccount" created
clusterrole.rbac.authorization.k8s.io "nginx-ingress-clusterrole" created
role.rbac.authorization.k8s.io "nginx-ingress-role" created
rolebinding.rbac.authorization.k8s.io "nginx-ingress-role-nisa-binding" created
clusterrolebinding.rbac.authorization.k8s.io "nginx-ingress-clusterrole-nisa-binding" created
deployment.extensions "nginx-ingress-controller" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/bc59b7ddeee6e252974853f167c299005c600781/deploy/provider/cloud-generic.yaml
service "ingress-nginx" created
Note: at the time of writing, the latest release of the ingress-nginx resource files was not working correctly, so the commands above point at the last known commit that did work.
Finally we want to set up MongoDB. There isn’t a convenient Kubernetes resource definition for this, so we’ll write our own. Create a new file called mongodb.yml under your project directory as follows:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
  selector:
    app: mongo
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo:3.2.20
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
And then execute it:
$ kubectl create -f mongodb.yml
service "mongo" created
deployment.apps "mongo" created
At this point we have all of the infrastructure we need in order to build our application.
Creating articles
Our first handler is the endpoint that will be called to create an article. This will ensure that we have the appropriate values in our request, and put a message onto the Kafka queue for other handlers to deal with.
Firstly, we need to set up a project directory and the dependencies for this:
$ mkdir create-article
$ cd create-article
$ npm init -y
$ npm install --save uuid kafka-node
The uuid module will be used to generate a new, unique ID for the article, and the kafka-node module is used to produce a new message onto the Kafka queue.
Our actual handler is then written in create-article/index.js as follows:
const uuid = require('uuid/v4');
const kafka = require('kafka-node');

const kafkaClient = new kafka.KafkaClient({kafkaHost: 'kafka.kubeless:9092'});
const kafkaProducer = new kafka.Producer(kafkaClient);

module.exports = {
  createArticle: function (event, context) {
    return new Promise((resolve, reject) => {
      if (!event.data.title) {
        reject('Missing field: title');
      } else if (!event.data.body) {
        reject('Missing field: body');
      } else {
        resolve({
          id: uuid(),
          created: new Date(),
          title: event.data.title,
          body: event.data.body
        });
      }
    }).then((article) => {
      return new Promise((resolve, reject) => {
        kafkaProducer.send([
          { topic: 'new-article-topic', messages: JSON.stringify(article), partition: 0 }
        ], (err, data) => {
          if (err) {
            reject(err);
          } else {
            resolve(article);
          }
        });
      });
    }).then((article) => {
      event.extensions.response.statusCode = 201;
      return article;
    }).catch((err) => {
      event.extensions.response.statusCode = 400;
      return err;
    });
  }
}
Note: we’re assuming that Kafka is installed on “kafka.kubeless:9092” and that we’re using a topic called “new-article-topic”. This is the default host and port if using the Kafka that deploys as part of Kubeless, but in a real-life situation you should use Kubernetes Configmaps to configure this location.
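As a sketch of that ConfigMap-driven approach (not how this article's code is wired up), the handler could read the host from an environment variable that a ConfigMap populates, falling back to the default; KAFKA_HOST is a hypothetical variable name:

```javascript
// Sketch: resolve the Kafka host from the environment rather than hard-coding
// it. KAFKA_HOST is a hypothetical variable that a Kubernetes ConfigMap could
// inject into the function's pod; the fallback matches the value used above.
function getKafkaHost(env) {
  return env.KAFKA_HOST || 'kafka.kubeless:9092';
}
```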
Now we need to deploy this into our cluster:
$ kubeless function deploy create-article --runtime nodejs8 --dependencies package.json --handler index.createArticle --from-file index.js
This creates a new serverless handler that will execute the exported function createArticle from the file index.js whenever it is triggered, and which will determine all of the dependencies that this function needs based on package.json.
Then we want to set up an Ingress URL to allow incoming HTTP calls to trigger this function:
$ kubeless trigger http create create-article --function-name create-article --path create --hostname localhost
This means that calls to http://localhost/create will trigger the function named create-article - which we’ve just created.
Finally we’ll create the Kafka topic that we are writing to:
$ kubeless topic create new-article-topic
We can test this now as well:
$ curl http://localhost/create --data '{"title": "My first post", "body": "This is my first post"}' -H "Content-type: application/json"
{"id":"6a61513b-06c8-4139-a816-a7188e75728e","created":"2018-07-24T07:14:45.561Z","title":"My first post","body":"This is my first post"}
Persisting articles
Once we can handle the request to create an article, and put the message onto the Kafka topic, we can then handle this message to persist it into the MongoDB store.
Handlers that are triggered by Kafka messages act in the exact same way as HTTP ones, including the fact that they are given an event that looks like an HTTP request. The data of this request is the message from the topic, ready to work with. We can also guarantee the contents of it, since it was put onto the topic by our own code and not by an external party.
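If you wanted to be defensive about the exact form the message arrives in (some runtime versions may hand over the raw JSON string rather than a parsed object; this is an assumption, not something this article's handlers rely on), a small normalizing helper could look like this:

```javascript
// Hypothetical helper: return the article from a Kafka-triggered event,
// parsing event.data if the runtime delivered it as a raw JSON string.
function articleFromEvent(event) {
  return typeof event.data === 'string' ? JSON.parse(event.data) : event.data;
}
```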
Firstly, we need to set up a project directory and the dependencies for this:
$ mkdir persist-article
$ cd persist-article
$ npm init -y
$ npm install --save mongodb
Our actual handler is then written in persist-article/index.js as follows:
const MongoClient = require('mongodb').MongoClient;

module.exports = {
  persistArticle: function (event, context) {
    const article = event.data;
    const post = {
      "_id": article.id,
      "created": new Date(article.created),
      "title": article.title,
      "body": article.body
    };
    return new Promise((resolve, reject) => {
      MongoClient.connect("mongodb://mongo.default:27017", (err, client) => {
        if (err) {
          console.log(err);
          reject(err);
        } else {
          const db = client.db('kubeless_blog');
          db.collection('posts').insert(post, (err, result) => {
            client.close();
            if (err) {
              console.log(err);
              reject(err);
            } else {
              resolve(post);
            }
          });
        }
      });
    });
  }
}
Note: we’re assuming that MongoDB is installed on “mongo.default:27017” and that we’re using a database called “kubeless_blog”. This is the default host and port if using the MongoDB that deploys as part of the earlier deployment resource, but in a real-life situation you should use Kubernetes Configmaps to configure this.
Now we need to deploy this into our cluster:
$ kubeless function deploy persist-article --runtime nodejs8 --dependencies package.json --handler index.persistArticle --from-file index.js
This creates a new serverless handler that will execute the exported function persistArticle from the file index.js whenever it is triggered, and which will determine all of the dependencies that this function needs based on package.json.
Then we want to set up an Ingress URL to allow incoming Kafka messages on our topic to trigger this function:
$ kubeless trigger kafka create persist-article --function-selector created-by=kubeless,function=persist-article --trigger-topic new-article-topic
At this point, we have a setup where all successful calls to our first handler will put messages onto the Kafka topic, and then our second handler will read and process them to write into our MongoDB database.
Listing articles
Now that we can get articles into our system, we need to get them out again. The first part of this is a handler to get a list of all articles.
Firstly, we need to set up a project directory and the dependencies for this:
$ mkdir list-articles
$ cd list-articles
$ npm init -y
$ npm install --save mongodb
Our actual handler is then written in list-articles/index.js as follows:
const MongoClient = require('mongodb').MongoClient;

module.exports = {
  listArticles: function (event, context) {
    return new Promise((resolve, reject) => {
      MongoClient.connect('mongodb://mongo.default:27017', (err, client) => {
        if (err) {
          console.log(err);
          reject(err);
        } else {
          const db = client.db('kubeless_blog');
          db.collection('posts')
            .find({})
            .sort({created: -1})
            .project({'_id': 1, 'title': 1, 'created': 1})
            .toArray((err, docs) => {
              client.close();
              if (err) {
                console.log(err);
                reject(err);
              } else {
                resolve(docs.map((doc) => {
                  return {
                    id: doc['_id'],
                    title: doc.title,
                    created: doc.created
                  };
                }));
              }
            });
        }
      });
    });
  }
}
This gets every article, with no pagination or filtering, and returns them in order so that the most recent ones are first. It also only returns the title of each article, not the entire text.
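The shape of that transformation can be sketched in plain JavaScript; this mirrors the sort and projection that the MongoDB query performs, purely for illustration:

```javascript
// Plain-JS equivalent of the query above: newest first, and only the
// id/title/created fields of each post survive the projection.
function summarize(posts) {
  return posts
    .slice()
    .sort((a, b) => b.created - a.created)
    .map((doc) => ({ id: doc._id, title: doc.title, created: doc.created }));
}
```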
Now we need to deploy this into our cluster:
$ kubeless function deploy list-articles --runtime nodejs8 --dependencies package.json --handler index.listArticles --from-file index.js
This creates a new serverless handler that will execute the exported function listArticles from the file index.js whenever it is triggered, and which will determine all of the dependencies that this function needs based on package.json.
Then we want to set up an Ingress URL to allow incoming HTTP calls to trigger this function:
$ kubeless trigger http create list-articles --function-name list-articles --path list --hostname localhost
This means that calls to http://localhost/list will trigger the function named list-articles - which we’ve just created.
Getting individual articles
Finally, we need to be able to get an individual article out so that we can display it.
Firstly, we need to set up a project directory and the dependencies for this:
$ mkdir get-article
$ cd get-article
$ npm init -y
$ npm install --save mongodb
Our actual handler is then written in get-article/index.js as follows:
const MongoClient = require('mongodb').MongoClient;

module.exports = {
  getArticle: function (event, context) {
    const url = event.extensions.request.url;
    const id = url.substring(1);
    return new Promise((resolve, reject) => {
      MongoClient.connect('mongodb://mongo.default:27017', (err, client) => {
        if (err) {
          console.log(err);
          reject(err);
        } else {
          const db = client.db('kubeless_blog');
          db.collection('posts')
            .findOne({'_id': id}, (err, doc) => {
              client.close();
              if (err) {
                console.log(err);
                reject(err);
              } else {
                if (doc) {
                  resolve({
                    id: doc['_id'],
                    created: doc.created,
                    title: doc.title,
                    body: doc.body
                  });
                } else {
                  event.extensions.response.statusCode = 404;
                  resolve();
                }
              }
            });
        }
      });
    });
  }
}
This expects to be called with a URL containing the article ID, and then retrieves that article from the MongoDB store and returns it. If there is no matching article then an HTTP 404 is returned instead.
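The ID extraction itself is just stripping the leading slash from the request path, as a tiny sketch makes clear:

```javascript
// Mirrors the url.substring(1) call in the handler above:
// a path like "/abc-123" yields the article ID "abc-123".
function extractArticleId(url) {
  return url.substring(1);
}
```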
Now we need to deploy this into our cluster:
$ kubeless function deploy get-article --runtime nodejs8 --dependencies package.json --handler index.getArticle --from-file index.js
This creates a new serverless handler that will execute the exported function getArticle from the file index.js whenever it is triggered, and which will determine all of the dependencies that this function needs based on package.json.
Then we want to set up an Ingress URL to allow incoming HTTP calls to trigger this function:
$ kubeless trigger http create get-article --function-name get-article --path get --hostname localhost
Note: the ingress mechanism does prefix matching, not exact matching. This means that the above actually matches any calls that start with “/get”, such as “/get/123”.
This means that calls to http://localhost/get/123 will trigger the function named get-article - which we’ve just created.
Creating the blog UI
Now that we have our backend functionality, we need a UI to actually drive it. This will be a React application, using Semantic UI for some structure and styling.
Firstly we’ll create a new application using the Create React App tool:
$ create-react-app ui
Then, from inside the new ui directory, we’ll add the dependencies that we need:
$ npm install --save axios semantic-ui-react semantic-ui-css
We can now start up the UI, and it will automatically update as we make changes to it:
$ npm start
Our UI is going to consist of two different parts - the list of articles, and the actual article that we’re looking at.
Firstly, let’s create a component to represent the article list. For this, create a file called src/ArticleList.js as follows:
import React from 'react';
import { List } from 'semantic-ui-react';
import axios from 'axios';

export class ArticleList extends React.Component {
  state = {
    articles: []
  };

  _showArticle = this._handleShowArticle.bind(this);

  loadList() {
    axios.get('http://localhost/list')
      .then((response) => {
        this.setState({
          articles: response.data
        });
      });
  }

  _handleShowArticle(article) {
    this.props.showArticle(article.id);
  }

  componentDidMount() {
    this.loadList();
  }

  render() {
    const articleEntries = this.state.articles.map((article) => {
      return (
        <List.Item key={article.id} onClick={() => this._showArticle(article)}>
          <List.Content>
            <List.Header as='a'>{article.title}</List.Header>
            <List.Description as='a'>{article.created}</List.Description>
          </List.Content>
        </List.Item>
      );
    });
    return (
      <List divided relaxed>
        {articleEntries}
        <List.Item onClick={this.props.newArticle}>
          <List.Content>
            <List.Header as='a'>New Article</List.Header>
          </List.Content>
        </List.Item>
      </List>
    );
  }
}
Note: This gets the list of articles from http://localhost/list, which corresponds to the handler we defined above.
Next we want a component to display a given article. For this, create a new file called src/Article.js as follows:
import React from 'react';
import { Card, Loader } from 'semantic-ui-react';
import axios from 'axios';

export class Article extends React.Component {
  state = {
    article: undefined
  };

  componentDidMount() {
    const id = this.props.id;
    axios.get(`http://localhost/get/${id}`)
      .then((response) => {
        this.setState({
          article: response.data
        });
      });
  }

  render() {
    const { article } = this.state;
    if (!article) {
      return <Loader />;
    }
    return (
      <Card fluid>
        <Card.Content header={article.title} />
        <Card.Content description={article.body} />
        <Card.Content extra>
          {article.created}
        </Card.Content>
      </Card>
    );
  }
}
Note: This gets the article from http://localhost/get, which corresponds to the handler we defined above.
Finally, we want a component to create a new article. This will be in src/NewArticle.js as follows:
import React from 'react';
import { Form, Button, Message } from 'semantic-ui-react';
import axios from 'axios';

export class NewArticle extends React.Component {
  state = {
    title: '',
    body: ''
  };

  _changeTitle = this._handleChangeTitle.bind(this);
  _changeBody = this._handleChangeBody.bind(this);
  _postArticle = this._handlePostArticle.bind(this);

  _handleChangeTitle(e) {
    this.setState({
      title: e.target.value
    });
  }

  _handleChangeBody(e) {
    this.setState({
      body: e.target.value
    });
  }

  _handlePostArticle() {
    const { title, body } = this.state;
    axios({
      method: 'post',
      url: 'http://localhost/create',
      data: {
        title,
        body
      },
      headers: {
        'content-type': 'application/json'
      }
    })
      .then(() => {
        this.setState({
          title: '',
          body: '',
          success: true,
          error: undefined
        });
      }, (e) => {
        this.setState({
          success: false,
          error: e.response.data
        });
      });
  }

  render() {
    let message;
    if (this.state.success) {
      message = <Message positive>Article posted successfully</Message>;
    } else if (this.state.error) {
      message = <Message error>{this.state.error}</Message>;
    }
    return (
      <Form error={!!this.state.error} success={this.state.success}>
        {message}
        <Form.Field>
          <label>Title</label>
          <input placeholder='Title' value={this.state.title} onChange={this._changeTitle} autoFocus />
        </Form.Field>
        <Form.Field>
          <label>Article</label>
          <textarea placeholder="Article" value={this.state.body} onChange={this._changeBody} />
        </Form.Field>
        <Button type='submit' onClick={this._postArticle}>Post Article</Button>
      </Form>
    );
  }
}
Note: This creates the article by POSTing to http://localhost/create, which corresponds to the handler we defined above.
Now that we’ve got these components, we need to tie them together. This is done by replacing the existing src/App.js to read as follows:
import React, { Component } from 'react';
import 'semantic-ui-css/semantic.min.css';
import { Grid, Header, Container } from 'semantic-ui-react';
import { ArticleList } from './ArticleList';
import { Article } from './Article';
import { NewArticle } from './NewArticle';

class App extends Component {
  state = {
    currentArticle: undefined
  };

  _newArticle = this._handleNewArticle.bind(this);
  _showArticle = this._handleShowArticle.bind(this);

  _handleShowArticle(article) {
    this.setState({
      currentArticle: article
    });
  }

  _handleNewArticle() {
    this.setState({
      currentArticle: undefined
    });
  }

  render() {
    let body;
    if (this.state.currentArticle) {
      body = <Article id={this.state.currentArticle} />;
    } else {
      body = <NewArticle />;
    }
    return (
      <Container>
        <Grid>
          <Grid.Row>
            <Grid.Column>
              <Header as="h2">
                Kubeless Blog
              </Header>
            </Grid.Column>
          </Grid.Row>
          <Grid.Row>
            <Grid.Column width={12}>
              { body }
            </Grid.Column>
            <Grid.Column width={4}>
              <ArticleList showArticle={this._showArticle} newArticle={this._newArticle} />
            </Grid.Column>
          </Grid.Row>
        </Grid>
      </Container>
    );
  }
}

export default App;
At this point, we can use the UI to read and post articles:
Adding realtime functionality to the blog
Currently, we can post articles to the blog and read ones that are posted. What we don’t get is any indication that a post has been made without refreshing the page. This can be achieved by adding Pusher in to the mix.
We are going to add a new handler into our Kubeless system that reacts to the same Kafka messages that are used to persist the messages, and which will trigger Pusher to indicate that a new post has been made.
Broadcasting articles
Our new handler is going to react every time a new article is created, in the exact same way as the persist-article
handler from above.
Firstly, we need to set up a project directory and the dependencies for this:
$ mkdir broadcast-article
$ cd broadcast-article
$ npm init -y
$ npm install --save pusher
Our actual handler is then written in broadcast-article/index.js as follows:
const Pusher = require('pusher');

const pusher = new Pusher({
  appId: 'PUSHER_APP_ID',
  key: 'PUSHER_KEY',
  secret: 'PUSHER_SECRET',
  cluster: 'PUSHER_CLUSTER',
  encrypted: true
});

module.exports = {
  broadcastArticle: function (event, context) {
    const article = event.data;
    const post = {
      "_id": article.id,
      "created": new Date(article.created),
      "title": article.title,
      "body": article.body
    };
    pusher.trigger('posts', 'new-post', post);
  }
}
Note: we’re hard-coding the Pusher credentials here, which need to be updated to match those you obtained earlier. In a real-life situation you should use Kubernetes Configmaps to configure this.
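As a sketch of avoiding that hard-coding (the variable names here are hypothetical, e.g. values injected from a ConfigMap or Secret), the options object could be assembled from the environment:

```javascript
// Sketch: build the Pusher options from environment variables instead of
// baking credentials into the source. The variable names are hypothetical.
function pusherOptions(env) {
  return {
    appId: env.PUSHER_APP_ID,
    key: env.PUSHER_KEY,
    secret: env.PUSHER_SECRET,
    cluster: env.PUSHER_CLUSTER,
    encrypted: true
  };
}
```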
Now we need to deploy this into our cluster:
$ kubeless function deploy broadcast-article --runtime nodejs8 --dependencies package.json --handler index.broadcastArticle --from-file index.js
This creates a new serverless handler that will execute the exported function broadcastArticle from the file index.js whenever it is triggered, and which will determine all of the dependencies that this function needs based on package.json.
Then we want to set up an Ingress URL to allow incoming Kafka messages on our topic to trigger this function:
$ kubeless trigger kafka create broadcast-article --function-selector created-by=kubeless,function=broadcast-article --trigger-topic new-article-topic
This is the exact same topic as was used before, so every message that triggers the persist-article
handler will also trigger the broadcast-article
one.
Updating the article list
Now that we’re broadcasting events whenever articles are posted, we can automatically update the UI based on this. For this we want to listen to the Pusher events and react to them.
Firstly, we need our Pusher dependency. From inside the UI project:
$ npm install --save pusher-js
Then we need to update src/ArticleList.js to listen for the events and react accordingly. Firstly add the following to the top of the file:
import Pusher from 'pusher-js';

const pusher = new Pusher('PUSHER_APP_KEY', {
  cluster: 'PUSHER_CLUSTER',
  encrypted: true
});
Note: make sure you update this to include the App Key and Cluster from the Pusher application you created earlier. These should exactly match those used in the broadcast-article handler.
Finally, add the following to the componentDidMount method:
pusher.subscribe('posts').bind('new-post', () => {
  this.loadList();
});
This will react to the new-post event that we are broadcasting by loading the full list of articles again. This means that whenever anyone posts an article, all active browsers will be told about it and have their article list updated.
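Re-fetching keeps the client simple and always consistent with the server. An alternative sketch (not what this article does) would use the broadcast payload itself, since the new-post event carries the full post, and prepend it to the list held in component state:

```javascript
// Alternative sketch: prepend the broadcast post to the current list instead
// of re-fetching. The post shape matches what broadcast-article sends.
function prependPost(articles, post) {
  return [
    { id: post._id, title: post.title, created: post.created },
    ...articles
  ];
}
```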
Restart your UI and we can see that the posts now appear automatically:
Cleaning up
One thing that needs to be considered with a serverless application is deployment, and tearing it down if needed. We are actually running a large number of components here: five serverless functions, Kafka, Zookeeper, MongoDB, Nginx and the Kubeless framework itself. Each of these needs to be managed, and shut down, individually and correctly; otherwise you leave pieces of the system running.
Cleaning up this application can be done as follows if needed:
# Broadcast Article Handler
kubeless trigger kafka delete broadcast-article
kubeless function delete broadcast-article
# Get Article Handler
kubeless trigger http delete get-article
kubeless function delete get-article
# List Articles Handler
kubeless trigger http delete list-articles
kubeless function delete list-articles
# Persist Article Handler
kubeless trigger kafka delete persist-article
kubeless topic delete new-article-topic
kubeless function delete persist-article
# Create Article Handler
kubeless trigger http delete create-article
kubeless function delete create-article
# Nginx Ingress
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/bc59b7ddeee6e252974853f167c299005c600781/deploy/provider/cloud-generic.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/bc59b7ddeee6e252974853f167c299005c600781/deploy/mandatory.yaml
# MongoDB
kubectl delete -f mongodb.yml
# Kafka
kubectl delete -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kafka-zookeeper-v1.0.0-alpha.7.yaml
# Kubeless
kubectl delete -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-non-rbac-v1.0.0-alpha.7.yaml
kubectl delete ns kubeless
Summary
This article has given a brief introduction to using Kubeless to build a simple application, and then extending it by adding new functionality. We can easily see how this new functionality can be added later on, with no impact on the rest of the service.
Whilst not shown here, there’s no reason that all of these handlers need to be written by the same team, or even in the same language. Serverless architectures, in the same way as Microservices, thrive on a disjoint ecosystem where each component is developed in the way that makes the most sense for that one component, rather than forcing a single language on the entire application.
The full source code for this can be seen on GitHub.
15 August 2018
by Graham Cox