Tuesday, July 7, 2020

Conclusion - CI/CD Series

In this series, we have covered how to implement CI/CD (continuous integration, continuous deployment) using DevOps (development and operations). We have shown how to go from a simple change in the code, to adding tests, building an assembly, planning the deployment, and eventually applying the changes to our live environment:
  1. Introduction
  2. Base Application
  3. Unit Tests
  4. Assembly
  5. PATs
  6. Plan
  7. Apply
Future enhancements to this pipeline could include:
  • Separate DEV and PROD environments
    • DEV deploy on pushes to master
    • PROD deploy on specific tag/release cycles
  • Performance tests for the REST service
  • Ability to destroy the resources using the Terraform destroy command
  • Run PATs against the service deployed in the DEV environment (see the sketch after this list)
    • Currently, PATs are only run against the service running in-memory
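One way to approach that last enhancement would be to parameterize the PAT setup with a base URL instead of always launching the jar locally. The sketch below is not part of the series' code; the PAT_BASE_URL variable and the branching logic are assumptions for illustration only:

import os
import subprocess
import time

def before_all(context):
    # Hypothetical PAT_BASE_URL selects a deployed DEV service; the default keeps today's behavior.
    context.base_url = os.environ.get('PAT_BASE_URL', 'http://localhost:8080')
    if context.base_url.startswith('http://localhost'):
        print('Starting local server')
        context.proc = subprocess.Popen(['java', '-jar', 'cicd-series-assembly.jar'])
        time.sleep(2)
    else:
        print('Running PATs against ' + context.base_url)
        context.proc = None

def after_all(context):
    # Only terminate the process if we started one locally
    if context.proc is not None:
        print('Terminating server')
        context.proc.terminate()

The step implementations would also need to read context.base_url instead of their hard-coded localhost URLs for this to work end-to-end.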

Monday, July 6, 2020

Apply - CI/CD Series

In the final piece of our CI/CD Series puzzle, we will perform the Terraform apply action. This action makes the actual changes (as shown by the Terraform plan action) to our Heroku app and ensures everything starts up correctly. Once this command finishes, our application will be live in our account and ready to use.

The Terraform apply CI stage is very similar to the plan stage we developed previously. However, there are some differences to note:
  • We will run this command on pushes to the master branch
    • The plan action was run on pull requests to master
  • We will create a GitHub Release with a specific version of our application
    • The plan action used a hard-coded URL
  • We will point Heroku to the GitHub Release to ensure the correct version is deployed
Create Release

To create a release, we can utilize the GitHub Actions create release template. Our release name will be our application version plus the build number to ensure each release has a unique ID. Once we create the release, we will save the version to a file so it can be referenced from other jobs within our CI/CD pipeline.

create_release:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout Repo
      uses: actions/checkout@v2

    - name: Create Release
      id: release
      uses: actions/create-release@v1
      env:
        # This token is provided by Actions, you do not need to create your own token
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      with:
        tag_name: 0.1.0-${{ github.run_number }}
        release_name: Release 0.1.0-${{ github.run_number }}
        draft: false
        prerelease: false

    # Heroku needs the .tar.gz URL so modify tag URL to expected format
    - name: Create Version File
      run: |
        export RELEASE_URL=${{ steps.release.outputs.html_url }}
        RELEASE_URL+=".tar.gz"
        echo "Release URL:"
        echo ${RELEASE_URL}
        export ARCHIVE_URL=$(echo "$RELEASE_URL" | sed 's~releases/tag~archive~')
        echo "Archive URL:"
        echo ${ARCHIVE_URL}
        echo ${ARCHIVE_URL} >> archive.txt

    # Upload version file as build artifact
    - name: Upload Version File
      uses: actions/upload-artifact@v2
      with:
        name: archive.txt
        path: archive.txt

Pass Release Version

In the stage that performs the apply, we first need to read the version file uploaded when creating the release. Then, we can tell Terraform about this variable so it gets injected at runtime (since it changes on every build). In our example, we expose the variable "build_url" from our Terraform file.

variable "build_url" {
type = string
}

# Build code & release to the app
resource "heroku_build" "guestbook_build" {
app = heroku_app.guestbook_app.name
buildpacks = ["https://github.com/heroku/heroku-buildpack-scala"]

source = {
url = var.build_url
}

To change this at runtime, we make use of Terraform's variables. This variable gets initialized by:
  1. Reading the version URL from the artifact created via the release.
  2. Exporting the version URL to an environment variable.
export TF_VAR_build_url=$(cat archive.txt)
echo "Archive URL:"
echo ${TF_VAR_build_url}

Apply Action

Now that we have everything set up, we just need to perform the actual apply command via Terraform. In this example, we are still running the validate and plan commands to ensure things are correct, but these can be skipped since we already ran them on the pull request itself.

deploy:
  runs-on: ubuntu-latest
  needs: create_release

  steps:
    - name: Checkout Repo
      uses: actions/checkout@v2

    # Download artifact
    - name: Download Version File
      uses: actions/download-artifact@v2
      with:
        name: archive.txt

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v1
      with:
        cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

    - name: Terraform Init
      id: init
      run: terraform init

    - name: Terraform Validate
      id: validate
      run: terraform validate -no-color

    # The build_url is read from the version file uploaded by the create_release job
    - name: Terraform Plan
      id: plan
      run: |
        export TF_VAR_build_url=$(cat archive.txt)
        echo "Archive URL:"
        echo ${TF_VAR_build_url}
        export HEROKU_API_KEY=${{ secrets.HEROKU_API_KEY }}
        export HEROKU_EMAIL=${{ secrets.HEROKU_EMAIL }}
        terraform plan -no-color

    - name: Terraform Apply
      id: apply
      run: |
        export TF_VAR_build_url=$(cat archive.txt)
        echo "Archive URL:"
        echo ${TF_VAR_build_url}
        export HEROKU_API_KEY=${{ secrets.HEROKU_API_KEY }}
        export HEROKU_EMAIL=${{ secrets.HEROKU_EMAIL }}
        terraform apply -auto-approve -no-color

Deployed

Once the CI/CD pipeline succeeds on the master branch, the service will be live and available for use. Also, since we used Terraform Cloud for our remote state storage, you can browse the workspace to see how the state file has changed. This keeps track of all the changes to your application over the history of every deploy.

The live service can be accessed via its health check endpoint; this URL is also output by the apply action (the guestbook_url value from our Terraform file).
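As a quick smoke test, we can hit that health check once the deploy finishes. This is only a sketch; the URL is assembled from the guestbook_url Terraform output plus the /health path:

import requests

# Hit the health check of the deployed Heroku app (URL derived from the guestbook_url output).
response = requests.get('https://cicd-series-guestbook.herokuapp.com/health')
assert response.status_code == 200
print('Service is healthy')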

Conclusion

In conclusion, we were able to fully automate our deploys on pushes to the master branch of our repo. This will ensure that each time a code change happens, the latest version gets automatically pushed to our live service.

The full changeset can be found on this pull request.

Sunday, June 28, 2020

Plan - CI/CD Series

The next step in our CI/CD workflow is to plan out the changes to be made to our production instance. This planning will allow us to:
  • Perform a dry-run of upgrading our environment
  • Run the dry-run on all pull requests to the master branch
  • Ensure our deployment process is repeatable
  • Ensure our deployment process is automated
Heroku

As mentioned in the introduction post, we will be hosting our site on Heroku. Heroku is a Platform as a Service (PaaS) which allows us to simply tell the system where our code is, how to build it, and how to run it. The rest of the cloud orchestration is taken care of for us:
  • Security
  • Compute nodes
  • Logging
  • Access management
  • Optional add-ons
While we could just as easily deploy this application to AWS, GCP, Azure, etc., Heroku takes away the heavy lifting of ensuring our service is managed in a safe and secure way.

Terraform

To help with this planning phase of our application, we will be using Terraform. Terraform is an Infrastructure as Code tool which allows us to specify the resources we need in code definition files rather than scripts, plugins, manual edits, etc. This is a real benefit when we have many resources across multiple domains, because the definitions, and the way we apply them, stay the same no matter which cloud resources we need to build. Terraform also supports many cloud providers, and Heroku is one of them.

In our example, we only have one resource on Heroku, so it would be possible to use scripts or plugins to perform this same deployment; however, I find it easier to be consistent and use Terraform for as much as possible. That way, should we add something else (e.g., an S3 bucket) to our deployment, we would not need to change the plan/deploy actions, only add extra definitions to our files specifying the new resource.

Terraform State

When Terraform runs an action (plan, apply, destroy), it manages the state of the system in a state file. By default, this file is placed in the current directory of the action being run, though Terraform also allows the state file to be saved elsewhere via remote state. Since our CI/CD build runs on top of GitHub Actions using Docker images, the working directory gets wiped on every build. Thus, for our application, we will be using the free Terraform Cloud, which allows us to save our state file and reuse it across all of our builds.

Heroku Buildpack

To launch our application on Heroku, we need to specify a buildpack to use. This will let Heroku know what kind of application we have, how to build it, and how to run it. For our use-case, we will be using the Heroku Buildpack for Scala. To get this buildpack to work correctly, we need to provide a few things:
  1. A new SBT command of "stage" which can build the application from source.
  2. A URL to the source to be built.
  3. A Procfile which specifies how to run our built application.
For the "stage" command, this is as simple as adding an alias to our "build.sbt" file:

addCommandAlias("stage", "clean;compile;assembly")

For the URL, right now we can leave this blank. Since we are only doing the initial planning of resources, we will not actually be deploying anything. Once we add the code to do the final deploy, we will have to modify this URL based upon the git tag we want to use.

Our Procfile for this application is very simple. We just use the same Java commands we have used for our PATs previously:

web: java -jar target/scala-2.13/cicd-series-assembly-*.jar

Heroku Terraform File

Next, we want to start building our Terraform file, which will indicate how we build our resources on Heroku. Our file will consist of the following items:
  1. Specifying we want to use remote state management and what organization/workspace to use.
  2. Specifying that this file uses Heroku resources.
  3. Allowing a variable to be injected for the source URL of the build.
  4. What Heroku application we want to manage.
  5. How our application gets built with Heroku build.
  6. What type of compute resources we want to use, specified by a Heroku formation.
  7. The output URL of our application when running.
The full Terraform file for this is:

# Example copied from - https://www.terraform.io/docs/github-actions/setup-terraform.html

terraform {
  backend "remote" {
    organization = "cicd-series"

    workspaces {
      name = "heroku-prod"
    }
  }
}

provider "heroku" {
  version = "~> 2.0"
}

variable "build_url" {
  type = string
}

resource "heroku_app" "guestbook_app" {
  name   = "cicd-series-guestbook"
  region = "us"
}

# Build code & release to the app
resource "heroku_build" "guestbook_build" {
  app        = heroku_app.guestbook_app.name
  buildpacks = ["https://github.com/heroku/heroku-buildpack-scala"]

  source = {
    url = var.build_url
  }
}

# Launch the app's web process by scaling-up
resource "heroku_formation" "guestbook_formation" {
  depends_on = [heroku_build.guestbook_build]

  app      = heroku_app.guestbook_app.name
  type     = "web"
  quantity = 1
  size     = "free"
}

output "guestbook_url" {
  value = "https://${heroku_app.guestbook_app.name}.herokuapp.com"
}

GitHub Action

Now that we have all of the individual pieces set up, we need to integrate this plan into our GitHub Actions. For our use-case, we will run Terraform's plan command on every pull request to master. Terraform provides a template (hashicorp/setup-terraform) that integrates directly with GitHub Actions.
Our setup is very similar to the example provided in that repo; however, we need to specify our Terraform variable. The full syntax of our plan is:

# Terraform setup copied from
# https://github.com/hashicorp/setup-terraform
plan:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout Repo
      uses: actions/checkout@v2

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v1
      with:
        cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

    - name: Terraform Init
      id: init
      run: terraform init

    - name: Terraform Validate
      id: validate
      run: terraform validate -no-color

    # The build_url is blank for planning since we will create a new URL upon commits
    - name: Terraform Plan
      id: plan
      run: |
        export TF_VAR_build_url=""
        terraform plan -no-color

    - name: Terraform Report
      id: report
      uses: actions/github-script@0.9.0
      if: github.event_name == 'pull_request'
      env:
        PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
      with:
        github-token: ${{ secrets.GITHUB_TOKEN }}
        script: |
          const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
          #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
          #### Terraform Validation 🤖${{ steps.validate.outputs.stdout }}
          #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`

          <details><summary>Show Plan</summary>

          \`\`\`${process.env.PLAN}\`\`\`

          </details>

          *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;

          github.issues.createComment({
            issue_number: context.issue.number,
            owner: context.repo.owner,
            repo: context.repo.repo,
            body: output
          })

GitHub Secrets

Since we are using live accounts on Terraform Cloud and Heroku, we need to specify a few API keys to ensure our builds use our accounts. The following GitHub Secrets need to be added to the repo for everything to work correctly:
  • HEROKU_EMAIL
    • The email account to use for Heroku access
  • HEROKU_API_KEY
    • The API key to use for Heroku access
  • TF_API_TOKEN
    • The API token to use for Terraform Cloud access
Conclusion

In this post, we went from having no cloud resources to having a plan of the cloud resources that will be provisioned when we apply our configuration. In the final piece of the puzzle, we will add the apply stage, run on pushes to the master branch.

The full code changeset can be found on this pull request.

Tuesday, June 16, 2020

PATs - CI/CD Series

Since we now have a fully runnable assembly (jar), we can add Product Acceptance Tests (PATs) to our automated build. PATs are a type of test which treats the product as a black box, meaning we work directly with the contracts defined on the exposed API without internal knowledge of the system. This allows us to test the system with more real-world style tests, whereas unit tests usually cover many more edge cases across all possible input scenarios.

In our use-case, we will run our PATs against the defined REST endpoints and ensure the proper response codes and content are returned for each call. We will also simulate scenarios for our guestbook application.

Design

To help build these PATs, we are going to utilize Behavior Driven Development (BDD) testing, specifically the Cucumber library.
Cucumber and its Gherkin language are a framework with multiple implementations. For our use-case, we will be using the Python implementation called "behave".
The reason for choosing Python is simply to use something different from the implementation language of our REST service. It also helps show that these PATs are completely separate from the actual service.

Test Setup

To run our tests, we will start our REST service as an in-memory process. This is facilitated via Cucumber with before-all and after-all setup stages:

from behave import *
import requests
import subprocess
import time

def before_all(context):
    print('Starting server')
    process = subprocess.Popen(['java', '-jar', 'cicd-series-assembly.jar'])
    time.sleep(2)
    print('Saving process to context')
    context.proc = process

def after_all(context):
    print('Terminating server')
    context.proc.terminate()
    print('Server terminated')

Running our service as an in-memory process ensures that we are running our tests against the fully built artifact from our "assemble" CI stage. Thus, these tests run against the same jar that we would push to a DEV or PROD environment instead of a one-off build.

Test Implementation

To build our tests, we write the actual test in the Gherkin language. This is a more natural language than most programming languages and can be understood without much knowledge of its structure. It also uses the "Given/When/Then" style familiar from BDD testing:
  • Given = setting up the service to be in a given state
  • When = the action to perform against the service
  • Then = the assertions to perform after the action
For the actual tests, we are building more real-world style use-cases - some of which are very similar to the unit tests we built previously. For example, we can build a test to ensure we cannot add a duplicate guest to our guestbook:

Scenario: conflict if a guest is added twice
  Given a guestbook with one guest
  When we add a guest
  Then the response should be 409
  And a single guest should be found with /guests

With Cucumber, each "Given/When/Then" line maps to actual code. For the above test, our Python code looks like:

from behave import *
import requests

@given('a guestbook with one guest')
def step_impl(context):
    url = 'http://localhost:8080/guests'
    guest = {'name': 'Dan', 'age': 31}
    post_response = requests.post(url, json=guest)
    assert post_response.status_code == 201
    list_response = requests.get(url)
    assert list_response.status_code == 200
    assert 'guests' in list_response.json()
    guests = list_response.json()['guests']
    assert len(guests) == 1
    assert guests[0] == guest

@when('we add a guest')
def step_impl(context):
    url = 'http://localhost:8080/guests'
    json = {'name': 'Dan', 'age': 31}
    post_response = requests.post(url, json=json)
    context.response = post_response

@then('the response should be 409')
def step_impl(context):
    assert context.response.status_code == 409

@then('a single guest should be found with /guests')
def step_impl(context):
    url = 'http://localhost:8080/guests'
    guest = {'name': 'Dan', 'age': 31}
    list_response = requests.get(url)
    assert list_response.status_code == 200
    assert 'guests' in list_response.json()
    guests = list_response.json()['guests']
    assert len(guests) == 1
    assert guests[0] == guest

CI/CD

Now that we have our tests set up, we can plug them into our automated builds. Again, this will ensure that our system adheres to its contracts on every build and, should something fail, we will get automated build failure notifications.

The first step in plugging these tests into our build is to pass the built artifact from our "assemble" stage to our "pats" stage. We want to pass the built artifact between stages since we already have a dedicated stage for ensuring the build works as expected; hence, there is no reason to do the build twice. This artifact passing can be done using GitHub Actions' upload/download artifact actions.
Next, we want to define our PAT stage within our build. Since we are running both a Java application and Python tests, we need to choose a Docker image which has all of our prerequisites installed by default (or build a custom image). Luckily, there is a Docker image available (openkbs/jre-mvn-py3) which has Java 8 and Python 3 installed.
After that, we just need to install the required Python dependencies and run our tests with "behave":

pats:
  runs-on: ubuntu-latest
  container: openkbs/jre-mvn-py3:v1.0.6
  needs: assemble

  steps:
    - name: Checkout Repo
      uses: actions/checkout@v2

    # Download artifact
    - name: Download Artifact
      uses: actions/download-artifact@v2
      with:
        name: cicd-series-assembly.jar

    # Verify artifact
    - name: List Files
      run: ls -al

    # This is needed because the artifact is downloaded with the original file name (includes version)
    - name: Rename Artifact
      run: mv cicd-series-assembly-*.jar cicd-series-assembly.jar

    # This is needed because download artifacts are not runnable
    - name: Change Permissions
      run: chmod a+rx cicd-series-assembly.jar

    # Verify artifact
    - name: List Files
      run: ls -al

    # Install python dependencies
    - name: Install Dependencies
      run: pip install -r requirements.txt

    # Run behave tests
    - name: Run PATs
      run: behave

Conclusion

We have now added automated PATs that run on every pull request to the master branch of our repo. They were built using Python and the Cucumber library to perform BDD tests. Also, should anything fail, we will get automated build failure notifications.

All of the code above and more can be found on this pull request.

Sunday, June 14, 2020

Assembly - CI/CD Series

Now that we have a basic REST service set up and continually running unit tests, we can work on building a runnable assembly for the service. To do this, we'll leverage the sbt-assembly plugin.
Project Build

To make use of the sbt-assembly plugin, we first need to enable the plugin by adding the following to our "project/plugins.sbt" file:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

After that is done, we need to define how to build the assembly:

mainClass in assembly := Some("com.github.dwiechert.Main"),

// Ignore tests when running the "assembly" task
test in assembly := {}

The definition above indicates:
  • The main class to be run
  • Ignore unit tests when running the "assembly" phase
The tests are ignored because, in the previous post, we integrated automated testing as a separate GitHub Actions job. Since we already have unit tests running there, there is no need to re-run them when we build the final assembly. It is possible to run both the tests and the assembly as one stage, but debugging build issues would then be more troublesome. With two distinct jobs (unit test and assemble), we can quickly and easily determine which stage failed; if both ran in the same stage, we would need to dig into the logs to see whether the build failed due to the unit tests or the building of the jar itself.

Once these settings are in place, we can build the jar with a simple sbt command:

sbt "clean;compile;assembly"

To test that everything works, we can run the jar locally and ensure it starts up correctly:

java -jar target/scala-2.13/cicd-series-assembly-0.1.0-SNAPSHOT.jar

CI/CD Setup

Given that we now have a runnable jar, we need to integrate this with our current CI/CD setup. The reason to integrate this build into our CI/CD system is to ensure not only that the unit tests pass, but also that the jar can be built properly.

To do this, we're going to add a new job in our GitHub Actions file which will run the same assembly commands mentioned above:

assemble:
  # The type of runner that the job will run on
  # https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idruns-on
  runs-on: ubuntu-latest
  # The specific container to use
  # https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idcontainer
  # https://hub.docker.com/r/hseeberger/scala-sbt/
  container: hseeberger/scala-sbt:8u222_1.3.5_2.13.1

  # Steps represent a sequence of tasks that will be executed as part of the job
  steps:
    # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
    - name: Checkout Repo
      uses: actions/checkout@v2

    # Assembly artifact
    - name: Create Assembly
      run: sbt "clean;compile;assembly"

As mentioned above, this is a new GitHub Actions job. The assembly will run in parallel to the job which runs the unit tests we previously created.

In addition to just building our assembly, we will store the jar as an artifact of the build. This will allow us to re-use the jar in later jobs that will be built in the future (PATs, deployment).

    # Upload jar as build artifact
    - name: Upload Artifact
      uses: actions/upload-artifact@v1
      with:
        name: cicd-series-assembly.jar
        path: target/scala-2.13/cicd-series-assembly-0.1.0-SNAPSHOT.jar

Conclusion

We now have a way to build a runnable jar for our service, and this step happens on each pull request to master. This will help ensure the state of our system is always tracked and can be deployed at any time.

All of the changes discussed can be found on this pull request.

Sunday, June 7, 2020

Unit Tests - CI/CD Series

In the previous post we created a simple REST service built on top of akka-http. However, as is standard practice in professional software development, we want to add unit tests to our service to ensure everything is working as planned. These unit tests provide several things for our service:
  1. Automated testing
  2. Reproducible tests
  3. Software assurance
Unit Tests

To build the unit tests for our service, we will leverage:
  • ScalaTest
    • The base testing framework (similar to JUnit for Java)
  • akka-http
    • akka-http provides test harnesses that integrate directly with ScalaTest
With these testing libraries, we can perform tests directly against the endpoints of our service instead of using objects. Also, both the requests and responses are full payloads so we can assert things such as:
  • Response code
  • Application type
  • Response body
This will help provide a full end-to-end unit test instead of directly calling an object and ignoring the serialization aspect of the test.

The simplest endpoint in our service is the health check - it only returns 200. A unit test for this endpoint is as simple as:

it should "return OK for /health" in {
Get("/health") ~> healthCheck.route ~> check {
status shouldBe StatusCodes.OK
}
}

In this test, the actions performed are:
  1. Send a GET request with the path "/health"
  2. The request goes to the route defined in the object "healthCheck"
  3. Assert that the returned status is OK (200)
We can use this same kind of test setup for a more complex case (such as adding a guest to our guestbook):

it should "add a guest" in {
val guestBook = new GuestBook

Post("/guests").withEntity(guestEntity) ~> guestBook.route ~> check {
status shouldBe StatusCodes.Created
}

Get("/guests") ~> guestBook.route ~> check {
status shouldBe StatusCodes.OK
contentType shouldBe ContentTypes.`application/json`
entityAs[Guests] shouldBe Guests(List(guest))
}
}

This test has a similar setup as the one above, just with a few more steps:
  1. Send a POST request with the guest data (defined outside of snippet)
  2. The request goes to the route defined in the object "guestBook"
  3. Assert that the returned status is Created (201)
  4. Send a GET request with the path "/guests"
  5. The request goes to the route defined in the object "guestBook"
  6. Assert that the returned status is OK (200)
  7. Assert that the returned content type is "application/json"
  8. Assert that the returned entity is our guest we added previously (wrapped in a list)
Continuous Integration

Now that we have unit tests available, we can hook up our continuous integration using GitHub Actions.
For this simple REST service, the actions we want to perform are:
  • Run unit tests on all pull requests to the "master" branch
  • Run unit tests on all pushes to the "master" branch
  • Running unit tests consists of
    • Using a docker image which has sbt installed
    • Running "sbt test"
The entirety of the above can be expressed with just a few lines of a YAML definition:

name: SBT CI

# Run SBT tests on pushes and pull requests to master branch
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    # https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idruns-on
    runs-on: ubuntu-latest
    # The specific container to use
    # https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idcontainer
    # https://hub.docker.com/r/hseeberger/scala-sbt/
    container: hseeberger/scala-sbt:8u222_1.3.5_2.13.1

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      # Runs a set of commands using the runners shell
      - name: Run Unit Tests
        run: sbt test

Conclusion

Given the above unit tests and GitHub Actions definition, we now have a system which will run all unit tests on every build. This will not only ensure our system performs as expected, but also alert us when a build breaks for any reason.

All of the changes mentioned above (and more) can be found on this pull request.

Monday, June 1, 2020

Base Application - CI/CD Series

As discussed in the introduction, we are working on a basic REST service with the goal of implementing full end-to-end CI/CD. To help build this REST service, we are leveraging akka-http.
All of the changes have been completed and the full changeset can be found on this pull request.

The service currently has the following endpoints:
  • GET /health
    • Basic health check for the service
    • Returns 200
  • GET /guests
    • Returns all guests in the guestbook
    • Returns 200
    • Returns JSON list of guests
  • POST /guests
    • Adds a guest to the guestbook
    • Request is JSON guest {"name":"Dan", "age":31}
    • Returns 201 on creation
    • Returns 409 if the guest already exists
  • DELETE /guests/<name>
    • Deletes a guest by name
    • Returns 204 if the guest was deleted
    • Returns 404 if the guest to delete was not found
Overall, this is an extremely basic REST service; however, it will serve our purposes for the rest of this CI/CD series.
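To make the contract concrete, here is a rough sketch of exercising these endpoints with Python's requests library against a locally running instance (it assumes the service listens on http://localhost:8080, as in the PAT setup):

import requests

base_url = 'http://localhost:8080'

# Health check
assert requests.get(base_url + '/health').status_code == 200

# Add a guest, then verify a duplicate is rejected
guest = {'name': 'Dan', 'age': 31}
assert requests.post(base_url + '/guests', json=guest).status_code == 201
assert requests.post(base_url + '/guests', json=guest).status_code == 409

# List all guests
print(requests.get(base_url + '/guests').json())

# Delete the guest by name, then verify a second delete returns 404
assert requests.delete(base_url + '/guests/Dan').status_code == 204
assert requests.delete(base_url + '/guests/Dan').status_code == 404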

In the following posts, we will start to:
  1. Add automated unit tests
  2. Run the service via a Docker image
  3. Add automated product acceptance tests (PATs)
  4. Add automated deployments to a cloud service