Processing payments on WordPress with Stripe and Gravity Forms

This post covers creating a web form in WordPress using the popular (and excellent) Gravity Forms plugin and configuring it to process payments with Stripe:

  • Gravity Forms is a leading commercial form-builder plugin for WordPress that many of my clients have had a lot of success with. It becomes especially powerful when combined with payments, and it supports further integration via webhooks and an API.
  • Stripe is a leading payments processor that is popular due to its ease-of-use and ease-of-integration. Stripe is currently my preferred choice for eCommerce projects.

The following assumes:

  • you are logged into your WordPress (/wp-admin) as an Administrator,
  • your WordPress site has SSL enabled (https://),
  • that Gravity Forms and the Gravity Forms Stripe Add-On plugins have been installed and activated, and
  • you have a valid, verified Stripe account and have entered all necessary Business Settings.

Get API Keys from Stripe

  • Open a tab and login to your Stripe account.
  • Click on Developers in the left nav. A sub-menu will appear underneath.
  • Click on API Keys in the sub-menu under Developers.
  • Generate a set of Test and Live API Keys. Each comes as a pair consisting of a Secret Key and a Publishable Key. The keys themselves are strings of random-looking characters prefixed with ‘sk_live’, ‘pk_live’, ‘sk_test’, and ‘pk_test’.

Keep your Secret Keys (those prefixed with ‘sk’) confidential and never send them to anyone over email or other potentially insecure channels.

Setup Gravity Forms for Stripe

In the WordPress admin dashboard:

  • Click on Forms in the left nav. A sub-menu will expand underneath.
  • Click on Settings in the sub-menu under Forms.
  • The resulting Settings page has a series of tabs down the left side. Click on Stripe.
  • On the Stripe page complete the form:
    • API: choose “Live” (or “Test” if you wish all Forms to be in Test Mode)
    • Input the Test Publishable Key, Test Secret Key, Live Publishable Key, and Live Secret Key that you obtained from the Stripe Dashboard
    • Under Stripe Webhooks there are instructions for adding a Webhook within your Stripe account that points to a URL on your WordPress site. Click the “View Instructions” link to reveal the necessary steps, then open a tab, log in to your Stripe Dashboard, and complete them there. When that is complete:
    • Tick the “Webhooks Enabled” checkbox
    • Input the Test Signing secret and Live Signing secret values that you obtained from Stripe

Finally, click the Update Settings button to save your settings.

Create a Form and Configure Payments

Create the Gravity Form

Create a new Form and add at least a Product Field, Total field, and Credit Card field. In the Product Field, define your products/services and set their prices.

If they are applicable to your situation, you can also use Option and Quantity fields to gather additional information that can influence the price, and you can include a Shipping field as well.

Add a Stripe Feed to the Form

In your Form’s “Edit” screen, navigate to: Settings > Stripe.

Click on the Add New button to add a new Stripe Feed to this Form.

  • Name textbox: input a descriptive name (e.g. “Event Registration 2018”). Note that it’s good to include a unique identifier in the Name (such as the year for an annually recurring event) so you can easily identify payments related to this particular Form + Stripe Feed in the future.
  • Transaction Type: choose “Products & Services” from the drop-down menu (“Subscriptions” is the other available option, but it is not covered in this post)

A set of new fields will appear:

  • Payment Amount: choose which Form field you would like Gravity Forms to use to determine the total amount to charge the user. In most cases you’d choose the Form Total field (e.g. “Form Total”) rather than any individual product field.
  • Metadata: optionally choose Form fields that you wish to send to Stripe so they can be included in Stripe Reports. This step can make your life easier down the line because you will see more information on the payment side of things, which helps with reconciliation, accounting, and customer support tasks. For each Metadata field you would like to add:
    • Input the field Name (as you’d like it to appear in Stripe) and choose from the drop-down menu which Form Field to use as its value. Metadata is also useful for sending along details about the products or options the user chose. Examples:
    • Name: “Entry ID” Value: “Entry ID” — the unique database ID of the form submission in your WordPress + Gravity
    • Name: “Email” Value: “Email” — sends the customer’s email to Stripe
  • Stripe Receipt: Choose whether or not you would like Stripe to automatically send an email receipt to the customer. The default option is “No” but it is often desirable to choose “Email”.
  • Conditional Logic: Optionally add logic to conditionally process payments only if certain values/conditions are met. Most forms do not need to use this option.

When you are done, click on the Update Settings button to save your Stripe Feed.

You should be good to go! Ensure your Stripe Feed is Live and your Form is Live, and then add the Form to a Post or Page to see it live on your site!

If you want to test your form, you can enable Test mode and use one of the fake/test credit cards that Stripe has published here: https://stripe.com/docs/testing. You will be able to see any Test transactions in your Stripe Dashboard when you toggle to Test vs. Live mode.
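Incidentally, Stripe’s published test numbers (such as the basic Visa test card 4242 4242 4242 4242) are not random: they pass the same Luhn checksum that real card numbers do, which is why they clear client-side validation. A quick illustrative sketch of that check in Python (the function is mine, not part of Stripe or Gravity Forms):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digits pass the Luhn checksum used by card validators."""
    digits = [int(d) for d in card_number if d.isdigit()]
    # double every second digit from the right; subtract 9 if the result exceeds 9
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10 == 0

print(luhn_valid("4242 4242 4242 4242"))  # Stripe's basic Visa test card: True
```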

How to buy neopixel (ws2812b) strips for an Arduino or Raspberry Pi project

Neopixels can be purchased from many sources online. The best prices are often found at Asian deal sites like aliexpress.com.

The vast selection and number of variants of LED products in these catalogues can seem daunting especially for newbies looking for a deal. This post explains what the most common codes and numbers mean to help you identify 5V Neopixel products that are easily compatible with popular microcontrollers like Arduino and single-board-computers like Raspberry Pi.

Neopixel buying guide

The following sections cover different decision points that someone buying Neopixels will be faced with:

Pixels, strands, or rings

Neopixels are usually found in a few different form factors:

  • individual pixels: these are usually mounted to a small circuit board barely larger than the pixel itself
  • strips: mounted to flexible strips of plastic tape
  • strings: individual pixels spaced apart with wires, like Christmas lights
  • rings: rigid circuit boards in circular and semi-circular shapes.

Choose what suits your project best. The Neopixel LEDs themselves are the same; this decision purely relates to what material they are mounted to.

Strips are the most common choice. They are flexible and usually come with a peel-off adhesive backing that isn’t usually very strong (hot glue often works better). Strips can be easily cut with scissors – all the way down to individual pixels, so this purchase is flexible to suit the needs of many projects.

Pixel density

Neopixel strips are typically priced by the number of Neopixels spaced along 1 metre of strip. Rings, strings, and individual Neopixels are simply priced based on how many pixels you’re buying.

When it comes to strips, common pixel densities found in online stores include: 30, 60, 90, and 144.

  • 30 pixels/m is a good choice for area lighting and creating an atmosphere
  • 60 pixels/m is a common general-purpose choice, and is also suitable for display applications where multiple strips are placed side-by-side to create a matrix display
  • 144 pixels/m feature nearly back-to-back pixels and are best suited for projects where a continuous band of colour effect is desired (for a continuous look, you will also need a good diffuser material to put over the lights to “wash” between the individual LEDs).

Generally strips are sold by the metre. The longest continuous strip that is generally available is 5m and comes packaged on a reel. Most strips come with a JST connector clip on each end.

Driver chip (ws2812b)

Each individual pixel on a strip of neopixels has its own ws2812b integrated driver chip. The chip looks like a little black dot that can be seen somewhere “inside” each pixel (where exactly depends on the manufacturer).

The ws2812b “listens” on the control wire for instructions about what to do with its LED: what colour and how bright to make it.

The magic of neopixels is that each LED is individually addressable because each has its own chip. Every pixel “knows” its position along a string (even if it’s alone in a “string” of 1 pixel), and its colour and brightness can be controlled independently of any other pixel. Pixels can be programmed to appear to fade and transition elegantly between colours and intensities (at least as far as the human eye can perceive) because the control signal – which usually runs at 800kHz – updates the pixels many times per second.
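Under the hood, the data sent down the control wire is just three bytes (24 bits) per pixel. One quirk worth knowing: the WS2812B expects colour data in GRB order rather than RGB. A small Python sketch of how a controller library might pack pixel colours into the wire format (the function name is illustrative, not from any particular library):

```python
def pack_pixels(pixels):
    """Pack (R, G, B) tuples into the GRB byte order the WS2812B expects."""
    data = bytearray()
    for r, g, b in pixels:
        data += bytes([g, r, b])  # note: the green byte is sent first
    return bytes(data)

# red, then dim blue, for a 2-pixel strip
frame = pack_pixels([(255, 0, 0), (0, 0, 64)])
```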

The correct neopixel strips for most Arduino projects have 3-pins (or “wires” or “pads”): two for power (+ and -) and one for the control signal. Other types of LED strips can have more pins; 4 and 5 are common for other types of strips.

The order of the 3 pins doesn’t matter and can change depending on the manufacturer. It’s important to always read the labels so you understand what each pin is for. The control signal is directional, so strips are marked with an arrow to show the correct direction.

Voltage

Standard Neopixels are 5V. It’s best to stick with the standard for most LED projects outside of large installations or commercial signage.

Both Arduino and Raspberry Pi operate at 5V, and it can help reduce a project’s complexity when everything shares the same voltage! Note that while the Raspberry Pi is powered by 5V, its GPIO pins output 3.3V. This is still enough to drive some pixels, or you can add a level converter such as the 74AHCT125 to bring the output up to 5V.

Many varieties of strips and strings are 12V or more. There are also strips where the control line is 5V but the power lines are 12V or more, so read product details carefully. Again, for most Arduino / Raspberry Pi or similar projects, it’s best to stick with classic 5V Neopixel strips.

Water resistance

Neopixel strips often come in IP30, IP65, IP67, and IP68 varieties. The “IP” rating refers to the degree of weather/water protection and is a global standard for electronics and other equipment.

See Wikipedia: IP Code for more information about IP Codes.

IP30 and below strips realistically can’t handle water, so you should only use them indoors. IP65, IP67, and IP68 strips are water resistant. IP65 strips are usually covered in a plastic coating that is applied like hot glue to protect against splashes of water. IP67 and IP68 strips are inserted into a hollow rectangular plastic tube with a silicone cap covering each end. The tubes in IP68 strips are filled with a silicone or plastic sealant, making them the best bet for scenarios where the strip could be submerged in water.

For Neopixels in plastic tubes, extra silicone caps can be purchased on their own so you can cut the strips and make the ends watertight again. Hot glue is useful to squirt into the caps to ensure a watertight seal, especially around any wires.

LEDs

Many product listings will state the chip is a 5050 or SMD5050. This is the type of RGB LED that is found in neopixel strips as well as many other types of LED strips.

The RGB LED is actually made up of 3x LEDs: one each for Red (R), Green (G), and Blue (B) light. Different colours are produced by mixing different intensities of light. Some variants of these LEDs also include a dedicated white LED in addition to the 3x colours.

SMD5050’s are “dumb” on their own. It is the combination of a SMD5050 LED with a WS2812B integrated driver chip that makes it an awesome neopixel. Since many other types of LED strips use the SMD5050, if you see this somewhere in a product listing, double check to make sure that you’re getting a WS2812B driver with every pixel and that the voltage is 5V.

PCB Colour

PCB is an acronym for printed circuit board. In terms of Neopixel strips this refers to the flexible tape or strip itself that the pixels are mounted to. For rings, this refers to the hard plastic circuit board.

It is common for neopixel strips to be sold with a choice of white or black PCB. Other colours are manufactured but they are pretty rare. The PCB colour doesn’t make a difference when it comes to how a project works.

Other things to buy

You might want to consider a few other things to go along with an LED purchase:

  • Soldering equipment and supplies
  • Heat shrink tubing
  • JST Connectors (clips) with 3-pins/wires
  • AWG22 or AWG20 wire (note: these wire gauges can handle the power needs of smaller projects)
  • Power supplies (make sure 5V output!)

When purchasing a power supply, take note of the amperage it is designed for and get one (or more) that supplies a greater amount of current than your project needs.
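A common rule of thumb is that each Neopixel can draw up to roughly 60 mA at full-white, full brightness. A quick back-of-envelope helper (the 60 mA figure is a worst-case estimate; real-world draw is usually much lower):

```python
def max_current_amps(num_pixels, ma_per_pixel=60):
    """Worst-case current draw in amps: every pixel at full white, full brightness."""
    return num_pixels * ma_per_pixel / 1000

# e.g. a 5 m strip at 60 pixels/m
pixels = 5 * 60
print(f"{pixels} pixels -> up to {max_current_amps(pixels):.1f} A at 5 V")
```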

Adafruit has an excellent guide on powering Neopixels.

If you need to power a lot of Neopixels, please be safe and do your homework. The current draw required can add up to potentially dangerous levels very quickly. Erik Katerborg has written an excellent guide available as a PDF.

Creating a site-specific WordPress plugin

Site-specific plugins (or “site plugins”) are a common part of professional WordPress projects. They are useful for adding functionality to a site without strictly coupling that functionality to a theme’s functions.php file.

Site plugins are ideal for implementing custom post types, taxonomies, meta boxes, shortcodes, and other functionality that should be preserved even if an admin changes the theme. They are also great for specifying tweaks and security enhancements such as disabling XML-RPC, a rarely-used feature that provides a vector for attack.

Plugin basics

A WordPress plugin can be as simple as a single PHP file that begins with a Plugin Header: a specially-formatted comment that WordPress uses to recognize it as a plugin.

Only a “Plugin Name” is required in the Plugin Header; however, the example below includes a number of recommended fields:

<?php
/*
Plugin Name: Site plugin for firxworx.example.com
Plugin URI: https://developer-website.example.com/unique-plugin-url
Description: Implements custom post types (or whatever) for firxworx.example.com
Version: 1.0
Author: your_name
Author URI: https://author.example.com 
*/

If you intend to support internationalization, specify your text domain (e.g. “Text Domain: example”) in the Plugin Header as well. It might also be important to specify a “License” and “License URI” for your project.

After the Plugin Header, you can define any functions and hooks as you normally would in a theme’s functions.php file.
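For example, the XML-RPC tweak mentioned earlier fits naturally here. A minimal sketch (the `xmlrpc_enabled` filter and `add_shortcode` are standard WordPress APIs; the shortcode itself is just an illustration):

```php
// disable XML-RPC (a rarely-used feature and a common attack vector)
add_filter( 'xmlrpc_enabled', '__return_false' );

// example shortcode: [year] outputs the current year
add_shortcode( 'year', function () {
    return date( 'Y' );
} );
```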

Site Plugins as Must-Use Plugins

A Must-Use plugin (or mu-plugin) is a special type of plugin that does not appear in the regular list on the Plugins page in wp-admin and can only be deactivated by deleting its associated file(s).

On client projects where stakeholders are given full admin access, it can be helpful to deploy a site plugin as an mu-plugin to prevent accidents.

Mu-plugin requirements

Any plugin that does not depend on Activation or Deactivation hooks can be deployed as an mu-plugin. Since an mu-plugin is “always there” these concepts do not apply.

An example of a plugin that depends on Activation/Deactivation hooks and therefore can’t be an mu-plugin is one that needs to create (on activation) and remove (on deactivation) database tables for its functionality.

Updating mu-plugins

Update notifications are not available for mu-plugins and updates to them must be performed manually. This is usually the case anyway for custom client work, however this fact can also be useful for projects that incorporate 3rd-party plugins as part of their overall solution.

Since mu-plugins are effectively “locked” to their current version there is no chance an admin can deploy a major update that could risk breaking compatibility with the rest of their site. Developers can take the opportunity to test new versions before they apply them manually.

The practice of “locking down” distributed plugins should only be used in scenarios where the developer is actively supporting the project and ensuring pending updates are regularly applied. Otherwise it may be a wiser security and stability choice to stick with traditional plugin deployments.

Deploying a plugin as a must-use plugin

WordPress looks for mu-plugins in the wp-content/mu-plugins folder by default. This path can be customized by defining the WPMU_PLUGIN_DIR and WPMU_PLUGIN_URL constants in wp-config.php.

Unlike traditionally-deployed plugins, mu-plugins must be PHP files that exist in the root of the wp-content/mu-plugins/ folder. A more complex plugin can be deployed as an mu-plugin by creating a “loader” script to serve as the required PHP file and then using it to pull in the rest of the plugin’s dependencies.
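Such a loader can be very small. A sketch (the folder and file names are illustrative; `WPMU_PLUGIN_DIR` is the real WordPress constant), saved as e.g. wp-content/mu-plugins/load-site-plugin.php:

```php
<?php
/*
Plugin Name: Site Plugin Loader
Description: Loads the site plugin from a subdirectory of mu-plugins.
*/

// pull in the real plugin file from a subfolder (subfolders are not auto-loaded)
require WPMU_PLUGIN_DIR . '/site-plugin/site-plugin.php';
```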

Mu-plugins are loaded in alphabetical order before any “normal” plugins in the wp-content/plugins folder.

Installing CH340/CH34X drivers on MacOS to load sketches onto cheap Arduino clones

This post details how to get [most] cheap Arduino clones working with MacOS Sierra so you can upload sketches to them.

Many clones are not recognized “out of the box” because they implement their USB to serial interface with a CH340 chip designed by China’s WCH (http://www.wch.cn/) instead of the more costly FTDI chip found in genuine Arduinos.

Most sellers on “China deal sites” like Aliexpress.com are up-front about these chips and include “CH340” in their product titles and descriptions, though the implications of this design modification are not always understood by purchasers.

The easy installation method covered in this post comes courtesy of Adrian Mihalko. He has bundled the manufacturer’s latest Sierra-compatible CH340/CH34G/CH34X drivers for installation with brew cask. These drivers are signed by the OEM, so it’s no longer necessary to disable the Mac’s System Integrity Protection (SIP) feature.

Github: https://github.com/adrianmihalko/ch340g-ch34g-ch34x-mac-os-x-driver

I had no problem getting a Robotdyn Arduino Uno as well as another cheap clone running on a Mac with High Sierra.

Step by step

Prerequisite: ensure brew is installed on your Mac. Verify its presence and version by executing brew --version in Terminal.

To begin, install the drivers with brew cask:

brew tap mengbo/ch340g-ch34g-ch34x-mac-os-x-driver https://github.com/mengbo/ch340g-ch34g-ch34x-mac-os-x-driver

brew cask install wch-ch34x-usb-serial-driver

(Note: the above is only two commands. The first one runs long, so take care when copying and pasting.)

When the install completes, reboot your machine.

Next, plug your Arduino clone into a free USB port.

Using Terminal, verify that the device is recognized by listing the contents of the /dev directory and looking for cu.wchusbserial1420 or cu.wchusbserial1410 in the output:

ls /dev

For example, I found cu.wchusbserial1420 in the output when I connected my Robotdyn Uno.

Things are promising if you find a similar result.

The Arduino IDE ships with drivers for the Uno itself, and save for the CH340, my clones are otherwise fully Arduino compatible (note: some clones might require additional drivers, and/or a different Board must be specified in the Arduino IDE). For my clones, the following steps were all I needed to upload sketches:

  • Open Arduino IDE with a test Sketch
  • Select the correct port in Tools > Port (e.g. /dev/cu.wchusbserial1420)
  • Verify that Tools > Board has “Arduino/Genuino Uno” selected
  • Verify/Compile the Sketch (Apple+R)
  • Upload the Sketch (Apple+U)

Done.

The Robotdyn Uno appears to be decently well made: it’s laid out to support all Arduino-compatible shields and it comes on an attractive black PCB. Versus a genuine Uno, it uses a micro-USB port instead of a full-size one and exposes the ATmega328P microcontroller’s analog 6+7 pins. The company makes a number of similarly slick-looking accessories on black PCBs. Their store on AliExpress is: https://robotdyn.aliexpress.com/

Have fun with your cheap clones!

Pulling files off a shared host (CPanel) with a 10K file FTP limit using a python web scraper

This post demonstrates the use of a web scraper to circumvent an imposed limit and download a large batch of files.

I’ll use a recent case as an example where I had to migrate a client’s site to a new host. The old shared host was running an ancient version of CPanel and had a 10K file limit for FTP. There was no SSH or other tools, almost no disk quota left, and no support that could possibly change any CPanel settings for me. The website had a folder of user uploads with 30K+ image files.

I decided to use a web scraper to pull all of the images. In order to create links to all of the images that I wanted to scrape, I wrote a simple throwaway PHP script to link to all of the files in the uploads folder. I now had a list of all 30K+ files for the first time — no more 10K cap:

<?php
$directory = dirname(__FILE__) . '/_image_uploads';
$dir_contents = array_diff(scandir($directory), array('..', '.'));

echo '<h3>' . count($dir_contents) . '</h3>';
echo '<h5>' . $directory . '</h5>';

echo "<ul>\n";
$counter = 0;
foreach ($dir_contents as $file) {
  echo '<li>' . $counter++ . ' - <a href="/_image_uploads/'. $file . '">' . $file . "</a></li>\n";
}
echo "</ul>";
?>

Next, to get the files, I used a python script to scrape the list of images using the urllib and shutil modules from the python3 standard library.

I posted a gist containing a slightly more generalized version of the script. It uses the BeautifulSoup library to parse the response from the above PHP script’s URL to build a list of all the image URLs that it links to. This script can be easily modified to suit a variety of applications, such as downloading lists of PDF’s or CSV’s that might be linked to from any arbitrary web page.


If you need to install the BeautifulSoup library with pip use: pip install beautifulsoup4

In the gist, note the regex in the line soup.findAll('a', attrs={'href': re.compile("^http://")}). This line and its regex can be modified to suit your application, e.g. to filter for certain protocols, file types, etc.
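For reference, here is a simplified stdlib-only sketch of the same idea (using html.parser instead of BeautifulSoup, so there is nothing to pip-install; the listing URL and filter pattern are placeholders):

```python
import re
import shutil
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags whose href matches a pattern."""
    def __init__(self, pattern):
        super().__init__()
        self.pattern = re.compile(pattern)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and self.pattern.match(value):
                    self.links.append(value)

def extract_links(html, pattern=r"^http://"):
    parser = LinkCollector(pattern)
    parser.feed(html)
    return parser.links

def download(url, dest):
    # stream each file to disk rather than loading it into memory
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)

# usage: fetch the listing page produced by the PHP script, then pull each file
# listing = urllib.request.urlopen("http://example.com/list.php").read().decode()
# for link in extract_links(listing):
#     download(link, link.rsplit("/", 1)[-1])
```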

Encrypting a USB Drive in MacOS, including formatting to JHFS+ or APFS

MacOS Sierra doesn’t feature an option to encrypt a USB drive in Disk Utility or in Finder (at least at the time of writing). This post covers how to format a USB drive to either the JHFS+ or the new APFS filesystem and encrypt it using the Terminal and Disk Utility.

Instructions

First, plug your USB drive into your computer and open the Terminal app.

Use the following command to list your disks:

diskutil list

Look for an entry (or entries) like /dev/disk2 (external, physical) and make absolutely sure that you understand the difference between your system’s hard disk and the external USB drive you want to encrypt.

The “IDENTIFIER” for my USB drive, found at the top of the list, was disk2. My system showed a subsequent entry disk2s1 but note how this still refers to disk2. Only the disk2 part is required. Use whatever diskn number corresponds to your target drive, where n is an integer.

Only proceed if you are certain that you have correctly identified your USB drive!

The following command formats the drive to Apple’s HFS+ with Journaling format, JHFS+. GPT is a crucial argument here to specify the GUID Partition Map option vs. the Master Boot Record option. You can replace the text inside the quoted string (“Flash Drive”) with your desired drive name:

diskutil eraseDisk JHFS+ "Flash Drive" GPT disk2

It is now possible to encrypt the drive with Finder (right-click and choose “Encrypt ‘Flash Drive’”) if you wish to simply keep the JHFS+ file system. If you wish to use the newer APFS file system, do not encrypt the drive just yet, and read on.

Using the new APFS file system

APFS is Apple’s latest filesystem and it features good support for encryption. Before formatting your drive to APFS, be aware that older Macs (i.e. those without MacOS Sierra and up) will not support it.

To proceed with APFS, open the Disk Utility app.

With the drive formatted to JHFS+, Disk Utility will no longer grey out the “Convert to APFS” option when you right/control+click it.

Find your drive and choose “Convert to APFS”.

Once the file system has been converted to APFS you can go back to Finder, right/control+click on your drive, and choose “Encrypt ‘Flash Drive’” from the menu.

Don’t forget your passphrase 😉

Troubleshooting the fast.ai AWS setup scripts (setup_p2.sh)

Fast.ai offers a well-regarded free online course on Deep Learning that I thought I’d check out.

It seems that a lot of people struggle getting the fast.ai setup scripts running. Complaints and requests for help are on reddit, in forums, etc. This doesn’t surprise me because the scripts are not very robust. On top of that, AWS has a learning curve so troubleshooting following a script failure can be a challenge.

Hopefully this post helps other people who have hit snags. It is based on my experience on MacOS, however it should translate well for those running Linux or Windows with Cygwin.

Understanding the setup script’s behaviour

It leaves a mess when it fails

If running the setup script fails, which is possible for a number of reasons, it will potentially have created a number of AWS resources in your account and a local copy of an SSH key at ~/.ssh/aws-key-fast-ai.pem. It does not clean up after itself in failure cases.

The setup script doesn’t check for existing fast-ai tagged infrastructure, so subsequent runs can create additional VPCs and related resources on AWS, especially as you attempt to resolve the reason(s) it failed. The setup script generates fast-ai-remove.sh and fast-ai-commands.txt, but it overwrites these each time it’s run with only its current values, potentially leaving “orphan” infrastructure.

Thankfully all AWS resources are created with the same “fast-ai” tags so they are easy to spot within the AWS Console.

It makes unrealistic assumptions

The setup script assumes your aws config’s defaults specify a default region in one of its three supported regions: us-west-2, eu-west-1, and us-east-1.

I’m not sure why the authors assumed that a global tech crowd interested in machine learning would be unlikely to have worked with AWS in the past, and would thus have no existing aws configuration that might conflict.

The commands in the script do not use the --region argument to specify an explicit region so they will use whatever your default is. If your default happens to be one of the three supported ones, but you don’t have a sufficient InstanceLimit or there’s another problem, more issues could follow.

Troubleshooting

If you encountered an error after running the script, work through the following checks before re-running it:

Check 1: Ensure you have an InstanceLimit > 0

Most AWS users will have a default InstanceLimit of 0 on P2 instances. You may need to apply for an increase and get it approved (this is covered in the fast.ai setup video).

If a first run of the script gave you something like the following, there was an issue with your InstanceLimit:

Error: An error occurred (InstanceLimitExceeded) when calling the RunInstances operation: You have requested more instances (1) than your current instance limit of 0 allows for the specified instance type. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.

InstanceLimits are specific to a given resource in a given region. Take note of which region your InstanceLimit increase request was for and verify that it was granted in the same region.

Check 2: Ensure the right region

Verify your current default aws region by running: aws configure get region. The script assumes this is one of three supported regions: us-west-2, eu-west-1, or us-east-1.

The script also assumes that you have an InstanceLimit > 0 for P2 instances in whichever region you would like to use (or T2 instances if you are using setup_t2.sh).

To get things running quickly, I personally found it easiest to make the script happy and temporarily set my aws default to a supported region in ~/.aws/config, i.e.:

[default]
region=us-west-2

Another option is to modify the scripts and add an explicit --region argument to every aws command that will override the default region. If you have multiple aws profiles defined as named profiles, and the profile that you wish to use for fast.ai specifies a default region, you can use the --profile PROFILENAME argument instead.

For example, the following hypothetical aws config file (~/.aws/config) specifies a profile called “fastai”. A --profile fastai argument could then be added to every aws command in the setup script:

[default]
region=ca-central-1

[profile fastai]
region=us-west-2

Check 3: Delete cruft from previous failed runs

This check is what inspired me to write this post!

Delete AWS resources

Review any resources that were created in your AWS Console, and delete any VPCs (and their dependencies) that were spun up. They can be identified by the “fast-ai” tag, which is shown in any tables of resources in the AWS Console.

Cruft resources will have been created in any region that the setup script was working with (i.e. whatever your default region was at the time you ran it).

If you’ve found cruft, start by trying to delete the VPC itself, as this generally will delete most if not all dependencies. If this fails because of a dependency issue, you will need to find and delete those dependencies first.

IMPORTANT: AWS creates a default VPC and related dependencies (subnets, etc.) in every region available to your account. Do NOT delete any region’s default VPC. Only delete resources tagged with “fast-ai”.

Delete SSH keys

Check to see if ~/.ssh/aws-key-fast-ai.pem was created, and if so, delete it before running the script again.

The setup script has logic that checks for this pem file. We do not want the script to find the file on a fresh run.

After a successful run

After the setup script ran successfully, I got output similar to:

{
    "Return": true
}
Waiting for instance start...

All done. Find all you need to connect in the fast-ai-commands.txt file and to remove the stack call fast-ai-remove.sh
Connect to your instance: ssh -i /Users/username/.ssh/aws-key-fast-ai.pem ubuntu@ec2-XX-YY-ZZ-XXX.us-west-2.compute.amazonaws.com

Reference fast-ai-commands.txt for information about your VPC and EC2 instance. An ssh command to connect is in the file, and you can find your “InstanceUrl”.

I suggest picking up the video from here and following along from the point where you connect to your new instance. It guides you through checking the video card with the nvidia-smi command and running jupyter: http://course.fast.ai/lessons/aws.html

Starting and stopping your instance

The fast-ai-commands.txt file outlines the commands to start and stop your instance after the setup has completed successfully, e.g.:

aws ec2 start-instances --instance-ids i-0XXXX
aws ec2 stop-instances --instance-ids i-0XXXX

It’s important to stop instances when you are finished using them so that you don’t get charged hourly fees for their continued operation. P2 instances run about $0.90/hr at the time of writing.

How to write custom wp-cli commands for next-level WordPress automation

wp-cli is a powerful tool for WordPress admins and developers to control their sites from the command-line and via scripts.

This is an introductory guide to writing custom wp-cli commands via a plugin, enabling you to expand wp-cli’s features and tailor its capabilities to your projects.

Custom commands are invoked like any other:

wp custom_command example_custom_subcommand

Why wp-cli custom functions are useful

Custom commands can help automate the management of more complex sites, enhance one’s development workflow, and enable greater control over other themes and plugins.

Practical real-world example

On a past project, I implemented wp-cli commands that enabled me to quickly load hundreds of pages of multilingual content in English, French, and Chinese, plus the translation associations between them all, to local and remote WP installs. The internationalization plugin in play, the popular and notorious pain-in-the-ass WPML, had no wp-cli support (and still doesn’t!). It otherwise would’ve required the clicking of bajillions of checkboxes every time copy/translation decks or certain features were revised.

Bare-bones plugin implementation

The following assumes that wp-cli is installed (and accessible via the wp command), and that the example plugin has been correctly installed and activated on the target WordPress install.

This code creates a basic plugin class called ExamplePluginWPCLI that is only loaded when the WP_CLI constant is defined:

<?php
/**
 * Plugin Name: Example Plugin WP-CLI
 */

if ( defined( 'WP_CLI' ) && WP_CLI ) {

    class ExamplePluginWPCLI {

        public function __construct() {

                // example constructor called when plugin loads

        }

        public function exposed_function() {

                // give output
                WP_CLI::success( 'hello from exposed_function() !' );

        }

        public function exposed_function_with_args( $args, $assoc_args ) {

                // process arguments 

                // do cool stuff

                // give output
                WP_CLI::success( 'hello from exposed_function_with_args() !' );

        }

    }

    WP_CLI::add_command( 'firx', 'ExamplePluginWPCLI' );

}

Adding a custom command to wp-cli

Consider the line WP_CLI::add_command( 'firx', 'ExamplePluginWPCLI' ); from the above example.

The command’s name, firx, is given as the first argument. You can choose any name for a custom command that isn’t already taken by an existing command, provided it doesn’t contain special characters. It is wise to pick a unique name that minimizes the risk of conflicts with any other plugin or theme that might also add commands to wp-cli.

Defining new wp-cli commands in a class like ExamplePluginWPCLI confers a special advantage over defining them using only standalone php functions or closures: all public methods in classes passed to WP_CLI::add_command() are automatically registered with wp-cli as sub-commands.

Executing a custom command

The class’ public function exposed_function() can be called via wp-cli as follows:

wp firx exposed_function

The class’ public function exposed_function_with_args() can be called via wp-cli as follows. This particular function accepts command line arguments that will get passed into it via its $args and $assoc_args variables as appropriate:

wp firx exposed_function_with_args --make-tacos=beef --supersize

The constructor __construct() is optional and is included as an example. This function is called when an instance of the plugin class is loaded and it can be used to define class variables and perform any necessary setup tasks.

Output: success, failure, warnings, and more

Sending information back to the command line

wp-cli has a number of functions for outputting information back to the command line. The most commonly used are:

// displays the message to STDOUT with a 'Success:' prefix (does not exit)
WP_CLI::success( 'Message' )

// displays the message to STDERR with an 'Error:' prefix, then exits with status 1 (by default)
WP_CLI::error( 'Message' )

// display a line of text to STDOUT with no prefix
WP_CLI::log( 'Message' )

// display a line of text when the --debug flag is used with your command
WP_CLI::debug( 'Message' )

The error() function generally serves as a return equivalent for wp-cli commands: called with a single message argument, it exits the script after the message is displayed. Note that success() only prints a confirmation and does not exit, so return explicitly once your function is done.
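The stream separation matters when you script against wp-cli. This sketch uses a stub function standing in for a wp-cli command (no WordPress is assumed here) to show that success output lands on STDOUT while warnings land on STDERR, so each can be captured independently:

```shell
#!/bin/sh
# Stub that mimics a wp-cli command's output streams (illustration only)
wp_stub() {
    echo "Success: imported 10 posts"           # like WP_CLI::success() -> STDOUT
    echo "Warning: 2 posts were skipped" >&2    # like WP_CLI::warning() -> STDERR
}

out=$(wp_stub 2>/dev/null)       # capture STDOUT only
err=$(wp_stub 2>&1 1>/dev/null)  # capture STDERR only

echo "stdout was: $out"
echo "stderr was: $err"
```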

Formatting output as tables, json, etc

wp-cli has a useful helper function called format_items() that makes it a lot easier and cleaner to output detailed information.

Available options to output json, csv, count, and yaml are brilliant for enabling command output to be easily used by scripts and/or digested by web services.

The function accepts 3 arguments:

WP_CLI\Utils\format_items( $format, $items, $fields )
  • $format – string that accepts any of: ‘table’, ‘json’, ‘csv’, ‘yaml’, ‘ids’, ‘count’
  • $items – Array of items to output (must be consistently structured)
  • $fields – Array or string containing a csv list to designate as the field names (or table headers) of the items

For example, the following code:

$fields = array ( 'name', 'rank' ); 
$items = array (
    array (
        'name' => 'bob',
        'rank' => 'underling',
    ),
    array (
        'name' => 'sally',
        'rank' => 'boss',
    ),
);
WP_CLI\Utils\format_items( 'table', $items, $fields );

Would output something similar to:

+-------+-----------+
| name  | rank      |
+-------+-----------+
| bob   | underling |
| sally | boss      |
+-------+-----------+

Changing the format to ‘json’ or ‘yaml’ is as simple as swapping out the ‘table’ argument.
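As a sketch of why the machine-readable formats help with automation: ids-style output can drive a shell loop directly. A stub stands in here for a real invocation such as wp post list --format=ids, since no WordPress install is assumed:

```shell
#!/bin/sh
# Stub standing in for something like: wp post list --format=ids
wp_stub_ids() { echo "10 11 12"; }

# iterate over the space-separated ids
for id in $(wp_stub_ids); do
    echo "processing post $id"
done
```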

Input: handling arguments

Positional and associative arguments

wp-cli supports both positional arguments and associative arguments.

Positional arguments are interpreted based on the order they are specified:

wp command arg1 42

Associative arguments may be specified in any order, and they can accept values:

wp command --make-tacos=beef --supersize --fave-dog-name='Trixie the Mutt'

Both positional and associative arguments can be supported by the same command.

A function implemented with two parameters, $args and $assoc_args, such as the one from the first big example in this guide (exposed_function_with_args( $args, $assoc_args )), will receive all positional arguments via the $args variable and all associative arguments via the $assoc_args variable.

Retrieving values passed to a command

Suppose we wanted to process all of the arguments for the command:

wp firx exposed_function_with_args arg1 42 --make-tacos=veggie --supersize --fave-dog-name='Trixie the Mutt'

The following example expands on the initial example’s implementation of the exposed_function_with_args() function to demonstrate how to access the values of each argument:

public function exposed_function_with_args( $args, $assoc_args ) {

    // process positional arguments - option 1
    $first_value = $args[0];  // value: "arg1"
    $second_value = $args[1]; // value: 42

    // OR - process positional arguments - option 2
    list( $first_value, $second_value ) = $args;

    // process associative arguments - option 1 
    $tacos = $assoc_args['make-tacos']; // value: "veggie"
    $supersize = $assoc_args['supersize'];  // value: true
    $dog_name = $assoc_args['fave-dog-name'];    // value: "Trixie the Mutt"

    // OR - process associative arguments - option 2 - preferred !! 
    $tacos = WP_CLI\Utils\get_flag_value($assoc_args, 'make-tacos', 'chicken' );
    $supersize = WP_CLI\Utils\get_flag_value($assoc_args, 'supersize', false );
    $dog_name = WP_CLI\Utils\get_flag_value($assoc_args, 'fave-dog-name' );

    // do cool stuff

    // provide output
    WP_CLI::success( 'successfully called exposed_function_with_args() !' );

}

Using get_flag_value() to help with associative arguments

wp-cli comes with a handy helper function for handling associative arguments that serves as the preferred way to access them:

WP_CLI\Utils\get_flag_value($assoc_args, $flag, $default = null)

This function is passed the $assoc_args variable, the $flag (argument name) you wish to access, and optionally a default value to fall-back on if you want it to be something other than null.

What makes get_flag_value() the preferred method for accessing values is that it handles a few tricks for you, including support for the --no-<flag> prefix that negates an argument. A robust implementation of the above --supersize option should also handle the case where a --no-supersize version is passed. Using get_flag_value() to access the argument (i.e. passing “supersize” as the middle $flag argument) covers this case automatically: your function receives the correct true or false value either way.

Working with arguments

This is an introductory guide and the examples are for illustrative purposes only. For a robust implementation, keep in mind that you will likely need to add logic to any function that accepts arguments: checks that required items were specified, that expected/supported values were passed in, and so on.

These cases can be handled by your php code and can leverage wp-cli’s output functions like warning() or error() to provide user feedback. Another option that can cover many validation cases is to leverage wp-cli’s PHPDoc features (more on that below).

Registering arguments with wp-cli

An easy way to register arguments/options, whether they’re mandatory or not, and specify any default values is to use PHPDoc comment blocks. These play an active role in wp-cli which intelligently interprets them. Refer to the section on PHPDoc near the end of this guide.

wp-cli custom functions will still work in the absence of PHPDoc comments but they won’t be as user-friendly and any arguments won’t be tightly validated.

Errors: error handling with wp-cli

Use the WP_CLI::error() function to throw an error within a class or function that implements wp-cli commands:

// throw an error that will exit the script (default behaviour)
WP_CLI::error( 'Something is afoot!' );

// throw an error that won't exit the script (consider using a warning instead)
WP_CLI::error( 'The special file is missing.', false );

Error output is written to STDERR and the script exits with status 1 by default, which is important to consider when writing scripts that invoke wp-cli and respond to error conditions.

Use WP_CLI::warning() to write a warning message to STDERR that won’t halt execution:

WP_CLI::warning( $message )

Use WP_CLI::halt() to halt execution with a specific integer return code (this one is mostly for those writing scripts):

WP_CLI::halt ( $return_code )

PHPDoc comments and controls

PHPDoc-style comment blocks are more than comments when it comes to wp-cli: it interprets them to provide help text to cli users, to document the command’s available options/arguments and their default values, and to perform a level of validation before a command’s underlying function is called and any argument data is passed to it. Complete and descriptive comments enable wp-cli to enforce mandatory vs. optional parameters for you.

Implementing correct PHPDoc comments simplifies and expedites the implementation of custom commands because the developer defers certain validation checks to wp-cli (e.g. to ensure a required argument was specified) rather than implementing everything on their own.

PHPDoc comments are used as follows:

  • The first comment line corresponds to the “shortDesc” shown on the cli
  • The “meat” of a comment body corresponds to the “longDesc”, which may be shown when cli users mess up a command, and is shown when they specify the --help flag with a given command
  • Options (aka arguments aka parameters in this context) that are defined specify if each parameter is mandatory vs. optional, and if a single or multiple arguments are accepted

The “shortDesc” and “longDesc” may be displayed to cli users as they interact with the command. To show a full description, cli users can execute:

wp [command_name] --help

Basic PHPDoc comment structure

PHPDoc comments are placed immediately preceding both Class and method (function) declarations, and have a special syntax that starts with /**, has a 2-space-indented * prefixing each subsequent line, and ends with an indented */ on its own line:

/**
 * Implements example command that does xyz
 */
class ClassOrMethodName {
    // ...
}

PHPDoc comments with parameters

The wp-cli’s Command Cookbook provides the following example of a PHPDoc comment:

    /**
     * Prints a greeting.
     *
     * ## OPTIONS
     *
     * <name>
     * : The name of the person to greet.
     *
     * [--type=<type>]
     * : Whether or not to greet the person with success or error.
     * ---
     * default: success
     * options:
     *   - success
     *   - error
     * ---
     *
     * ## EXAMPLES
     *
     *     wp example hello Newman
     *
     * @when after_wp_load
     */
    function hello( $args, $assoc_args ) {
        // ... 
    }

The ## OPTIONS and ## EXAMPLES headers within the “meat” of the comment (corresponding to the “longDesc”) are optional but are generally recommended by the wp-cli team.

The arguments/parameters are specified under the ## OPTIONS header:

  • <name> specifies a required positional argument
    • writing it as <name>... means the command accepts 1 or more positional arguments
  • [--type=<type>] specifies an optional associative argument that accepts a value
    • [some-option] without the = sign specifies an optional associative argument that serves as a boolean true/false flag
  • the example’s default: and options: provided for the [--type=<type>] argument can be specified under any argument to communicate that argument’s default value and the options available to cli users

In case word wrapping is a concern for you when help text is presented:

  • Hard-wrap (add a newline) option descriptions at 75 chars after the colon and a space
  • Hard-wrap everything else at 90 chars

More information about PHPDoc syntax in the wp-cli context is available in the docs:

https://make.wordpress.org/cli/handbook/documentation-standards/

Help with running wp-cli commands

Here are a couple of reminders that could prove useful to someone following this guide.

Specifying a correct wordpress path

wp-cli must always be run against a WordPress installation.

  • You can cd to the path where WordPress has been installed, so that the shell prompt’s working directory is at the base where all the WordPress files are (e.g. on many systems, this would be something similar to: cd /var/www/example-wordpress-site.com/public_html)
  • Alternatively you can provide the global --path argument with any wp-cli command (e.g. --path=/var/www/example-wordpress-site.com/public_html) to run it from any folder

Running wp-cli as a non-root user

It is dangerous to run wp-cli as root, and wp-cli will happily inform you of this fact should you try. That’s because php scripts belonging to your WordPress install and to wp-cli itself would execute with full root permissions. They could be malicious (especially risky if your site allows uploads!) or simply buggy, and that puts your whole system at risk.

A popular option is to use sudo with its -u option to run a command as another user. Assuming that you have sudo installed and correctly configured, and assuming that the user ‘www-data’ is the “web server user” on your system and that it has read/write permissions to your WordPress installation folder (these are common defaults found on Ubuntu and Debian systems), you can prefix your commands with: sudo -u www-data --.

The following executes one of wp-cli’s built-in commands, wp post list, as the www-data user:

sudo -u www-data -- wp post list
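To avoid typing the prefix every time, a small wrapper function can be added to your shell profile. The name wpw is just a suggestion, and this assumes the same sudo/www-data setup described above:

```shell
# Add to ~/.bashrc (or similar): run any wp-cli command as the web server user
wpw() {
    sudo -u www-data -- wp "$@"
}

# usage examples:
#   wpw post list
#   wpw firx exposed_function
```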

Further reading

Check out the wp-cli docs and the commands cookbook for more.

Avoiding duplicate entries in authorized_keys (ssh) in bash and ansible

Popular methods of adding an ssh public key to a remote host’s authorized_keys file include using the ssh-copy-id command, and using bash operators such as >> to append to the file.

An issue with ssh-copy-id is that this command does not check if a key already exists. This creates a hassle for scripts and automations because subsequent runs can add duplicate key entries. This command is also not bundled with MacOS, creating issues for some Mac users (though it can be installed with Homebrew).

This post covers a solution that adds a given key to authorized_keys only if that key isn’t already present in the file. Examples are provided in bash and for ansible using ansible’s shell module (old versions) and authorized_key module (newer versions).

For shell scripts, there seem to be a lot of solutions out there for this common problem, but I think a lot of them overcomplicate things with sed, awk, uniq, and similar commands; or go overboard by implementing standalone utilities for the task. One thing I don’t like about many of the working solutions that I’ve come across is when the authorized_keys file is reordered as a side-effect.

Note that ssh authentication works fine even when authorized_keys contains multiple identical entries. However, accumulating junk in this file can create performance issues, and can make troubleshooting, auditing, and other admin tasks more difficult. When a client tries to authenticate, the server works its way down the authorized_keys file until it finds a match.

Adding a unique entry to authorized_keys

The following is a one-liner to be run by a user that can authenticate with the remote server.

Modify the snippet below to suit your needs:

ssh -T user@central.example.com "umask 0077 ; mkdir -p ~/.ssh ; grep -q -F \"$PUB_KEY\" ~/.ssh/authorized_keys 2>/dev/null || echo \"$PUB_KEY\" >> ~/.ssh/authorized_keys"

The command adds the public key stored in the shell variable $PUB_KEY to the authorized_keys file of the user on the server central.example.com. A umask ensures the correct file permissions.

To modify, replace user and central.example.com with values relevant to you, and either substitute your public key in place of the $PUB_KEY variable, or define the variable in a bash script or set it as an environment variable prior to executing the command.

Benefits of this approach:

  • unique entries: no duplicate authorized_keys
  • idempotent: subsequent runs given the same input will yield the same result
  • order preserved: entries in authorized_keys retain their order
  • correct permissions: in cases where the .ssh folder and/or authorized_keys file do not already exist, they will be created with the correct permissions for openssh thanks to the umask
  • quiet: the command produces no extraneous output
  • automation friendly: a fast one-liner that’s easy to add to scripts, with minimized race conditions when automations (e.g. ansible playbooks) run in parallel
  • KISS principle: it’s not as risky or difficult to configure as some other approaches that I have encountered online

Tip: To suppress any motd/welcome banner content that might be output when connecting to the remote server via ssh, first touch a .hushlogin file in the target user’s home directory.
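The grep-or-append idiom is easy to exercise locally before pointing it at a real server. This sketch runs the same logic against a scratch file (the key string is a placeholder, not a real key), appending twice to show that the guard keeps the file to a single entry:

```shell
#!/bin/sh
# Scratch demo of the idempotent append used in the one-liner above
PUB_KEY="ssh-ed25519 AAAAexampleexampleexample demo@example"  # placeholder key
AK=./authorized_keys.demo

rm -f "$AK"
umask 0077
touch "$AK"

# run the append twice; the grep guard keeps the file to a single entry
for i in 1 2; do
    grep -q -F "$PUB_KEY" "$AK" 2>/dev/null || echo "$PUB_KEY" >> "$AK"
done

echo "entries: $(grep -c -F "$PUB_KEY" "$AK")"   # prints: entries: 1
```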

Ansible implementation

Current: using the authorized_key module

The newer known_hosts and authorized_key modules (the latter gaining numerous features from its introduction through 2.4+) were introduced to help manage ssh keys on a host.

The authorized_key module has a lot of useful options, including optional exclusivity, support for sourcing keys from variables (and hence files, via a lookup) as well as URLs, and options to manage the user’s .ssh/ directory (e.g. creating it with appropriate permissions if it doesn’t exist).

An example from the docs follows, with one addition: I added the exclusive option in keeping with the theme of this post.

- name: Set authorized key taken from file
  authorized_key:
    user: charlie
    state: present
    key: "{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
    exclusive: yes

See the ansible documentation for more examples: http://docs.ansible.com/ansible/latest/authorized_key_module.html

Legacy: using bash in the shell module

One of the more annoying aspects of ansible can be getting escape characters right in templates and certain modules like shell, especially when variables are involved. The following example has valid syntax. You can modify the variables and the become and delegate_to args to suit your scenario:

# assume the ansible user on the control machine can access the remote target server via ssh 

- name: set_fact host_pub_key containing current host's pub key from local playbook_dir/keys
  set_fact:
    host_pub_key: "{{ lookup('file', playbook_dir + '/keys/' + inventory_hostname + '-' + authorized_user + '-id_rsa.pub') }}"

- name: add current host's pub key to repo server's authorized_keys if its not already present 
  shell: |
    ssh -T {{ example_user }}@{{ example_server }} "umask 0077 ; mkdir -p ~/.ssh ; grep -q -F \"{{ host_pub_key }}\" ~/.ssh/authorized_keys 2>/dev/null || echo \"{{ host_pub_key }}\" >> ~/.ssh/authorized_keys"
  args:
    executable: /bin/bash
  become: no
  delegate_to: localhost

The first task populates the host_pub_key fact from a hypothetical id_rsa.pub key file.

The second task executes the bash snippet that adds the public key to the remote host’s authorized_keys file in a way that avoids duplicates.

Creating certificates and keys for OpenVPN server with EasyRSA on MacOS

This guide covers how to create certificates and keys for OpenVPN server and clients using the EasyRSA tool on MacOS.

The instructions are very similar for most flavours of linux such as Ubuntu once the correct packages are installed (e.g. on Ubuntu: apt-get install openvpn easy-rsa).

If privacy and security are of the utmost concern, generate all certificates and keys on a “clean” machine and verify the signatures of each download.

Step 1: Resolve MacOS Dependencies

This guide assumes that you’re running MacOS Sierra or later.

XCode and Command Line Tools

Ensure that you have installed the XCode Command Line Tools.

To check: the command xcode-select -p outputs a file path beginning with /Applications/Xcode.app/ if the tools are already installed.

If Command Line Tools is not installed, open the Terminal app and enter xcode-select --install to trigger the installation app.

Another way to trigger the installation app is to attempt to use a command line developer tool such as the GNU C compiler gcc (e.g. gcc --version). If the tools are not installed, you will be greeted by a graphical MacOS installation prompt instead of the expected Terminal output from gcc. You don’t necessarily need the full XCode so you can click the “install” button for just the command line tools.

Work your way through the installer and follow Apple’s steps; the CLI commands become available after you agree to all of Apple’s terms and conditions.

If you experience troubles with the next step, assuming that it is the result of some future change by Apple, it may be beneficial to install the full XCode in addition to the CLI tools. It’s available for free on the App Store, but take note that it’s a hefty multi-gigabyte download.

OpenSSL

EasyRSA requires a recent version of the open-source OpenSSL library.

Apple bundles its own crypto libraries in MacOS but these are generally out of date. At the time of writing, the openssl command bundled with MacOS is not likely compatible with EasyRSA and will produce errors if you try to use it (note: the binary is at /usr/bin/openssl).

A newer EasyRSA-compatible version of OpenSSL is easy to install with the brew package manager (https://brew.sh/). Installing via brew will not clobber or harm the Apple version that’s already on your system. If you need to install brew, go to the project’s website and follow the simple instructions on the landing page.

Assuming you have brew installed, open a Terminal and run the command:

brew install openssl

Brew will download and install openssl to its default package install directory of /usr/local/Cellar.

The package will be installed in “keg only” mode: brew will not create a symlink for its openssl in /usr/local/bin or anywhere else in your $PATH. You will not have a conflicting openssl command, and Apple’s binary will remain intact.

To get EasyRSA to use the openssl binary installed by the brew package, you will need to know its path. Run brew’s package info command and examine the output:

brew info openssl

In my example, I could see that openssl resolved to /usr/local/Cellar/openssl/1.0.2n. In your case, this may be a different path due to a more recent version being available in the future. Next, inspect this folder to locate the binary and determine the full path to it. In my example case, the full path to the binary was:

/usr/local/Cellar/openssl/1.0.2n/bin/openssl

Note down the correct path to the openssl binary for your case. When configuring EasyRSA in the next step, you will need to specify this path in an EASYRSA_OPENSSL variable.
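Rather than eyeballing the Cellar folder, you can ask brew for the package prefix directly. A sketch, assuming Homebrew is installed (the fallback path is a common default for brew’s opt symlinks, not guaranteed on every system):

```shell
#!/bin/sh
# Locate the brew-installed openssl binary without hard-coding a version number
if command -v brew >/dev/null 2>&1; then
    OPENSSL_BIN="$(brew --prefix openssl)/bin/openssl"
else
    OPENSSL_BIN="/usr/local/opt/openssl/bin/openssl"  # typical brew symlink location
fi

# print the line you'll need in the EasyRSA vars file (next step)
echo "set_var EASYRSA_OPENSSL \"$OPENSSL_BIN\""
```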

Step 2: Download EasyRSA

Go to https://github.com/OpenVPN/easy-rsa/releases and download the latest release in .tgz format.

Save the file to a folder that you wish to work from (your certificates and keys will be generated here) and unpack it using the Archive utility (double click on it in Finder).

Note that the easy-rsa tools were written with traditional linux/unix-type environments in mind and therefore assume that all paths to the scripts have no spaces in them.

Going forward I will assume the path of your unpacked EasyRSA folder is: ~/vpn/easyrsa. The ‘~’ character is a Terminal shortcut for your home folder, i.e. on a Mac it’s a placeholder for /Users/your_username, and on a typical linux environment for /home/username.

Step 3: Configure EasyRSA

Assuming that the path of your unpacked EasyRSA folder is: ~/vpn/easyrsa, open Terminal and navigate to the unpacked folder:

cd ~/vpn/easyrsa

Copy the vars.example “starter” configuration file to vars:

cp vars.example vars

Now customize the initial “starter” configuration file’s settings in vars to reflect your own.

Open it in a text editor and look for the following lines. Uncomment them (i.e. delete the preceding # character) and fill them in with your appropriate values. Specify something for each field below:

#set_var EASYRSA_REQ_COUNTRY   "US"
#set_var EASYRSA_REQ_PROVINCE  "California"
#set_var EASYRSA_REQ_CITY  "San Francisco"
#set_var EASYRSA_REQ_ORG   "Copyleft Certificate Co"
#set_var EASYRSA_REQ_EMAIL "me@example.net"
#set_var EASYRSA_REQ_OU        "My Organizational Unit"

Look for the following field and uncomment it:

#set_var EASYRSA_KEY_SIZE        2048

We’ll be using a 2048-bit key (the current default) for this example so the value will not be changed.

A larger key size is more secure but will result in longer connection + wait times over the VPN. At the time of writing in late 2017, it’s generally believed that a 2048-bit key is sufficient for most usage scenarios, while a 4096-bit key is believed to provide additional protection against more powerful (e.g. state-sponsored) adversaries.

EasyRSA by default uses the openssl binary found in the $PATH. Find the following line, uncomment it, and update the value with the path to the brew-installed openssl binary from Step 1. For example, in my case, the following line:

#set_var EASYRSA_OPENSSL   "openssl"

became:

set_var EASYRSA_OPENSSL "/usr/local/Cellar/openssl/1.0.2n/bin/openssl"
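The edits above can also be scripted with sed, which is handy if you rebuild your PKI often. This demo works on a scratch copy so nothing real is touched; the country value "CA" and the file name vars.demo are examples only (point the same expressions at your real vars file to use it):

```shell
#!/bin/sh
# Create a scratch copy with two commented settings (illustration only)
cat > vars.demo <<'EOF'
#set_var EASYRSA_REQ_COUNTRY   "US"
#set_var EASYRSA_KEY_SIZE        2048
EOF

# Uncomment a setting and replace its value; uncomment another, keeping the default
sed -i.bak \
    -e 's|^#set_var EASYRSA_REQ_COUNTRY.*|set_var EASYRSA_REQ_COUNTRY "CA"|' \
    -e 's|^#set_var EASYRSA_KEY_SIZE|set_var EASYRSA_KEY_SIZE|' \
    vars.demo

grep '^set_var' vars.demo
```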

Step 4: Generate Certificate Authority (CA)

Navigate into your easyrsa/ folder. For example:

cd ~/vpn/easyrsa

Initialize the PKI (public key infrastructure) with the easyrsa script. This will create a pki/ subfolder:

./easyrsa init-pki

Create the CA (certificate authority):

./easyrsa build-ca nopass

You will be prompted to input a Common Name. Input the name “server” and hit ENTER.

The generated CA certificate can now be found at pki/ca.crt.

Step 5: Generate Server Certificate + Key + DH Parameters

Assuming you’re still inside your easyrsa/ folder from the previous step, generate your server certificate and key:

./easyrsa build-server-full server nopass

The generated server certificate can now be found at: pki/issued/server.crt

The generated server key can now be found at: pki/private/server.key

Now generate the Diffie-Hellman (DH) parameters for key exchange. This process can take several minutes depending on your system:

./easyrsa gen-dh

The generated DH parameters can be found at: pki/dh.pem.

You now have all of the files necessary to configure an OpenVPN server.

Step 6: Generate client credentials

You should generate a unique set of credentials for each and every client that will connect to your VPN. You can repeat this step for any client that you need to create credentials for.

All clients in your setup should have a unique name. Change exampleclient in the following to something descriptive that you will recognize and be able to associate with the user/client:

./easyrsa build-client-full exampleclient nopass

The generated client certificate: pki/issued/exampleclient.crt

The generated client key can be found at: pki/private/exampleclient.key

When distributing credentials, each client will need at least these 3 files:

  • A client certificate (e.g. pki/issued/exampleclient.crt)
  • The corresponding client key (e.g. pki/private/exampleclient.key)
  • A copy of the CA certificate (pki/ca.crt)

These client credentials can be loaded into a VPN app like Tunnelblick or Viscosity along with client configuration information that corresponds to your VPN server’s specific settings.
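Bundling the three files for delivery can be sketched as follows. Dummy files under a demo/ folder stand in for the real pki/ outputs here, purely so the snippet is self-contained; with a real PKI you would tar the actual files, and the archive must still be delivered securely:

```shell
#!/bin/sh
# Illustration only: create dummy stand-ins for the three credential files
mkdir -p demo/pki/issued demo/pki/private
touch demo/pki/ca.crt \
      demo/pki/issued/exampleclient.crt \
      demo/pki/private/exampleclient.key

# Bundle the certificate, key, and CA certificate for one client
tar -czf exampleclient-credentials.tar.gz -C demo \
    pki/ca.crt pki/issued/exampleclient.crt pki/private/exampleclient.key

# List the archive contents to confirm
tar -tzf exampleclient-credentials.tar.gz
```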

Understanding client config files

Client configuration information is usually provided in the form of an additional file: a plaintext config file with the .ovpn extension. Both Tunnelblick and Viscosity recognize the .ovpn extension and file format.

Later versions of openvpn support specifying all of the client configuration information, client certificate, client key, and CA certificate as demarcated blocks within the config file itself, so that clients only need to be provided with a single .ovpn file.

Security reminder

It is good practice to keep all .ovpn, certificate, and key files as safe as possible, exposed to as few eyes/hands/hard-disks/clouds/etc. as possible. Distribute them as securely as you can to your clients/users.

Next steps

Now you need a working openvpn server and a client that wishes to connect to your VPN!

I hope this guide was helpful to you. For all the server tutorials out there, as far as I know this is one of the few comprehensive guides to creating all of the required certificates and keys on MacOS.

Router as OpenVPN server

If your openvpn server is your router, you can now log in to its admin control panel and input the server certificate + key + DH parameters that you created above.

Before you activate the VPN server, ensure that your router’s firmware is up-to-date and that you have set a long and reasonably secure password for the admin user.

Running your own server

If you are planning to setup your own openvpn server, there are numerous other resources available online to guide you through the server installation and configuration process for a variety of different operating systems.

You will find that you need all the keys and certificates that you created by following this guide.

These resources will generally include guidance for crafting .ovpn client configuration files to include specific settings that correspond to your server’s particular setup, so that clients can successfully connect.