Using Formik 2 with React Material Design

Formik is perhaps the leading choice of library to help implement forms in React. Version 2 was recently released and it introduces new hooks as well as improved support for checkboxes and select fields.

This post covers basic usage of Formik v2 with the TextField, Radio, and Checkbox components provided by the Material UI library.

Starting with a blank Create React App project, add the appropriate dependencies:

yarn add formik
yarn add @material-ui/core

You may also wish to add the Roboto font to Material UI per the installation guide.

Start by importing the Formik component.

import { Formik } from 'formik'

Next add the Formik component to your app. It has two required props: initialValues and onSubmit.

The initialValues prop is for specifying an object with properties that correspond to each field in your form. Each key of the object should match the name of an element in your form.

The onSubmit prop receives a function that is called when the form is submitted. The function is passed a data parameter containing the submitted form’s data, and an object with properties that contain a number of functions that you can use to help disable the submit button, reset the form, and more (refer to the docs). In the example below, the function implementation simply logs the data to the console.
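For example, a submit handler that uses the helpers object might look like the following sketch. The logging stands in for a real submission; setSubmitting and resetForm are two of the helpers Formik provides (refer to the docs for the full list):

```javascript
// A sketch of an onSubmit handler using the helpers object passed as the
// second argument. console.log stands in for a real API call.
const onSubmit = (data, { setSubmitting, resetForm }) => {
  console.log(data)      // stand-in for an API call
  resetForm()            // reset the form back to initialValues
  setSubmitting(false)   // re-enable the submit button
}
```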

The Formik component accepts a function as a child. Formik provides a number of properties as a parameter to the function. The most immediately relevant properties that can be pulled out using destructuring are values (an object that represents the current state of the form), and the functions handleChange, handleBlur, and handleSubmit.

For Material, import a TextField and a Button component:

import TextField from '@material-ui/core/TextField'
import Button from '@material-ui/core/Button'

And incorporate them into Formik as follows:

function App() {
  return (
    <div>
      <Formik
        initialValues={{ example: '' }}
        onSubmit={(data) => {
          console.log(data)
        }}
      >{({ values, handleChange, handleBlur, handleSubmit }) => (
        <form onSubmit={handleSubmit}>
          <TextField name="example" onChange={handleChange} onBlur={handleBlur} value={values.example} />
          <Button type="submit">Submit</Button>
        </form>
      )}</Formik>
    </div>
  )
}

To simplify the tedious process of adding values, handleChange, handleBlur, and handleSubmit you can use Formik’s helper components Form and Field.

The Form component replaces the standard HTML form tag. It is automagically passed the onSubmit/handleSubmit function (via internal use of the Context API) so you don’t need to add this every time.

The Field component needs to only be passed a name and type prop. It automagically gets the value, onChange, and onBlur.

A Field component with type “text” will render a default HTML5 input by default. To use Material, there’s another prop, as, where you can pass a component that you want the field to render as. As long as the component you pass is capable of accepting value, onChange, and onBlur props (as Material’s TextField does) then you can use it. The Field component will also pass any additional props it is given (e.g. placeholder) to the component specified in the as prop.

import { Formik, Form, Field } from 'formik'
function App() {
  return (
    <div>
      <Formik
        initialValues={{ example: '' }}
        onSubmit={(data) => {
          console.log(data)
        }}
      >{({ values }) => (
        <Form>
          <Field name="example" type="text" as={TextField} />
          <Button type="submit">Submit</Button>
        </Form>
      )}</Formik>
    </div>
  )
}

The same technique works for checkboxes and radio buttons as the following example demonstrates:

import Radio from '@material-ui/core/Radio'
import Checkbox from '@material-ui/core/Checkbox'
function App() {
  return (
    <div>
      <Formik
        initialValues={{ example: '', name: '', bool: false, multi: [], one: '' }}
        onSubmit={(data) => {
          console.log(data)
        }}
      >{({ values }) => (
        <Form>
          <div>
            <Field name="example" type="text" as={TextField} />
          </div>
          <div>
            <Field name="name" type="text" as={TextField} />
          </div>
          <div>
            <Field name="bool" type="checkbox" as={Checkbox} />
          </div>
          <div>
            <Field name="multi" value="asdf" type="checkbox" as={Checkbox} />
            <Field name="multi" value="fdsa" type="checkbox" as={Checkbox} />
            <Field name="multi" value="qwerty" type="checkbox" as={Checkbox} />
          </div>
          <div>
            <Field name="one" value="sun" type="radio" as={Radio} />
            <Field name="one" value="moon" type="radio" as={Radio} />
          </div>
          <Button type="submit">Submit</Button>
        </Form>
      )}</Formik>
    </div>
  )
} 

However, if we want to show labels beside our fields, we run into an issue with how React Material is implemented. It uses a FormControlLabel component that is in turn passed the component to render via its control prop (refer to the Material UI docs for details).

This doesn’t jibe well with our current paradigm. The cleanest solution is to implement a custom field.

Formik v2 adds a very convenient hook called useField() to facilitate creating a custom field. The hook returns an array containing a field object that contains the value, onChange, etc. and a meta object which is useful for form validation. It contains properties such as error and touched.

import { useField } from 'formik'

In the example below, the value, onChange, etc properties are added to the FormControlLabel as props using the spread operator: {...field}.

import FormControlLabel from '@material-ui/core/FormControlLabel'
function ExampleRadio({ label, ...props }) {
  const [ field, meta ] = useField(props)

  return (
    <FormControlLabel {...field} control={<Radio />} label={label} />
  )
}

Now the ExampleRadio component that was implemented with the help of the useField() hook can replace the Field component with type “radio” in the above examples:

<ExampleRadio name="one" value="sun" type="radio" label="sun" />

So there you have it, a basic use of Formik 2 with React Material that works for the most popular form fields.

Refer to the Formik docs to learn more about useField and the meta object, including how it is relevant to form validation. The docs also include a dedicated validation guide.

How to use aws-sdk for NodeJS with AWS Translate

This post covers using the aws-sdk for NodeJS with AWS Translate.

The code examples are written in modern ECMAScript (ES6+) and transpiled with Babel.

Install the AWS SDK

First, install the aws-sdk package in your project using your favourite package manager:

yarn add aws-sdk
# OR
npm i aws-sdk

Ensure There’s a Working AWS Profile

Ensure that you have an AWS profile and configuration properly setup for your user. An AWS Profile is typically stored inside the ~/.aws folder inside your home directory.

Suppose you have a profile named firxworx. An example entry in ~/.aws/config for that profile is:

[profile firxworx]
region = ca-central-1
output = json

A corresponding entry in the ~/.aws/credentials file that specifies credentials for the example firxworx profile looks like this:

[firxworx]
aws_access_key_id=ABCDXXXX
aws_secret_access_key=ABCDXXXX

Refer to the AWS Docs if you need to create a profile and obtain an Access Key ID and Secret Access Key.

Write Your Code

Start by importing the aws-sdk package:

import AWS from 'aws-sdk'

Next, configure AWS by specifying which profile’s credentials to use:

const credentials = new AWS.SharedIniFileCredentials({ profile: 'firxworx' })
AWS.config.credentials = credentials

Specify any other config options. The following line locks AWS to the most current API version (at the time of writing):

AWS.config.apiVersions = {
  translate: '2017-07-01',
}

Reference the AWS Translate homepage and take note of which regions AWS Translate is currently available in. If you need to specify a region that’s different than the default listed in your AWS profile, or you wish for your code to be explicit about which region it’s using, add the following line. Change the region to the valid region that you would like to use:

AWS.config.update({
  region: 'ca-central-1'
})

If you are using any Custom Terminologies, be sure to define them in the same region that you are about to use for AWS Translate. Custom Terminologies are lists of translation overrides that can be uploaded into the AWS Console. They are useful for ensuring that brand names, terms of art, trademarks, etc are translated correctly. Custom Terminology definitions are only available within the region that they were created and saved in.

Next, create an instance of AWS Translate:

const awsTranslate = new AWS.Translate()

At this point everything is set up to write a function that can translate text.

The following implements an async function called asyncTranslate(). The function’s params include a hypothetical custom terminology named example-custom-terminology-v1. Do not specify any values in the TerminologyNames array (or omit it entirely) if you are not using custom terminologies.

A key insight here is the .promise() method in the line containing awsTranslate.translateText(params).promise(), which causes the SDK call to return a promise rather than requiring a callback.

async function asyncTranslate(langFrom, langTo, text) {
  const params = {
    SourceLanguageCode: langFrom,
    TargetLanguageCode: langTo,
    Text: text,
    TerminologyNames: [
      'example-custom-terminology-v1'
    ]
  }

  try {
    const translation = await awsTranslate.translateText(params).promise()
    return translation.TranslatedText
  } catch (err) {
    console.log(err, err.stack)
  }
}

The langFrom and langTo arguments must be language codes understood by AWS Translate. Refer to the docs for a current list of supported language codes: https://docs.aws.amazon.com/translate/latest/dg/what-is.html.
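Since an unsupported code only surfaces as a runtime error from the API, a lightweight guard can catch typos up front. The list below is a small illustrative subset, not the authoritative set from the AWS docs:

```javascript
// Illustrative subset of AWS Translate language codes -- consult the AWS
// docs for the full, current list before relying on this in production
const SUPPORTED_LANGS = new Set(['en', 'fr', 'de', 'es', 'ja', 'zh'])

// Throws early (before any API call) if a code is not in the known set
function assertSupportedLang(code) {
  if (!SUPPORTED_LANGS.has(code)) {
    throw new Error(`Unsupported language code: ${code}`)
  }
  return code
}
```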

If you had a hypothetical index.js entry point for your NodeJS application and wanted to use the above function, an example invocation could be:

(async () => {

  const translation = await asyncTranslate('en', 'fr', 'Hello World')
  console.log(translation)

})()

Creating an Invoice Component with Dynamic Line Items using React

This post walks through the steps to creating an Invoice component in React that supports adding + removing line items and features automatic calculation of totals as a user inputs values.

The source code to follow along with is available on github at: https://github.com/firxworx/react-simple-invoice

A live demo can be viewed at: https://demo.firxworx.com/react-simple-invoice

I use SCSS Modules for styling but you could easily refactor the code to use your favourite method for styling components.

SCSS Modules are an easy choice because the latest v2 of create-react-app (released Oct 1 2018) introduces out-of-the-box support for CSS Modules that can be written in CSS (default) or SASS/SCSS with the addition of the node-sass package. Version 1 required users to manually customize their webpack configuration if they wanted to use CSS Modules.

The code is relevant to React v16.6.3.

Project Setup

This project is based on the create-react-app starter. To get started with the yarn package manager:

yarn create react-app react-simple-invoice

The following dependencies are installed:

yarn add node-sass
yarn add react-icons

The create-react-app boilerplate can then be customized to use sass modules: all .css files are renamed to .scss and the .module.scss suffix filename convention is applied where applicable.

I added a bare-bones global stylesheet in styles/index.scss where I import Normalize.css (as _normalize.scss).

All of the component styles assume box-sizing border-box and that normalize.css is in place.

Implementing an Invoice Component

The most significant part of an Invoice component is arguably the line items that can be added and removed. The following provides an overview of how this functionality is implemented:

Initial scaffolding

Start by creating components/Invoice.js and components/Invoice.module.scss.

Stub out the initial Invoice component as a class-based component. Import a couple of helpful icons from react-icons as well as the Invoice scss module:

import React, { Component } from 'react'
import { MdAddCircle as AddIcon, MdCancel as DeleteIcon } from 'react-icons/md'
import styles from './Invoice.module.scss'

class Invoice extends Component {

  locale = 'en-US'
  currency = 'USD'

  render = () => {
    return (
      <div><h1>I am an Invoice</h1></div>
    )
  }

}

export default Invoice

The locale and currency are stored in the class for the sake of example. In a broader app, these might be injected as props and/or come in from a context or global state.

React will move towards functional components across the board in upcoming versions. However, for now, class-based components still reign for interactive/dynamic components that maintain their own state.

Define state

The Invoice’s state maintains a tax rate and an array of line item objects that have the following properties: name, description, quantity, and price.

Define the initial state with a 0% tax rate and a single blank line item:

  state = {
    taxRate: 0.00,
    lineItems: [
      {
        name: '',
        description: '',
        quantity: 0,
        price: 0.00,
      },
    ]
  }

Displaying line items

Inside the component’s render() method, JSX is used to display each line item reflected in the component’s state.

The Array map() function is used to iterate over each line item.

The key for each line item is simply set to its index in the state array. For more information on the necessity of keys in React, refer to the docs regarding Lists and Keys.

Each form input element is created as a Controlled Component. This means that React completely controls the element’s state (including whatever value is currently being stored by the form element Component) rather than leaving this to the element itself. To accomplish this, each input specifies an onChange event handler whose job it is to update the component’s state every time a user changes the value of an input.

Each input’s value is set to its corresponding value in the Invoice’s state.

The various styles and functions referenced will be implemented next:

{this.state.lineItems.map((item, i) => (
  <div className={`${styles.row} ${styles.editable}`} key={i}>
    <div>{i+1}</div>
    <div><input name="name" type="text" value={item.name} onChange={this.handleLineItemChange(i)} /></div>
    <div><input name="description" type="text" value={item.description} onChange={this.handleLineItemChange(i)} /></div>
    <div><input name="quantity" type="number" step="1" value={item.quantity} onChange={this.handleLineItemChange(i)} onFocus={this.handleFocusSelect} /></div>
    <div className={styles.currency}><input name="price" type="number" step="0.01" min="0.00" max="9999999.99" value={item.price} onChange={this.handleLineItemChange(i)} onFocus={this.handleFocusSelect} /></div>
    <div className={styles.currency}>{this.formatCurrency( item.quantity * item.price )}</div>
    <div>
      <button type="button"
        className={styles.deleteItem}
        onClick={this.handleRemoveLineItem(i)}
      ><DeleteIcon size="1.25em" /></button>
    </div>
  </div>
))}

Implement onChange handler

When a user types a value into an input, the onChange event fires and the handleLineItemChange(elementIndex) function is called.

The Invoice’s state is updated to reflect the input’s latest value:

  handleLineItemChange = (elementIndex) => (event) => {

    let lineItems = this.state.lineItems.map((item, i) => {
      if (elementIndex !== i) return item
      return {...item, [event.target.name]: event.target.value}
    })

    this.setState({lineItems})

  }

The handleLineItemChange() handler accepts an elementIndex param that corresponds to the line item’s position in the lineItems array. As an event handler, the function is also passed an event object.

The Invoice’s state is updated by creating a new version of the lineItems array. The new version features a line item object and property (name, description, quantity, price) modified to correspond to the changed input’s new value. The this.setState() function is then called to update the Invoice component with the updated state.

The new array is created by calling map() on the this.state.lineItems array and passing a function that updates the appropriate value.

As map() loops through each element, our function checks if that element’s index matches that of the input that triggered handleLineItemChange(). When it matches, an updated version of the line item is returned. When it doesn’t match, the line item is returned as-is.

The implementation works because the name of each form input (available as event.target.name) corresponds to the property name of the line item.
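Stripped of React, the same immutable-update pattern can be seen in isolation (the sample data here is made up for illustration):

```javascript
// Return a new array where only the item at `index` is replaced by a copy
// with the `name` property updated -- the same map() + computed-property
// pattern used in handleLineItemChange
function updateAt(items, index, name, value) {
  return items.map((item, i) => {
    if (index !== i) return item          // other items pass through as-is
    return { ...item, [name]: value }     // the target item gets a new copy
  })
}

const lineItems = [
  { name: 'Widget', quantity: 1 },
  { name: 'Gadget', quantity: 2 },
]
const next = updateAt(lineItems, 1, 'quantity', 5)
// next[1].quantity is now 5, while lineItems itself is left untouched
```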

Implement onFocus Handler

It is sometimes convenient for users to have an input automatically select its entire value whenever it receives focus.

I think this applies to the quantity and price inputs, so I added an onFocus handler called handleFocusSelect(). It is implemented as follows:

  handleFocusSelect = (event) => {
    event.target.select()
  }

Implement Handler for Adding a Line Item

When the “Add Line Item” button is clicked, the onClick() event calls the handleAddLineItem() function.

A new line item is added to the Invoice by adding a new line item object to the component state’s lineItems array.

The Array concat() method is used to create a new array based on the current lineItems array. It concatenates a second array containing a new blank line item object. setState() is then called to update the state.

  handleAddLineItem = (event) => {

    this.setState({
      lineItems: this.state.lineItems.concat(
        [{ name: '', description: '', quantity: 0, price: 0.00 }]
      )
    })

  }

Implement Handler for Removing a Line Item

Each line item features a Delete button to remove it from the invoice.

Each Delete button’s onClick() event calls this.handleRemoveLineItem(i) where i is the index of line item.

The Array filter() method is used to return a new array that omits the object at the i-th position of the original array. this.setState() then updates the component state.

  handleRemoveLineItem = (elementIndex) => (event) => {
    this.setState({
      lineItems: this.state.lineItems.filter((item, i) => {
        return elementIndex !== i
      })
    })
  }

Implement Calculation and Formatting Functions

The component implements a number of helper functions to calculate and format tax and total amounts:

  formatCurrency = (amount) => {
    return (new Intl.NumberFormat(this.locale, {
      style: 'currency',
      currency: this.currency,
      minimumFractionDigits: 2,
      maximumFractionDigits: 2
    }).format(amount))
  }

  calcTaxAmount = (c) => {
    return c * (this.state.taxRate / 100)
  }

  calcLineItemsTotal = () => {
    return this.state.lineItems.reduce((prev, cur) => (prev + (cur.quantity * cur.price)), 0)
  }

  calcTaxTotal = () => {
    return this.calcLineItemsTotal() * (this.state.taxRate / 100)
  }

  calcGrandTotal = () => {
    return this.calcLineItemsTotal() + this.calcTaxTotal()
  }
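The arithmetic in these methods can be checked outside the component with plain functions (the sample line items and 13% tax rate below are made up; locale and currency are hard-coded for the sketch):

```javascript
// Standalone versions of the invoice math mirroring the class methods above
const calcLineItemsTotal = (lineItems) =>
  lineItems.reduce((prev, cur) => prev + cur.quantity * cur.price, 0)

const calcTaxTotal = (lineItems, taxRate) =>
  calcLineItemsTotal(lineItems) * (taxRate / 100)

const calcGrandTotal = (lineItems, taxRate) =>
  calcLineItemsTotal(lineItems) + calcTaxTotal(lineItems, taxRate)

const formatCurrency = (amount) =>
  new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD',
    minimumFractionDigits: 2,
    maximumFractionDigits: 2,
  }).format(amount)

const items = [
  { quantity: 2, price: 10.0 },  // 20.00
  { quantity: 1, price: 5.5 },   //  5.50
]
// subtotal 25.50; at a 13% tax rate the grand total formats to ~$28.82
```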

Implement Styles

CSS Modules (or SCSS Modules in this case) are great for ensuring there are no naming conflicts in projects with multiple Components that might use the same class names.

The ComponentName.module.scss file looks and works just like any normal SCSS file except that its classes are referenced in JSX slightly differently.

Notice the import line: import styles from './Invoice.module.scss'

To apply a given .example style to a component, refer to styles.example in the className prop:

<ExampleComponent className={styles.example} />

For multiple and/or conditional styles, ES6 template strings + interpolation can be used to add additional expressions:

<ExampleComponent className={`${styles.example} ${styles.anotherExample}`} />
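When the conditions multiply, a tiny hand-rolled helper can keep the JSX tidy (this helper is illustrative and not part of the repo; it mimics the popular classnames package):

```javascript
// Join only the truthy class names -- a minimal stand-in for the
// 'classnames' package
function cx(...names) {
  return names.filter(Boolean).join(' ')
}

// e.g. cx(styles.example, isActive && styles.active)
```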

Refer to the repo on github to see how it all comes together.

Resolve Google Lighthouse Audit “does not provide fallback content” with GatsbyJS

Google’s Lighthouse Audit Tool is great for evaluating the performance of a site and for confirming just how awesome static sites created with GatsbyJS + React can be.

A common point reduction seen by Gatsby developers is: Does not provide fallback content when JavaScript is not available, with the description: “The page body should render some content if its scripts are not available”.

This post is here to help you resolve that and get one step closer to a perfect score.

The audit requirement

Google explains: “Your app should display some content when JavaScript is disabled, even if it’s just a warning to the user that JavaScript is required to use the app”.

One might think that React Helmet offers a potential solution, however it’s not applicable in this case. Helmet is specifically a document head manager and even though <noscript> tags are valid inside a document head, the audit rule specifically refers to the page body.

Adding tags to the page body above Components injected by Gatsby

Copy default-html.js from the .cache folder in your Gatsby project to your src/ folder, renaming it to html.js:

cp .cache/default-html.js src/html.js

src/html.js will now take precedence over Gatsby’s boilerplate version.

Open src/html.js. After {this.props.preBodyComponents} and before the <div> that contains Gatsby’s body components, you can insert a tag such as:

<noscript>This website requires JavaScript. To contact us, please send us an email at: <a href="mailto:email@example.com">email@example.com</a></noscript>

Voila, one more checkbox on your Lighthouse audit results!

For more information about html.js see: https://www.gatsbyjs.org/docs/custom-html/

Installing gulp4 with babel to support an ES6 gulpfile

This guide covers installing gulp4 with babel to support ES6 syntax in your gulpfile.

Gulp is a task automation tool that has emerged as one of the standard build tools to automate the web development workflow. Babel is a compiler/transpiler that enables developers to use next-generation ECMAScript syntax (ES6 and beyond) instead of older JavaScript (ES5) syntax.

Gulp4 and ES6+ work together swimmingly to help you write cleaner, easier-to-read, and more maintainable gulpfiles.

Installing gulp4

At the time of writing, the default gulp package installs gulp 3.x. The following will install and configure gulp4.

Gulp has two key parts: gulp and the gulp-cli command line tools. The idea is that gulp-cli should be installed globally on a developer’s machine while gulp should be installed locally on a per-project basis. This helps ensure compatibility with different versions of gulp that will inevitably arise when maintaining projects of different vintages.

To use gulp4, cli version 2.0 or greater is required. Check the version on your system with:

gulp -v

If the command returns a command not found error, then you probably don’t have gulp installed at all (or at least don’t have it available in your PATH).

If the command outputs a version lower than 2.0, you may need to uninstall any globally-installed gulp (and/or gulp-cli) and then install the current version gulp-cli before proceeding.

To install gulp-cli globally, run ONE of the following commands, depending on your preference of package manager. npm is the classic node package management tool and yarn is a newer tool developed by Facebook that addresses certain shortcomings with npm.

yarn global add gulp-cli
# OR
npm install gulp-cli -g

Test the install by running gulp -v and ensuring the version output is greater than 2.0. Next, install the gulp@next package. The @next part specifies the next-generation gulp4.

Assuming you have already run npm init or yarn init and have a package.json file, execute the following command in your project’s root directory:

yarn add gulp@next --dev
# OR
npm install gulp@next --save-dev

Installing babel

yarn add @babel/core --dev
yarn add @babel/preset-env --dev
yarn add @babel/register --dev
# OR 
npm install @babel/core --save-dev
npm install @babel/preset-env --save-dev
npm install @babel/register --save-dev

Next, create a .babelrc file in your project’s root folder and specify the current version of node as the target:

{
  "presets": [
    ["@babel/preset-env", {
      "targets": {
        "node": "current"
      }
    }]
  ]
}

Create your gulpfile

Create your gulpfile with the filename gulpfile.babel.js. The .babel.js suffix ensures that babel will be used to process the file.

The following example demonstrates a few ES6+ conveniences: omitted semicolons, import statements, and the “fat arrow” syntax for defining functions:

'use strict'

import gulp from 'gulp'

gulp.task('task-name', () => {
  // example 
  return gulp.src('/path/to/src')
    .pipe(gulp.dest('/path/to/dest'))
})

Gulp4 features a new task execution system that introduces the functions gulp.series() and gulp.parallel() that can execute gulp tasks in either series (one-after-another) or parallel (at the same time). This makes a lot of workflows much easier to define vs. previous versions!

Another nice feature is that gulp4 supports returning a child process to signal task completion. This makes it cleaner to execute commands within a gulp task, which can help with build and deployment related tasks.

The following example defines a default build task that runs two functions/tasks in series using gulp.series(). The build task is defined using the ES6 const keyword and exported as the default function/task for the gulpfile. The example doSomething() and doAnotherThing() functions/tasks are also exported.

'use strict'

import gulp from 'gulp'

export function doSomething() {
  // example 
  return gulp.src('/path/to/src')
    .pipe(gulp.dest('/path/to/dest'))
}

export function doAnotherThing() {
  // example 
  return gulp.src('/path/to/src')
    .pipe(gulp.dest('/path/to/dest'))
}

const build = gulp.series(doSomething, doAnotherThing)

export default build

Creating Custom Post Types in WordPress

This guide covers how to create Custom Post Types (CPT’s) in WordPress. CPT’s are important to WordPress developers because they enable the creation of more complex sites + web-apps than is possible with a default WordPress install.

Custom Post Types are frequently defined with additional data fields called meta fields that can be defined and made editable to admins via meta boxes.

Example applications:

  • Jokes — each post contains a joke, which are listed and displayed differently than regular blog posts
  • Job Opportunities — include Salary Range and Location meta
  • Car Listings — registered users post for-sale listings and specify Make, Model, and Year meta via dynamic dropdown menus
  • Beer Reviews — featuring a range of meta fields that include Brewery, Style, and Tasting Score

Custom Post Types can be created (registered) or modified by calling the register_post_type() function within the init action.

Custom Taxonomies and the connection to Custom Post Types

Custom Post Types are closely related to the concept of Custom Taxonomies. Taxonomies are a way to group WordPress objects such as Posts by a certain classification criteria. Developers can define Custom Taxonomies to add to WordPress’ default taxonomies: Categories, Tags, Link Categories, and Post Formats.

Although this guide focuses on CPT’s, it’s important to note that projects are often implemented using a thoughtful combination of Custom Post Types and Custom Taxonomies.

A classic example of a complementary post type + taxonomy is: Book as a Custom Post Type and Publisher as a Custom Taxonomy.

If your Custom Post Type needs to be related to any Custom Taxonomies, they must be identified via the optional taxonomies argument of the register_post_type() function. This argument only informs WordPress of the relation and does not register any taxonomies as a side-effect. Custom Taxonomies must be registered on their own via WordPress’ register_taxonomy() function.

Registering new Custom Post Types

Registering in a Plugin vs. Theme

Custom Post Types can be registered by plugins or by themes via their functions.php file. It’s generally recommended to go the plugin route to keep a project de-coupled from any particular theme.

In the many cases where CPT’s do not depend on activation or deactivation hooks, they can be defined by a Must-Use Plugin (mu-plugin). This special type of plugin is useful to safeguard against admins (e.g. client stakeholders with admin access) accidentally de-activating any Custom Post Types that are important to their website/app.

If a plugin or theme that registers a CPT becomes deactivated, WordPress’ default behaviour is to preserve the post data in its database, though it will become inaccessible and could break any themes or plugins that assume the CPT exists. The CPT will be restored once whatever plugin or theme that registered it is re-activated.

Basic Definition

Custom Post Types may be registered by calling WordPress’ register_post_type() function during the init action with the following arguments: a required one-word post type key, and an optional array of key => value pairs that specify all optional arguments.

The following example implements a function create_my_new_post_type() that calls register_post_type() to register a CPT called candy. The last line hooks the function to the init action using WordPress’ add_action() function. It could be included as part of a plugin or in a theme’s functions.php.

Some of the most common optional args are specified: user-facing labels for singular and plural, if the CPT is to be public (appear in search, nav, etc) or not, and whether it should have an archive (list of posts) or not.

function create_my_new_post_type() {
    register_post_type( 'candy',
        [
            'labels' => [
                'name' => __( 'Candies' ),
                'singular_name' => __( 'Candy' )
            ],
            'public' => true,
            'has_archive' => true,
        ]
    );
}
add_action( 'init', 'create_my_new_post_type' );

Tip: Namespacing

It is a good practice to namespace any CPT keys by prefixing their names with a few characters relevant to you or your project followed by an underscore, such as xx_candy. This helps avoid naming conflicts with other plugins or themes, and is particularly important if you are planning to distribute your project.

Tip: Use singular form for post type keys

The WordPress codex and Handbooks always use a singular form for post type keys by convention, and WordPress’ default types such as ‘post’ and ‘page’ are singular as well.

Detailed Definition

There are a ton of optional arguments that can be specified when registering a Custom Post Type. The WordPress Developer Documentation is the best source to review all of them: register_post_type().

Some of the more notable options include:

  • labels — array of key => value pairs that correspond to different labels. There are a ton of possible labels but the most commonly specified are ‘name’ (plural) and ‘singular_name’
  • public — boolean indicating if the post type is to be public (shown in search, etc) or not (default: false)
  • has_archive — boolean indicating if an archive (list of posts) view should exist for this post type or not (default: false)
  • supports — array of WordPress core feature(s) to be supported by the post type. Options include ‘title’, ‘editor’, ‘comments’, ‘revisions’, ‘trackbacks’, ‘author’, ‘excerpt’, ‘page-attributes’, ‘thumbnail’, ‘custom-fields’, and ‘post-formats’. The ‘revisions’ option indicates whether the post type will store revisions, and ‘comments’ indicates whether the comments count will show on the edit screen. The default value is an array containing ‘title’ and ‘editor’.
  • register_meta_box_cb — string name of a callback function that will handle creating meta boxes for the CPT so admins have an interface to input meta data
  • taxonomies — an array of string taxonomy identifiers to register with the post type
  • hierarchical — a boolean value that specifies if the CPT behaves more like pages (which can have parent/child relationships) or like posts (which don’t)
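To illustrate how these options fit together, here is a hedged sketch of an args array for the ‘candy’ example that combines several of them (the specific values are illustrative, not recommendations):

```php
<?php
// Illustrative args array combining several of the notable options above.
$args = [
    'labels' => [
        'name'          => 'Candies',
        'singular_name' => 'Candy',
    ],
    'public'       => true,
    'has_archive'  => true,
    'supports'     => [ 'title', 'editor', 'thumbnail', 'revisions' ],
    'taxonomies'   => [ 'category', 'post_tag' ],
    'hierarchical' => false,  // behaves like posts, not pages
];
// register_post_type( 'candy', $args ); // call from an 'init' hook
```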

The numerous other options enable you to manage rewrite rules (e.g. specify different URL slugs), configure options related to the REST API, and set capabilities as part of managing user permissions.

Adding Meta Fields to a Custom Post Type

Enabling custom-fields

A straightforward way to enable admins to define meta fields as key => value pairs when editing a post is to include the value ‘custom-fields’ in the ‘supports’ array passed to register_post_type() via its args.
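For example, the supports array for the ‘candy’ type might include ‘custom-fields’ like so (a sketch; the surrounding args are abbreviated):

```php
<?php
// Including 'custom-fields' in 'supports' enables the basic key => value
// meta UI on the post edit screen.
$args = [
    'public'   => true,
    'supports' => [ 'title', 'editor', 'custom-fields' ],
];
// register_post_type( 'candy', $args );
```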

Adding Meta Boxes to a Custom Post Type

The above ‘custom-fields’ approach works for basic use cases; however, most projects require more advanced inputs like dropdown menus, date pickers, and repeating fields, plus a certain level of data validation.

The solution is defining meta boxes that specify inputs for each of a CPT’s meta fields and handle the validation and save process. Meta boxes must be implemented in a function whose name is passed to register_post_type() via its args as a value of the ‘register_meta_box_cb’ option.
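A minimal sketch of wiring up that callback, assuming a WordPress context (the function and meta box names here are hypothetical):

```php
<?php
// Hypothetical callback named via the 'register_meta_box_cb' option, e.g.:
// register_post_type( 'candy', [ /* ... */ 'register_meta_box_cb' => 'candy_meta_boxes' ] );
function candy_meta_boxes( $post ) {
    // add_meta_box() is a WordPress core function; it attaches a box whose
    // inputs are rendered by the 'candy_details_html' callback (not shown).
    add_meta_box(
        'candy_details',        // unique HTML id for the box
        'Candy Details',        // box title shown to admins
        'candy_details_html',   // callback that renders the inputs
        'candy'                 // the post type screen to show it on
    );
}
```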

Creating meta boxes can be tricky for the uninitiated… Stay tuned for an upcoming post dedicated solely to them!

In the meantime, I would suggest exploring solutions that simplify the process of creating meta boxes. Two excellent options are the open-source CMB2 (Custom Meta-Box 2) and Advanced Custom Fields (ACF), which offers both free and commercial options. I think the commercial ACF PRO version is well worth the $100 AUD fee to license it for unlimited sites including a lifetime of updates and upgrades.

Displaying a Custom Post Type

Posts belonging to a CPT can be displayed using single and archive templates, and can be queried using the WP_Query object.

Single template: single post view

Single templates present a single post and its content. WordPress looks for the template file single-post_type_name.php for a CPT-specific template and if it doesn’t find it, it defaults to the standard single.php template.

Archive template: list of posts view

Archive templates present lists of posts. A Custom Post Type will have an Archive if it was registered with the optional has_archive argument set to a value of true (default: false).

To create an archive template for your CPT, create a template file that follows the convention: archive-post_type_name.php. If WordPress doesn’t find this file, it defaults to the standard archive.php template.
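The naming convention for both template types can be sketched as simple string construction (using the ‘candy’ key from the earlier example):

```php
<?php
// Template file names WordPress looks for, given a post type key of 'candy'.
$post_type = 'candy';
$single_template  = "single-{$post_type}.php";   // falls back to single.php
$archive_template = "archive-{$post_type}.php";  // falls back to archive.php
```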

Using the WP_Query object

WP_Query can be used in widget definitions, in templates, etc. to present posts belonging to a CPT. The following example queries for published posts of the type ‘candy’ and then loops over the results, presenting each one’s title and content as items in a list.

<?php

$args = [
  'post_type'   => 'candy',
  'post_status' => 'publish',
];

$candies = new WP_Query( $args );
if( $candies->have_posts() ) :
?>
  <ul>
    <?php
      while( $candies->have_posts() ) :
        $candies->the_post();
        ?>
          <li><?php printf( '%1$s - %2$s', get_the_title(), get_the_content() );  ?></li>
        <?php
      endwhile;
      wp_reset_postdata();
    ?>
  </ul>
<?php
else :
  esc_html_e( 'No candies... Go get some candy!', 'text-domain' );
endif;
?>

The wp_reset_postdata() call is important to reset WordPress back to the original loop, so other functions that depend on it will work properly. Reference: https://developer.wordpress.org/reference/functions/wp_reset_postdata/

Pulling files off a shared host (cPanel) with a 10K-file FTP limit using a Python web scraper

This post demonstrates using a web scraper to work around an imposed file-transfer limit and download a large batch of files.

I’ll use a recent case as an example, where I had to migrate a client’s site to a new host. The old shared host was running an ancient version of cPanel and imposed a 10K file limit on FTP transfers. There was no SSH or other tooling, almost no disk quota left, and no support that could change any cPanel settings for me. The website had a folder of user uploads with 30K+ image files.

I decided to use a web scraper to pull all of the images. In order to create links to all of the images that I wanted to scrape, I wrote a simple throwaway PHP script to link to all of the files in the uploads folder. I now had a list of all 30K+ files for the first time — no more 10K cap:

<?php
// Throwaway script: list every file in the uploads folder as a link,
// so a scraper can discover and download them all.
$directory = dirname(__FILE__) . '/_image_uploads';
$dir_contents = array_diff(scandir($directory), array('..', '.'));

// Show the file count and the folder being listed
echo '<h3>' . count($dir_contents) . '</h3>';
echo '<h5>' . $directory . '</h5>';

echo "<ul>\n";
$counter = 0;
foreach ($dir_contents as $file) {
  echo '<li>' . $counter++ . ' - <a href="/_image_uploads/' . $file . '">' . $file . "</a></li>\n";
}
echo "</ul>";
?>

Next, to get the files, I used a Python script to scrape the list of images, built on the Python 3 standard-library urllib and shutil modules.

I posted a gist containing a slightly more generalized version of the script. It uses the BeautifulSoup library to parse the response from the above PHP script’s URL and build a list of all the image URLs it links to. The script can easily be modified to suit a variety of applications, such as downloading lists of PDFs or CSVs linked from any arbitrary web page.

The gist is embedded below:

If you need to install the BeautifulSoup library with pip, use: pip install beautifulsoup4

In the gist, note the regex in the line soup.findAll('a', attrs={'href': re.compile("^http://")}). This line and its regex can be modified to suit your application, e.g. to filter for certain protocols, file types, etc.

Troubleshooting the fast.ai AWS setup scripts (setup_p2.sh)

Fast.ai offers a well-regarded free online course on Deep Learning that I thought I’d check out.

It seems that a lot of people struggle to get the fast.ai setup scripts running: complaints and requests for help appear on Reddit, in the forums, and elsewhere. This doesn’t surprise me, because the scripts are not very robust. On top of that, AWS has a learning curve, so troubleshooting after a script failure can be a challenge.

Hopefully this post helps other people who have hit snags. It is based on my experience on macOS, but it should apply equally to those running Linux or Windows with Cygwin.

Understanding the setup script’s behaviour

It leaves a mess when it fails

If running the setup script fails, which is possible for a number of reasons, it will potentially have created a number of AWS resources in your account and a local copy of an SSH key at ~/.ssh/aws-key-fast-ai.pem. It does not clean up after itself in failure cases.

The setup script doesn’t check for existing fast-ai tagged infrastructure, so subsequent runs can create additional VPCs and related resources on AWS, especially as you attempt to resolve the reason(s) it failed. The script generates fast-ai-remove.sh and fast-ai-commands.txt, but it overwrites these each time it’s run with only its current values, potentially leaving “orphan” infrastructure.

Thankfully all AWS resources are created with the same “fast-ai” tags so they are easy to spot within the AWS Console.

It makes unrealistic assumptions

The setup script assumes your aws config specifies a default region that is one of its three supported regions: us-west-2, eu-west-1, and us-east-1.

I’m not sure why the authors assumed that a global tech crowd interested in machine learning would be unlikely to have worked with AWS before, and thus have no existing aws configuration that might conflict.

The commands in the script do not use the --region argument to specify an explicit region so they will use whatever your default is. If your default happens to be one of the three supported ones, but you don’t have a sufficient InstanceLimit or there’s another problem, more issues could follow.

Troubleshooting

If you encountered an error after running the script, work through the following checks before re-running it:

Check 1: Ensure you have an InstanceLimit > 0

Most AWS users will have a default InstanceLimit of 0 on P2 instances. You may need to apply for an increase and get it approved (this is covered in the fast.ai setup video).

If a first run of the script gave you something like the following, there was an issue with your InstanceLimit:

Error: An error occurred (InstanceLimitExceeded) when calling the RunInstances operation: You have requested more instances (1) than your current instance limit of 0 allows for the specified instance type. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit.

InstanceLimits are specific to a given resource in a given region. Take note of which region your InstanceLimit increase request was for and verify that it was granted in the same region.

Check 2: Ensure the right region

Verify your current default aws region by running: aws configure get region. The script assumes this is one of three supported regions: us-west-2, eu-west-1, or us-east-1.

The script also assumes that you have an InstanceLimit > 0 for P2 instances in whichever region you would like to use (or T2 instances if you are using setup_t2.sh).

To get things running quickly, I personally found it easiest to make the script happy and temporarily set my aws default to a supported region in ~/.aws/config, i.e.:

[default]
region=us-west-2

Another option is to modify the scripts and add an explicit --region argument to every aws command that will override the default region. If you have multiple aws profiles defined as named profiles, and the profile that you wish to use for fast.ai specifies a default region, you can use the --profile PROFILENAME argument instead.

For example, the following hypothetical aws config file (~/.aws/config) specifies a profile called “fastai”. A --profile fastai argument could then be added to every aws command in the setup script:

[default]
region=ca-central-1

[profile fastai]
region=us-west-2

Check 3: Delete cruft from previous failed runs

This check is what inspired me to write this post!

Delete AWS resources

Review any resources that were created in your AWS Console, and delete any VPCs (and their dependencies) that were spun up. They can be identified by the “fast-ai” tag, which is shown in any table of resources in the AWS Console.

Cruft resources will have been created in any region that the setup script was working with (i.e. whatever your default region was at the time you ran it).

If you’ve found cruft, start by trying to delete the VPC itself, as this generally will delete most if not all dependencies. If this fails because of a dependency issue, you will need to find and delete those dependencies first.

IMPORTANT: AWS creates a default VPC and related dependencies (subnets, etc.) in every region available to your account. Do NOT delete any region’s default VPC. Only delete resources tagged with “fast-ai”.

Delete SSH keys

Check to see if ~/.ssh/aws-key-fast-ai.pem was created, and if so, delete it before running the script again.

The setup script has logic that checks for this pem file. We do not want the script to find the file on a fresh run.

After a successful run

After the setup script ran successfully, I got output similar to:

{
    "Return": true
}
Waiting for instance start...

All done. Find all you need to connect in the fast-ai-commands.txt file and to remove the stack call fast-ai-remove.sh
Connect to your instance: ssh -i /Users/username/.ssh/aws-key-fast-ai.pem ubuntu@ec2-XX-YY-ZZ-XXX.us-west-2.compute.amazonaws.com

Reference fast-ai-commands.txt for information about your VPC and EC2 instance; it contains an ssh command to connect, along with your instance’s “InstanceUrl”.

I suggest picking up the video from here and following along from the point where you connect to your new instance. It guides you through checking the video card with the nvidia-smi command and running jupyter: http://course.fast.ai/lessons/aws.html

Starting and stopping your instance

The fast-ai-commands.txt file outlines the commands to start and stop your instance after the setup has completed successfully, e.g.:

aws ec2 start-instances --instance-ids i-0XXXX
aws ec2 stop-instances --instance-ids i-0XXXX

It’s important to stop instances when you are finished using them so that you don’t get charged hourly fees for their continued operation. P2 instances run about $0.90/hr at the time of writing.