Creating Custom Post Types in WordPress

This guide covers how to create Custom Post Types (CPTs) in WordPress. CPTs are important to WordPress developers because they enable the creation of more complex sites and web apps than is possible with a default WordPress install.

Custom Post Types are frequently defined with additional data fields called meta fields, which can be made editable to admins via meta boxes.

Example applications:

  • Jokes — each post contains a joke; jokes are listed and displayed differently than regular blog posts
  • Job Opportunities — include Salary Range and Location meta
  • Car Listings — registered users post for-sale listings and specify Make, Model, and Year meta via dynamic dropdown menus
  • Beer Reviews — featuring a range of meta fields that include Brewery, Style, and Tasting Score

Custom Post Types can be created (registered) or modified by calling the register_post_type() function within the init action.

Custom Taxonomies and the connection to Custom Post Types

Custom Post Types are closely related to the concept of Custom Taxonomies. Taxonomies are a way to group WordPress objects such as Posts by a certain classification criteria. Developers can define Custom Taxonomies to add to WordPress’ default taxonomies: Categories, Tags, Link Categories, and Post Formats.

Although this guide focuses on CPTs, it’s important to note that projects are often implemented using a thoughtful combination of Custom Post Types and Custom Taxonomies.

A classic example of a complementary post type + taxonomy is: Book as a Custom Post Type and Publisher as a Custom Taxonomy.

If your Custom Post Type needs to be related to any Custom Taxonomies, they must be identified via the optional taxonomies argument of the register_post_type() function. This argument only informs WordPress of the relation and does not register any taxonomies as a side-effect. Custom Taxonomies must be registered on their own via WordPress’ register_taxonomy() function.
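To illustrate, the Book + Publisher example might be registered like this (an illustrative sketch — the function name and prefix are examples, not from the original):

```php
// Hypothetical example: register a 'publisher' taxonomy for a 'book' CPT.
function xx_register_publisher_taxonomy() {
    register_taxonomy( 'publisher', 'book', [
        'labels' => [
            'name'          => __( 'Publishers' ),
            'singular_name' => __( 'Publisher' ),
        ],
        'public'       => true,
        'hierarchical' => false, // behave like Tags rather than Categories
    ] );
}
add_action( 'init', 'xx_register_publisher_taxonomy' );
```

With the taxonomy registered, ‘publisher’ can then also be listed in the taxonomies argument when registering the book post type.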

Registering new Custom Post Types

Registering in a Plugin vs. Theme

Custom Post Types can be registered by plugins or by themes via their functions.php file. It’s generally recommended to go the plugin route to keep a project de-coupled from any particular theme.

In the many cases where CPTs do not depend on activation or deactivation hooks, they can be defined by a Must-Use Plugin (mu-plugin). This special type of plugin is useful to safeguard against admins (e.g. client stakeholders with admin access) accidentally deactivating any Custom Post Types that are important to their website/app.

If a plugin or theme that registers a CPT is deactivated, WordPress’ default behaviour is to preserve the post data in its database, though it will become inaccessible and could break any themes or plugins that assume the CPT exists. The CPT will be restored once whatever plugin or theme registered it is re-activated.

Basic Definition

Custom Post Types may be registered by calling WordPress’ register_post_type() function during the init action with the following arguments: a required one-word post type key, and an optional array of key => value pairs that specify any optional arguments.

The following example implements a function create_my_new_post_type() that calls register_post_type() to register a CPT called candy. The last line hooks the function to the init action using WordPress’ add_action() function. It could be included as part of a plugin or in a theme’s functions.php.

Some of the most common optional args are specified: user-facing labels for the singular and plural forms, whether the CPT is public (appears in search, nav, etc.), and whether it should have an archive (list of posts).

function create_my_new_post_type() {
    register_post_type( 'candy',
        [
            'labels' => [
                'name'          => __( 'Candies' ),
                'singular_name' => __( 'Candy' ),
            ],
            'public'      => true,
            'has_archive' => true,
        ]
    );
}

add_action( 'init', 'create_my_new_post_type' );

Tip: Namespacing

It is a good practice to namespace any CPT keys by prefixing their names with a few characters relevant to you or your project followed by an underscore, such as xx_candy. This helps avoid naming conflicts with other plugins or themes, and is particularly important if you are planning to distribute your project.

Tip: Use singular form for post type keys

The WordPress Codex and developer handbooks always use a singular form for post type keys by convention, and WordPress’ default types such as ‘post’ and ‘page’ are singular as well.

Detailed Definition

There are a ton of optional arguments that can be specified when registering a Custom Post Type. The WordPress Developer Documentation is the best source to review all of them: register_post_type().

Some of the more notable options include:

  • labels — array of key => value pairs that correspond to different labels. There are a ton of possible labels but the most commonly specified are ‘name’ (plural) and ‘singular_name’
  • public — boolean indicating if the post type is to be public (shown in search, etc) or not (default: false)
  • has_archive — boolean indicating if an archive (list of posts) view should exist for this post type or not (default: false)
  • supports — array of WordPress core feature(s) to be supported by the post type. Options include ‘title’, ‘editor’, ‘comments’, ‘revisions’, ‘trackbacks’, ‘author’, ‘excerpt’, ‘page-attributes’, ‘thumbnail’, ‘custom-fields’, and ‘post-formats’. The ‘revisions’ option indicates whether the post type will store revisions, and ‘comments’ indicates whether the comments count will show on the edit screen. The default value is an array containing ‘title’ and ‘editor’.
  • register_meta_box_cb — string name of a callback function that will handle creating meta boxes for the CPT so admins have an interface to input meta data
  • taxonomies — an array of string taxonomy identifiers to register with the post type
  • hierarchical — a boolean value that specifies if the CPT behaves more like pages (which can have parent/child relationships) or like posts (which don’t)

The numerous other options enable you to manage rewrite rules (e.g. specify different URL slugs), configure options related to the REST API, and set capabilities as part of managing user permissions.
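As an illustrative sketch (the argument values here are examples, not from the original), a couple of these options might be used like so, inside an init callback as in the earlier example:

```php
// Customize the URL slug and expose the CPT to the REST API.
register_post_type( 'xx_candy', [
    'public'       => true,
    'rewrite'      => [ 'slug' => 'candies' ], // URLs use /candies/ instead of /xx_candy/
    'show_in_rest' => true,                    // enables REST API access and the block editor
] );
```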

Adding Meta Fields to a Custom Post Type

Enabling custom-fields

A straightforward way to enable admins to define meta fields as key => value pairs when editing a post is to include the value ‘custom-fields’ in the ‘supports’ array, as part of the args passed to register_post_type().
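Repeating the earlier candy example, a minimal sketch might look like:

```php
// Adds the basic Custom Fields panel to the candy edit screen.
register_post_type( 'candy', [
    'public'   => true,
    'supports' => [ 'title', 'editor', 'custom-fields' ],
] );
```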

Adding Meta Boxes to a Custom Post Type

The above ‘custom-fields’ approach works for basic use cases; however, most projects require advanced inputs like dropdown menus, date pickers, and repeating fields, as well as a certain level of data validation.

The solution is to define meta boxes that specify inputs for each of a CPT’s meta fields and handle the validation and save process. Meta boxes must be implemented in a function whose name is passed to register_post_type() via its args as the value of the ‘register_meta_box_cb’ option.
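As a rough sketch of the shape this takes (the function, field, and label names here are hypothetical), the callback calls WordPress’ add_meta_box() function:

```php
// Hypothetical register_meta_box_cb callback for the candy CPT.
function xx_candy_meta_box_cb( $post ) {
    add_meta_box(
        'xx_candy_details',         // HTML id of the box
        __( 'Candy Details' ),      // title shown on the edit screen
        'xx_render_candy_meta_box', // function that renders the box
        'candy'                     // post type
    );
}

// Renders a simple text input for an example 'xx_flavour' meta field.
function xx_render_candy_meta_box( $post ) {
    $flavour = get_post_meta( $post->ID, 'xx_flavour', true );
    printf(
        '<label for="xx_flavour">%s</label> <input type="text" id="xx_flavour" name="xx_flavour" value="%s">',
        esc_html__( 'Flavour' ),
        esc_attr( $flavour )
    );
}
```

A complete implementation also needs a save_post handler (with a nonce check) to validate and persist the submitted value.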

Creating meta boxes can be tricky for the uninitiated… Stay tuned for an upcoming post dedicated solely to them!

In the meantime, I would suggest exploring solutions that simplify the process of creating meta boxes. Two excellent options are the open-source CMB2 (Custom Meta-Box 2) and Advanced Custom Fields (ACF), which offers both free and commercial options. I think the commercial ACF PRO version is well worth the $100 AUD fee to license it for unlimited sites including a lifetime of updates and upgrades.

Displaying a Custom Post Type

Posts belonging to a CPT can be displayed using single and archive templates, and can be queried using the WP_Query object.

Single template: single post view

Single templates present a single post and its content. WordPress looks for the template file single-post_type_name.php for a CPT-specific template and if it doesn’t find it, it defaults to the standard single.php template.

Archive template: list of posts view

Archive templates present lists of posts. A Custom Post Type will have an Archive if it was registered with the optional has_archive argument set to a value of true (default: false).

To create an archive template for your CPT, create a template file that follows the convention: archive-post_type_name.php. If WordPress doesn’t find this file, it defaults to the standard archive.php template.
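A minimal archive template for the candy example from earlier might look like this (an illustrative sketch, placed in the active theme):

```php
<?php
// archive-candy.php - minimal archive template for the candy CPT.
get_header(); ?>
<ul>
<?php while ( have_posts() ) : the_post(); ?>
    <li><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></li>
<?php endwhile; ?>
</ul>
<?php get_footer();
```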

Using the WP_Query object

WP_Query can be used in widget definitions, in templates, etc. to present posts belonging to a CPT. The following example queries for published posts of the type ‘candy’ and then loops over the results, presenting each one’s title and content as items in a list.


<?php
$args = [
    'post_type'   => 'candy',
    'post_status' => 'publish',
];

$candies = new WP_Query( $args );
if ( $candies->have_posts() ) : ?>
    <ul>
    <?php while ( $candies->have_posts() ) : $candies->the_post(); ?>
        <li><?php printf( '%1$s - %2$s', get_the_title(), get_the_content() ); ?></li>
    <?php endwhile; ?>
    </ul>
    <?php wp_reset_postdata();
else :
    esc_html_e( 'No candies... Go get some candy!', 'text-domain' );
endif;

The wp_reset_postdata() call is important to reset WordPress back to the original loop, so other functions that depend on it will work properly.

Include the client IP in apache logs for servers behind a load balancer

When servers are behind a load balancer, Apache’s default configuration produces logs that show the load balancer’s IP address instead of the IP of the remote client that initiated the request. Furthermore, if multiple servers’ logs are consolidated into one, it can be difficult to determine which server created a given log entry.

This post covers how to improve on Apache’s default logging situation such that:

  • every web server behind the load balancer includes a unique identifier for itself in its log entries, and that;
  • access log entries include the client’s remote IP address as found in the X-Forwarded-For header set by the load balancer.

This setup is more helpful for troubleshooting, configuring monitoring and alerts, etc. and it helps to maximize the value of 3rd-party log aggregation and analysis services like Papertrail or Loggly.

The example commands in this post are applicable to Ubuntu/Debian however they are easily adapted to other environments.

Enable required modules

Start by ensuring that the required Apache modules, env and remoteip, are enabled:

a2enmod env
a2enmod remoteip 
service apache2 restart

Identify each web server

Add a SetEnv directive in the site/app’s apache conf to instruct the env module to set a new environment variable with a value that uniquely identifies each server behind the load balancer.

The example below is a snippet from an Apache VirtualHost’s conf file. It defines a variable called APP_LB_WORKER with the value ‘unique_identifier’. If you were using a devops automation tool such as Ansible, you could use the template module with a handy variable such as {{ ansible_host }} in place of the example’s hard-coded ‘unique_identifier’ value.

<VirtualHost *:443>
    SetEnv APP_LB_WORKER unique_identifier
    # ... rest of the VirtualHost definition ...
</VirtualHost>

Configure the Apache RemoteIP module

Create a file named /etc/apache2/conf-available/remoteip.conf and use it to set the RemoteIPHeader to ‘X-Forwarded-For’, specify any appropriate RemoteIPInternalProxy or other RemoteIP directives, and define a new LogFormat that includes both the environment variable containing the server’s identifier and the client’s remote IP as sourced from the X-Forwarded-For header.

The RemoteIPInternalProxy directive tells the RemoteIP module which IP address(es) or IP address blocks it can trust to provide a valid RemoteIPHeader that contains the client’s IP.

The following example’s RemoteIPInternalProxy value should correspond to your load balancer’s internal network IP address, or the CIDR block of the subnet it belongs to. Choose an appropriate value (or values) for your environment.

A full list of configuration directives for RemoteIP can be found in the Apache docs.

The following example names the new LogFormat as “loadbalance_combined”. You can choose any name you like that isn’t already in use.


RemoteIPHeader X-Forwarded-For
# Example value only - substitute your load balancer's internal IP or subnet:
RemoteIPInternalProxy 10.0.0.0/16

LogFormat "%a %v %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{APP_LB_WORKER}e\"" loadbalance_combined

Finally, enable the conf:

a2enconf remoteip

The Apache LogFormat is extensively customizable; the full list of placeholders and options is covered in Apache’s mod_log_config documentation. Remote log aggregation service Loggly also has an excellent overview.

Tell Apache to use the new LogFormat

Next, tell Apache to use the custom LogFormat that we named loadbalance_combined by editing your site/app’s conf file. The following example builds upon the previous example of a VirtualHost conf:

<VirtualHost *:443>
    SetEnv APP_LB_WORKER unique_identifier
    CustomLog /var/log/app_name/access.log loadbalance_combined
    # ... rest of the VirtualHost definition ...
</VirtualHost>

The following example is a more elaborate case that uses the tee command to send the log entry to both an access.log file and to the /usr/bin/logger command to include it in the syslog, only if an environment variable named “dontlog” is not set. An example use-case for an env variable like “dontlog” is to set it (e.g. via the SetEnvIf directive) for any requests that correspond to a “health check” from the load balancer to the web server. This helps keep logs clean and clutter-free.

CustomLog "|$/usr/bin/tee -a /var/log/app_name/access.log | /usr/bin/logger -t apache2 -p local6.notice" loadbalance_combined env=!dontlog

Restart Apache

Confirm the validity of your confs using apachectl configtest and restart apache for the new configuration to take effect:

apachectl configtest
service apache2 restart

Processing payments on WordPress with Stripe and Gravity Forms

This post covers creating a web Form in WordPress using the popular (and excellent) Gravity Forms plugin and configuring it to process payments with Stripe:

  • Gravity Forms is a leading commercial Form Builder plugin for WordPress that many of my clients have had a lot of success with. It is a powerful tool when combined with Payments and it supports integration via its webhooks and API.
  • Stripe is a leading payments processor that is popular due to its ease-of-use and ease-of-integration. Stripe is currently my preferred choice for eCommerce projects.

The following assumes:

  • you are logged into your WordPress (/wp-admin) as an Administrator,
  • your WordPress site has SSL enabled (https://),
  • that Gravity Forms and the Gravity Forms Stripe Add-On plugins have been installed and activated, and
  • you have a valid + verified Stripe account and have inputted all necessary Business Settings.

Get API Keys from Stripe

  • Open a tab and login to your Stripe account.
  • Click on Developers in the left nav. A sub-menu will appear underneath.
  • Click on API Keys in the sub-menu under Developers.
  • Generate a set of Test and Live API Keys. Each comes in a pair consisting of a Private Key and a Publishable Key. The keys themselves are strings of random-looking characters prefixed with: ‘sk_live’, ‘pk_live’, ‘sk_test’, and ‘pk_test’.

Make an effort to keep your Private Keys (those prefixed with ‘sk’) confidential and do not send them to anyone over email or other potentially insecure means.

Setup Gravity Forms for Stripe

In the WordPress admin dashboard:

  • Click on Forms in the left nav. A sub-menu will expand underneath.
  • Click on Settings in the sub-menu under Forms.
  • The resulting Settings page has a series of tabs down the left side. Click on Stripe.
  • On the Stripe page complete the form:
    • API: choose “Live” (or “Test” if you wish all Forms to be in Test Mode)
    • Input the Test Publishable Key, Test Secret Key, Live Publishable Key, and Live Secret Key that you obtained from the Stripe Dashboard
    • Under Stripe Webhooks there are instructions for adding a Webhook within your Stripe account that points to a URL in your WordPress site. Click on the “View Instructions” link to reveal the necessary steps and open a tab to login to your Stripe Dashboard and complete them in Stripe. When that is complete:
    • Tick the “Webhooks Enabled” checkbox
    • Input the Test Signing secret and Live Signing secret values that you obtained from Stripe

Finally, click the Update Settings button to save your settings.

Create a Form and Configure Payments

Create the Gravity Form

Create a new Form and add at least a Product Field, Total field, and Credit Card field. In the Product Field, define your products/services and set their prices.

If they are applicable to your situation, you can also use Option and Quantity fields to gather additional information that can influence the price, and you can include a Shipping field as well.

Add a Stripe Feed to the Form

In your Form’s “Edit” screen, navigate to: Settings > Stripe.

Click on the Add New button to add a new Stripe Feed to this Form.

  • Name textbox: input a descriptive name (e.g. “Event Registration 2018”). Note that it’s good to include a unique identifier in the Name (such as the year for an annually recurring event) so you can easily identify payments related to this particular Form + Stripe Feed in the future.
  • Transaction Type: choose “Products & Services” from the drop-down menu (“Subscriptions” is the other available option, but it is not covered in this post)

A set of new fields will appear:

  • Payment Amount: choose which Form field you would like Gravity Forms to use to determine the total amount to charge the user. In most cases you’d choose the Form Total field (e.g. “Form Total”) rather than any individual product field.
  • Metadata: optionally choose Form fields that you wish to send to Stripe so they can be included in Stripe Reports. This is an optional step but it can make your life easier down the line, because you will be able to see more information on the payment side of things to help you reconcile and perform accounting and customer support tasks. For each Metadata field you would like to add:
    • Input the field Name (as you’d like it to appear in Stripe) and, from the drop-down menu, choose which Form Field you would like to use as its value. This is also useful for sending along details about what products or choices the user may have made. Examples:
    • Name: “Entry ID” Value: “Entry ID” — the unique database ID of the form submission in your WordPress + Gravity
    • Name: “Email” Value: “Email” — sends the customer’s email to Stripe
  • Stripe Receipt: Choose whether or not you would like Stripe to automatically send an email receipt to the customer. The default option is “No” but it is often desirable to choose “Email”.
  • Conditional Logic: Optionally add logic to conditionally process payments only if certain values/conditions are met. Most forms do not need to use this option.

When you are done, click on the Update Settings button to save your Stripe Feed.

You should be good to go! Ensure your Stripe Feed is Live and your Form is Live, and then add it to a Post or Page to see it up on your site!

If you want to test your form, you can enable Test mode and use one of the fake/test credit card numbers that Stripe has published. You will be able to see any Test transactions in your Stripe Dashboard when you toggle between Test and Live mode.

How to buy neopixel (ws2812b) strips for an Arduino or Raspberry Pi project

Neopixels can be purchased from many sources online. The best prices are often found at Asian deal sites.

The vast selection and number of variants of LED products in these catalogues can seem daunting especially for newbies looking for a deal. This post explains what the most common codes and numbers mean to help you identify 5V Neopixel products that are easily compatible with popular microcontrollers like Arduino and single-board-computers like Raspberry Pi.

Neopixel buying guide

The following sections cover different decision points that someone buying Neopixels will be faced with:

Pixels, strands, or rings

Neopixels are usually found in a few different form factors:

  • individual pixels: these are usually mounted to a small circuit board barely larger than the pixel itself
  • strips: mounted to flexible strips of plastic tape
  • strings: individual pixels spaced apart with wires, like Christmas lights
  • rings: rigid circuit boards in circular and semi-circular shapes.

Choose what suits your project best. The Neopixel LEDs themselves are the same; this decision purely relates to what material they are mounted to.

Strips are the most common choice. They are flexible and usually come with a peel-off adhesive backing that isn’t usually very strong (hot glue often works better). Strips can be easily cut with scissors – all the way down to individual pixels, so this purchase is flexible to suit the needs of many projects.

Pixel density

Neopixel strips are typically priced by the number of Neopixels spaced along 1 metre of strip. Rings, strings, and individual Neopixels are simply priced based on how many pixels you’re buying.

When it comes to strips, common pixel densities found in online stores include: 30, 60, 90, and 144.

  • 30 pixels/m is a good choice for area lighting and creating an atmosphere
  • 60 pixels/m is a common general purpose choice that is suitable for many display applications as well where multiple strips are placed side-by-side to create a matrix display
  • 144 pixels/m feature nearly back-to-back pixels and are best suited for projects where a continuous band of colour effect is desired (for a continuous look, you will also need a good diffuser material to put over the lights to “wash” between the individual LEDs).

Generally strips are sold by the metre. The longest continuous strip that is generally available is 5m and comes packaged on a reel. Most strips come with a JST connector clip on each end of the individual strips that you purchase.

Driver chip (ws2812b)

Each individual pixel on a strip of neopixels has its own ws2812b integrated driver chip. The chip looks like a little black dot that can be seen somewhere “inside” each pixel (where exactly depends on the manufacturer).

The ws2812b “listens” on the control wire for instructions about what to do with its LED: what colour and how bright to make it.

The magic of neopixels is that each LED is individually addressable because each has its own chip. Every pixel “knows” its position along a string (even if it’s alone in a “string” of 1 pixel), and its colour and brightness can be controlled independently of any other pixels. Pixels can be programmed to appear to elegantly fade and transition between colours and intensities (at least as far as the human eye can perceive) because the control signal – which usually runs at 800kHz – updates the pixels many times per second.

The correct neopixel strips for most Arduino projects have 3-pins (or “wires” or “pads”): two for power (+ and -) and one for the control signal. Other types of LED strips can have more pins; 4 and 5 are common for other types of strips.

The order of the 3 pins doesn’t matter and can change depending on the manufacturer. It’s important to always read the labels so you understand what each pin is for. The control signal is always directional, so strips are marked with an arrow to show the correct direction.
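Driving a strip from an Arduino then comes down to one data pin plus power. The following sketch assumes the Adafruit NeoPixel library (one of several library options, an assumption on my part; the pin and pixel counts are examples):

```cpp
#include <Adafruit_NeoPixel.h>

#define DATA_PIN   6   // Arduino pin wired to the strip's control line
#define NUM_PIXELS 30

// NEO_GRB + NEO_KHZ800 matches classic 800kHz ws2812b strips.
Adafruit_NeoPixel strip(NUM_PIXELS, DATA_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show(); // initialize all pixels to 'off'
}

void loop() {
  // Each pixel is individually addressable: a simple red chase.
  for (int i = 0; i < NUM_PIXELS; i++) {
    strip.clear();
    strip.setPixelColor(i, strip.Color(255, 0, 0));
    strip.show();
    delay(50);
  }
}
```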


Voltage

Standard Neopixels are 5V. It’s best to stick with the standard for most LED projects outside of large installations or commercial signage.

Both Arduino and Raspberry Pi operate at 5V. It can help reduce a project’s complexity when everything shares the same voltage! Note that while the Raspberry Pi is powered by 5V, its output pins are 3.3V. This is still enough to drive some pixels, or you can incorporate electronic components such as a level converter like the 74AHCT125 to bring output up to 5V.

Many varieties of strips and strings are 12V or more. There are also strips where the control line is 5V but the power lines are 12V or more. Read product details carefully. Again, for most Arduino / Raspberry Pi or similar projects, it’s best to stick with classic 5V Neopixel strips.

Water resistance

Neopixel strips often come in IP30, IP65, IP67, and IP68 varieties. The IP rating refers to the degree of weather/water protection and is a global standard for electronics and other equipment.

See Wikipedia: IP Code for more information about IP Codes.

IP30 and below strips realistically can’t handle water so you should only use them indoors. IP65, IP67, and IP68 strips are water resistant. IP65 strips are usually covered in a plastic coating that is applied like hot glue to protect against splashes of water. IP67 and IP68 strips are inserted into a hollow rectangular plastic tube with a silicone cap covering each end. The tubes in IP68 strips are filled with a silicone or plastic sealant, making them the best bet for scenarios where the strip could be submerged in water.

For Neopixels in plastic tubes, extra silicone caps can be purchased on their own so you can cut the strips and make the ends watertight again. Hot glue is useful to squirt into the caps to ensure a watertight seal, especially around any wires.


SMD5050 LEDs

Many product listings will state the chip is a 5050 or SMD5050. This is the type of RGB LED that is found in neopixel strips as well as many other types of LED strips.

The RGB LED is actually made up of 3x LEDs: one each for Red (R), Green (G), and Blue (B) light. Different colours are produced by mixing different intensities of light. Some variants of these LEDs also include a dedicated white LED in addition to the 3x colours.

SMD5050’s are “dumb” on their own. It is the combination of an SMD5050 LED with a WS2812B integrated driver chip that makes it an awesome neopixel. Since many other types of LED strips use the SMD5050, if you see this somewhere in a product listing, double check to make sure that you’re getting a WS2812B driver with every pixel and that the voltage is 5V.

PCB Colour

PCB is an acronym for printed circuit board. In terms of Neopixel strips this refers to the flexible tape or strip itself that the pixels are mounted to. For rings, this refers to the hard plastic circuit board.

It is common for neopixel strips to be sold with a choice of white or black PCB. Other colours are manufactured but they are pretty rare. The PCB colour doesn’t make a difference when it comes to how a project works.

Other things to buy

You might want to consider a few other things to go along with an LED purchase:

  • Soldering equipment and supplies
  • Heat shrink tubing
  • JST Connectors (clips) with 3-pins/wires
  • AWG22 or AWG20 wire (note: these wire gauges can handle the power needs of smaller projects)
  • Power supplies (make sure 5V output!)

When purchasing a power supply, take note of the amperage it’s designed for and get one (or more) that supplies a greater amount of current than your project needs.
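As a rough worked example (the ~60 mA-per-pixel full-white figure is a common rule of thumb, not from this post), a 2 m strip at 60 pixels/m works out to:

```python
# Rule-of-thumb current estimate for a 5V Neopixel strip.
# Assumes a worst-case draw of about 60 mA per pixel at full white.
pixels_per_metre = 60
metres = 2
ma_per_pixel = 60

total_amps = pixels_per_metre * metres * ma_per_pixel / 1000
print(total_amps)  # 7.2 -> choose a 5V supply rated comfortably above this
```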

Adafruit has an excellent guide on Powering Neopixels.

If you need to power a lot of Neopixels, please be safe and do your homework. The current draw required can add up to potentially dangerous levels very quickly. Erik Katerborg has written an excellent guide, available as a PDF.

Creating a site-specific WordPress plugin

Site-specific plugins (or “site plugins”) are a common part of professional WordPress projects. They are useful for adding functionality to a site without strictly coupling that functionality to a theme’s functions.php file.

Site plugins are ideal for implementing custom post types, taxonomies, meta boxes, shortcodes, and other functionality that should be preserved even if an admin changes the theme. They are also great for specifying tweaks and security enhancements such as disabling XML-RPC, a rarely-used feature that provides a vector for attack.

Plugin basics

A WordPress plugin can be as simple as a single PHP file that begins with a Plugin Header: a specially-formatted comment that WordPress uses to recognize it as a plugin.

Only a “Plugin Name” is required in the Plugin Header, however the example below includes a number of recommended fields:

<?php
/*
Plugin Name: Site plugin for
Plugin URI:
Description: Implements custom post types (or whatever) for
Version: 1.0
Author: your_name
Author URI:
*/

If you intend to support internationalization, specify your text domain (e.g. “Text Domain: example”) in the Plugin Header as well. It might also be important to specify a “License” and “License URI” for your project.

After the Plugin Header, you can define any functions and hooks as you normally would in a theme’s functions.php file.
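For example, the XML-RPC tweak mentioned above can be a one-liner beneath the Plugin Header, using WordPress’ xmlrpc_enabled filter:

```php
// Disable XML-RPC authenticated methods site-wide.
add_filter( 'xmlrpc_enabled', '__return_false' );
```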

Site Plugins as Must-Use Plugins

A Must-Use plugin (or mu-plugin) is a special type of plugin that does not appear on the Plugins page in wp-admin and can only be deactivated by deleting its associated file(s).

On client projects where stakeholders are given full admin access, it can be helpful to deploy a site plugin as an mu-plugin to prevent accidents.

Mu-plugin requirements

Any plugin that does not depend on Activation or Deactivation hooks can be deployed as an mu-plugin. Since an mu-plugin is “always there” these concepts do not apply.

An example of a plugin that depends on Activation/Deactivation hooks and therefore can’t be an mu-plugin is one that needs to create (on activation) and remove (on deactivation) database tables for its functionality.

Updating mu-plugins

Update notifications are not available for mu-plugins, and updates to them must be performed manually. This is usually the case anyway for custom client work; however, this fact can also be useful for projects that incorporate 3rd-party plugins as part of their overall solution.

Since mu-plugins are effectively “locked” to their current version there is no chance an admin can deploy a major update that could risk breaking compatibility with the rest of their site. Developers can take the opportunity to test new versions before they apply them manually.

The practice of “locking down” distributed plugins should only be used in scenarios where the developer is actively supporting the project and ensuring pending updates are regularly applied. Otherwise it may be a wiser security and stability choice to stick with traditional plugin deployments.

Deploying a plugin as a must-use plugin

WordPress looks for mu-plugins in the wp-content/mu-plugins folder by default. This path can be customized by defining the WPMU_PLUGIN_DIR and WPMU_PLUGIN_URL constants in wp-config.php.

Unlike traditionally-deployed plugins, mu-plugins must be PHP files that exist in the root of the wp-content/mu-plugins/ folder. A more complex plugin can be deployed as an mu-plugin by creating a “loader” script to serve as the required PHP file and then using it to pull in the rest of the plugin’s dependencies.
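A loader might be as simple as the following (a hypothetical sketch; the file and folder names are examples):

```php
<?php
// wp-content/mu-plugins/load.php
// WordPress does not scan mu-plugin subfolders, so require files explicitly.
require WPMU_PLUGIN_DIR . '/my-site-plugin/my-site-plugin.php';
```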

Mu-plugins are loaded in alphabetical order before any “normal” plugins in the wp-content/plugins folder.

Installing CH340/CH34X drivers on MacOS to load sketches onto cheap Arduino clones

This post details how to get [most] cheap Arduino clones working with MacOS Sierra so you can upload sketches to them.

Many clones are not recognized “out of the box” because they implement their USB to serial interface with a CH340 chip designed by China’s WCH instead of the more costly FTDI chip found in genuine Arduinos.

Most sellers on “China deal sites” are up-front about these chips and include “CH340” in their product titles and descriptions, though the implications of this design modification are not always understood by purchasers.

The easy installation method covered in this post comes courtesy of one Adrian Mihalko. He has bundled the manufacturer’s latest Sierra-compatible CH340/CH34G/CH34X drivers for installation with brew cask. These drivers are signed by the OEM so it’s no longer necessary to disable Mac’s System Integrity Protection (SIP) feature.


I had no problem getting a Robotdyn Arduino Uno as well as another cheap clone running on a Mac with High Sierra.

Step by step

Prerequisite: ensure brew is installed on your Mac. Verify its presence and version info by executing brew --version in Terminal.

To begin, install the drivers with brew cask:

brew tap mengbo/ch340g-ch34g-ch34x-mac-os-x-driver

brew cask install wch-ch34x-usb-serial-driver

(Note: the above is only two commands. The first one runs long, so take care when copying and pasting.)

When the install completes, reboot your machine.

Next, plug your Arduino clone into a free USB port.

Using Terminal, verify that the device is recognized by listing the contents of the /dev directory and looking for cu.wchusbserial1420 or cu.wchusbserial1410 in the output:

ls /dev

For example, I found cu.wchusbserial1420 in the output when I connected my Robotdyn Uno.

Things are promising if you find a similar result.

The Arduino IDE ships with drivers for the Uno itself, and save for the CH340, my clones are otherwise fully Arduino compatible (note: some clones might require additional drivers, and/or a different Board must be specified in the Arduino IDE). For my clones, the following steps were all I needed to upload sketches:

  • Open Arduino IDE with a test Sketch
  • Select the correct port in Tools > Port (e.g. /dev/cu.wchusbserial1420)
  • Verify that Tools > Board has “Arduino/Genuino Uno” selected
  • Verify/Compile the Sketch (Apple+R)
  • Upload the Sketch (Apple+U)


In particular, the Robotdyn Uno appears to be decently well made, it’s laid out to support all Arduino-compatible shields, and it comes on an attractive black PCB. Versus a genuine Uno, it uses a micro-USB port instead of a full-size one and exposes the ATmega328P microcontroller’s analog 6 and 7 pins. The company makes a number of similarly slick-looking accessories on black PCBs and sells them through its AliExpress store.

Have fun with your cheap clones!

Pulling files off a shared host (CPanel) with a 10K file FTP limit using a python web scraper

This post demonstrates the use of a web scraper to circumvent a host-imposed limit and download a large batch of files.

I’ll use a recent case as an example where I had to migrate a client’s site to a new host. The old shared host was running an ancient version of CPanel and had a 10K file limit for FTP. There was no SSH or other tools, almost no disk quota left, and no support that could possibly change any CPanel settings for me. The website had a folder of user uploads with 30K+ image files.

I decided to use a web scraper to pull all of the images. In order to create links to all of the images that I wanted to scrape, I wrote a simple throwaway PHP script to link to all of the files in the uploads folder. I now had a list of all 30K+ files for the first time — no more 10K cap:

$directory = dirname(__FILE__) . '/_image_uploads';
$dir_contents = array_diff(scandir($directory), array('..', '.'));

echo '<h3>' . count($dir_contents) . '</h3>';
echo '<h5>' . $directory . '</h5>';

echo "<ul>\n";
$counter = 0;
foreach ($dir_contents as $file) {
  echo '<li>' . $counter++ . ' - <a href="/_image_uploads/'. $file . '">' . $file . "</a></li>\n";
}
echo "</ul>";

Next, to get the files, I used a Python 3 script to scrape the list of images using urllib and shutil from the standard library.

I posted a gist containing a slightly more generalized version of the script. It uses the BeautifulSoup library to parse the response from the above PHP script’s URL to build a list of all the image URLs that it links to. This script can be easily modified to suit a variety of applications, such as downloading lists of PDF’s or CSV’s that might be linked to from any arbitrary web page.


If you need to install the BeautifulSoup library with pip use: pip install beautifulsoup4

In the gist, note the regex in the line soup.findAll('a', attrs={'href': re.compile("^http://")}). This line and its regex can be modified to suit your application, e.g. to filter for certain protocols, file types, etc.
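
For reference, here is a condensed, dependency-free sketch of the same idea. The gist itself uses BeautifulSoup to parse anchors; in this sketch a simple regex stands in for the parser, and the URL and folder names are placeholders:

```python
import os
import re
import shutil
import urllib.request

def extract_links(html, pattern=r'^http://'):
    """Return href values from anchor tags whose URL matches pattern."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    return [href for href in hrefs if re.match(pattern, href)]

def download_all(index_url, dest_dir='downloads'):
    """Fetch the listing page and download every file it links to."""
    os.makedirs(dest_dir, exist_ok=True)
    html = urllib.request.urlopen(index_url).read().decode('utf-8')
    for url in extract_links(html):
        filename = os.path.join(dest_dir, os.path.basename(url))
        # stream each response straight to disk
        with urllib.request.urlopen(url) as response, open(filename, 'wb') as f:
            shutil.copyfileobj(response, f)

# usage (placeholder URL for the PHP listing script above):
# download_all('http://example.com/list_images.php')
```

As with the gist, the filter pattern can be adjusted to match only certain protocols or file types.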

Encrypting a USB Drive in MacOS, including formatting to JHFS+ or APFS

MacOS Sierra doesn’t feature an option to encrypt a USB drive in Disk Utility or in Finder (at least at the time of writing). This post covers how to format a USB drive to either the JHFS+ or the new APFS filesystem and encrypt it using the Terminal and Disk Utility.


First, plug your USB drive into your computer and open the Terminal app.

Use the following command to list your disks:

diskutil list

Look for an entry (or entries) like /dev/disk2 (external, physical) and make absolutely sure that you understand the difference between your system’s hard disk and the external USB drive you want to encrypt.

The “IDENTIFIER” for my USB drive, found at the top of the list, was disk2. My system showed a subsequent entry disk2s1 but note how this still refers to disk2. Only the disk2 part is required. Use whatever diskn number corresponds to your target drive, where n is an integer.

Only proceed if you are certain that you have correctly identified your USB drive!

The following command formats the drive to Apple’s HFS+ with Journaling format, JHFS+. GPT is a crucial argument here to specify the GUID Partition Map option vs. the Master Boot Record option. You can replace the text inside the quoted string (“Flash Drive”) with your desired drive name:

diskutil eraseDisk JHFS+ "Flash Drive" GPT disk2

It is now possible to encrypt the drive with Finder (right-click and choose “Encrypt ‘Flash Drive'”) if you wish to simply keep the JHFS+ file system. If you wish to use the newer APFS file system, do not Encrypt the drive just yet, and read on.

Using the new APFS file system

APFS is Apple’s latest filesystem and it features good support for encryption. Before formatting your drive to APFS, be aware that older Macs (i.e. those running anything older than MacOS High Sierra) will not support it.

To proceed with APFS, open the Disk Utility app.

With the drive formatted to JHFS+, Disk Utility will no longer grey out the “Convert to APFS” option when you right/control+click it.

Find your drive and choose “Convert to APFS”.

Once the file system has been converted to APFS you can go back to Finder, right/control+click on your drive, and choose “Encrypt ‘Flash Drive'” from the menu.

Don’t forget your passphrase 😉

Troubleshooting the fast.ai AWS setup scripts

fast.ai offers a well-regarded free online course on Deep Learning that I thought I’d check out.

It seems that a lot of people struggle getting the setup scripts running. Complaints and requests for help are on reddit, in forums, etc. This doesn’t surprise me because the scripts are not very robust. On top of that, AWS has a learning curve so troubleshooting following a script failure can be a challenge.

Hopefully this post helps other people that have hit snags. It is based on my experience on MacOS, though it should translate well for those running Linux or Windows with Cygwin.

Understanding the setup script’s behaviour

It leaves a mess when it fails

If running the setup script fails, which is possible for a number of reasons, it will potentially have created a number of AWS resources in your account and a local copy of an SSH key at ~/.ssh/aws-key-fast-ai.pem. It does not clean up after itself in failure cases.

The setup script doesn’t check for existing fast-ai tagged infrastructure, so subsequent runs can create additional VPC’s and related resources on AWS, especially as you attempt to resolve the reason(s) it failed. The script’s generated files, such as fast-ai-commands.txt, are overwritten each time it’s run with only the current run’s values, potentially leaving “orphan” infrastructure.

Thankfully all AWS resources are created with the same “fast-ai” tags so they are easy to spot within the AWS Console.

It makes unrealistic assumptions

The setup script assumes your aws config’s defaults specify a default region in one of its three supported regions: us-west-2, eu-west-1, and us-east-1.

I’m not sure why the authors assumed that a global tech crowd interested in machine learning would be unlikely to have worked with AWS in the past, and would thus have no existing aws configuration that might conflict.

The commands in the script do not use the --region argument to specify an explicit region so they will use whatever your default is. If your default happens to be one of the three supported ones, but you don’t have a sufficient InstanceLimit or there’s another problem, more issues could follow.


If you encountered an error after running the script, work through the following checks before re-running it:

Check 1: Ensure you have an InstanceLimit > 0

Most AWS users will have a default InstanceLimit of 0 on P2 instances. You may need to apply for an increase and get it approved (this is covered in the setup video).

If a first run of the script gave you something like the following, there was an issue with your InstanceLimit:

Error: An error occurred (InstanceLimitExceeded) when calling the RunInstances operation: You have requested more instances (1) than your current instance limit of 0 allows for the specified instance type. Please visit to request an adjustment to this limit.

InstanceLimits are specific to a given resource in a given region. Take note of which region your InstanceLimit increase request was for and verify that it was granted in the same region.

Check 2: Ensure the right region

Verify your current default aws region by running: aws configure get region. The script assumes this is one of three supported regions: us-west-2, eu-west-1, or us-east-1.

The script also assumes that you have an InstanceLimit > 0 for P2 instances in whichever region you would like to use (or T2 instances if you are using the T2 variant of the setup script).

To get things running quickly, I personally found it easiest to make the script happy and temporarily set my aws default to a supported region in ~/.aws/config, i.e.:

[default]
region = us-west-2

Another option is to modify the script, adding an explicit --region argument to every aws command to override the default region. If you have multiple aws profiles defined as named profiles, and the profile that you wish to use specifies a default region, you can use the --profile PROFILENAME argument instead.

For example, the following hypothetical aws config file (~/.aws/config) specifies a profile called “fastai”. A --profile fastai argument could then be added to every aws command in the setup script:


[profile fastai]
region = us-west-2

Check 3: Delete cruft from previous failed runs

This check is what inspired me to write this post!

Delete AWS resources

Review any resources that were created in your AWS Console, and delete any VPC’s (and any dependencies) that were spun up. They can be identified by the “fast-ai” tag, which is shown in any table of resources in the AWS Console.

Cruft resources will have been created in any region that the setup script was working with (i.e. whatever your default region was at the time you ran it).

If you’ve found cruft, start by trying to delete the VPC itself, as this generally will delete most if not all dependencies. If this fails because of a dependency issue, you will need to find and delete those dependencies first.

IMPORTANT: AWS creates a default VPC and related dependencies (subnets, etc.) in every region available to your account. Do NOT delete any region’s default VPC. Only delete resources tagged with “fast-ai”.

Delete SSH keys

Check to see if ~/.ssh/aws-key-fast-ai.pem was created, and if so, delete it before running the script again.

The setup script has logic that checks for this pem file. We do not want the script to find the file on a fresh run.
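
This check is easy to script. A minimal sketch, using the same path the setup script uses:

```shell
# delete a stale key left behind by a previous failed run, if one exists
if [ -f "$HOME/.ssh/aws-key-fast-ai.pem" ]; then
  rm "$HOME/.ssh/aws-key-fast-ai.pem"
  echo "removed stale aws-key-fast-ai.pem"
fi
```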

After a successful run

After the setup script ran successfully, I got output similar to:

    "Return": true
Waiting for instance start...

All done. Find all you need to connect in the fast-ai-commands.txt file and to remove the stack call
Connect to your instance: ssh -i /Users/username/.ssh/aws-key-fast-ai.pem

Reference fast-ai-commands.txt for information about your VPC and EC2 instance, including your “InstanceUrl” and an ssh command to connect.

I suggest picking up the video from here and following along from the point where you connect to your new instance. It guides you through checking the video card with the nvidia-smi command and running jupyter.

Starting and stopping your instance

The fast-ai-commands.txt file outlines the commands to start and stop your instance after the setup has completed successfully, e.g.:

aws ec2 start-instances --instance-ids i-0XXXX
aws ec2 stop-instances --instance-ids i-0XXXX

It’s important to stop instances when you are finished using them so that you don’t get charged hourly fees for their continued operation. P2 instances cost about $0.90/hr at the time of writing.

How to write custom wp-cli commands for next-level WordPress automation

wp-cli is a powerful tool for WordPress admins and developers to control their sites from the command-line and via scripts.

This is an introductory guide to writing custom wp-cli commands using a plugin, to enable you to expand its features and tailor its capabilities to your projects.

Custom commands are invoked like any other:

wp custom_command example_custom_subcommand

Why wp-cli custom functions are useful

Custom commands can help automate the management of more complex sites, enhance one’s development workflow, and enable greater control over other themes and plugins.

Practical real-world example

On a past project, I implemented wp-cli commands that enabled me to quickly load hundreds of pages of multilingual content in English, French, and Chinese, plus the translation associations between them all, to local and remote WP installs. The internationalization plugin in play, the popular and notorious pain-in-the-ass WPML, had no wp-cli support (and still doesn’t!). It otherwise would’ve required clicking bajillions of checkboxes every time copy/translation decks or certain features were revised.

Bare-bones plugin implementation

The following assumes that wp-cli is installed (and accessible via wp), and that the example plugin has been correctly installed and activated on the target WordPress install.

This code creates a basic plugin class called ExamplePluginWPCLI that is only loaded when the WP_CLI constant is defined:

if ( defined( 'WP_CLI' ) && WP_CLI ) {

    class ExamplePluginWPCLI {

        public function __construct() {
            // example constructor called when plugin loads
        }

        public function exposed_function() {
            // give output
            WP_CLI::success( 'hello from exposed_function() !' );
        }

        public function exposed_function_with_args( $args, $assoc_args ) {
            // process arguments

            // do cool stuff

            // give output
            WP_CLI::success( 'hello from exposed_function_with_args() !' );
        }
    }

    WP_CLI::add_command( 'firx', 'ExamplePluginWPCLI' );
}


Adding a custom command to wp-cli

Consider the line WP_CLI::add_command( 'firx', 'ExamplePluginWPCLI' ); from the above example.

The command’s name, firx, is given as the first argument. You can choose any name for a custom command that isn’t already reserved by an existing command, provided it doesn’t contain any special characters. It is wise to pick a unique name that minimizes the risk of conflicts with any other plugin or theme that might also add commands to wp-cli.

Defining new wp-cli commands in a class like ExamplePluginWPCLI confers a special advantage over defining them using only standalone php functions or closures: all public methods in classes passed to WP_CLI::add_command() are automatically registered with wp-cli as sub-commands.

Executing a custom command

The class’ public function exposed_function() can be called via wp-cli as follows:

wp firx exposed_function

The class’ public function exposed_function_with_args() can be called via wp-cli as follows. This particular function accepts command line arguments that will get passed into it via its $args and $assoc_args variables as appropriate:

wp firx exposed_function_with_args --make-tacos=beef --supersize

The constructor __construct() is optional and is included as an example. This function is called when an instance of the plugin class is loaded and it can be used to define class variables and perform any necessary setup tasks.

Output: success, failure, warnings, and more

Sending information back to the command line

wp-cli has a number of functions for outputting information back to the command line. The most commonly used are:

// works similar to a 'return' and exits after displaying the message to STDOUT with 'SUCCESS' prefix 
WP_CLI::success( 'Message' )

// works similar to a 'return' and exits after displaying the message to STDERR with 'ERROR' prefix
WP_CLI::error( 'Message' )

// display a line of text to STDERR with no prefix 
WP_CLI::log( 'Message' ) 

// display a line of text when the --debug flag is used with your command
WP_CLI::debug( 'Message' ) 

The success() and error() functions generally serve as a return equivalent for wp-cli functions. If either of these functions is called with a single argument containing a message, the script will exit after the message is displayed.

Formatting output as tables, json, etc

wp-cli has a useful helper function called format_items() that makes it a lot easier and cleaner to output detailed information.

Available options to output json, csv, count, and yaml are brilliant for enabling command output to be easily used by scripts and/or digested by web services.

The function accepts 3 arguments:

WP_CLI\Utils\format_items( $format, $items, $fields )
  • $format – string that accepts any of: ‘table’, ‘json’, ‘csv’, ‘yaml’, ‘ids’, ‘count’
  • $items – Array of items to output (must be consistently structured)
  • $fields – Array or string containing a csv list to designate as the field names (or table headers) of the items

For example, the following code:

$fields = array( 'name', 'rank' );
$items = array(
    array(
        'name' => 'bob',
        'rank' => 'underling',
    ),
    array(
        'name' => 'sally',
        'rank' => 'boss',
    ),
);
WP_CLI\Utils\format_items( 'table', $items, $fields );

Would output something similar to:

# +-------+-----------+
# | name  | rank      |
# +-------+-----------+
# | bob   | underling |
# | sally | boss      |
# +-------+-----------+

Changing the format to ‘json’ or ‘yaml’ is as simple as swapping out the ‘table’ argument.

Input: handling arguments

Positional and associative arguments

wp-cli supports both positional arguments and associative arguments.

Positional arguments are interpreted based on the order they are specified:

wp command arg1 42

Associative arguments may be specified in any order, and they can accept values:

wp command --make-tacos=beef --supersize --fave-dog-name='Trixie the Mutt'

Both positional and associative arguments can be supported by the same command.

A function implemented with two parameters for $args and $assoc_args, such as the one from the first big example in this guide (exposed_function_with_args( $args, $assoc_args )), will be provided all positional arguments via the $args variable and all associative arguments via the $assoc_args variable.

Retrieving values passed to a command

Suppose we wanted to process all of the arguments for the command:

wp firx exposed_function_with_args arg1 42 --make-tacos=veggie --supersize --fave-dog-name='Trixie the Mutt'

The following example expands on the initial example’s implementation of the exposed_function_with_args() function to demonstrate how to access the values of each argument:

public function exposed_function_with_args( $args, $assoc_args ) {

    // process positional arguments - option 1
    $first_value = $args[0];  // value: "arg1"
    $second_value = $args[1]; // value: 42

    // OR - process positional arguments - option 2
    list( $first_value, $second_value ) = $args;

    // process associative arguments - option 1 
    $tacos = $assoc_args['make-tacos']; // value: "veggie"
    $supersize = $assoc_args['supersize'];  // value: true
    $dog_name = $assoc_args['fave-dog-name'];    // value: "Trixie the Mutt"

    // OR - process associative arguments - option 2 - preferred !! 
    $tacos = WP_CLI\Utils\get_flag_value($assoc_args, 'make-tacos', 'chicken' );
    $supersize = WP_CLI\Utils\get_flag_value($assoc_args, 'supersize', false );
    $dog_name = WP_CLI\Utils\get_flag_value($assoc_args, 'fave-dog-name' );

    // do cool stuff

    // provide output
    WP_CLI::success( 'successfully called exposed_function_with_args() !' );
}


Using get_flag_value() to help with associative arguments

wp-cli comes with a handy helper function for handling associative arguments that serves as the preferred way to access them:

WP_CLI\Utils\get_flag_value($assoc_args, $flag, $default = null)

This function is passed the $assoc_args variable, the $flag (argument name) you wish to access, and optionally a default value to fall-back on if you want it to be something other than null.

What makes get_flag_value() the preferred method for accessing values is that it implements a few tricks for you, including support for the no prefix on arguments to negate them. A professional implementation of the above --supersize option should also check for and handle the case where a --no-supersize version is passed. Using get_flag_value() to access the argument (e.g. with “supersize” as the middle $flag argument) handles this case automatically, and you’d be assured that your function receives the correct true or false value to work with.
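
As an illustrative sketch of the negation behaviour (continuing the hypothetical firx example):

```php
// invoked as: wp firx exposed_function_with_args --no-supersize
// wp-cli parses --no-supersize into $assoc_args as [ 'supersize' => false ]
$supersize = WP_CLI\Utils\get_flag_value( $assoc_args, 'supersize', false );
// $supersize resolves to false; had --supersize been passed, it would be true
```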

Working with arguments

This is an introductory guide and the examples are for illustrative purposes only. For a robust implementation, keep in mind that you will likely need to add additional logic to any function that accepts arguments to check if required items are specified or not, if expected/supported values were passed in or not, etc.

These cases can be handled by your php code and can leverage wp-cli’s output functions like warning() or error() to provide user feedback. Another option that can cover many validation cases is to leverage wp-cli’s PHPDoc features (more on that below).

Registering arguments with wp-cli

An easy way to register arguments/options, whether they’re mandatory or not, and specify any default values is to use PHPDoc comment blocks. These play an active role in wp-cli which intelligently interprets them. Refer to the section on PHPDoc near the end of this guide.

wp-cli custom functions will still work in the absence of PHPDoc comments but they won’t be as user-friendly and any arguments won’t be tightly validated.

Errors: error handling with wp-cli

Use the WP_CLI::error() function to throw an error within a class or function that implements wp-cli commands:

// throw an error that will exit the script (default behaviour)
WP_CLI::error( 'Something is afoot!' );

// throw an error that won't exit the script (consider using a warning instead)
WP_CLI::error( 'The special file is missing.', false );

Error output is written to STDERR and the script exits with status code 1, which is important to consider when writing scripts that use wp-cli and respond to error conditions.

Use WP_CLI::warning() to write a warning message to STDERR that won’t halt execution:

WP_CLI::warning( $message )

Use WP_CLI::halt() to halt execution with a specific integer return code (this one is mostly for those writing scripts):

WP_CLI::halt ( $return_code )

PHPDoc comments and controls

PHPDoc style comment blocks are more than comments when it comes to wp-cli: it interprets them to provide help text to cli users, document the command’s available options/arguments and their default values, and serve as the basis for a level of validation that gets performed before a command’s underlying function is called and any argument data is passed to it. Complete and descriptive comments will result in the enforcement of mandatory vs. optional parameters by wp-cli.

Implementing correct PHPDoc comments simplifies and expedites the implementation of custom commands because the developer defers certain validation checks to wp-cli (e.g. to ensure a required argument was specified) rather than implementing everything on their own.

PHPDoc comments are used as follows:

  • The first comment line corresponds to the “shortDesc” shown on the cli
  • The “meat” of a comment body corresponds to the “longDesc”, which may be shown when cli users mess up a command and is shown when they specify the --help flag with a given command
  • Options (aka arguments or parameters in this context) are defined to specify whether each parameter is mandatory or optional, and whether a single or multiple values are accepted

The “shortDesc” and “longDesc” may be displayed to cli users as they interact with the command. To show a full description, cli users can execute:

wp [command_name] --help

Basic PHPDoc comment structure

PHPDoc comments are placed immediately preceding both class and method (function) declarations, and have a special syntax that starts with /**, has an aligned * prefixing each subsequent line, and ends with */ on its own line:

/**
 * Implements example command that does xyz
 */
class ClassOrMethodName {
    // ...
}

PHPDoc comments with parameters

The wp-cli Command Cookbook provides the following example of a PHPDoc comment:

/**
 * Prints a greeting.
 *
 * ## OPTIONS
 *
 * <name>
 * : The name of the person to greet.
 *
 * [--type=<type>]
 * : Whether or not to greet the person with success or error.
 * ---
 * default: success
 * options:
 *   - success
 *   - error
 * ---
 *
 * ## EXAMPLES
 *
 *     wp example hello Newman
 *
 * @when after_wp_load
 */
function hello( $args, $assoc_args ) {
    // ...
}

The ## OPTIONS and ## EXAMPLES headers within the “meat” of the comment (corresponding to the “longDesc”) are optional but are generally recommended by the wp-cli team.

The arguments/parameters are specified under the ## OPTIONS header:

  • <name> specifies a required positional argument
    • writing it as <name>... means the command accepts 1 or more positional arguments
  • [--type=<type>] specifies an optional associative argument that accepts a value
    • [some-option] without the = sign specifies an optional associative argument that serves as a boolean true/false flag
  • the example’s default: and options: provided for the [--type=<type>] argument can be specified under any argument to communicate that argument’s default value and the options available to cli users

In case word wrapping is a concern for you when help text is presented:

  • Hard-wrap (add a newline) option descriptions at 75 chars after the colon and a space
  • Hard-wrap everything else at 90 chars

More information about PHPDoc syntax in the wp-cli context is available in the docs.

Help with running wp-cli commands

There are a couple helpful reminders to keep in mind when using wp-cli that could prove useful to someone following this guide.

Specifying a correct wordpress path

wp-cli must always be run against a WordPress installation.

  • You can cd to the path where WordPress has been installed, so that the shell prompt’s working directory is at the base where all the WordPress files are (e.g. on many systems, this would be something similar to: cd /var/www/)
  • Alternatively you can provide the universal --path argument with any wp-cli command (e.g. --path=/var/www/) to run a command from any folder

Running wp-cli as a non-root user

It is dangerous to run wp-cli as root, and it will happily inform you of this fact should you try. That’s because php scripts belonging to your WordPress install and to wp-cli itself would be executed with full root permissions. They could be malicious (especially risky if your site allows uploads!) or poorly implemented with bugs, and that unreasonably puts your whole system at risk.

A popular option is to use sudo with its -u option to run a command as another user. Assuming that you have sudo installed and correctly configured, and assuming that the user ‘www-data’ is the “web server user” on your system and that it has read/write permissions to your WordPress installation folder (these are common defaults found on Ubuntu and Debian systems), you can prefix your commands with: sudo -u www-data --.

The following executes one of wp-cli’s built-in commands, wp post list, as the www-data user:

sudo -u www-data -- wp post list

Further reading

Check out the wp-cli docs and the commands cookbook.