Using dnsmasq on MacOS to set up a local domain for development

This guide covers using dnsmasq as a local DNS server on MacOS to resolve all URLs under the .test domain to localhost (127.0.0.1).

This can be very useful for developers wishing to set up an efficient local development and test environment on their machine. For example, if http://myproject.com.test resolved to localhost, a local Apache, nginx, or other server could respond to the request.

While any DNS server could be employed for this job, dnsmasq is a perfect choice for this type of lightweight local routing.

Regarding the .test domain, note that this is a recommended choice per the IETF RFCs (e.g. RFC-2606, RFC-6761). While it’s possible to configure dnsmasq for any arbitrary domain name, only .test, .example, .invalid, and .localhost are reserved for non-Internet use. A popular choice used to be .dev until Google purchased that top-level domain (TLD) and changed how Chrome handles .dev URLs.

Install and configure dnsmasq

To get started, use brew to install dnsmasq for your user:

brew install dnsmasq

At the time of writing, the brew package installer creates a sample conf file with all entries commented out. It can be found at etc/dnsmasq.conf under brew’s install prefix (i.e. $(brew --prefix)/etc/dnsmasq.conf).

Execute the following commands in sequence to configure dnsmasq to resolve *.test requests to 127.0.0.1:

echo "" >> $(brew --prefix)/etc/dnsmasq.conf
echo "# Local development server (custom setting)" >> $(brew --prefix)/etc/dnsmasq.conf
echo 'address=/.test/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
echo 'port=53' >> $(brew --prefix)/etc/dnsmasq.conf

The above commands simply append the required lines to the dnsmasq.conf configuration file.

Next, use brew’s services feature to start dnsmasq as a service. Use sudo to ensure that it is started when your Mac boots; otherwise it will only start after you log in:

sudo brew services start dnsmasq
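
You can confirm that the service is registered and running with:

sudo brew services list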

Add a resolver

Add a resolver for the .test TLD:

sudo mkdir -pv /etc/resolver
sudo bash -c 'echo "nameserver 127.0.0.1" > /etc/resolver/test'

This tells your system to use localhost (and therefore dnsmasq) as the DNS resolver for the .test domain.
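
You can also query dnsmasq directly (bypassing the system resolver) to confirm that it answers for .test names. Assuming the dig utility is available on your system:

dig example.test @127.0.0.1 +short

If dnsmasq is configured correctly, the output should be 127.0.0.1.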

Test your configuration

Flush your DNS cache:

sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder

Check your configuration by running:

scutil --dns

Inspect the output. One of the resolver entries should specify that the system will use the nameserver 127.0.0.1 for the domain test. This means that dnsmasq will now be consulted as the nameserver for any .test URLs.
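
The relevant resolver entry should look something like the following (the resolver number and flags will vary by system):

resolver #8
  domain   : test
  nameserver[0] : 127.0.0.1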

Finally, test the full stack by pinging arbitrary *.test hostnames such as example.test or myproject.test and confirming that they resolve to 127.0.0.1. For example:

ping -c2 example.test

This produces output like:

PING example.test (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.022 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms

or, in cases where your Mac’s firewall is configured not to respond to ping:

PING example.test (127.0.0.1): 56 data bytes
Request timeout for icmp_seq 0
...

The important part to confirm is that example.test resolved to 127.0.0.1.

Using the WordPress REST API with JWT Authentication

The WordPress core now supports a new REST API as of version 4.8.x. Among a sea of new possibilities, one can now build a front-end for a website or app with a framework like React or Vue.js and use WordPress and its familiar admin dashboard to manage the back-end.

This guide covers adding JSON Web Tokens (JWT) authentication support with the JWT Authentication for WP REST API plugin, and sending requests to the API using Postman.

To access the REST interface of a WordPress-powered site append /wp-json/wp/v2/ to the URL. For example, this blog’s REST API is at: https://firxworx.com/wp-json/wp/v2/

REST API Authentication

Default cookie authentication

WordPress’ REST API only supports cookie authentication out-of-the-box. This is the same method that WordPress uses by default to authenticate users who log in via the login form. The idea is that theme and plugin developers can authenticate themselves, write JavaScript against the JS API, and be on their merry way.

Enabling remote applications with Basic Auth, OAuth, and/or JWT

To support remote applications, we need to add a new REST API authentication method using a plugin. Currently supported options are Basic Auth, OAuth, and JWT:

  • Basic Auth with a username and password is considered insecure and should only be used in development scenarios
  • OAuth is great but it can be a pain to authenticate
  • JWT is awesome and works great with front-end frameworks

JSON Web Tokens are an open industry standard: IETF RFC 7519

Adding JWT Authentication to the REST API

If your WordPress is accessible via the Internet, it is important to enable SSL/https before proceeding.

Start by installing the JWT Authentication for WP REST API plugin but don’t activate it just yet.

Next, ensure your web server supports the HTTP Authorization Header. If you are using a shared host, this is often disabled by default. To enable it, add the following to your WordPress’ .htaccess file:

RewriteEngine on
RewriteCond %{HTTP:Authorization} ^(.*)
RewriteRule ^(.*) - [E=HTTP_AUTHORIZATION:%1]

Edit your wp-config.php file and add the following lines before the comment that says “That’s all, stop editing!”:

define('JWT_AUTH_SECRET_KEY', 'really-secret-key-here');
define('JWT_AUTH_CORS_ENABLE', true);

Change really-secret-key-here in the above to a random string. If you need help obtaining some randomness, copy-and-paste some output from WordPress.org’s secret key service: https://api.wordpress.org/secret-key/1.1/salt/

JWT uses the secret JWT_AUTH_SECRET_KEY to sign JSON Web Tokens. If it is compromised, then your site’s security is compromised! Keep it safe!

The JWT_AUTH_CORS_ENABLE line activates CORS (Cross-Origin Resource Sharing) support to enhance security. Refer to the plugin docs if you need to modify the default available headers: ‘Access-Control-Allow-Headers, Content-Type, Authorization’

Finally, activate the JWT Authentication for WP REST API plugin!

New endpoints for JWT authentication

The plugin will add the /jwt-auth/v1 namespace with two new endpoints:

  1. /wp-json/jwt-auth/v1/token (POST)
  2. /wp-json/jwt-auth/v1/token/validate (POST)

The first endpoint accepts POST requests that contain a username and password in the body.

  • If the credentials check out, the plugin will issue a 200 response containing a JSON object with: token, user_display_name, user_email and user_nicename.
  • If authentication fails, the response includes an object with the properties: code, data, and message.

The token returned in the API’s response can then be included in the HTTP Authorization header of any subsequent requests to WordPress’ REST API. Front-end applications will need to store it somewhere, such as in a cookie or localStorage.

The second endpoint simply validates tokens. If sent a POST request with a valid token in the HTTP Auth header, it will return the following response:

{
  "code": "jwt_auth_valid_token",
  "data": {
    "status": 200
  }
}
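
For example, a quick way to exercise the validate endpoint from the command line with curl (substitute a real token obtained from the token endpoint):

curl -X POST https://your-site.com/wp-json/jwt-auth/v1/token/validate \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"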

You can confirm that JWT is now available by accessing https://yoursite.com/wp-json/ and inspecting the response body to see if the /jwt-auth/v1 and /jwt-auth/v1/token endpoints are available.

Sending requests to the REST API

Unauthenticated request

Start by making a request to the REST API that doesn’t require authentication. This “smoke check” helps confirm that the API is working and that your site’s permalinks are configured correctly.

Open up Postman to send a new GET request to https://your-site.com/wp-json/wp/v2/posts.

You should expect a successful response with a body that contains a JSON representation of your blog’s posts.

If you need to troubleshoot, check your permalink settings in the admin at: Settings -> Permalinks -> Common Settings. Make sure that your server supports URL rewriting (e.g. mod_rewrite for Apache) and that the API’s URLs resolve correctly.

Authenticating with JWT

In Postman, prepare a new POST request to the jwt-auth/v1/token endpoint (e.g. https://your-site.com/wp-json/jwt-auth/v1/token). Navigate to the Body tab and:

  • ensure the type of request is form-data
  • specify the username and password fields with valid user credentials
  • fire the request

If authentication is successful, you’ll get a reply like the following (note: I truncated the token in my example so expect a much longer string).

{
    "token": "eyJ0eXAiOiJKV1QiLC_A_RIDICULOUSLY_LONG_STRING_Ky4Y",
    "user_email": "example@firxworx.com",
    "user_nicename": "example",
    "user_display_name": "Mr Example"
}

Save the token so that you can include it in subsequent requests to the API.

Using the JWT token in subsequent requests

Now send an API request where authentication is required: creating a post.

In Postman, prepare a new POST request to the URL https://your-site.com/wp-json/wp/v2/posts and set the following:

  • in the Headers tab, add the following key value pairs:
    • Content-Type + application/json
    • accept + application/json
    • Authorization + Bearer YOUR_TOKEN_HERE
  • in the Body tab, add the following key value pairs:
    • title + ‘your post title’
    • content + ‘your post body’
    • status + draft (or publish if you prefer)

The crucial entry is the Authorization header. It is imperative that the token be prefixed with the string Bearer followed by a space. Do not forget the space character!

A successful response body will include a JSON object with the new post id, date, and a bunch of other data about your new post.

If you get an error response, double check all of the key value pairs in the Headers and Body. Ensure that you copied your token exactly and in its entirety. You can use the JWT endpoints to validate your token or you could try re-authenticating to obtain a fresh token.

The following example shows how to create a post via the REST API using JS/ES with the fetch() API:

var token = 'YOUR_TOKEN_HERE'; // paste the token obtained from the jwt-auth/v1/token endpoint
fetch('https://your-site.com/wp-json/wp/v2/posts', {
    method: "POST",
    headers: {
        'Content-Type': 'application/json',
        'Accept': 'application/json',
        'Authorization': 'Bearer ' + token
    },
    body: JSON.stringify({
        title: 'Lorem ipsum',
        content: 'Lorem ipsum dolor sit amet.',
        status: 'draft'
    })
}).then(function(response){
    return response.json();
}).then(function(post){
    console.log(post);
});

Troubleshooting

If you find you can’t make authenticated requests, such as when you try to create a post, you may have an issue with the request’s Authorization header not being passed to PHP.

It turns out that many shared web hosts disable the Authorization header by default on Apache (or Apache-like) web servers. This can also be related to using fcgi_module instead of php5_module to support PHP.

A solution is to add the following directive to the top of your WordPress’ .htaccess file. If you are not on a shared hosting environment, other possible locations for this directive are a VirtualHost block or the main Apache server conf:

SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1

Note that some commenters have reported that the lowercase authorization worked for them vs. Authorization, so if you experience an issue, try the alternate capitalization.

This directive tells Apache to pass the Authorization header to PHP by setting an environment variable, such that it is available in PHP as $_SERVER['HTTP_AUTHORIZATION'].

If you update a conf file rather than your .htaccess file, don’t forget to restart Apache for the setting to take effect!

If that approach fails, another solution to try is adding the following to the top of the .htaccess file in your WordPress’ root folder:

RewriteEngine on
RewriteCond %{HTTP:Authorization} ^(.*)
RewriteRule ^(.*) - [E=HTTP_AUTHORIZATION:%1]

This approach relies on mod_rewrite and may not be as appropriate for some cases because the $_SERVER['REDIRECT_HTTP_AUTHORIZATION'] variable is set for PHP instead of the standard $_SERVER['HTTP_AUTHORIZATION'].

Creating Custom Post Types in WordPress

This guide covers how to create Custom Post Types (CPTs) in WordPress. CPTs are important to WordPress developers because they enable the creation of more complex sites and web apps than is possible with a default WordPress install.

Custom Post Types are frequently defined with additional data fields called meta fields that can be defined and made editable to admins via meta boxes.

Example applications:

  • Jokes — each post contains a joke; jokes are listed and displayed differently than regular blog posts
  • Job Opportunities — include Salary Range and Location meta
  • Car Listings — registered users post for-sale listings and specify Make, Model, and Year meta via dynamic dropdown menus
  • Beer Reviews — featuring a range of meta fields that include Brewery, Style, and Tasting Score

Custom Post Types can be created (registered) or modified by calling the register_post_type() function within the init action.

Custom Taxonomies and the connection to Custom Post Types

Custom Post Types are closely related to the concept of Custom Taxonomies. Taxonomies are a way to group WordPress objects such as Posts by a certain classification criteria. Developers can define Custom Taxonomies to add to WordPress’ default taxonomies: Categories, Tags, Link Categories, and Post Formats.

Although this guide focuses on CPTs, it’s important to note that projects are often implemented using a thoughtful combination of Custom Post Types and Custom Taxonomies.

A classic example of a complementary post type + taxonomy is: Book as a Custom Post Type and Publisher as a Custom Taxonomy.

If your Custom Post Type needs to be related to any Custom Taxonomies, they must be identified via the optional taxonomies argument of the register_post_type() function. This argument only informs WordPress of the relation and does not register any taxonomies as a side-effect. Custom Taxonomies must be registered on their own via WordPress’ register_taxonomy() function.
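
As a rough sketch, registering a hypothetical publisher taxonomy for a book post type might look like the following (names and arguments are illustrative only):

function create_publisher_taxonomy() {
    register_taxonomy( 'publisher', 'book',
        [
            'labels' => [
                'name' => __( 'Publishers' ),
                'singular_name' => __( 'Publisher' )
            ],
            'public' => true,
            'hierarchical' => false,
        ]
    );
}
add_action( 'init', 'create_publisher_taxonomy' );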

Registering new Custom Post Types

Registering in a Plugin vs. Theme

Custom Post Types can be registered by plugins or by themes via their functions.php file. It’s generally recommended to go the plugin route to keep a project de-coupled from any particular theme.

In the many cases where CPT’s do not depend on activation or deactivation hooks, they can be defined by a Must-Use Plugin (mu-plugin). This special type of plugin is useful to safeguard against admins (e.g. client stakeholders with admin access) accidentally de-activating any Custom Post Types that are important to their website/app.

If a plugin or theme that registers a CPT becomes deactivated, WordPress’ default behaviour is to preserve the post data in its database, though it will become inaccessible and could break any themes or plugins that assume the CPT exists. The CPT will be restored once whichever plugin or theme registered it is re-activated.

Basic Definition

Custom Post Types may be registered by calling WordPress’ register_post_type() function during the init action with the following arguments: a required one-word post type key, and an optional array of key => value pairs that specify all optional arguments.

The following example implements a function create_my_new_post_type() that calls register_post_type() to register a CPT called candy. The last line hooks the function to the init action using WordPress’ add_action() function. It could be included as part of a plugin or in a theme’s functions.php.

Some of the most common optional args are specified: user-facing labels for singular and plural, if the CPT is to be public (appear in search, nav, etc) or not, and whether it should have an archive (list of posts) or not.

function create_my_new_post_type() {
    register_post_type( 'candy',
        [
            'labels' => [
                'name' => __( 'Candies' ),
                'singular_name' => __( 'Candy' )
            ],
            'public' => true,
            'has_archive' => true,
        ]
    );
}
add_action( 'init', 'create_my_new_post_type' );

Tip: Namespacing

It is a good practice to namespace any CPT keys by prefixing their names with a few characters relevant to you or your project followed by an underscore, such as xx_candy. This helps avoid naming conflicts with other plugins or themes, and is particularly important if you are planning to distribute your project.

Tip: Use singular form for post type keys

The WordPress codex and Handbooks always use a singular form for post type keys by convention, and WordPress’ default types such as ‘post’ and ‘page’ are singular as well.

Detailed Definition

There are a ton of optional arguments that can be specified when registering a Custom Post Type. The WordPress Developer Documentation is the best source to review all of them: register_post_type().

Some of the more notable options include:

  • labels — array of key => value pairs that correspond to different labels. There are a ton of possible labels but the most commonly specified are ‘name’ (plural) and ‘singular_name’
  • public — boolean indicating if the post type is to be public (shown in search, etc) or not (default: false)
  • has_archive — boolean indicating if an archive (list of posts) view should exist for this post type or not (default: false)
  • supports — array of WordPress core feature(s) to be supported by the post type. Options include ‘title’, ‘editor’, ‘comments’, ‘revisions’, ‘trackbacks’, ‘author’, ‘excerpt’, ‘page-attributes’, ‘thumbnail’, ‘custom-fields’, and ‘post-formats’. The ‘revisions’ option indicates whether the post type will store revisions, and ‘comments’ indicates whether the comments count will show on the edit screen. The default value is an array containing ‘title’ and ‘editor’.
  • register_meta_box_cb — string name of a callback function that will handle creating meta boxes for the CPT so admins have an interface to input meta data
  • taxonomies — an array of string taxonomy identifiers to register with the post type
  • hierarchical — a boolean value that specifies if the CPT behaves more like pages (which can have parent/child relationships) or like posts (which don’t)

The numerous other options enable you to manage rewrite rules (e.g. specify different URL slugs), configure options related to the REST API, and set capabilities as part of managing user permissions.

Adding Meta Fields to a Custom Post Type

Enabling custom-fields

A straightforward way to enable admins to define meta fields as key => value pairs when editing a post is to include the value ‘custom-fields’ in the ‘supports’ array passed to register_post_type().
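
For example, a sketch of the args from the earlier candy example, extended to opt in to the custom fields UI, might look like:

register_post_type( 'candy',
    [
        'labels' => [
            'name' => __( 'Candies' ),
            'singular_name' => __( 'Candy' )
        ],
        'public' => true,
        'has_archive' => true,
        'supports' => [ 'title', 'editor', 'custom-fields' ],
    ]
);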

Adding Meta Boxes to a Custom Post Type

The above ‘custom-fields’ approach works for basic use cases; however, most projects require advanced inputs like dropdown menus, date pickers, repeating fields, etc., along with a certain level of data validation.

The solution is to define meta boxes that specify inputs for each of a CPT’s meta fields and handle the validation and save process. Meta boxes must be implemented in a function whose name is passed to register_post_type() via its args as the value of the ‘register_meta_box_cb’ option.

Creating meta boxes can be tricky for the uninitiated… Stay tuned for an upcoming post dedicated solely to them!

In the meantime, I would suggest exploring solutions that simplify the process of creating meta boxes. Two excellent options are the open-source CMB2 (Custom Meta-Box 2) and Advanced Custom Fields (ACF), which offers both free and commercial options. I think the commercial ACF PRO version is well worth the $100 AUD fee to license it for unlimited sites including a lifetime of updates and upgrades.

Displaying a Custom Post Type

Posts belonging to a CPT can be displayed using single and archive templates, and can be queried using the WP_Query object.

Single template: single post view

Single templates present a single post and its content. WordPress looks for the template file single-post_type_name.php for a CPT-specific template and if it doesn’t find it, it defaults to the standard single.php template.

Archive template: list of posts view

Archive templates present lists of posts. A Custom Post Type will have an Archive if it was registered with the optional has_archive argument set to a value of true (default: false).

To create an archive template for your CPT, create a template file that follows the convention: archive-post_type_name.php. If WordPress doesn’t find this file, it defaults to the standard archive.php template.

Using the WP_Query object

WP_Query can be used in widget definitions, in templates, etc. to present posts belonging to a CPT. The following example queries for published posts of the type ‘candy’ and then loops over the results, presenting each one’s title and content as items in a list.

<?php

$args = [
  'post_type'   => 'candy',
  'post_status' => 'publish',
];

$candies = new WP_Query( $args );
if( $candies->have_posts() ) :
?>
  <ul>
    <?php
      while( $candies->have_posts() ) :
        $candies->the_post();
        ?>
          <li><?php printf( '%1$s - %2$s', get_the_title(), get_the_content() );  ?></li>
        <?php
      endwhile;
      wp_reset_postdata();
    ?>
  </ul>
<?php
else :
  esc_html_e( 'No candies... Go get some candy!', 'text-domain' );
endif;
?>

The wp_reset_postdata() call is important to reset WordPress back to the original loop, so other functions that depend on it will work properly. Reference: https://developer.wordpress.org/reference/functions/wp_reset_postdata/

Include the client IP in Apache logs for servers behind a load balancer

When servers are behind a load balancer, Apache’s default configuration produces logs that show the load balancer’s IP address instead of the IP of the remote client that initiated the request. Furthermore, if multiple servers’ logs are consolidated into one, it can be difficult to determine which server created a given log entry.

This post covers how to improve on Apache’s default logging situation such that:

  • every web server behind the load balancer includes a unique identifier for itself in its log entries, and
  • access log entries include the client’s remote IP address as found in the X-Forwarded-For header set by the load balancer.

This setup is more helpful for troubleshooting, configuring monitoring and alerts, etc., and it helps to maximize the value of 3rd-party log aggregation and analysis services like Papertrail or Loggly.

The example commands in this post are applicable to Ubuntu/Debian however they are easily adapted to other environments.

Enable required modules

Start by ensuring that the required Apache modules, env and remoteip, are enabled:

a2enmod env
a2enmod remoteip 
service apache2 restart

Identify each web server

Add a SetEnv directive in the site/app’s apache conf to instruct the env module to set a new environment variable with a value that uniquely identifies each server behind the load balancer.

The example below is a snippet from an Apache VirtualHost’s conf file. It defines a variable called APP_LB_WORKER with the value ‘unique_identifier’. If you were using a devops automation tool such as Ansible, you could use the template module and use a handy variable such as {{ ansible_host }} in place of the example’s hard-coded ‘unique_identifier’ value.

<VirtualHost *:443>
...
    SetEnv APP_LB_WORKER unique_identifier
...
</VirtualHost>

Configure the Apache RemoteIP module

Create a file named /etc/apache2/conf-available/remoteip.conf and use it to set the RemoteIPHeader to ‘X-Forwarded-For’, add any appropriate RemoteIPInternalProxy or other RemoteIP directives, and define a new LogFormat that includes both the environment variable that contains the server’s identifier and the client’s remote IP as sourced from the X-Forwarded-For header.

The RemoteIPInternalProxy directive tells the RemoteIP module which IP address(es) or IP address blocks it can trust to provide a valid RemoteIPHeader that contains the client’s IP.

The following example’s RemoteIPInternalProxy value is representative of an environment where the load balancer’s internal network IP address belongs to a public subnet with the CIDR block 10.0.1.0/24. Choose an appropriate value (or values) for your environment.

A full list of configuration directives for RemoteIP can be found in the Apache docs: https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html.

The following example names the new LogFormat as “loadbalance_combined”. You can choose any name you like that isn’t already in use.

/etc/apache2/conf-available/remoteip.conf:

RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 10.0.1.0/24

LogFormat "%a %v %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{APP_LB_WORKER}e\"" loadbalance_combined

Finally, enable the conf:

a2enconf remoteip

The Apache LogFormat is extensively customizable. Learn more about the placeholders and options here: https://httpd.apache.org/docs/2.4/logs.html. The remote log aggregation service Loggly also has an excellent overview: https://www.loggly.com/ultimate-guide/apache-logging-basics/.

Tell Apache to use the new LogFormat

Next, tell Apache to use the custom LogFormat that we named loadbalance_combined by editing your site/app’s conf file. The following example builds upon the previous example of a VirtualHost conf:

<VirtualHost *:443>
...
    SetEnv APP_LB_WORKER unique_identifier
...
    CustomLog /var/log/app_name/access.log loadbalance_combined
...
</VirtualHost>

The following example is a more elaborate case that uses the tee command to send each log entry both to an access.log file and to the /usr/bin/logger command (so it also appears in the syslog), but only if an environment variable named “dontlog” is not set. An example use-case for a variable like “dontlog” is to set it (e.g. via the SetEnvIf directive) for any requests that correspond to a “health check” from the load balancer to the web server. This helps keep logs clean and clutter-free.

CustomLog "|$/usr/bin/tee -a /var/log/app_name/access.log | /usr/bin/logger -t apache2 -p local6.notice" loadbalance_combined env=!dontlog
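
For example, if the load balancer’s health checks request a known path, the “dontlog” variable could be set with a SetEnvIf directive like the following (the path is a placeholder; match whatever your health check actually requests):

SetEnvIf Request_URI "^/health-check$" dontlog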

Restart Apache

Confirm the validity of your confs using apachectl configtest and restart Apache for the new configuration to take effect:

service apache2 restart

Processing payments on WordPress with Stripe and Gravity Forms

This post covers creating a web Form in WordPress using the popular (and excellent) Gravity Forms plugin and configuring it to process payments with Stripe:

  • Gravity Forms is a leading commercial Form Builder plugin for WordPress that many of my clients have had a lot of success with. It is a powerful tool when combined with Payments and it supports integration via its webhooks and API.
  • Stripe is a leading payments processor that is popular due to its ease-of-use and ease-of-integration. Stripe is currently my preferred choice for eCommerce projects.

The following assumes:

  • you are logged into your WordPress (/wp-admin) as an Administrator,
  • your WordPress site has SSL enabled (https://),
  • that Gravity Forms and the Gravity Forms Stripe Add-On plugins have been installed and activated, and
  • you have a valid + verified Stripe account and have inputted all necessary Business Settings.

Get API Keys from Stripe

  • Open a tab and login to your Stripe account.
  • Click on Developers in the left nav. A sub-menu will appear underneath.
  • Click on API Keys in the sub-menu under Developers.
  • Generate a set of Test and Live API Keys. Each comes in a pair consisting of a Secret Key and a Publishable Key. The keys themselves are strings of random-looking characters prefixed with: ‘sk_live’, ‘pk_live’, ‘sk_test’, and ‘pk_test’.

Make an effort to keep your Secret Keys (those prefixed with ‘sk’) confidential and do not send them to anyone over email or other potentially insecure means.

Setup Gravity Forms for Stripe

In the WordPress admin dashboard:

  • Click on Forms in the left nav. A sub-menu will expand underneath.
  • Click on Settings in the sub-menu under Forms.
  • The resulting Settings page has a series of tabs down the left side. Click on Stripe.
  • On the Stripe page complete the form:
    • API: choose “Live” (or “Test” if you wish all Forms to be in Test Mode)
    • Input the Test Publishable Key, Test Secret Key, Live Publishable Key, and Live Secret Key that you obtained from the Stripe Dashboard
    • Under Stripe Webhooks there are instructions for adding a Webhook within your Stripe account that points to a URL in your WordPress site. Click on the “View Instructions” link to reveal the necessary steps, then open a tab, log in to your Stripe Dashboard, and complete them in Stripe. When that is complete:
    • Tick the “Webhooks Enabled” checkbox
    • Input the Test Signing secret and Live Signing secret values that you obtained from Stripe

Finally, click the Update Settings button to save your settings.

Create a Form and Configure Payments

Create the Gravity Form

Create a new Form and add at least a Product Field, Total field, and Credit Card field. In the Product Field, define your products/services and set their prices.

If they are applicable to your situation, you can also use Option and Quantity fields to gather additional information that can influence the price, and you can include a Shipping field as well.

Add a Stripe Feed to the Form

In your Form’s “Edit” screen, navigate to: Settings > Stripe.

Click on the Add New button to add a new Stripe Feed to this Form.

  • Name textbox: input a descriptive name (e.g. “Event Registration 2018”). Note that it’s good to include a unique identifier in the Name (e.g. the year for an annually recurring event) so you can easily identify payments related to this particular Form + Stripe Feed in the future.
  • Transaction Type: choose “Products & Services” from the drop-down menu (“Subscriptions” is the other available option, but it is not covered in this post)

A set of new fields will appear:

  • Payment Amount: choose which Form field you would like Gravity Forms to use to determine the total amount to charge the user. In most cases you’d choose the Form Total field (e.g. “Form Total”) rather than any individual product field.
  • Metadata: optionally choose Form fields that you wish to send to Stripe so they can be included in Stripe Reports. This is an optional step, but it can make your life easier down the line because you will see more information on the payment side, which helps with reconciliation, accounting, and customer support tasks. For each Metadata field you would like to add:
    • Input the field Name (as you’d like it to appear in Stripe) and choose from the drop-down menu which Form Field to use as its value. This is also useful for sending along details about which products or options the user chose. Examples:
    • Name: “Entry ID” Value: “Entry ID” — the unique database ID of the form submission in your WordPress + Gravity
    • Name: “Email” Value: “Email” — sends the customer’s email to Stripe
  • Stripe Receipt: Choose whether or not you would like Stripe to automatically send an email receipt to the customer. The default option is “No” but it is often desirable to choose “Email”.
  • Conditional Logic: Optionally add logic to conditionally process payments only if certain values/conditions are met. Most forms do not need to use this option.

When you are done, click on the Update Settings button to save your Stripe Feed.

You should be good to go! Ensure your Stripe Feed is Live and your Form is Live, and then add it to a Post or Page to see it up on your site!

If you want to test your form, you can enable Test mode and use one of the fake/test credit cards that Stripe has published here: https://stripe.com/docs/testing. You will be able to see any Test transactions in your Stripe Dashboard when you toggle to Test vs. Live mode.

How to buy neopixel (ws2812b) strips for an Arduino or Raspberry Pi project

Neopixels can be purchased from many sources online. The best prices are often found at Asian deal sites like aliexpress.com.

The vast selection and number of variants of LED products in these catalogues can seem daunting, especially for newbies looking for a deal. This post explains what the most common codes and numbers mean to help you identify 5V Neopixel products that are easily compatible with popular microcontrollers like Arduino and single-board computers like Raspberry Pi.

Neopixel buying guide

The following sections cover different decision points that someone buying Neopixels will be faced with:

Pixels, strands, or rings

Neopixels are usually found in a few different form factors:

  • individual pixels: these are usually mounted to a small circuit board barely larger than the pixel itself
  • strips: mounted to flexible strips of plastic tape
  • strings: individual pixels spaced apart with wires, like Christmas lights
  • rings: rigid circuit boards in circular and semi-circular shapes.

Choose what suits your project best. The Neopixel LEDs themselves are the same; this decision purely relates to what material they are mounted to.

Strips are the most common choice. They are flexible and usually come with a peel-off adhesive backing that isn’t usually very strong (hot glue often works better). Strips can be easily cut with scissors – all the way down to individual pixels, so this purchase is flexible to suit the needs of many projects.

Pixel density

Neopixel strips are typically priced by the number of Neopixels spaced along 1 metre of strip. Rings, strings, and individual Neopixels are simply priced based on how many pixels you’re buying.

When it comes to strips, common pixel densities found in online stores include: 30, 60, 90, and 144.

  • 30 pixels/m is a good choice for area lighting and creating an atmosphere
  • 60 pixels/m is a common general purpose choice that is suitable for many display applications as well where multiple strips are placed side-by-side to create a matrix display
  • 144 pixels/m feature nearly back-to-back pixels and are best suited for projects where a continuous band of colour effect is desired (for a continuous look, you will also need a good diffuser material to put over the lights to “wash” between the individual LEDs).

Generally strips are sold by the metre. The longest continuous strip that is generally available is 5m and comes packaged on a reel. Most strips come with a JST connector clip on each end of the individual strips that you purchase.

Driver chip (ws2812b)

Each individual pixel on a strip of neopixels has its own ws2812b integrated driver chip. The chip looks like a little black dot that can be seen somewhere “inside” each pixel (where exactly depends on the manufacturer).

The ws2812b “listens” on the control wire for instructions about what to do with its LED: what colour and how bright to make it.

The magic of neopixels is that each LED is individually addressable because each has its own chip. Every pixel “knows” its position along a string (even if it’s alone in a “string” of 1 pixel), and its colour and brightness can be controlled independently of any other pixel. Pixels can be programmed to appear to fade and transition elegantly between colours and intensities (at least as far as the human eye can perceive) because the control signal – which runs at 800kHz – updates the pixels many times per second.
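
As a brief illustration of per-pixel addressing, a minimal Arduino sketch (assuming the Adafruit NeoPixel library, a strip of 8 pixels, and the data line on pin 6; adjust for your hardware) might look like this:

#include <Adafruit_NeoPixel.h>

#define DATA_PIN   6   // Arduino pin connected to the strip's control wire
#define NUM_PIXELS 8   // number of pixels on the strip

Adafruit_NeoPixel strip(NUM_PIXELS, DATA_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();   // initialize the strip
  strip.show();    // start with all pixels off
}

void loop() {
  // light one pixel at a time, marching down the strip
  for (int i = 0; i < NUM_PIXELS; i++) {
    strip.clear();
    strip.setPixelColor(i, strip.Color(0, 150, 0));  // dim green
    strip.show();
    delay(100);
  }
}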

The correct neopixel strips for most Arduino projects have 3-pins (or “wires” or “pads”): two for power (+ and -) and one for the control signal. Other types of LED strips can have more pins; 4 and 5 are common for other types of strips.

The order of the 3 pins doesn’t matter functionally, and it can change depending on the manufacturer, so it’s important to always read the labels so you understand what each pin is for. The control signal is directional, so strips are marked with an arrow to show the correct direction.

Voltage

Standard Neopixels are 5V. It’s best to stick with the standard for most LED projects outside of large installations or commercial signage.

Both Arduino and Raspberry Pi operate at 5V. It can help reduce a project’s complexity when everything shares the same voltage! Note that while the Raspberry Pi is powered by 5V, its output pins are 3.3V. This is still enough to drive some pixels, or you can incorporate electronic components such as a level converter like the 74AHCT125 to bring output up to 5V.

Many varieties of strips and strings are 12V or more. There are also strips where the control line is 5V but the power lines are 12V or more. Read product details carefully. Again, for most Arduino / Raspberry Pi or similar projects, it’s best to stick with classic 5V Neopixel strips.

Water resistance

Neopixel strips often come in IP30, IP65, IP67, and IP68 varieties. The IP code refers to the degree of weather/water protection and is a global standard for electronics and other equipment.

See Wikipedia: IP Code for more information about IP Codes.

IP30 and below strips realistically can’t handle water, so you should only use them indoors. IP65, IP67, and IP68 strips are water resistant. IP65 strips are usually covered in a plastic coating that is applied like hot glue to protect against splashes of water. IP67 and IP68 strips are inserted into a hollow rectangular plastic tube with a silicone cap covering each end. The tubes in IP68 strips are filled with a silicone or plastic sealant, making them the best bet for scenarios where the strip could be submerged in water.

For Neopixels in plastic tubes, extra silicone caps can be purchased on their own so you can cut the strips and make the ends watertight again. Hot glue is useful to squirt into the caps to ensure a watertight seal, especially around any wires.

LEDs

Many product listings will state that the LED is a 5050 or SMD5050. This is the type of RGB LED package found in neopixel strips as well as many other types of LED strips.

The RGB LED is actually made up of 3x LEDs: one each for Red (R), Green (G), and Blue (B) light. Different colours are produced by mixing different intensities of light. Some variants of these LEDs also include a dedicated white LED in addition to the 3x colours.

SMD5050s are “dumb” on their own. It is the combination of an SMD5050 LED with a WS2812B integrated driver chip that makes an awesome neopixel. Since many other types of LED strips use the SMD5050, if you see this somewhere in a product listing, double check to make sure that you’re getting a WS2812B driver with every pixel and that the voltage is 5V.

PCB Colour

PCB is an acronym for printed circuit board. In terms of Neopixel strips this refers to the flexible tape or strip itself that the pixels are mounted to. For rings, this refers to the hard plastic circuit board.

It is common for neopixel strips to be sold with a choice of white or black PCB. Other colours are manufactured but they are pretty rare. The PCB colour doesn’t make a difference when it comes to how a project works.

Other things to buy

You might want to consider a few other things to go along with an LED purchase:

  • Soldering equipment and supplies
  • Heat shrink tubing
  • JST Connectors (clips) with 3-pins/wires
  • AWG22 or AWG20 wire (note: these wire gauges can handle the power needs of smaller projects)
  • Power supplies (make sure 5V output!)

When purchasing a power supply, take note of the amperage it’s designed for and get one (or more) that supplies a greater amount of current than your project needs.
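
As a rough worked example (using the common rule of thumb that each pixel can draw up to about 60mA at full-white brightness):

60 pixels x 0.06A = 3.6A  ->  choose a 5V supply rated for roughly 4-5A or more

Real-world draw is usually lower than this worst case, but sizing for it leaves a safe margin.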

Adafruit has an excellent guide on Powering Neopixels.

If you need to power a lot of Neopixels, please be safe and do your homework. The current draw required can add up to potentially dangerous levels very quickly. Erik Katerborg has written an excellent guide, available as a PDF.

Creating a site-specific WordPress plugin

Site-specific plugins (or “site plugins”) are a common part of professional WordPress projects. They are useful for adding functionality to a site without strictly coupling that functionality to a theme’s functions.php file.

Site plugins are ideal for implementing custom post types, taxonomies, meta boxes, shortcodes, and other functionality that should be preserved even if an admin changes the theme. They are also great for specifying tweaks and security enhancements such as disabling XML-RPC, a rarely-used feature that provides a vector for attack.

Plugin basics

A WordPress plugin can be as simple as a single PHP file that begins with a Plugin Header: a specially-formatted comment that WordPress uses to recognize it as a plugin.

Only a “Plugin Name” is required in the Plugin Header; however, the example below includes a number of recommended fields:

<?php
/*
Plugin Name: Site plugin for firxworx.example.com
Plugin URI: https://developer-website.example.com/unique-plugin-url
Description: Implements custom post types (or whatever) for firxworx.example.com
Version: 1.0
Author: your_name
Author URI: https://author.example.com 
*/

If you intend to support internationalization, specify your text domain (e.g. “Text Domain: example”) in the Plugin Header as well. It might also be important to specify a “License” and “License URI” for your project.

After the Plugin Header, you can define any functions and hooks as you normally would in a theme’s functions.php file.
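
For example, the XML-RPC tweak mentioned earlier could live in the site plugin as a single filter (a minimal sketch):

// disable XML-RPC, a rarely-used feature that provides a vector for attack
add_filter( 'xmlrpc_enabled', '__return_false' );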

Site Plugins as Must-Use Plugins

A Must-Use plugin (or mu-plugin) is a special type of plugin that is loaded automatically, cannot be deactivated from the Plugins page in wp-admin, and can only be removed by deleting its associated file(s).

On client projects where stakeholders are given full admin access, it can be helpful to deploy a site plugin as an mu-plugin to prevent accidents.

Mu-plugin requirements

Any plugin that does not depend on Activation or Deactivation hooks can be deployed as an mu-plugin. Since an mu-plugin is “always there”, these concepts do not apply.

An example of a plugin that depends on Activation/Deactivation hooks and therefore can’t be an mu-plugin is one that needs to create (on activation) and remove (on deactivation) database tables for its functionality.

Updating mu-plugins

Update notifications are not available for mu-plugins, and updates to them must be performed manually. This is usually the case anyway for custom client work; however, this fact can also be useful for projects that incorporate 3rd-party plugins as part of their overall solution.

Since mu-plugins are effectively “locked” to their current version there is no chance an admin can deploy a major update that could risk breaking compatibility with the rest of their site. Developers can take the opportunity to test new versions before they apply them manually.

The practice of “locking down” distributed plugins should only be used in scenarios where the developer is actively supporting the project and ensuring pending updates are regularly applied. Otherwise it may be a wiser security and stability choice to stick with traditional plugin deployments.

Deploying a plugin as a must-use plugin

WordPress looks for mu-plugins in the wp-content/mu-plugins folder by default. This path can be customized by defining the WPMU_PLUGIN_DIR and WPMU_PLUGIN_URL constants in wp-config.php.

Unlike traditionally-deployed plugins, mu-plugins must be PHP files that exist in the root of the wp-content/mu-plugins/ folder. A more complex plugin can be deployed as an mu-plugin by creating a “loader” script to serve as the required PHP file and then using it to pull in the rest of the plugin’s dependencies.
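
A minimal loader might look like the following (folder and file names are placeholders for your own plugin):

<?php
/*
Plugin Name: Loader for Example Site Plugin
*/

// pull in the real plugin, which lives in a subfolder of mu-plugins
require_once WPMU_PLUGIN_DIR . '/example-site-plugin/example-site-plugin.php';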

Mu-plugins are loaded in alphabetical order before any “normal” plugins in the wp-content/plugins folder.

Installing CH340/CH34X drivers on MacOS to load sketches onto cheap Arduino clones

This post details how to get [most] cheap Arduino clones working with MacOS Sierra so you can upload sketches to them.

Many clones are not recognized “out of the box” because they implement their USB to serial interface with a CH340 chip designed by China’s WCH (http://www.wch.cn/) instead of the more costly FTDI chip found in genuine Arduinos.

Most sellers on “China deal sites” like Aliexpress.com are up-front about these chips and include “CH340” in their product titles and descriptions, though the implications of this design modification are not always understood by purchasers.

The easy installation method covered in this post comes courtesy of one Adrian Mihalko. He has bundled the manufacturer’s latest Sierra-compatible CH340/CH34G/CH34X drivers for installation with brew cask. These drivers are signed by the OEM, so it’s no longer necessary to disable the Mac’s System Integrity Protection (SIP) feature.

Github: https://github.com/adrianmihalko/ch340g-ch34g-ch34x-mac-os-x-driver

I had no problem getting a Robotdyn Arduino Uno as well as another cheap clone running on a Mac with High Sierra.

Step by step

Prerequisite: ensure brew is installed on your Mac. Verify its presence and version info by executing brew --version in Terminal.

To begin, install the drivers with brew cask:

brew tap mengbo/ch340g-ch34g-ch34x-mac-os-x-driver https://github.com/mengbo/ch340g-ch34g-ch34x-mac-os-x-driver

brew cask install wch-ch34x-usb-serial-driver

(Note: the above is only two commands. The first one runs long, so take care when copying and pasting.)

When the install completes, reboot your machine.

Next, plug your Arduino clone into a free USB port.

Using Terminal, verify that the device is recognized by listing the contents of the /dev directory and looking for cu.wchusbserial1420 or cu.wchusbserial1410 in the output:

ls /dev

For example, I found cu.wchusbserial1420 in the output when I connected my Robotdyn Uno.

Things are promising if you find a similar result.

The Arduino IDE ships with drivers for the Uno itself, and save for the CH340, my clones are otherwise fully Arduino compatible (note: some clones might require additional drivers, and/or a different Board must be specified in the Arduino IDE). For my clones, the following steps were all I needed to upload sketches:

  • Open Arduino IDE with a test Sketch
  • Select the correct port in Tools > Port (e.g. /dev/cu.wchusbserial1420)
  • Verify that Tools > Board has “Arduino/Genuino Uno” selected
  • Verify/Compile the Sketch (Apple+R)
  • Upload the Sketch (Apple+U)

Done.

In particular, the Robotdyn Uno appears to be decently well made: it’s laid out to support all Arduino-compatible shields, and it comes on an attractive black PCB. Versus a genuine Uno, it uses a micro-USB port instead of a full-size one and exposes the ATmega328P microcontroller’s analog A6 and A7 pins. The company makes a number of similarly slick-looking accessories on black PCBs. Their store on AliExpress is: https://robotdyn.aliexpress.com/

Have fun with your cheap clones!

Pulling files off a shared host (cPanel) with a 10K-file FTP limit using a Python web scraper

This post demonstrates the use of a web scraper to circumvent an imposed limit and download a large number of files.

I’ll use a recent case as an example, where I had to migrate a client’s site to a new host. The old shared host was running an ancient version of cPanel and had a 10K file limit for FTP. There was no SSH or other tooling, almost no disk quota left, and no support that could change any cPanel settings for me. The website had a folder of user uploads with 30K+ image files.

I decided to use a web scraper to pull all of the images. In order to create links to all of the images that I wanted to scrape, I wrote a simple throwaway PHP script to link to all of the files in the uploads folder. I now had a list of all 30K+ files for the first time — no more 10K cap:

<?php
$directory = dirname(__FILE__) . '/_image_uploads';
$dir_contents = array_diff(scandir($directory), array('..', '.'));

echo '<h3>' . count($dir_contents) . '</h3>';
echo '<h5>' . $directory . '</h5>';

echo "<ul>\n";
$counter = 0;
foreach ($dir_contents as $file) {
  echo '<li>' . $counter++ . ' - <a href="/_image_uploads/'. $file . '">' . $file . "</a></li>\n";
}
echo "</ul>";
?>

Next, to get the files, I used a Python script to scrape the list of images, using Python 3’s standard urllib and shutil modules.

I posted a gist containing a slightly more generalized version of the script. It uses the BeautifulSoup library to parse the response from the above PHP script’s URL to build a list of all the image URLs that it links to. This script can be easily modified to suit a variety of applications, such as downloading lists of PDF’s or CSV’s that might be linked to from any arbitrary web page.

If you need to install the BeautifulSoup library with pip use: pip install beautifulsoup4

In the gist, note the regex in the line soup.findAll('a', attrs={'href': re.compile("^http://")}). This line and its regex can be modified to suit your application, e.g. to filter for certain protocols, file types, etc.
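
For reference, a minimal sketch along the same lines (the listing URL is a placeholder, and it assumes the listed hrefs are absolute http:// URLs; adjust the regex for relative links) might look like this:

import re
import shutil
import urllib.request

from bs4 import BeautifulSoup

# URL of the throwaway PHP script that lists the uploads (placeholder)
INDEX_URL = 'http://old-host.example.com/list-uploads.php'

with urllib.request.urlopen(INDEX_URL) as response:
    soup = BeautifulSoup(response.read(), 'html.parser')

for link in soup.findAll('a', attrs={'href': re.compile(r'^http://')}):
    file_url = link['href']
    filename = file_url.rsplit('/', 1)[-1]
    print('downloading', file_url)
    with urllib.request.urlopen(file_url) as remote, open(filename, 'wb') as local:
        shutil.copyfileobj(remote, local)  # stream each file to disk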

Encrypting a USB Drive in MacOS, including formatting to JHFS+ or APFS

MacOS Sierra doesn’t feature an option to encrypt a USB drive in Disk Utility or in Finder (at least at the time of writing). This post covers how to format a USB drive to either the JHFS+ or the new APFS filesystem and encrypt it using the Terminal and Disk Utility.

Instructions

First, plug your USB drive into your computer and open the Terminal app.

Use the following command to list your disks:

diskutil list

Look for an entry (or entries) like /dev/disk2 (external, physical) and make absolutely sure that you understand the difference between your system’s hard disk and the external USB drive you want to encrypt.

The “IDENTIFIER” for my USB drive, found at the top of the list, was disk2. My system showed a subsequent entry disk2s1 but note how this still refers to disk2. Only the disk2 part is required. Use whatever diskn number corresponds to your target drive, where n is an integer.

Only proceed if you are certain that you have correctly identified your USB drive!

The following command formats the drive to Apple’s HFS+ with Journaling format, JHFS+. GPT is a crucial argument here to specify the GUID Partition Map option vs. the Master Boot Record option. You can replace the text inside the quoted string (“Flash Drive”) with your desired drive name:

diskutil eraseDisk JHFS+ "Flash Drive" GPT disk2

It is now possible to encrypt the drive with Finder (right-click and choose “Encrypt ‘Flash Drive'”) if you wish to simply keep the JHFS+ file system. If you wish to use the newer APFS file system, do not Encrypt the drive just yet, and read on.

Using the new APFS file system

APFS is Apple’s latest filesystem and it features good support for encryption. Before formatting your drive to APFS, be aware that older Macs (i.e. those without MacOS Sierra and up) will not support it.

To proceed with APFS, open the Disk Utility app.

With the drive formatted to JHFS+, Disk Utility will no longer grey out the “Convert to APFS” option when you right/control+click it.

Find your drive and choose “Convert to APFS”.

Once the file system has been converted to APFS you can go back to Finder, right/control+click on your drive, and choose “Encrypt ‘Flash Drive'” from the menu.

Don’t forget your passphrase 😉