Ceiba Web

When Cookie-Cutter Doesn't Cut It

With Ceiba Web, you get the full power of technology without the confusion.

From idea to execution, we build digital tools to overcome challenges and forge new opportunities.

Let's Connect

Design

Develop

Educate

Collaborate

We work with commercial and philanthropic partners to build custom web-based technology.

We also publish content to augment and contextualize the role of developers in a global society.

Case Studies

Designing a Continuous Pricing Model

After working with Shopify, WooCommerce and other eCommerce platforms, we're building a feature to calculate a continuous pricing curve, which isn't well supported under discrete pricing models.

Building a Continuous Pricing Ecommerce Platform


Analyses

Dipping Our Toes into ARK Core

We'll begin diving into the Ark Core v2 codebase, both as an introduction to blockchain technology and as a primer for understanding GitHub as a non-developer.

Understanding Code: ARK Core


Tutorials

Creating a Statamic Bootstrapper

Bringing automation to your development workflow can give you some easy productivity wins. We set up a straightforward command to start Statamic sites quickly so you can get to building.

Automating Statamic Development


Dipping Our Toes into ARK Core

Mario Vega

06 July 2018


Without a doubt, the best way I've found to understand code is to write it; reading it ranks as a close second. Accordingly, the best way to understand code for those who don't code is to read code, and that's what we'll focus on in this series.

For the first installment of Understanding Code, we'll be focusing on ARK Core v2. A recently released blockchain protocol written in JavaScript, ARK Core will provide an accessible jumping-off point into understanding blockchain for developers whose experience might be limited to web development (such as myself) as well as those who've only dabbled in programming or have no experience at all. I will assume very basic knowledge of programming concepts: if you can differentiate between an if statement and a for loop, you'll be fine.

There are two primary goals of the Understanding Code: ARK series. The first, as you might guess, is to understand how ARK Core works. We'll be working from the outside in: beginning with how ARK Core makes its information available to external developers, we'll explore deeper levels of the codebase until we understand the core mechanisms of the blockchain itself.

The second, broader goal is to equip technology enthusiasts of all levels of programming expertise with the skills to begin understanding code on their own terms. We'll explore how to decipher a codebase you've never worked with before — how to understand developers' goals and the strategies they implement to achieve those goals.

The purpose of virtually all code is to solve a problem. Understand problems and their solutions and you've come that much closer to understanding code.

Before I begin, some necessary disclaimers. I chose ARK Core not because of some desire to pump my massive holdings: in fact, my newly-minted wallet address (AJAAfMJj1w6U5A3t6BGA7NYZsaVve6isMm) reveals that I have no ARK whatsoever. I selected ARK primarily at the suggestion of close friends, and also because of my previous familiarity with JavaScript through front-end development. I've discussed this column with precisely nobody who works for ARK in any capacity — I haven't even asked ARK's developers for feedback on my analysis. So if you're looking for price predictions, trading advice, shilling or any related activities, I recommend looking elsewhere; you'll find nothing here but code and words explaining code.

Getting Started — Where To Look

As you probably already know if you've read this far, the best way of exploring open-source code is through exploring a project's GitHub page. You can find the code for ARK Core v2 here.

Trying to navigate GitHub as a non-coder can be pretty intimidating. Let's break it down.

I've created a video for this column that goes through the basics of GitHub using Ark Core v2 as the base project. In another column, I'll go through GitHub in detail, including its relation to Git, the program that powers GitHub's functionality and allows developers to release code in versions. Until then, here's the video breakdown:

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/quyLwJzw8cA" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

I'll assume some basic knowledge of GitHub, either through your own experience or through watching my totally-awesome-and-not-improvised-at-all video above. With no further ado, the code!

This is what the Ark Core v2 repo looks like at first glance. Lots of files with periods in front of their names, and a couple folders.

The important thing to understand here is that the "core" repo is simply a central place to organize all of the packages that comprise Ark Core. You can find all of the individual packages within the packages directory, and we'll dive much more into each individual package later in this column as well as later in this series. Suffice it to say for now that most of the files at this top level involve configuration options to make the process of developing all of these packages easier.

With any JavaScript project, the first place to look to start understanding how the different pieces fit together is the package.json file. This file is what JavaScript uses to figure out which external libraries need to be installed for this library to work. A feature of virtually all modern programming languages is the ability to leverage code written by third parties to avoid having to reinvent the wheel every time your project requires new functionality. By inspecting the package.json, we can piece together what functions a given library is leveraging from the broader JavaScript community — an important clue in understanding the purpose and architecture of the codebase. Accordingly, we'll be looking at lots of package.jsons in our quest to understand the various libraries that make up Ark Core.

Here is a screenshot of the top-level package.json, as taken from my computer:
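(The screenshot doesn't reproduce here, so below is a trimmed, illustrative sketch of the file's shape. The package names come from the discussion that follows; the exact version numbers and script definitions are approximations, not the repository's actual contents.)

```json
{
  "name": "ark-core",
  "private": true,
  "devDependencies": {
    "eslint": "^4.19.0",
    "jest": "^23.0.0",
    "jest-extended": "^0.7.0",
    "lerna": "^2.11.0"
  },
  "scripts": {
    "bootstrap": "lerna bootstrap",
    "clean": "lerna clean",
    "test": "lerna run test",
    "lint": "lerna run lint",
    "commit": "git-cz"
  }
}
```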

First off, you should note that this file, like all other files ending in .json, is formatted in a ubiquitous file format called, perhaps unsurprisingly, JSON. Short for JavaScript Object Notation, JSON is used to present information in a way that's equally accessible to humans and computers. As easy as it is to look at this file and understand the information hierarchy (and if it's not easy, it will be), it's equally easy for the vast majority of programming languages to parse this document and take action accordingly.

The two keys we're most interested in here are devDependencies and scripts. The devDependencies key lists the packages this library depends on during development, as well as what versions of those packages the library requires. For example, we know from reading this document that this library requires lerna to function properly, and that the version of lerna must be at least 2.11.0. The scripts key is a list of commands that can be run on this package. In this case, there are commands like test, which runs tests across the entire Ark Core, commit, which creates a consistently formatted Git commit using commitizen, and more.

As mentioned previously, most of the libraries we'll find in this top-level package.json help with managing all of the Ark Core packages efficiently. You can find out more information about any of these libraries by searching the central JavaScript package library (NPM, or Node Package Manager) at https://www.npmjs.com/. Briefly, I'll go over the most important ones here:

Notable Packages

Lerna

Probably the most relevant package here is Lerna. In addition to having a badass logo, Lerna is the industry-standard package for packages that have multiple packages. All "Yo dawg, I heard you like packages..." jokes aside, Lerna is an absolute must-have when building JavaScript libraries at the scale of Ark Core, as it makes managing dependencies across the different projects easier. From looking at the devDependencies key, you can see that every dependency has its own version — Lerna helps keep those dependencies consistent across your project to coordinate work across all packages. You'll notice that many commands in the scripts section refer to Lerna, such as bootstrap, clean and others.

Eslint

You'll notice lots of references to Eslint. This package is important for two primary reasons: maintaining a consistent coding style, and preventing some classes of bugs from making their way into code. When you have many different developers working on a single project with their own coding styles, the resulting effort can feel a bit disjointed, like a book with multiple authors who don't talk to each other before publishing. Eslint fixes those inconsistencies by applying a single coding style to every package in the repository.

In addition, unlike some other programming languages, JavaScript is dynamically typed, meaning that programmers don't have to declare a type for each variable they create. While this allows for faster and more agile programming, dynamic types also introduce the possibility that your program will try to interact with a variable in a way that its type does not allow.

Without Eslint, you often wouldn't discover these mistakes until your code is actually run. For example, if you stored the strings "apple" and "banana" in two variables and then tried to divide one by the other, JavaScript wouldn't raise an error at all: it would silently produce NaN, and you (or worse, your application's users) would only notice once the program misbehaved. Eslint can't catch every type mistake, but it flags whole classes of errors (undeclared variables, unused variables, suspicious comparisons) before the code ever runs, elegantly and flexibly, using customizable standards determined by each project. That makes it a must-use in a large program like Ark Core.
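As a tiny illustration (my own, not from the ARK codebase) of the kind of mistake a linter catches before runtime:

```js
const price = 10;
const quantity = 3;

// Typo: "quantityy" was never declared. This line only fails when it
// actually runs (with a ReferenceError); ESLint's no-undef rule flags
// it the moment the file is linted.
const total = price * quantityy;
```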

Jest

Jest, and the closely-related Jest-Extended, are testing libraries. We'll cover tests in short order, but the SparkNotes version is that tests are often the quickest and easiest way to understand how a library operates. That's because, when designed well, tests describe how your code functions at the highest level possible. The how of your code is hidden, but the what of your code is easily readable and accessible.
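For a flavor of what that looks like, here's an illustrative Jest test (not taken from ARK Core):

```js
// The test reads as a plain-language description of behavior: the
// "what" is visible even though the "how" inside add() stays hidden.
const add = (a, b) => a + b;

describe('add', () => {
  it('sums two numbers', () => {
    expect(add(2, 3)).toBe(5);
  });
});
```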

Putting It Together

From this description, it immediately becomes clearer what some of the mysterious "dot files" are doing for this repository. lerna.json contains customizations that ARK Core has specified for Lerna to use: where to find the packages it should bundle within the repository, for example. The .eslintrc file specifies how eslint should lint the repository, and .eslintignore tells eslint which files it should ignore while doing so. Similarly, jest.config.js provides Jest with information that allows it to run tests within the specific context of ARK Core: where it should find the tests to run, what setup to perform before running them, and so on.
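As a concrete illustration, a minimal lerna.json along these lines might look like this (values illustrative, not ARK's actual file):

```json
{
  "lerna": "2.11.0",
  "version": "independent",
  "packages": ["packages/*"]
}
```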

Importantly, there's nothing in the top-level folder that refers to anything blockchain-related. To keep things organized, all code that actually powers the ARK blockchain is contained within the packages folder. The docker folder is mostly important to developers looking to replicate the ARK toolchain on their own systems, and we'll ignore it for the purposes of this tutorial. With the plugins folder being empty, let's inspect the packages folder and get to the meat and potatoes.

Previewing the Packages

Whoa. That's a lot of folders. And if we poke around a bit, we see that each package has a handful of its own files and folders, and even its own package.json. What do we do now? Do we give up? Do we jump straight into the core-blockchain folder and see if we can shortcut our way to blockchain enlightenment?

Not quite! We'll dive a little bit into our first package to start the next column in this series, but keep in mind our objective here is to work from the outside in. With that in mind, let's take another look at the ARK Core Readme to figure out what the best entry point is to experience the blockchain from a public perspective before delving into the blockchain's internals.

This is a screenshot from the relevant section in the Readme. At a glance, there are two packages that seem to fit our bill of requirements: client and core-api. Both of these seem to deal with how the public interacts with the blockchain — client is a client-side library that anybody can integrate into their own website, and core-api describes how an ARK node should respond to requests from clients.

As mentioned before, looking at a package's tests is a good way to preview its functionality. With that, let's take a brief look at each package's tests to see which might be better to start with.

Client Tests

Core-API Tests
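(The original screenshots aren't reproduced here; as a stand-in, here's a hedged sketch of the shape a core-api block test takes. The endpoint, port, and response fields are approximations, not the actual ARK test code.)

```js
it('should return all the blocks', async () => {
  // Ask a node's public API for its list of blocks...
  const response = await fetch('http://localhost:4002/api/blocks');
  const body = await response.json();

  // ...and expect a successful response containing a list.
  expect(response.status).toBe(200);
  expect(Array.isArray(body.blocks)).toBe(true);
});
```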


Above, I've pulled out some of the first tests from each folder. We'll go more into these tests, as well as how to find them and understand them, in the next column. But at a glance we can see that the client package tests lots of non-blockchain-related functionality: how to properly send an HTTP request, how to make sure URL typos won't break the package, etc. These are important concepts, no doubt, but understanding them won't get us much closer to understanding how ARK works. By contrast, the core-api gets right into the details of the blockchain itself. We can see that this test expects to retrieve a list of all blocks. If we figure out what code this test is designed to cover — in other words, what code needs to be working properly for this test to pass — we'll know how the API retrieves blocks from the blockchain. From there, our journey into understanding how those blocks are created, stored and retrieved can begin in earnest.

So, we'll be looking at core-api in the next column! If space permits, perhaps we'll cover another package as well, but let's not get too ambitious. It's more important that we get it right than get it quickly.

Designing a Continuous Pricing Model

Mario Vega

26 June 2018

Let me lead this off by stating my love for the eCommerce platforms I've used during my time as a web developer. I've worked so far with WooCommerce and Shopify, and I couldn't be happier with how easy they make it for anybody to start their own online store quickly.

Typically, I'm not one to try and re-invent the wheel when it comes to code. With client work and with my own side projects, I've learned that the easiest code to maintain is the code you don't have to write. Whenever possible, I recommend solutions that leverage existing code, because you outsource the responsibility of maintaining your code to someone else -- in the best scenarios, someone whose job it is to exclusively maintain code. Open-source flips this model on its head a bit in that the technology's users are also its maintainers, but for the average, non-technical user, this rule holds up nearly 100% of the time. In fact, when it comes to software working as intended, I personally have more faith in open-source projects that can leverage entire communities of intelligent developers than I do in closed-source projects where only a handful of people even see the code, let alone know how it works.

However, I've recently come to the conclusion, both through my experience working with clients and as an indirect product of my nascent studies in machine learning, that there is a category of eCommerce products that aren't well served by existing solutions. And the more I dug into the issue, the more I came to believe that this wasn't a problem that could be solved incrementally, by adding or modifying code for an existing solution. In fact, the difference digs right into the inner workings of how every eCommerce application currently works, and to make the changes I'm suggesting would require a restructuring of how inventory is managed, priced, and published to the web.

So, what is this difference? As alluded to in the title, it involves the ability to price items on a curve. But what does that mean, and why isn't it possible in current eCommerce applications?

Continuous vs. Discrete Pricing

The simplest way I've thought of to explain the difference is selling t-shirts vs selling jelly beans.

With t-shirts, it's fairly easy to price each t-shirt your store offers individually. Perhaps you have black tees and red tees, or perhaps you have some tees with graphics on them and some without. In any case, assuming your customers are mostly retail -- that is, assuming you're not selling t-shirts to businesses -- your pricing is going to be fairly consistent. Black tees might be $20 and red tees might be $10, graphics might be $25 and plain tees might be $90 if you're Kanye West. Whatever your pricing might be, each thing that your store sells has a specific price: in other words, a discrete price.

This is the model that eCommerce software is built upon. For most businesses, this works perfectly fine, as this model matches up well with how their products are bought and sold. However, imagine for a moment that you sell jelly beans. Imagine also that you sell jelly beans to all sorts of customers, from individual gift baskets to massive corporate shipments. 

You might think that pricing your jelly beans would become difficult quickly -- and you'd be right. The price of an individual jelly bean will vary drastically from order to order, depending on how many are ordered and by whom. Perhaps individual orders of jelly beans are more expensive because your jelly bean factory is geared up primarily for large customers. Or perhaps jelly bean orders above a certain threshold are much more expensive, as you have to acquire a special sort of jelly bean barrel to package and ship them. In this case, it's probably easier for you to think of your jelly beans in terms of a continuous price, where the price of each jelly bean rises or falls depending on the context of the order. 

Now, as with most contrasts you can think of, the difference between discrete and continuous is not set in stone. Selling t-shirts could fit a continuous model at a certain scale, and jelly-beans could be thought of discretely if you narrowed down the range of options by which you sold them.

However, my experience in the field has shown me that the further your business model skews towards continuous pricing, the less well-served it is by traditional eCommerce solutions. Many clients I've worked with end up implementing an awkward combination of product variants and discount structures to make their continuous pricing model fit into a discrete model that can work with their platform of choice. This setup makes changing your pricing a difficult, time-intensive, and potentially dangerous endeavor, reducing your ability to adapt and increasing the likelihood that a competitor outmaneuvers you.

What's the Solution?

I've implemented a continuous pricing model in Shopify and WooCommerce enough times to know there must be a better way.

The most straightforward solution I've thought of is to allow store owners to create their own price curve. In the simplest cases, you have one independent variable and one dependent variable, making it fairly simple to create price curves on a per-product basis. For example, for our hypothetical jelly bean vendor, the independent variable might be the number of jelly beans, and the dependent variable might be the price per jelly bean. If a customer orders 10 jelly beans, our vendor might charge 10 cents per jelly bean. Order 1,000 jelly beans, and the price drops to 5 cents per jelly bean.

This model is easy enough to replicate on a line chart. Use your independent variable for your x-axis, your dependent variable for your y-axis, draw your points on the graph, and let math take care of the rest.
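To make that concrete, here's a minimal sketch (my own illustration, not any platform's API) of a per-product price curve evaluated by linear interpolation between the points a store owner draws:

```js
// Control points, sorted by quantity: 10 beans cost 10 cents each,
// 1,000 beans cost 5 cents each.
const pricePoints = [
  { quantity: 10, unitPrice: 0.10 },
  { quantity: 1000, unitPrice: 0.05 },
];

function unitPrice(points, quantity) {
  // Clamp to the ends of the curve.
  if (quantity <= points[0].quantity) return points[0].unitPrice;
  const last = points[points.length - 1];
  if (quantity >= last.quantity) return last.unitPrice;

  // Find the segment this quantity falls on and interpolate linearly.
  for (let i = 0; i < points.length - 1; i++) {
    const a = points[i];
    const b = points[i + 1];
    if (quantity <= b.quantity) {
      const t = (quantity - a.quantity) / (b.quantity - a.quantity);
      return a.unitPrice + t * (b.unitPrice - a.unitPrice);
    }
  }
}

unitPrice(pricePoints, 10);  // 0.10
unitPrice(pricePoints, 505); // 0.075, halfway down the curve
```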

In and of itself, this would solve our continuous pricing problem. The challenge is that such a model isn't possible in your average eCommerce solution, where every variant must have a specific price. So in order to embrace this continuous curve, vendors would have to strike it out on their own.

This got me to thinking: how can I take this idea to its furthest extent, helping business owners fully embrace their unique pricing models with all of their inherent quirks? Which led to my next thought: if we can chart prices, why can't we chart costs as well? We could chart how rapidly shipping costs increase per jelly bean, for example, or how producing jelly beans en masse reduces the cost of each individual bean. If we combined all of those cost curves together, we could arrive at a single curve that would represent the total cost of producing a jelly bean when taking into consideration every possible cost factor. 

And while we're at it, why not use that cost curve to determine the price? If we know how much it takes to produce jelly beans at a specific number of beans, and we know how much profit we want to make, couldn't our cost curves determine our prices automatically? We'd just need to multiply our costs by a given number depending on how much profit we want to make, and we'd have the price we'd need to charge at each point in our curve in order to make the profits we're after.
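Sketching that idea in the same style, with every number hypothetical:

```js
// Each cost curve returns the per-bean cost of one factor at a
// given order quantity.
const costCurves = [
  quantity => 0.02,                              // ingredients: flat
  quantity => (quantity > 5000 ? 0.015 : 0.01),  // shipping: jumps past the barrel threshold
];

// Total per-bean cost is the sum of every factor's curve.
const totalUnitCost = quantity =>
  costCurves.reduce((sum, curve) => sum + curve(quantity), 0);

// Price each bean at cost times a markup chosen for the profit we want.
const markup = 1.5;
const autoUnitPrice = quantity => totalUnitCost(quantity) * markup;

autoUnitPrice(100);   // (0.02 + 0.01)  * 1.5 = 0.045
autoUnitPrice(10000); // (0.02 + 0.015) * 1.5 = 0.0525
```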

So, this is the outline of what I want to make! At a super basic level, it'll calculate cost curves and allow store owners to set their prices on a continuous price curve. There will eventually be more to it than that, because a price curve alone does not an eCommerce store make, but one step at a time.

Creating a Statamic Bootstrapper

Mario Vega

20 June 2018

As a developer cut from the cloth of WordPress customizations and incremental improvements, I've had a tendency to stay away from the DevOps side of development, seeing it as something "other", or separate from the process of making websites.

My first motivation for learning to code was changing the appearance of my own websites. I wanted to write code so I could see the results for myself, see the validation that what I was learning could make a direct visual impact.

This is why my first web development goal was to learn CSS. DevOps seemed, and seems even still, the exact opposite of CSS. The best DevOps is the DevOps that is invisible — to the end user but also to the developer, who in "ideal" circumstances would develop functions without consideration of form or context.

Learning programming for me has always been about finding the balance between learning and building. My primary motivation in learning is to build, but building too quickly without learning often leads to lots of rebuilding.

This being said, I learned quickly that some automation was instrumental to a speedier and more secure development process. I only needed to drag-and-drop my application files into my production server a handful of times to realize that risking my website's stability on my ability to drag-and-drop correctly was not a smart move.

Now, I'm still a long way from what you would call a DevOps expert. But after a particularly creative development period had me setting up several Statamic websites within the course of a few weeks, I realized that there were certain addons, theme setups, and JavaScript libraries that I was pulling into every new project by hand.

One of my earliest DevOps lessons was that it's better to type things into existence than it is to drag-and-drop them. Using your computer's terminal with regularity is, without a doubt, one of the most intimidating aspects of learning development. Instead of a friendly, intuitive graphical interface to navigate, you're presented with a blank screen and a blinking cursor.

In that spirit, I set out to replace my drag-and-drop bootstrap experience with a CLI command that would install and configure my ideal Statamic bootstrap with minimal input on my part.

The Goal

The goal is to reduce the time commitment for creating a new website to be as close to instant as possible. The easier it is to start new projects, the quicker the turnaround you can have and the sooner you can provide value to your clients and partners.

The link to the Github project is here.

The Challenges

Without a way (yet) to install and configure Statamic themes from the command line, we'll need to find a way to automate the theme configuration ourselves.

The Outline

We'll create a template website alongside a template theme that's preloaded with our default JavaScript setup. From there, we'll write a Statamic command within our template site that will automatically reconfigure our default template's information with the information specific to our website. Finally, we'll write a Bash command that creates a new Statamic website before running our Statamic command with the information we provide it.

The Process

Before we can build an automated bootstrap, we'll need to have something we want to bootstrap first.

We'll start with the most mission-critical part first: picking a name. In this case, I decided to name my Statamic bootstrapper Jukung, after a type of traditional Indonesian fishing boat. Just like its physical counterpart, our Jukung will allow us to move quickly and with very little overhead.

With our name in place, let's get to work!

After installing the Statamic Command Line Interface (CLI), making a new Statamic site is as easy as typing a single command in our terminal. Setting it up to your own preferences is another story — but that's what this tutorial is for 😄.

Building the Bootstrapping Script

Before we get started writing code, let's define exactly what we want our bootstrapper to do. It's helpful to think of programming challenges in terms of inputs and outputs. What should our solution accept as inputs, and what should it produce as output?

Inputs:

  • The template website, with our JavaScript configuration included
  • The name of the new website we're looking to bootstrap

Outputs:

  • The bootstrapped website, with all generic template names replaced with the name of our new website

So, based on this description, we first need to define exactly what needs to be changed within our template site to customize it for our new website.

Because the name of our template is Jukung, which is not a common word, we'll use the word "jukung" in our template website in every place where the name of our new website should be. Then, when the bootstrapper is run, we'll go through each file and folder that we need to change, find the word "jukung", and replace it with our new website name.

To stay within the Statamic ecosystem as much as possible, we'll use a Statamic command to do the search-and-replace. Essentially, this is a command that you can run within your terminal to accomplish tasks with full access to the Statamic API. This will help make it easier for us to find the files that we need to change.

We'll go over the code line by line, but first, the whole thing:
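(The original listing was an inline screenshot that doesn't reproduce here; what follows reconstructs its overall shape from the walkthrough below. Treat the namespace and the Statamic-specific details as approximations, and note that changeFile() is my own placeholder name for the shared helper described later.)

```php
<?php

namespace Statamic\Addons\Jukung\Commands;

use Statamic\Extend\Command;

class GenerateCommand extends Command
{
    // The text we type in the terminal; {name} is the new site's name.
    protected $signature = 'jukung:generate {name}';

    // A one-line summary shown when the available commands are listed.
    protected $description = 'Rename the Jukung template for a new site.';

    public function __construct()
    {
        parent::__construct();
    }

    // All five renames happen here; covered step by step below.
    public function handle()
    {
        // ...
    }

    // Helpers, one per unique piece of functionality:
    // changeCss(), changeJs(), changeFile(), changeThemeDir().
}
```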

Still here? Glad to see it! The code's long, but most of it is comments, and by the end of this you'll be bored by how simple it appears.

The first thing to notice is the $signature at the top. This is the text we'll use in the command line to call our generator. The {name} is where we'd insert the name of the site we're going to generate.

If we were creating a theme for plumbers and feeling particularly creative, for example, we'd perhaps name our website plumbing. In that case, we'd run php please jukung:generate plumbing to convert our template into a plumbing-ready boilerplate.

The $description property is self-explanatory - it describes the functionality of the command we're creating. This is useful in case someone browsing our site's commands from the terminal wants to know what our command does at a glance.

You can ignore the __construct() method for now. It comes included with any command you create using Statamic, and it helps set some things up under the hood. It would also come in handy if we wanted to pull in some external libraries, but we don't want to do that here, so the method mostly just sits there and looks pretty.

The handle() method is where all the action happens:
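Here's a reconstructed sketch of the method; the Statamic helper calls (Config::getThemeName(), themes_path(), settings_path()) are approximations of whatever the original code used:

```php
public function handle()
{
    // The name we typed after the command, e.g. "plumbing".
    $new_name = $this->argument('name');

    // Read the current theme's name from the site config instead of
    // hard-coding "jukung", so a template rename won't break us.
    $current_theme = Config::getThemeName();

    // Resolve the theme's directory from its name.
    $theme_path = themes_path($current_theme);

    // Five renames in total; the directory must come last, because
    // $theme_path stops matching a real path once it's renamed.
    $this->changeCss($theme_path, $current_theme, $new_name);
    $this->changeJs($theme_path, $current_theme, $new_name);
    $this->changeFile($theme_path . '/webpack.mix.js', $current_theme, $new_name);
    $this->changeFile(settings_path('theming.yaml'), $current_theme, $new_name);
    $this->changeThemeDir($theme_path, $new_name);
}
```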

Let's go through this slowly.

On the method's first line, we're defining our new name: plumbing, to continue our previous example. The $this->argument() method is used to get the name we're entering from the command line, and we're passing in the name argument to match the {name} we used in the signature.

On the next line, we're using a Statamic-specific method to get the name of the current theme from the site we're about to bootstrap. Here, it would have been possible to simply write jukung because that's the name of the theme we created, but fetching that value from the config allows us to keep our code the same if we were to change the template name to something besides jukung.

On the third line, we're using another Statamic-specific function to get the path to our template theme. Because this definition uses the $current_theme variable we declared the line before, this code can also stay the same should we decide to change our theme later on. Robustness! Yay!

These next few lines are where the bootstrapping happens. In total, there are five places where we need to change the name from jukung to our new name:

  1. The theme's primary CSS (or SCSS) file
  2. The theme's primary JS file
  3. The webpack.mix.js file that powers Laravel Mix
  4. The theming.yaml file that Statamic uses to define the current theme
  5. The theme directory itself (located within site/themes)

The order in which we do these things doesn't matter too much, with one notable exception: we've got to change the theme directory after we've already changed all of the files within the theme. Because we're using the theme name to determine our $theme_path, if we change the theme name, we'll no longer be able to find our theme files because our $theme_path will no longer match an existing path.

Our changeCss() and changeJs() methods are nearly identical:
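In reconstructed sketch form (the css/ and js/ subdirectory names are assumptions):

```php
protected function changeCss($theme_path, $old_name, $new_name)
{
    // PHP's native rename() moves a file from one path to another.
    rename(
        $theme_path . "/css/{$old_name}.scss",
        $theme_path . "/css/{$new_name}.scss"
    );
}

protected function changeJs($theme_path, $old_name, $new_name)
{
    rename(
        $theme_path . "/js/{$old_name}.js",
        $theme_path . "/js/{$new_name}.js"
    );
}
```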

In both cases, we're using PHP's native rename() function to rename our theme's SCSS and JS files from jukung.scss and jukung.js to plumbing.scss and plumbing.js.

Could we have called rename() directly within our handle() method and avoided the need to define two new functions? Absolutely. But as part of my love for declarative programming, I like to give any unique functionality its own function. That way, if I ever need to extend or change how one part of my application works, I know just where to go to accomplish it. There's no meaningful performance overhead for declaring new methods or functions — my recommendation is to do so whenever it's convenient, which is almost always.

Next, we use one function to alter two files: our webpack.mix.js and our theming.yaml files. That function looks like this:
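A sketch of that function (again, changeFile() is my placeholder name), matching the two-step description below:

```php
protected function changeFile($file_path, $old_name, $new_name)
{
    // Grab the file's text and swap every occurrence of the old name...
    $contents = str_replace($old_name, $new_name, file_get_contents($file_path));

    // ...then overwrite the file with the updated text.
    file_put_contents($file_path, $contents);
}
```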

Here we're using more native PHP functions. In the first line, we grab the text from the file, comb through it, and replace any instances of $old_name with $new_name. In the second, we replace the old file with our updated text.

The last method, changeThemeDir(), is very similar to ones we've defined before, so we won't retread old ground here.

And with that, our bootstrapper is complete!

Finishing Touches

Finally, we've got to bring the whole thing together. We need something that will create a new project from our template before running the bootstrapper. And here, I'll admit I hacked together a solution that I'm not particularly happy with.

My first inclination was to use PHP's package manager, Composer, which provides all sorts of functionality for automating tasks in response to events. However, because of some intricacies regarding Statamic and its relation to Laravel (soon to be rectified) this wasn't as ideal an option as I had hoped.

Besides, when I did more research under the hood, I realized that Composer's create-project command just calls on Git and installs composer dependencies. That sounded like something I could replicate!

What I ultimately settled on was an old-fashioned Bash command, tucked away in my .bash_profile. Here's what I came up with:
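A reconstruction of that function (the repository URL is a placeholder):

```bash
# In ~/.bash_profile — usage: jukung mysite
jukung() {
  git clone https://github.com/your-username/jukung.git "$1" \
    && cd "$1" \
    && php please jukung:generate "$1"
}
```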

Basically, this does 3 things:

  1. Clones the Jukung repository into a directory with the name of your website.
  2. Moves into that directory.
  3. Calls the bootstrapper from within that directory.

With this, all you need to do is navigate to your project directory, call jukung {your site name}, and you're off to the races.

If I were feeling particularly fancy, I'd have it move into the theme directory and npm install to preload my JavaScript environment. However, because I often end up adding other packages on top of the bootstrap, I left that process out of this command to keep things simple.

As I write this, I'm recognizing it would probably be a good idea to include a command to update Statamic as well, as there's no guarantee the version of Statamic I have in the repository is current (in all likelihood, it's not).

Feel free to adapt this template to your own needs! Again, the link to the Github project is here.

Defining a JavaScript Stack

Mario Vega

20 June 2018

I'm going to take a moment to describe what JavaScript libraries I find essential enough to include in every project. I distinctly recall this being a major sticking point for me as I stumbled towards full-stack development.
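(The screenshot from the original post doesn't survive here; below is an illustrative sketch of its shape. Version numbers are approximate, and the scripts are the stock ones that ship with Laravel Mix, as noted at the end of this post.)

```json
{
  "private": true,
  "devDependencies": {
    "cross-env": "^5.1.0",
    "laravel-mix": "^2.1.0",
    "tailwindcss": "^0.6.0",
    "vue": "^2.5.0"
  },
  "scripts": {
    "dev": "npm run development",
    "development": "cross-env NODE_ENV=development node_modules/webpack/bin/webpack.js --progress --config=node_modules/laravel-mix/setup/webpack.config.js",
    "watch": "npm run development -- --watch",
    "prod": "npm run production",
    "production": "cross-env NODE_ENV=production node_modules/webpack/bin/webpack.js --no-progress --config=node_modules/laravel-mix/setup/webpack.config.js"
  }
}
```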

Above is the package.json for the jukung theme which I've created as a blank theme using the Statamic CLI.

For those who might be unfamiliar, package.json files are used to define any packages that will be used in your project by Node.js. Node.js, in turn, is the standard runtime for executing JavaScript outside the browser, including on your server and in the tooling that builds your front-end files.

At one point this distinction confused me to no end. As a budding WordPress developer encountering the JavaScript ecosystem for the first time, I had no idea why I would want to introduce a JavaScript library amongst my PHP files. Writing JavaScript directly in my template files had worked just fine for me so far. Why did I need to complicate things by adding Node into the mix, especially if I was using PHP and not JavaScript to power my servers?

When considering whether or not to include something in my development process, I try my best to follow a simple rule: when you need it, you'll know, and don't worry about it before then. In the mercurial and evolving world of web development, the temptation and the social pressure to learn new tools never goes away. If you stick with what works for you until it doesn't work for you, you can avoid that pressure — but when something doesn't work for you, it's important to know why, as well as how to fix it.

I will admit that the path that led me into the world of JavaScript build steps originated from my background in writing. Because my interest in writing predates my interest in programming, where possible I try to program in a style that mirrors good writing. For me, this means writing code and documentation that explains what's going on in my code as clearly and explicitly as possible, even if this means being more verbose than many programmers would like.

This also leads to my preference for declarative programming, which, as the name implies, is a style of programming that declares what your code does at the highest level possible. Vanilla JavaScript is fantastically powerful, but it is not declarative by nature — instead, it is procedural, which involves solving problems through a defined series of steps.
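A quick illustration of the distinction:

```js
const prices = [10, 25, 90];

// Procedural: spell out each step of the loop.
const doubled = [];
for (let i = 0; i < prices.length; i++) {
  doubled.push(prices[i] * 2);
}

// Declarative: state the transformation you want.
const doubledToo = prices.map(price => price * 2);
```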

Even though my web server runs on PHP, in order to fully embrace declarative programming, it's helpful to utilize some features and functions that are only available using non-standard JavaScript tools and libraries. Because of this, I find that a build step, which converts non-standard JavaScript into a form that works correctly on browsers across the Internet, is useful enough to incorporate into my development process.

Now that we know why a build step is useful, let's go over the package.json file in a bit more depth. The two important parts to this file are the dependencies and the scripts, which describe what JavaScript packages my Statamic theme uses as well as how those packages should be combined and delivered to the browser.

The three packages of particular importance here are vue, laravel-mix, and tailwindcss. Vue is my preferred JavaScript library to achieve the declarative programming style mentioned before, and Laravel Mix is a tool that makes creating a build step as easy as possible. Using Laravel Mix, you can write JavaScript using modern techniques without worrying about your code being too "fancy" to work on any device, from your MacBook or Surface Pro to your dad's Nokia flip phone.

Finally, TailwindCSS is a library I use to eliminate the troubles of maintaining a huge bundle of repetitive CSS files. This will definitely need to be its own series, as there's a lot to cover regarding the benefits and disadvantages of working with TailwindCSS. Suffice it to say for now that after writing CSS by hand for years before using Tailwind, I find it very hard to return to writing CSS the "normal" way.

The scripts part of the package.json looks opaque, even to me. Fortunately you don't have to write this code by hand: Laravel Mix includes some scripts that will work out of the box. I've copy-pasted their code into my setup directly.

Facebook and the Global Conversation

Spencer Boegeman

07 July 2017

What role does Facebook serve in your life? Facebook is now home to close to 2 billion users, many of whom access the site at least once per month and 1.28 billion of whom access the website on a daily basis. With the large majority of users living outside the US, Facebook is tasked with moderating the daily discourse of an internationally diverse community. Facebook’s formula for systematizing how content reviewers are to censor material is given as “protected category + attack = hate speech”. On the surface this may seem reasonable -- but what is the given definition or criteria for what is or is not a “protected category”? Earlier this week ProPublica published an investigative piece detailing how certain inconsistencies can arise in the application of Facebook's censorship guidelines. These inconsistencies often fail to protect classes of people who would otherwise be protected under discrimination laws in the US. The larger issue, however, is the problem of censorship itself -- even well-intentioned censorship can mistakenly silence legitimate political discourse and involuntarily remove dissent.


Facebook acts as a filter, an arbitrator, a judge that decides what is relevant to particular individuals. Americans mainly access news online, and Facebook is increasingly used as a central hub for this purpose -- especially for those under the age of 30. Users are sharing less personal or ‘original’ content on Facebook and more news articles and other media. Whether users are sharing personal stories, their own political beliefs, news, memes, videos, or historical photos (such as Vietnam War photos), Facebook filters what you see and don’t see on your news feed. There certainly may be content that should be removed, but there are ethical and legal knots. Even when it comes to the spectre of terrorism, there are blurred lines. Countries do not always agree as to who or what group should be sanctioned or designated as a terrorist group. National security interests collide and nation states have divergent political and economic goals.

The Difficulties and Failures of Censoring Terrorism

The global threat of terrorism is central to many of the issues that are at hand with censorship and the role that social media should play in moderating public discourse. The issues regarding censorship on Facebook include not just removing specific posts or content but also banning or removing entire groups or persons. The need to categorize and remove content that is associated with the organization and recruitment of terrorist groups is a valid one. However, groups that Pakistan may deem as terrorist organizations may be viewed differently in Iran or India. 41 out of 64 groups that are banned in Pakistan operate on a variety of social networks including Facebook where they are associated with over 700 pages and groups. 160,000 users are members of these Facebook groups or like their pages. The Kashmiri conflict is also home to potential ambiguity in distinguishing legitimate political activism from incitements to violence and, furthermore, terrorism. Last year Facebook was under fire for removing posts of users discussing the killing of Burhan Wani, a member of Hizbul Mujahideen who was killed by the Indian army on July 8th, 2016. Syed Salahuddin, a leader of the group, was just last week deemed a terrorist and sanctioned by the US State Department. However, such labels do not mitigate the issues that are faced with censoring extremist content on social media.

On June 26, 2017 Facebook, Twitter, Microsoft, and YouTube announced that they are forming the “Global Internet Forum to Counter Terrorism”, which aims to reduce the online presence of terrorist groups and ideologies. Most would probably agree that Twitter or Facebook should remove posts from groups such as ISIS and ban the respective accounts. However, sanctioning content that can be directly linked to armed insurgencies, militant terrorism, or other violence is less clear-cut than one would imagine. Are rebel groups in the Democratic Republic of Congo, Somalia, Sudan, Yemen, Donetsk or Syria always worthy of complete censorship and silencing? Is it still not possible to view videos online that clearly delineate the Syrian or Ukrainian conflicts as they evolved from protests in the streets to armed struggles? On the Vice News YouTube channel, one can watch videos beginning with the Euromaidan protests in Kiev during November/December 2013 all the way through the conflict up until a few months ago. As demonstrations and protests have become more active in recent months in Kashmir, it would be wise to consider when political dissent becomes illegitimate terrorism. India and Pakistan would likely have antithetical definitions in either case. The aforementioned regional struggles are only examples of some of the geopolitical obstacles that Facebook must navigate while moderating the global social network.

The German Network Enforcement Law

On June 30th, 2017 the German Bundestag passed the Netzwerkdurchsetzungsgesetz -- or network enforcement law -- which requires social media companies to remove any content that is deemed illegal under German law. The illegal content must be removed within a limited time frame, or companies are liable to face fines of up to €50m ($57m USD). This law covers not only hate speech but also content associated with defamation (libel, slander), treasonous forgery, anti-constitutional organizations (including Nazi symbolism and related propaganda), celebrating criminal offenses, calls to organize criminal groups, and various other types of speech. Facebook and other companies are told to remove content that is “obviously illegal by German law” within 24 hours; for more ambiguous cases the time frame is extended to 7 days. Germany is home to some of the toughest laws concerning Holocaust denial, incitements to violence and other abuses of speech. The new legislation has been called “misguided” and a “minefield for U.S. tech” by Mirko Hohmann of the Global Public Policy Institute.

Unlike YouTube or Google, Facebook is hesitant to “geo-block” content in only certain geographical regions. Does this mean that Facebook should only block content from German citizens, or from people who are posting in Germany? How do Facebook, Twitter, or YouTube determine a user’s location -- is it based upon where the content was posted or where the user resides? There are numerous unanswered legal questions. What may be more troubling, however, is the notion that this law may give companies a cost-benefit incentive to excessively delete ambiguous content. It must be noted that the determination of what must be removed is left up to the hasty judgement of social media companies rather than judges and courts. This leaves corporations with the onus of regulating freedom of speech. David Kaye, UN Special Rapporteur to the High Commissioner for Human Rights, criticized the law in a similar fashion and also pointed out his concerns with the “provisions that mandate the storage and documentation of data concerning violative content and user information related to such content, especially since the judiciary can order that data be revealed. This could undermine the right individuals enjoy to anonymous expression”. If the US were to enact similar legislation in order to more strictly regulate hate speech and other undesirable content -- what would be the result? Freedom of speech is given a wider scope in American law and society.

Censorship and Freedom of Speech

This is surely not the first time Facebook has come up against legal and geopolitical constraints, but it does emphasize the ethical dilemma of “freedom of speech”. It is not possible to defend an absolute notion of freedom of speech -- take the case of a political activist giving a talk on a university campus who is interrupted by a jeering protest within the lecture hall. It would not be possible to rewardingly listen to both, so one must win out over the other. However, the issue is not whether limits need to be imposed upon free speech but how far those limits can extend. John Stuart Mill gives a very bold claim for his endorsement of freedom of speech:

> If all mankind minus one were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person than he, if he had the power, would be justified in silencing mankind.
 
- John Stuart Mill (1806–1873). On Liberty. 1869. Chapter II: Of the Liberty of Thought and Discussion

The limitations on such expression are summed up by Mill’s harm principle: “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others”. In other words, coercion is only justified in stopping coercion or harm. Censorship is a form of coercion, a halting of speech, and is therefore only justified when it is instrumental in preventing harm. Mill’s harm principle is often read as not covering hate speech, since the speech would have to directly deprive someone of a right in order to count as harm. Perhaps one could argue that Mill is outdated and that his views on freedom of speech are not suited to the fast pace of 21st-century online communication. Nonetheless, if we are to value democratic ideals, we must caution ourselves against unnecessary, monolithic control of public debate and private discourse.