Filed under GameDev, Uncategorized.

I’ve been awake for over 24 hours developing at the Beta Breakers Gamejam. Here’s my entry:

It’s a multiplayer game where any number of players can join simultaneously. New characters become available to all players at the same interval. Drag and drop a character from the inventory at the bottom of the screen onto the board to place it; pieces must be placed next to other pieces. Each piece has an attack and health value. You start off with a King, and if he dies the game is over.
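The adjacency requirement is the core placement rule; a minimal sketch of how such a check might look (the board representation and function names here are illustrative, not the game's actual code):

```javascript
// Hypothetical adjacency check: a piece may only be placed on an
// empty cell that touches at least one existing piece.
var board = {}; // sparse map of "x,y" -> piece

function key(x, y) { return x + ',' + y; }

function canPlace(x, y) {
  if (board[key(x, y)]) return false; // cell already occupied
  // Check the four orthogonal neighbors for an existing piece.
  var neighbors = [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]];
  return neighbors.some(function (n) {
    return Boolean(board[key(n[0], n[1])]);
  });
}

// The King seeds the board, so the first placement is unconditional.
board[key(0, 0)] = { name: 'King', attack: 1, health: 10 };

console.log(canPlace(0, 1)); // true: next to the King
console.log(canPlace(5, 5)); // false: not adjacent to anything
```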

The minimap is something I hadn’t built into a game before. It was surprisingly simple to do and adds a lot of convenience for players. Ideally it would highlight the current viewport and allow for quick navigation (like any other strategy-game minimap), but I didn’t have enough time.
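For the curious, the core of a minimap really is just a scale transform from world coordinates onto a small overlay. A sketch of the idea (the sizes and names are illustrative, not from the jam code):

```javascript
// Hypothetical minimap: scale world coordinates down to a small overlay.
var WORLD = { width: 4000, height: 3000 };
var MINIMAP = { width: 200, height: 150 };

function worldToMinimap(x, y) {
  return {
    x: x * (MINIMAP.width / WORLD.width),
    y: y * (MINIMAP.height / WORLD.height)
  };
}

// In the render loop, each unit becomes a small dot on the overlay canvas.
function drawMinimap(ctx, units) {
  ctx.clearRect(0, 0, MINIMAP.width, MINIMAP.height);
  units.forEach(function (unit) {
    var p = worldToMinimap(unit.x, unit.y);
    ctx.fillRect(p.x, p.y, 2, 2); // 2px dot per unit
  });
}

console.log(worldToMinimap(2000, 1500)); // center of the world -> center of the map
```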



Filed under Web Server.

I’ve been happily hosting my various PHP/nginx websites and occasional crazy Node.js application on a Linode 1024 VPS for the last couple years. Recently I’ve been looking into AWS and other VPS/PaaS providers for side projects and figured I should check out what I was getting from my Linode.

Linode Upgrade/Downgrade

Interesting. Apparently I haven’t upgraded or performed a reboot on my VPS in a long time.

Linode Uptime

I believe these changes included SSD upgrades as well as a NIC overhaul. The vCPU “upgrade” from 8 cores to 2 is an obvious drawback on paper. However, after some quick research, I found people reporting before-and-after benchmarks of comparable quality, as if each vCPU were now more likely to be dedicated to a single customer. The RAM increase was the most appealing part (I barely touch bandwidth), so I figured it was worth the risk of upgrading.

Pre-Upgrade Benchmark (8 x vCPU)

$ siege -c 20 -r 100
** SIEGE 3.0.5
** Preparing 20 concurrent users for battle.
The server is now under siege.. done.

Transactions: 4000 hits
Availability: 100.00 %
Elapsed time: 412.95 secs
Data transferred: 7.33 MB
Response time: 1.49 secs
Transaction rate: 9.69 trans/sec
Throughput: 0.02 MB/sec
Concurrency: 14.48
Successful transactions: 4000
Failed transactions: 0
Longest transaction: 6.76
Shortest transaction: 0.15

Post-Upgrade Benchmark (2 x vCPU)

$ siege -c 20 -r 100
** SIEGE 3.0.5
** Preparing 20 concurrent users for battle.
The server is now under siege.. done.

Transactions: 4000 hits
Availability: 100.00 %
Elapsed time: 254.19 secs
Data transferred: 7.33 MB
Response time: 0.72 secs
Transaction rate: 15.74 trans/sec
Throughput: 0.03 MB/sec
Concurrency: 11.28
Successful transactions: 4000
Failed transactions: 0
Longest transaction: 2.85
Shortest transaction: 0.15


Benchmark CPU Before and After

The number of transactions per second went up by over 60% (9.69 to 15.74)! The concurrency went down a little (fewer cores for handling concurrent processing), but that doesn’t matter too much for a web server. As you can see in the image above, the CPU utilization went down a decent chunk as well.
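As a quick sanity check on the siege numbers above, the improvement in transaction rate works out like this:

```javascript
// Percentage gain in siege transaction rate after the upgrade.
var before = 9.69;  // trans/sec on 8 vCPUs
var after = 15.74;  // trans/sec on 2 vCPUs
var gain = (after - before) / before * 100;
console.log(gain.toFixed(1) + '%'); // "62.4%"
```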

The upgrade is definitely worth it. Fear not the reduced core count.

Filed under GameDev.

This weekend I put together Robot Onslaught. There’s no server tech involved (save for simple serving of the files). All data is sent over PubNub.
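The messaging pattern looks roughly like the sketch below, using the classic PubNub 3.x JavaScript API. The keys, channel name, and message shape here are illustrative assumptions, not the game's actual code:

```javascript
// Sketch of serverless multiplayer over PubNub: every client publishes
// its own state and applies state messages from everyone else.
// NOTE: the message shape below is hypothetical.
function makeStateMessage(player) {
  return { id: player.id, x: player.x, y: player.y, facing: player.facing };
}

// Browser-only wiring; PUBNUB is the global exposed by the 3.x SDK.
if (typeof PUBNUB !== 'undefined') {
  var pubnub = PUBNUB.init({
    publish_key: 'demo',   // placeholder keys
    subscribe_key: 'demo'
  });

  pubnub.subscribe({
    channel: 'robot-onslaught',
    message: function (msg) {
      // Apply the other player's position to the local world here.
    }
  });

  pubnub.publish({
    channel: 'robot-onslaught',
    message: makeStateMessage({ id: 'p1', x: 10, y: 20, facing: 'up' })
  });
}
```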

Play Robot Onslaught

Robot Onslaught on GitHub

Move around with WASD. Shoot in different directions using the Arrow Keys (I map them to a USB SNES controller for extra awesomeness).



Robot Onslaught at Challenge Post


I won the competition!

Filed under Node.js.

At work I’ve been tasked with building real-time PvP systems (pushing data from server to client) as well as matchmaking systems for pairing players together (pretty similar stuff to what I’ve been doing since first learning Node.js two years ago).

While building matchmaking systems, the Elo rating system used in chess naturally came up (as did TrueSkill, but that’s probably patented). It’s a system which rates every player, then uses the difference in ratings and the outcome of a match to determine new ratings. For example, if you’re a newbie and you’re bested by a master, it’ll only slightly affect your ratings. However, if you’re a newbie and you beat a master, it would be a large change in ratings.

I did some research on Elo modules for Node and only came across two. One was incomplete and the other was literally a file with 10 lines of code. So, I went ahead and built my own:

Arpad: An ELO Rating System for Node.js

It has unit tests and 100% code coverage. It’s also quite simple to use:

var Elo = require('arpad');

var elo = new Elo();

var alice = 1600;
var bob = 1300;

var new_alice = elo.newRatingIfWon(alice, bob);
console.log("Alice's new rating if she won:", new_alice); // 1605
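Under the hood, this is the standard Elo update: an expected score based on the rating gap, then an adjustment scaled by a K-factor. A from-scratch sketch of the math (assuming the common K-factor of 32):

```javascript
// Standard Elo update: expected score, then rating adjustment.
var K = 32; // maximum rating swing per game

function expectedScore(self, opponent) {
  // Probability of winning given the rating difference.
  return 1 / (1 + Math.pow(10, (opponent - self) / 400));
}

function newRating(self, opponent, actualScore) {
  // actualScore: 1 for a win, 0.5 for a draw, 0 for a loss.
  return Math.round(self + K * (actualScore - expectedScore(self, opponent)));
}

console.log(newRating(1600, 1300, 1)); // 1605, matching the arpad output
```

Note how the update is asymmetric: the favored player gains only a few points from a win, while an upset moves both ratings dramatically.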

Filed under Personal.

Cracks mar the surface
Of your pristine face;
Neglect and solitude
Left acrid distaste.
Fear, anger, loathing
Guiding every step. 
Like the Phoenix: rise above,
Or perish with regret.

One of the neatest things I ever wrote.

Filed under Linux.

I’ve been using Linux Mint 15 on my ThinkPad X1 Carbon since about when it came out. It’s been running pretty well, but I’ve finally gotten to the point where I’m doing more ./configure than apt-get install because the repository is so outdated.

Linux Mint 17 XFCE recently came out, and it’s a Long Term Support release (a concept I believe Mint is borrowing from Ubuntu). This means the repository should stay supported and reasonably up to date for about five years, instead of Mint 15’s roughly two.

At any rate, here is the procedure for getting Linux Mint 17 (or Debian or Ubuntu, really) onto the most recent kernel (3.15.3 as of this writing). The stock kernel in Mint 17 is 3.13.0; the kernel in my existing Mint 15 install is 3.8.0.

First, download the appropriate packages. I’m assuming you’re using a 64-bit machine (you have been for the last 4 years, right?), so the instructions reflect that. To adapt for 32-bit, browse around the download URLs until you find the corresponding 32-bit packages.

mkdir -p ~/Downloads/kernel && cd ~/Downloads/kernel
# download the linux-headers and linux-image .deb packages into this directory
sudo dpkg -i *.deb

And poof, you’ve got the latest kernel.

You may be wondering: why would I want to upgrade my kernel at all? Surely if Mint ships an older kernel, there must be a reason to use it? And you’re probably right. Each new kernel brings stability and security fixes, but it also brings new problems and potential version mismatches. Upgrade at your own risk.

I’m writing this post from within Mint 15. I’ve done these upgrades in a VM. I’ll update this post once my Thinkpad has been updated with any issues I find.

Filed under Linux, Node.js.

Normally when I host my Node.js-based applications, I’ll SSH into my server, open up a screen or tmux session, run node ./server.js, detach, and call it a day. Of course, if you’re reading this article, you’re fully aware that this is a horrible solution and are looking for an alternative.

One thing that is going to change between the hacky method of hosting and this new method is that it won’t be YOU that is executing the process, but instead the SYSTEM. So, we’ll be taking a few extra steps here and there to enforce that concept.

Process Management

For starters, we don’t want to execute Node directly (even once we set up our service). Instead, we want to run it behind some sort of supervisor that will keep the process alive in the unfortunate event of a crash. There are many tools for this, such as monit, PM2, or nodemon, but the one I’m most familiar with is called forever. Feel free to use an alternative if you’d like.

First, we’ll want to install forever on the server. Run the following command to take care of that:

sudo npm install -g forever

Once you’ve got forever installed, it’s a good idea to have it throw pid files and log files somewhere. I threw a directory into /var/run for just this purpose (although I’m not sure if this is technically the best place for such a thing):

sudo mkdir /var/run/forever

Application Location

If you’re used to storing your Node.js projects in your home directory (like I was…), you need to stop! Instead, store them somewhere which makes more sense as far as the entire server is concerned. The directory /var is pretty good for doing this, and if your application serves up HTML, throwing it in /var/www is probably a good idea.

I host a lot of applications and websites on my server, so I put each site in its own directory under /var/www/.

Run Time Configuration

There are a few changes you may want to make to your application to make it server-friendly. One thing I always find myself needing to do is pass a port number that I want my process to listen on. My server has a few IP addresses (network interfaces) and sometimes I’ll also need to pass in which interface I want to bind to.

A common solution for this is to pass command line arguments. A different approach that I’ve been liking lately is to set environment variables (environment variables are to CLI arguments what named parameters are to positional function arguments).

A quick note on Node.js server listening conventions: whether you’re using the built-in http module or a framework like express, the convention is that you call a .listen() method on the main http object you’re working with. The first argument is a port number, and the second argument is a hostname. If you don’t provide a hostname (or pass in null), it defaults to listening on all interfaces (i.e. ‘0.0.0.0’). If you pass in the string ‘localhost’ or ‘127.0.0.1’, the port can only be accessed from the local machine. If you pass in the IP address of one of your interfaces, it will only listen on that interface.

Here’s an example of how you might implement both of these methods in your scripts:

Command Line Arguments

./server.js 9000 "localhost"


#!/usr/bin/env node

var app = require('express')();

var port = parseInt(process.argv[2], 10) || 80;
var interface = process.argv[3] || null;

app.listen(port, interface);

Environment Variables

SERVER_PORT=9000 SERVER_IFACE="localhost" ./server.js


#!/usr/bin/env node

var app = require('express')();

var port = parseInt(process.env.SERVER_PORT, 10) || 80;
var interface = process.env.SERVER_IFACE || null;

app.listen(port, interface);

Debian Service

Now for the fun part! First, create yourself an empty init script, substituting the word SERVICE for the name you want to use for the service:

sudo touch /etc/init.d/SERVICE
sudo chmod a+x /etc/init.d/SERVICE
sudo update-rc.d SERVICE defaults

Once that’s done, paste the following simple service template into the file, swapping out SERVICE to whatever you’d like to use:


#!/bin/sh

export PATH=$PATH:/usr/local/bin
export NODE_PATH=$NODE_PATH:/usr/local/lib/node_modules
export SERVER_PORT=80
export SERVER_IFACE=''

case "$1" in
  start)
    exec forever --sourceDir=/var/www/SERVICE -p /var/run/forever start server.js
    ;;
  stop)
    exec forever stop --sourceDir=/var/www/SERVICE server.js
    ;;
esac

exit 0

This script went with the environment variable method of configuration. Since we’re dealing with a bash script, I threw the variables at the top of the script instead of on the same line as the command we executed for added readability. Of course, if you adopted the command line argument method, omit the two export lines and add your arguments to the end of your command.

If you’d like to start (or even stop) the service, you can run the same old commands that you’re likely used to:

sudo service SERVICE start
sudo service SERVICE stop

Consider the Following

Now that you’ve got everything set up, your service should be able to survive a reboot of your machine! Go ahead and run sudo init 6 right now just to be sure. Just kidding.

If you ever want a list of your currently running applications, run sudo forever list. Read up on the forever documentation to see what else you can do (hint: log reading).

That Debian service script we wrote is a bit lacking! If you check out the contents of /etc/init.d/skeleton, you can get an idea of a more robust script.

Filed under Personal.

The posts to this blog may have subsided, but my exuberance (and GitHub commits) have not.

Working on Book #2

My next book is titled A Consumer-Centric Approach to RESTful API Design and, unlike my first book, will be self-published. I’m a lot more excited about this book than the previous one, Backbone.js Application Development. With the first book, the publisher approached me with a topic, a table of contents, and a title, and asked me to do the rest. Backbone.js, despite being a technology that I used at the YC startup I co-founded and at my day job, just wasn’t something I was passionate about. This next book regarding API design is something I’ve been really excited about for a while.

I feel much more confident in my ability to market this book, as well as in the quality of the content. My technical reviewers are well known in the industry and are helping me create an awesome book aimed at the typical web developer who is interested in building an easy-to-consume API. The target audience is web developers with at least a year’s experience.

Finch App

A bunch of friends and I competed in Ann Arbor Startup Weekend 2014. The project we built is called Finch, and while we’re still solidifying what the app will do, it’s essentially an event image aggregator. We’ve got an iOS app, a (mobile-friendly) website, some websockets, and an API. It’s still under heavy development, but we were able to pull a lot off in one weekend.

My role in the project has been a bit different than what I’ve done in the past. I’ve been a programmer for over 8 years now, but for this project I took on more of a Product Manager role, acting as a liaison between teams as well as doing some mentoring and architectural decision making. I must admit, doing people-interaction work has been a lot of fun. At my day job, I’ve been slowly transitioning from hardcore programming to people interaction by becoming a Developer Advocate, and I can see this being a pivotal point in my career path.

Left my Day Job

I actually put in my notice over a month ago and have been doing a lot of work on the side during this sabbatical. Of course I’ve been working on the book as well as Finch. I’ve also been working a lot with some technologies I’ve wanted to get more proficient with, such as PostgreSQL and MongoDB. I’ve used MongoDB with previous projects, but I’ve been wanting to get more experience. And I keep hearing good news regarding PgSQL vs MySQL, so it’s been fun to learn another SQL dialect.

Moving to San Francisco

And finally, the pièce de résistance: I’m moving to San Francisco, CA. The reasons for this are plentiful: SF is the mecca for our industry; Ann Arbor, MI is currently 17 degrees and buried under a foot of snow; there are considerably more opportunities out west; and I’m currently a mere 2 hours from my hometown.

The part that’s been fun explaining to my family and friends is that I don’t have a job lined up. I’ve got plenty of savings though, and a buddy of mine accepted a position at a company which provides temporary housing, so we’ll have a month to hunt for a real apartment and I’ll be living rent-free during that time. There are actually going to be five of us in total making the move from A2 to SF (turns out none of us like the snow that much).

Filed under APIs.

Some friends and I are working on a project called CodePlanet.IO. It’ll be a high-quality tutorials website, and we plan on eventually releasing screencasts of full-stack development using various web-related technologies.

Our first big post is an article of mine on the Principles of good RESTful API Design. You might not have realized it, but I’ve been working as a Developer Advocate / API Architect for the last couple months at my current employer.

Check the site out and be sure to keep an eye out for our upcoming screencasts!

Discuss on Reddit or Hacker News.