Filed under NoSQL.

$ brew install rethinkdb
==> Downloading http://download.rethinkdb.com/dist/rethinkdb-1.5.0.tgz
Already downloaded: /Library/Caches/Homebrew/rethinkdb-1.5.0.tgz
==> ./configure --prefix=/usr/local/Cellar/rethinkdb/1.5.0 --fetch protobuf --fetch protoc
==> make
make[1]: *** [build/release_clang_notcmalloc/rethinkdb_web_assets/js/reql_docs.json] Error 1
make[1]: *** Deleting file `build/release_clang_notcmalloc/rethinkdb_web_assets/js/reql_docs.json'
make[1]: *** Waiting for unfinished jobs....
make[1]: unlink: build/release_clang_notcmalloc/rethinkdb_web_assets/.: Invalid argument
make: *** [make] Error 2

READ THIS: https://github.com/mxcl/homebrew/wiki/troubleshooting

The only thing Google brings up is a pastebin from someone else who hit the same problem, with no way to contact them. So, I’m putting this message here for visibility. If anyone knows how to fix it, please say so in the comments ;).

UPDATE: The bug has been fixed in 1.5.1. Just do the following and you’ll be good to go:

$ brew update && brew install rethinkdb

Filed under Linux.

After moving to my new apartment, it was time to dust off the old Linksys router I had lying around. This thing has been hacked to run the latest DD-WRT that it could handle.

My network address changes occasionally, and I didn’t want to set up any dyndns accounts to keep track of the IP and have it resolve to a hostname. Honestly, just being able to get the last IP address is good enough for me.

So, I came up with a small script that runs on one of my websites and listens for HTTP requests. When it gets one, it simply logs the requesting IP to a file and spits it back out to the client.

Then, whenever I want the home network’s IP address, I just hit another URL. My router requests the ping script every hour.

Configuration

Open your DD-WRT settings, go to Administration | Management, and scroll down until you see the Cron section. Add the following rule to have your router fetch the file every hour:

*/60 * * * * root wget http://example.com/ping.php
[Screenshot: the DD-WRT Administration page with the cron entry in place]

ping.php

<?php
// Record the requesting client's IP address and echo it back
$ip = $_SERVER['REMOTE_ADDR'];
$handle = fopen("./ip.txt", 'w');
fwrite($handle, $ip);
fclose($handle);
echo $ip;

pong.php

<?php
// Return the last IP address that ping.php recorded
echo file_get_contents("ip.txt");

Setup

touch ip.txt
chmod a+w ip.txt

Obtaining IP

Simply browse to http://example.com/pong.php to get the last known IP address.
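
If you’d rather grab it from a terminal (assuming curl is installed), a one-liner does the same thing:

curl http://example.com/pong.php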

Filed under PHP.

<?php
/**
 * This class will safely parse complex objects or arrays with possible missing keys
 *
 * Usage: obj::query($obj, 'dot.separated.syntax');
 */
class obj {
    /**
     * Parse the provided object
     *
     * @param mixed  $object The complex object you're going to parse
     * @param string $path   The dot-separated path you would like to query the object with
     */
    public static function query($object, $path) {
        $paths = explode('.', $path);
        return self::recurse($object, $paths);
    }

    /**
     * The function that does the real work
     *
     * @param mixed $object
     * @param array $paths
     */
    protected static function recurse($object, $paths) {
        if (!$object) {
            return null;
        }
        if (!is_array($object) && !is_object($object)) {
            return $object;
        }

        if (empty($paths)) {
            // Path fully consumed; return whatever node we landed on
            return $object;
        }

        $newPath = array_shift($paths);

        if (is_array($object) && isset($object[$newPath])) {
            return self::recurse($object[$newPath], $paths);
        } else if (is_object($object) && isset($object->$newPath)) {
            return self::recurse($object->$newPath, $paths);
        } else {
            return null;
        }
    }
}

$data = '{
  "x": {
    "y": true,
    "z": null,
    "w": false,
    "l": "banana",
    "a": {
      "b": {
        "c": "d",
        "d": "e"
      }
    }
  }
}';

$complexArray = json_decode($data, true);
$complexObject = json_decode($data);
$complexMixed = array(
    array(
        'x' => json_decode('{"name": "so complex"}')
    )
);

echo "Should be banana: ";
var_dump(obj::query($complexArray, 'x.l'));

echo "Should be 'e': ";
var_dump(obj::query($complexArray, 'x.a.b.d'));

echo "Should be NULL: ";
var_dump(obj::query($complexArray, 'a.b.c.d.e.f.g'));

echo "Should be TRUE: ";
var_dump(obj::query($complexObject, 'x.y'));

echo "Should be 'so complex': ";
var_dump(obj::query($complexMixed, '0.x.name'));

Filed under PHP, Web Server.

Not too long ago I took a trip out to California to see my sister and her husband. While there, I set him up with a WordPress site so that he could sell baseball cards and do box breaks. The site, if you’re interested, is SupremeBoxBreaks.com.

Due to RAM restrictions on various servers I’ve had to use, I learned to axe Apache a long time ago. I’ve replaced it with lighttpd, although I’ll probably be transitioning over to nginx sooner or later (it’s what we use at work, and it seems to be even lighter in the memory consumption department). Therefore, all of the PHP sites on my web server, several of which are WordPress-based, sit behind lighttpd.

For his website, which sells products from a limited inventory, I chose to install Woocommerce. I’ve used other Woo WordPress products before, and it looks like one of the best WordPress eCommerce solutions. Unfortunately, Woocommerce doesn’t work all that well with lighttpd, or more specifically, with lighttpd using the server.error-handler-404 configuration for URL routing. If you google lighttpd WordPress configuration, this is the most commonly recommended method for handling dynamic URLs.

The problem is that when lighttpd has the server.error-handler-404 in place for grabbing URLs, the GET parameters on the original request are NOT passed along to the index.php file. One could go to the root of the website and add a GET parameter and it would work fine, e.g. example.com/?a=b, but as soon as a page was requested which doesn’t exist, the GET parameter would be lost, e.g. example.com/store?a=b.

The solution for this problem isn’t complex by any means. If you inspect the $_SERVER variable on a request that was losing its GET parameters, you can see they’re still available in $_SERVER['REQUEST_URI']. So all we have to do is grab the portion of the URI after the first question mark, parse the variables, and replace the global $_GET array. The following code, when added to the top of the main index.php file, will solve this issue:

$question_pos = strpos($_SERVER['REQUEST_URI'], '?');
if ($question_pos !== false) {
        $question_pos++; // don't want the ?
        $query = substr($_SERVER['REQUEST_URI'], $question_pos);
        parse_str($query, $_GET);
}

Also, here’s the lighttpd.conf settings that are recommended for using WordPress with lighttpd:

$HTTP["host"] =~ "(^|\.)example\.com$" {
        server.document-root = "/var/www/example.com"
        server.errorlog = "/var/log/lighttpd/example.com/error.log"
        accesslog.filename = "/var/log/lighttpd/example.com/access.log"
        server.error-handler-404 = "/index.php?error=404"
}

Tracking down the source of the problem in Woocommerce was pretty difficult. It wouldn’t allow items to be removed, and it couldn’t add items while viewing an item page, although it would allow an item to be added while viewing a listing of items. In other words, it sometimes worked and sometimes didn’t.

The root of the problem here is two-fold. First, lighttpd doesn’t pass GET parameters along with the error handler directive. Second, Woocommerce should not be using GET parameters for persisting changes to the server. A GET request is intended to be used for just that, getting information from a server. A POST request is intended for sending changes to the server. While talking with Woo tech support, one of the things they kept asking me was whether my host was caching requests. I said no; since it’s a VPS I’m in control of the caching, and that domain has none. If Woocommerce were to switch over to using POST requests for persisting user cart changes, it would save their customers from having these caching issues (POST requests are never cached), and would have the side effect of allowing lighttpd to work without this code change.

There is a big shortcoming with this solution: when the administrator of the website updates WordPress, the change to index.php can be overwritten. A better method to inject this code would be to write a WordPress plugin and ensure that it is executed before the Woocommerce code is run. An even better solution would be more complex lighttpd rules with regular expressions to capture requests and route them all accordingly, without the need for the server.error-handler-404 code, but I don’t know lighttpd configuration well enough to come up with one. Rough sketches of both approaches follow.
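
For what it’s worth, here’s a sketch of the plugin approach. Any file dropped into wp-content/mu-plugins/ is loaded by WordPress automatically, before regular plugins like Woocommerce, so the same fix survives core updates. This is untested and the file name is arbitrary:

<?php
/*
Plugin Name: Restore GET parameters lost behind lighttpd's 404 handler (sketch)
*/

// Same logic as the index.php fix above, just loaded as a must-use plugin
// so it runs before Woocommerce and isn't overwritten by WordPress updates.
$question_pos = strpos($_SERVER['REQUEST_URI'], '?');
if ($question_pos !== false) {
        $query = substr($_SERVER['REQUEST_URI'], $question_pos + 1); // skip the ?
        parse_str($query, $_GET);
}

And here’s roughly what the rewrite-based lighttpd configuration might look like, using mod_rewrite’s url.rewrite-if-not-file so that requests for files that don’t exist go to index.php with their query string intact. I haven’t tested this against a live WordPress install, so treat it as a starting point rather than a drop-in replacement:

server.modules += ( "mod_rewrite" )

$HTTP["host"] =~ "(^|\.)example\.com$" {
        server.document-root = "/var/www/example.com"

        # Send requests for files that don't exist to index.php,
        # preserving the original query string
        url.rewrite-if-not-file = (
                "^/(.*)\?(.*)$" => "/index.php?$2",
                "^/(.*)$" => "/index.php"
        )
}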

Filed under NoSQL.

These are my notes for the talk I’m giving today on PHP and MongoDB.

Example PHP script for communicating with MongoDB:

#!/usr/bin/env php
<?php
// Instantiate the Mongo client
$m = new MongoClient();

// Connect to a database. If it doesn't exist, it will be created
$db = $m->example;

// Point to a collection within the db. If it doesn't exist, yup.
$people_collection = $db->people;

// our first person.
$tom = array(
	'name' => 'Thomas Hunter',
	'age' => 27,
	'enjoys' => array(
		'beaches',
		'music',
		'blueberries'
	)
);

// add person to collection
$people_collection->insert($tom);

// our second person. notice the different structure
$amanda = array(
	'name' => 'Amanda',
	'age' => 31,
	'hates' => array(
		'coffee'
	),
	'enjoys' => array(
		'music',
		'kittens'
	)
);

// lets add her as well
$people_collection->insert($amanda);

// find() with no argument is basically a SELECT *
$people = $people_collection->find();

// Iterate over our peeps
foreach($people AS $person) {
	// I'm assuming everyone has a name and age
	echo "{$person['name']} is {$person['age']} years old.\n";

	// They might not enjoy anything
	if (isset($person['enjoys'])) {
		echo "Enjoys:\n";
		foreach($person['enjoys'] AS $enjoy) {
			echo "* $enjoy\n";
		}
	}

	// They might not hate anything
	if (isset($person['hates'])) {
		echo "Hates:\n";
		foreach($person['hates'] AS $hate) {
			echo "* $hate\n";
		}
	}
}

// DELETE ALL THE THINGS
$people_collection->remove();

Notes:

# MongoDB + PHP
Who needs an ORM when we can just throw our objects straight into the database?

## MongoDB vs MySQL
* MongoDB is a schemaless, "document" storage system.
* MongoDB is queried using a JSON superset / JS subset syntax
* MySQL is a schema'd, relational database management system
* MySQL is queried using a SQL dialect
* "Translation" between SQL and Mongo:
 * http://docs.mongodb.org/manual/reference/sql-comparison/
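
For a taste of that mapping, here's a rough sketch (not part of the original notes) of one SQL query next to its MongoDB equivalent, using the PHP driver and the `$people_collection` from the script above:

	// SQL:   SELECT * FROM people WHERE age > 25 ORDER BY age DESC LIMIT 10
	// Mongo: find() returns a cursor, so sort() and limit() chain onto it
	$cursor = $people_collection
		->find(array('age' => array('$gt' => 25)))
		->sort(array('age' => -1))
		->limit(10);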

## Install Mongo
* OS X
 * `brew update && brew install mongodb`
* LINUX
 * http://docs.mongodb.org/manual/installation/

## Install PHP Mongo Client
* http://www.php.net/manual/en/mongo.installation.php
* *NIX
 * sudo pecl install mongo

## Using the CLI Interface
By default there are no database credentials, and the server only listens on localhost.

	$ mongo							# Connect
	> show dbs						# Get list of databases
	> use DB_NAME					# Pick a DB to work with
	> show collections				# Get a list of collections (tables)
	> db.COLLECTION.find()			# Get items in that collection (SELECT * FROM table)
	> db.COLLECTION.insert({"name": "steve", "age": 28}); # Insert
	> db.COLLECTION.remove({"_id": ObjectId("518e654c8f9196b5abf973e3")}); # Delete by _id

## Why use MySQL?
* Your data fits the relational database paradigm
* You need guaranteed data storage
* You know how to use MySQL

## Why use MongoDB?
* Your schema changes frequently
* You work with tons of JOINs for small pieces of data (topics, categories)
* You want super fast writes, might not care about a few missing records

Filed under Uncategorized.

If you know me, you know that I’m not a big fan of recruiters. Particularly, recruiters who take the shotgun approach to finding candidates by sending the same copied-and-pasted email to hundreds of potential applicants. I know that these are copied-and-pasted, because my various email accounts will get the exact same email sent minutes apart.

[Screenshots: two identical recruiter emails sent to different addresses, minutes apart]

I was getting some more recruiter spam today, so I asked the recruiter where he had gotten my information (since I had deleted my LinkedIn account a week earlier). He sent me a screenshot of this page on Dice:

[Screenshot: the Dice profile page aggregating my information from other sites]

Dice used to be a huge name in the hiring market. And I suppose it still is (look at all those tabs!) but it has since fallen in popularity thanks to LinkedIn. Anyway, it turns out Dice scrapes popular social media networks, runs some heuristics on them, and figures out which profiles on various sites belong together (many of the links between sites could have been determined based on what I had entered into the sites, but not all). Most of the information above came from my LinkedIn profile, and was cached (i.e., scraped and stored in their database) since the LinkedIn profile no longer exists.

I’m fine with people being able to find my information on various sites, but not really a big fan of Dice tying all this stuff together (also, not sure whose MySpace profile my account is linked to on Dice…). I figured the next thing to do would be to ask Dice to delete this page about me. I sure didn’t ask Dice to aggregate this data on my behalf.

[Screenshot: my request to Dice to remove the page]

It’s almost creepy if you think about it: sites cyber-stalking you and aggregating it all in one place. What if this site had information about forum posts and buying habits and dating site profiles of mine?

I haven’t heard back from Dice yet, but we’ll see.

Filed under Personal.

Like a lot of people, I’ve got my fair share of fears. A lot of them make sense; they directly relate to self-preservation and not wanting to die. Some of them are completely unwarranted, and avoiding them has had a direct negative impact on my life.

Not too long ago, I started conquering many of them head-on. At times it can be very difficult; my blood pressure can go through the roof. Other times, I find that I’m not afraid at all right before taking one on. Which way it goes really depends on perception and situational variables.

Fear of Public Speaking

Public speaking is something I’ve absolutely dreaded for most of my life. There’s something about standing in front of an audience that causes me to forget my lines, turn red in the face, and start mumbling like an idiot. In high school, if I was freaked out enough, I used to lie to the teacher and tell her that I hadn’t done my speech homework when it was my turn to talk.

To conquer this one, I gave a talk at Ann Arbor New Tech Meetup on NeoInvoice, a project I had built a few years ago. The talk ran 10 or 15 minutes and was in front of a large auditorium with over 100 people watching. This was the biggest talk I had ever given. Around the same time, I started a meetup of my own, the Ann Arbor PHP MySQL Meetup. Sessions were a bit smaller, and talks weren’t formal at all, but I ran it for several months and must have talked dozens of times. I’ve also guest talked at an Ann Arbor Coffee House Coders Meetup, and most recently, gave a talk on the JavaScript Event Loop at Penguicon 2013.

While I do sometimes get a little nervous before a talk, I’m miles ahead of where I used to be. If you have a fear of public speaking, look for a local Meetup and offer to give a talk, or join your local SmoothTalkers Toastmasters guild.

Fear of Heights

Heights, or more specifically, falling to my death, is another big one. I remember going to some tall tower in Vegas where you could stand on a floor made of glass which was a hundred stories above the ground. I sort of froze in the center where the floor was solid. This fear mostly manifests itself for me in the form of flying in an airplane, where turbulence drives me nuts.

[Photo: standing at a railing on a mountain path in Sequoia National Park]

To fix this one, I fly and travel as much as possible. I just got back from the TechCrunch Disrupt convention in New York, before that I was visiting family in California, and this September I’m going to do the ultimate and hop on a many-hour flight to Ireland. A friend of mine is a pilot, and I started asking him questions about how planes work. Did you know that if the engines fail, the plane can still glide to the ground and land? I also climbed (took the stairs) a path up a small mountain in Sequoia National Park (pictured above) and had the chutzpah to lean over the railing!

Initiating Conversations

Here’s a weird one; I’m not that good at starting conversations, especially with a group of people who already know each other. This one isn’t so much a fear as something I just avoid doing. I have no idea where it stems from, nor whether others have it too.

This one was pretty fun to fix. While at the TechCrunch Disrupt conference, it was literally my job to talk to as many people as possible and sell the product as best as I could, both to people as potential users of the service and to companies as potential consumers of the API. And I did a damn good job. Each day, I would cycle through all of the presenting booths, talk to people to learn about their company, then pitch mine, as well as manning the booth for several hours in between. If their service stored media on behalf of users, I would explain how our product could save them money. If they weren’t a good fit, I’d just tell them how they could get free GBs for personal use.

The funny thing is, every single time I’d start a conversation, I led with “How’s it going?” It’s amazing what those three little words can accomplish, even when talking to women at the after-parties.

Epiphany

All these experiences have led me to this epiphany: Conquering my fears will make me a better person, allow me to understand myself better, and hell, it’s a lot of fun too (especially right after it happens and I realize nothing bad happened).

Do yourself a favor; if you have any fears, don’t let them control your life! If it is possible, jump directly into a scary situation. Also, do research on the things that scare you. Most of what we fear is really the unknown. Afraid of flying? Learn how to fly a plane. Afraid of spiders? Spend hours studying them and watching videos.

Filed under JavaScript, Node.js.

I gave a talk this morning on the JavaScript Event Loop at Penguicon 2013. Even though I had used JavaScript for several years, I didn’t completely comprehend how the Event Loop works until a few months ago. When the opportunity came to present at Penguicon, I figured this was as good of a topic as any. You can download the presentation below (or view it in your browser), and I’ll throw all the individual slides and the gist of what I said about them on this page.

Download as a Keynote, Powerpoint, or HTML presentation.

Slide 1/14: Introduction

The JavaScript Event Loop: Introduction

Slide 2/14: Credibility

The JavaScript Event Loop: Thomas Hunter Credibility

I’ve been a web developer for a while, starting at some smaller mom and pop shops (not listed), then a couple of Fortune 50s, before finally ending up at smaller and smaller (and quicker and more advanced) companies. For most of that time I was doing procedural PHP and MySQL programming, before eventually moving to mostly JavaScript (both frontend and backend).

I’m currently working with Packt to publish a book on Backbone.js (a frontend JavaScript framework for building Single Page Applications). Be sure to keep an eye out for it and purchase several copies, even if you don’t intend on reading them.

Slide 3/14: MultiThreaded

The JavaScript Event Loop: MultiThreaded

Let me begin the presentation by talking about something mostly unrelated to JavaScript: MultiThreaded programming. If an application is built to be MultiThreaded, it will make use of several of your CPU cores simultaneously. This means it can do number crunching in different places at the same time, and we refer to this as Concurrency. An application built in this manner can be a single process within the Operating System. The Operating System itself usually gets to choose which cores an application will run on (even which core a single threaded application will run on).

One way to fake MultiThreaded-ness in SingleThreaded languages is to simply run several different processes and have them communicate with each other.

For the longest time, CPUs were getting faster and faster, but then clock speeds hit a wall and we can no longer count on a single core getting much quicker. So, to make hardware faster, we now throw more CPU cores at the computer. In order to truly scale and use the hardware to its fullest, one needs to build applications which make use of all CPU cores.

MultiThreading isn’t all butterflies and puppy tails though. There can be some big issues with this type of code, particularly Deadlocks and Race Conditions. One example: an application is running on two separate threads, both threads read a variable from memory at the same time, and both attempt to update the value by adding 2 to it. If the existing value is 10 and thread A adds 2, it does so by writing 12 to the memory location. If thread B also wants to add 2, it still thinks the value is 10, and also writes 12. The programmer would expect 14 but ends up with 12, and there are no errors. This type of bug can be very hard to track down, and the worst part is that it will happen in an unpredictable way.

Slide 4/14: SingleThreaded

The JavaScript Event Loop: SingleThreaded

Now that you know what MultiThreaded means, let’s talk about how JavaScript is not MultiThreaded. A JavaScript engine exists in a single OS process and consumes a single thread. This means that when your application is running, CPU execution is never performed in parallel. By running the JavaScript engine this way, it is impossible for users to get the Deadlocks and Race Conditions which plague MultiThreaded applications.

Developers often refer to their callbacks running in an unexpected order as a Race Condition; however, it is not the same problem that plagues MultiThreaded applications, and it can usually be tracked down and solved easily enough (e.g., by nesting another callback, as in the sketch below).
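
Here’s roughly what I mean; getUser() and getPosts() are hypothetical asynchronous functions that each take a callback:

// The completion order of these two calls isn't guaranteed; whichever
// I/O operation finishes first gets its callback run first.
getUser(function (user) { console.log('got user'); });
getPosts(function (posts) { console.log('got posts'); });

// If the order matters, nest the callbacks so the second call only
// begins once the first one has finished.
getUser(function (user) {
  getPosts(function (posts) {
    console.log('got user, then posts');
  });
});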

Slide 5/14: Implementation

The JavaScript Event Loop: Implementation

There are three important features of a JavaScript engine that deserve mention. These are the Stack, the Heap, and the Queue. Now, different browsers have different JavaScript engines (e.g. Chrome has V8, Firefox has SpiderMonkey, and IE has something written in BASIC called Chakra (just kidding!)) and each browser will implement these features differently, but this explanation should work for all of them.

Heap: The simplest part of this is the Heap. This is a bunch of memory where your objects live (e.g. variables and functions and all those things you instantiate). In the presentation I refer to this as Chaotic, only because the order doesn’t really matter and there’s no guarantee with how they will live. In this heap, different browsers will perform different optimizations, e.g., if an object is duplicated many times, it may only exist in memory once, until a change needs to happen, at which point the object is copied.

Stack: This is where the currently running functions get added. If function A() runs function B(), well you’re two levels deep in the stack. Each time one of these functions is added to the stack, it is called a frame. These frames contain pointers to the functions in the heap, as well as the objects available to the function depending on its current scope, and of course the arguments to the function itself. Different JavaScript engines likely have different maximum stack sizes, and unless you have a runaway recursive function, you’ve probably never hit this limit. Once a function call is complete, it gets removed from the stack. Once the stack is empty, we’re ready for the next item in the Queue.

Queue: This is where function calls which are queued up for the future go. If you perform a setTimeout(function() { console.log('hi'); }, 10);, that anonymous function is living in the next available queue slot. No items in the queue will be run until the current stack is complete. So, if you have some work that might be slow that you want to run after you get your data, try a setTimeout() with a delay of 0ms. Future items which rely on I/O to complete, or a long running timer, are somehow in that queue as well, although I’m not exactly sure how that is implemented.
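
As a tiny illustration of that setTimeout() trick (renderData() and buildExpensiveReport() are made-up function names):

// Runs right away, in the current stack
renderData(data);

// Deferred: this goes into the queue and only runs after the current
// stack has completely finished, so it doesn't hold anything else up
setTimeout(function () {
  buildExpensiveReport(data);
}, 0);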

It’s worth mentioning Garbage Collection here as well. In JavaScript it’s easy to create tons of objects all willy nilly like. These get added to the Heap. But, once there is no scope remaining that needs those objects, it’s safe to throw them away. JavaScript can keep an eye on the current stack and the items in the Queue, and see what objects in the Heap are being pointed to. If an object no longer has pointers to it, it is safe to assume that object can be thrown away. If you aren’t careful with how you manage your code, it’s easy to not have those pointers disappear, and we call this wasted memory a Memory Leak.
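
A trivial sketch of how a leak creeps in (nothing here is from the slides):

var log = [];

setInterval(function () {
  // Every entry stays reachable through the global `log` array, so the
  // garbage collector can never reclaim it and memory grows forever.
  log.push({ when: Date.now(), data: new Array(10000).join('x') });
}, 100);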

Slide 6/14: Implementation Example

The JavaScript Event Loop: Implementation Example

This code-run is an example of the previous slide. So, the very first thing that happens is that function a() and b() are “hoisted” to the top of the script, and are added to the heap. We then run the first message log “Adding code to the queue” in the current stack. After that we run a setTimeout, and the anonymous function in there is added to the Queue. Then we do another log, and run the a() function with an argument of 42. We are now one level deep in the stack, and that frame knows about the a() function, the b() function, and its argument of 42. Within a() we run b(), and we are now two levels deep in our stack. We print more messages, leave b(), leave a(), and print a final message. At that point, our stack is empty and we’ve run all of our code, and are now ready for the next item in the queue.
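
The slide’s code isn’t reproduced in this post, so here’s a rough reconstruction based on the description above (the exact log messages, apart from the two quoted ones, are guesses):

console.log('Adding code to the queue');

setTimeout(function () {
  console.log('Running next code from queue');
}, 0);

console.log('Calling a()');
a(42);
console.log('End of the current stack');

// Declarations are hoisted, so a() and b() exist before the calls above run
function a(x) {
  console.log('inside a(), argument is ' + x);
  b();
}

function b() {
  console.log('inside b()');
}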

Once we’re in the next queue item, we run the anonymous function (which exists in the Heap somewhere), and display our message.

At first glance, one might assume the message “Running next code from queue” could have been run earlier, perhaps after the first message. If this were a MultiThreaded application, that message could have been run at any point in time, randomly placed between any of the outputted messages. But, since this is JavaScript, it is guaranteed to run after the current stack has completed.

Slide 7/14: Sleeping

The JavaScript Event Loop: Sleeping

I come from a background in writing PHP/MySQL applications. When a PHP script runs, it performs a bunch of work, and then probably runs a MySQL query. Once that call is made to the external server, the application falls asleep. It literally halts everything it is doing and waits for a response from the database server. Once the result comes back, it does some further processing, and then it might perform another I/O function, such as calling an RSS feed. And, as you might guess, it falls asleep again.

Now, what if the call to the RSS feed doesn’t require any of the data we gain from the database call? Then the order of the two calls might not have mattered. But, more importantly, the two calls could have been run simultaneously! The application is as slow as the two calls combined, instead of being as slow as the slowest of the two.

Node.js does something pretty cool, where every I/O request it makes is a non-blocking call. This means the current stack can end, and the callback gets called later, from the queue. If we’re performing a bunch of I/O operations, they can be run in parallel. The application will still sleep, but it won’t be blocking.
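
A minimal Node.js sketch of that idea (the file names are placeholders): both reads are started immediately, neither blocks the other, and each callback fires from the queue whenever its data is ready.

var fs = require('fs');

fs.readFile('users.json', function (err, users) {
  console.log('users loaded');
});

fs.readFile('feed.xml', function (err, feed) {
  console.log('feed loaded');
});

console.log('both reads have been started');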

The web browser is the same. Most of the time it is doing nothing, perhaps waiting for a user to click on something, or waiting for an AJAX request to finish up.

Slide 8/14: Sequential vs Parallel I/O

The JavaScript Event Loop: Sequential vs Parallel I/O

This is a great graphic I adapted from the CodeSchool Real-Time Web with Node.js course. It shows how sequential I/O operations compare to parallel I/O. The sequential graph represents calls made in a more traditional language such as PHP, whereas the parallel graph represents calls made in an Event Loop driven language with non-blocking I/O, or even in MultiThreaded applications. Notice that the application is only as slow as the slowest I/O operation, instead of as slow as all I/O operations combined.

Slide 9/14: Other Language Event Loops

The JavaScript Event Loop: Other Language Event Loops

JavaScript isn’t the only language that can have an Event Loop. They can be implemented in the more traditional procedural languages as well. However, by having it built into the language, it’ll surely be quicker and have a nicer syntax.

Also, when an event loop is bolted onto one of those languages, you lose the benefits as soon as any of your I/O blocks, so you’ll have to be careful with which libraries you choose.

Some examples of Event Loops in other languages include Ruby’s EventMachine, Python’s Twisted and Tornado, and PHP’s ReactPHP.

Slide 10/14: Other Language Event Loop Example

The JavaScript Event Loop: Other Language Event Loop Example

Here’s an apples to oranges comparison of the Event Loop working in Node.js to perform a simple TCP echo example, and the (I’m assuming) same application working in Ruby’s EventMachine. I took the Node example from the homepage of nodejs.org, and the EventMachine example from their GitHub readme. They’ve been altered slightly to use the same text and hopefully perform the same function (I honestly don’t know Ruby though).
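
For reference, the Node half was roughly the TCP echo server from the nodejs.org homepage at the time (reproduced here from memory, not from the slide):

var net = require('net');

var server = net.createServer(function (socket) {
  socket.write('Echo server\r\n');
  socket.pipe(socket);
});

server.listen(1337, '127.0.0.1');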

Notice that the syntax for JavaScript is less terse.

Slide 11/14: Event Loops are Awesome

The JavaScript Event Loop: Event Loops are Awesome

There you have it folks, Event Loops are awesome. They don’t have the race conditions or deadlock issues that MultiThreaded applications have. Most web applications waste time waiting on I/O, and this is a good way around it. There is no special syntax for it to work in JavaScript; it is built in. It’s pretty easy to build stateful web applications (whereas if this were PHP you’d need a database to store shared data, in JS you could just use a local variable).
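
For example, a hit counter in Node.js needs nothing more than a local variable, since a single long-running process sees every request (a minimal sketch; the port number is arbitrary):

var http = require('http');
var hits = 0;

http.createServer(function (req, res) {
  hits++; // shared state lives right here in the process
  res.end('This server has handled ' + hits + ' requests\n');
}).listen(8000);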

Slide 12/14: Event Loops aren’t Awesome

The JavaScript Event Loop: Event Loops aren't Awesome

There you have it folks, Event Loops aren’t awesome. If you perform a bunch of CPU intensive work, it will block your process and only use one core. Unless, of course, you use Node.js and offload work to another process. Or, if you’re in a browser, read the next slide. Memory leaks are also possible, as you’re running an application for a long time instead of temporarily. Unless, of course, you program cleanly and are able to avoid those issues.
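
Here’s a rough sketch of that “offload work to another process” escape hatch in Node.js, using child_process.fork(); heavy.js is a hypothetical script that does the number crunching and reports back with process.send():

var child_process = require('child_process');

var worker = child_process.fork('heavy.js');

worker.on('message', function (result) {
  console.log('result from the child process:', result);
});

// The main event loop stays free while the child process grinds away
worker.send({ upTo: 10000000 });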

Slide 13/14: Web Workers

The JavaScript Event Loop: Web Workers

Well, now that I spent this whole time telling you how JavaScript is a SingleThreaded application and you can’t make use of multiple cores, I’ll apologize for being a liar. The core of JavaScript is single threaded, and it’s been that way for many years. However, there’s this cool new thing that came out in the last few years called Web Workers. It will allow your browser (doesn’t exist in Node) to offload work to a separate thread. This feature is available in every modern web browser, so feel free to offload your work today.

How it works is you create a script, and throw some specifically formatted code in there. The main script loads it with var worker = new Worker('task.js');, where task.js is an existing JavaScript file. You also attach a bunch of event handlers to the created worker object, and interact with the worker that way. The script will run in its own instance of the JavaScript engine, and cannot share memory with the main thread (which has the nice side effect of preventing those race conditions).

When you want to pass information to and from the worker, you use something called message passing. This allows you to pass simple JSON objects around, but not complex objects that contain functions or anything referencing the DOM. A great use-case for Web Workers would be calculating a SHA1 hash or performing some map/reduce computations. Basically, anything that involves a ton of number crunching and isn’t all DOM operations.
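
Putting those pieces together, here’s a bare-bones sketch; the string reversal stands in for real number crunching like a SHA1 hash:

// main.js
var worker = new Worker('task.js');

worker.onmessage = function (e) {
  console.log('result from worker:', e.data);
};

worker.postMessage('some text to crunch');

// task.js -- runs on its own thread, with no DOM access
self.onmessage = function (e) {
  var result = e.data.split('').reverse().join('');
  self.postMessage(result);
};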

Slide 14/14: Conclusion

The JavaScript Event Loop: Conclusion

There you have it, the JavaScript Event Loop. It is great for I/O bound applications, and horrible for CPU bound applications. Many people think the engine is MultiThreaded, or at least that it can do things in parallel. Turns out it can do I/O in parallel, but not CPU computations (unless using a separate process with Node.js or a Web Worker in the browser).