Posted by & filed under Linux.

Use this script to take a photo using the webcam on your computer. It requires mplayer to be installed. When it takes pictures, it takes 20 of them and deletes the first 19. I have to do this because my netbook has a really crappy camera, and it takes a while to adjust to the light and focus. You might be able to get away with a lower setting, so tweak the number until something looks good.

/home/USERNAME/webcam/grab.sh:

#!/bin/bash

# mplayer writes the numbered frames to the current directory
# (cron runs this script from $HOME, which is why the paths below work)

# Take 20 frames' worth of pictures
mplayer -vo png -frames 20 tv://

# Move the 20th frame to the webcam directory, named with the current time
mv /home/USERNAME/00000020.png "/home/USERNAME/webcam/$(date +"%Y-%m-%d_%H:%M:%S").png"

# Delete the first nineteen frames
rm /home/USERNAME/000000*.png

I wouldn’t be surprised if mplayer has some option to skip those warm-up frames, or to specify an output location so you don’t have to run this awkward delete, but I wasn’t able to figure it out.
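
In the meantime, one way to keep the stray frames out of your home directory entirely is to do the capture in a throwaway directory. Here’s a rough, untested sketch of that variant (same USERNAME placeholder as above):

#!/bin/bash

# Capture in a scratch directory so stray frames never touch $HOME
tmp=$(mktemp -d) || exit 1
cd "$tmp" || exit 1

mplayer -vo png -frames 20 tv://

# Keep only the last frame, toss the rest with the directory
mv 00000020.png "/home/USERNAME/webcam/$(date +"%Y-%m-%d_%H:%M:%S").png"
cd / && rm -rf "$tmp"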

The images will be saved into a folder called webcam, and the filenames will be in the format YYYY-MM-DD_HH:MM:SS.

I’m using absolute paths for everything because I’m running this as a CRON job (to take a picture once per minute). Basically, I use it as a security camera. If you’d like to do the same, feel free to snag this crontab entry (which you can add by running `crontab -e`):

crontab:

* * * * * /home/USERNAME/webcam/grab.sh >/dev/null 2>&1

Posted by & filed under Security.

I was at work yesterday and mentioned the word “Authentication” to a co-worker via IRC. His client had cut off the last letter, and he asked me what the hell Authenticatio was. I jokingly said I was talking about authenticat.io as a domain. On a whim, I dropped $40 and bought the thing.

Later, while asking what the hell I should do with it, a friend suggested it could be about AuthentiCat, a cat who has advice on Authentication and Data Encryption. Thus, AuthentiCat.IO was born.

It’s going to be a web-comic site about silly computer security topics. I’ll try to post at least one thing a week. Wish me luck!

Posted by & filed under NoSQL.

$ brew install rethinkdb
==> Downloading http://download.rethinkdb.com/dist/rethinkdb-1.5.0.tgz
Already downloaded: /Library/Caches/Homebrew/rethinkdb-1.5.0.tgz
==> ./configure --prefix=/usr/local/Cellar/rethinkdb/1.5.0 --fetch protobuf --fetch protoc
==> make
make[1]: *** [build/release_clang_notcmalloc/rethinkdb_web_assets/js/reql_docs.json] Error 1
make[1]: *** Deleting file `build/release_clang_notcmalloc/rethinkdb_web_assets/js/reql_docs.json'
make[1]: *** Waiting for unfinished jobs....
make[1]: unlink: build/release_clang_notcmalloc/rethinkdb_web_assets/.: Invalid argument
make: *** [make] Error 2

READ THIS: https://github.com/mxcl/homebrew/wiki/troubleshooting

The only thing Google brings up is a pastebin from someone else who had the same problem, with no way to contact them. So, I’m putting this message here for visibility. If anyone knows how to fix it, please say so in the comments ;).

UPDATE: The bug has been fixed in 1.5.1. Just do the following and you’ll be good to go:

$ brew update && brew install rethinkdb

Posted by & filed under Linux.

After moving to my new apartment, it was time to dust off the old Linksys router I had lying around. This thing has been hacked to run the latest DD-WRT that it could handle.

My network address changes occasionally, and I didn’t want to set up a dyndns account to keep the IP resolving to a hostname. Honestly, just being able to get the last known IP address is good enough for me.

So, I came up with this script that I run on one of my websites which listens for HTTP requests. When it gets one, it simply logs the IP to a file and spits it back out to the client.

Then, whenever I want the home network’s IP address, I just hit another URL that reads it back. The router requests the logging script every hour.

Configuration

Open your DD-WRT settings, go to Administration | Management, and scroll down until you see the Cron section. Add the following entry to have your router fetch the file every hour (the -O /dev/null keeps wget from piling up downloaded copies of ping.php on the router):

0 * * * * root wget -q -O /dev/null http://example.com/ping.php

ping.php

<?php
// Log the caller's IP address, then echo it back
$ip = $_SERVER['REMOTE_ADDR'];
file_put_contents("./ip.txt", $ip);
echo $ip;

pong.php

<?php
echo file_get_contents("ip.txt");

Setup

touch ip.txt
chmod a+w ip.txt

Obtaining IP

Simply browse to http://example.com/pong.php to get the last known IP address.
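
From a shell, curl works just as well, and makes it easy to feed the address into other commands (USERNAME and the host are placeholders):

curl -s http://example.com/pong.php
ssh USERNAME@$(curl -s http://example.com/pong.php)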

Posted by & filed under PHP.

<?php
/**
 * This class will safely query complex objects or arrays with possibly missing keys
 *
 * Usage: obj::query($obj, 'dot.separated.syntax');
 */
class obj {
    /**
     * Query the provided object
     *
     * @param $object mixed The complex object you're going to query
     * @param $path string The dot separated path you would like to query the object with
     * @return mixed The value at the given path, or null if a segment is missing
     */
    public static function query($object, $path) {
        $paths = explode('.', $path);
        return self::recurse($object, $paths);
    }

    /**
     * The function that does the real work
     *
     * @param $object mixed
     * @param $paths array
     * @return mixed
     */
    protected static function recurse($object, $paths) {
        // No path segments left; whatever we landed on is the answer
        if (!count($paths)) {
            return $object;
        }
        // Segments remain but there's nothing left to descend into
        if (!is_array($object) && !is_object($object)) {
            return null;
        }

        $newPath = array_shift($paths);

        // array_key_exists/property_exists rather than isset, so keys
        // holding null or false are still found
        if (is_array($object) && array_key_exists($newPath, $object)) {
            return self::recurse($object[$newPath], $paths);
        } else if (is_object($object) && property_exists($object, $newPath)) {
            return self::recurse($object->$newPath, $paths);
        }

        return null;
    }
}

$data = '{
  "x": {
    "y": true,
    "z": null,
    "w": false,
    "l": "banana",
    "a": {
      "b": {
        "c": "d",
        "d": "e"
      }
    }
  }
}';

$complexArray = json_decode($data, true);
$complexObject = json_decode($data);
$complexMixed = array(
    array(
        'x' => json_decode('{"name": "so complex"}')
    )
);

echo "Should be banana: ";
var_dump(obj::query($complexArray, 'x.l'));

echo "Should be 'e': ";
var_dump(obj::query($complexArray, 'x.a.b.d'));

echo "Should be NULL: ";
var_dump(obj::query($complexArray, 'a.b.c.d.e.f.g'));

echo "Should be TRUE: ";
var_dump(obj::query($complexObject, 'x.y'));

echo "Should be 'so complex': ";
var_dump(obj::query($complexMixed, '0.x.name'));
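
Running the script should print the following:

Should be banana: string(6) "banana"
Should be 'e': string(1) "e"
Should be NULL: NULL
Should be TRUE: bool(true)
Should be 'so complex': string(10) "so complex"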

Posted by & filed under PHP, Web Server.

Not too long ago I took a trip out to California to see my sister and her husband. While there, I set him up with a WordPress site so that he could sell baseball cards and do box breaks. The site, if you’re interested, is SupremeBoxBreaks.com.

Due to RAM restrictions on the various servers I’ve had to use, I learned to axe Apache a long time ago. I’ve replaced it with lighttpd, although I’ll probably transition to nginx sooner or later (it’s what we use at work, and it seems to be even lighter on memory consumption). Therefore, all of the PHP sites on my webserver, several of which are WordPress based, sit behind lighttpd.

For his website, which sells products with an inventory that gets depleted, I chose to install Woocommerce. I’ve used other Woo WordPress products before, and it looks like one of the best WordPress eCommerce solutions. Unfortunately, Woocommerce doesn’t work all that well with lighttpd, or more specifically, with lighttpd using the server.error-handler-404 configuration for URL routing. If you google lighttpd WordPress configuration, this is the most commonly recommended method for handling dynamic URLs.

The problem is that when lighttpd has server.error-handler-404 in place for catching URLs, the GET parameters on the original request are NOT passed along to the index.php file. You can go to the root of the website with a GET parameter and it works fine, e.g. example.com/?a=b, but as soon as you request a page which doesn’t exist, the GET parameter is lost, e.g. example.com/store?a=b.

The solution for this problem isn’t complex by any means. If you inspect the $_SERVER variable on a request that loses its GET parameters, you can see they’re still available in $_SERVER['REQUEST_URI']. So all we have to do is grab the portion of the URI after the first question mark, parse the variables, and replace the global $_GET array. The following code, when added to the top of the main index.php file, solves the issue:

// Rebuild $_GET from the query string lighttpd's 404 handler dropped
$question_pos = strpos($_SERVER['REQUEST_URI'], '?');
if ($question_pos !== false) {
        $question_pos++; // don't want the ?
        $query = substr($_SERVER['REQUEST_URI'], $question_pos);
        parse_str($query, $_GET);
}
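
To see it in action, take the earlier example of example.com/store?a=b being routed through the 404 handler. $_SERVER['REQUEST_URI'] still holds /store?a=b, so after the snippet runs the parameter is back:

var_dump($_GET); // array(1) { ["a"]=> string(1) "b" }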

Also, here are the lighttpd.conf settings recommended for using WordPress with lighttpd:

$HTTP["host"] =~ "(^|\.)example\.com$" {
        server.document-root = "/var/www/example.com"
        server.errorlog = "/var/log/lighttpd/example.com/error.log"
        accesslog.filename = "/var/log/lighttpd/example.com/access.log"
        server.error-handler-404 = "/index.php?error=404"
}

Tracking down the source of the problem in Woocommerce was pretty difficult. It wouldn’t allow items to be removed from the cart, and it couldn’t add items while viewing an item’s page, although it would allow an item to be added while viewing a listing of items. In other words, it sometimes worked and sometimes didn’t.

The root of the problem here is twofold. First, lighttpd doesn’t pass GET parameters along when using the error-handler directive. Second, Woocommerce should not be using GET parameters for persisting changes to the server. A GET request is intended for just that: getting information from a server. A POST request is intended for sending changes to the server. While talking with Woo tech support, one of the things they kept asking me was whether my host was caching requests. I said no; since it’s a VPS, I’m in control of the caching, and that domain has none. If Woocommerce switched to POST requests for persisting cart changes, it would save their customers from these caching issues (POST responses aren’t cached by default), and would have the side effect of letting lighttpd work without this code change.

There is a big shortcoming with this solution: when the administrator of the website updates WordPress, the changes in index.php can be overwritten. A better method would be to inject this code from a WordPress plugin, and ensure it executes before the Woocommerce code runs. An even better solution would be more complex lighttpd rules, using regular expressions to capture requests and route them accordingly, without needing the server.error-handler-404 directive at all, but I don’t know lighttpd configuration well enough to come up with one.
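
For what it’s worth, here’s one untested direction along those lines, using mod_rewrite’s url.rewrite-if-not-file directive (available since lighttpd 1.4.24), which only rewrites requests that don’t map to a real file and can re-attach the query string. Consider it a sketch, not a verified config:

server.modules += ( "mod_rewrite" )

$HTTP["host"] =~ "(^|\.)example\.com$" {
        # Only rewrite when the request isn't an actual file on disk;
        # the first rule preserves any query string
        url.rewrite-if-not-file = (
                "^/[^?]*\?(.*)$" => "/index.php?$1",
                "^/.*$" => "/index.php"
        )
}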

Posted by & filed under NoSQL.

These are my notes for the talk I’m giving today on PHP and MongoDB.

Example PHP script for communicating with MongoDB:

#!/usr/bin/env php
<?php
// Instantiate the Mongo client
$m = new MongoClient();

// Connect to a database. If it doesn't exist, it will be created
$db = $m->example;

// Point to a collection within the db. If it doesn't exist, yup.
$people_collection = $db->people;

// our first person.
$tom = array(
	'name' => 'Thomas Hunter',
	'age' => 27,
	'enjoys' => array(
		'beaches',
		'music',
		'blueberries'
	)
);

// add person to collection
$people_collection->insert($tom);

// our second person. notice the different structure
$amanda = array(
	'name' => 'Amanda',
	'age' => 31,
	'hates' => array(
		'coffee'
	),
	'enjoys' => array(
		'music',
		'kittens'
	)
);

// lets add her as well
$people_collection->insert($amanda);

// find() with no argument is basically a SELECT *
$people = $people_collection->find();

// Iterate over our peeps
foreach($people AS $person) {
	// I'm assuming everyone has a name and age
	echo "{$person['name']} is {$person['age']} years old.\n";

	// They might not enjoy anything
	if (isset($person['enjoys'])) {
		echo "Enjoys:\n";
		foreach($person['enjoys'] AS $enjoy) {
			echo "* $enjoy\n";
		}
	}

	// They might not hate anything
	if (isset($person['hates'])) {
		echo "Hates:\n";
		foreach($person['hates'] AS $hate) {
			echo "* $hate\n";
		}
	}
}

// DELETE ALL THE THINGS
$people_collection->remove();
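
Running it should print something like the following (find() with no sort returns documents in natural order, which here matches insertion order):

Thomas Hunter is 27 years old.
Enjoys:
* beaches
* music
* blueberries
Amanda is 31 years old.
Enjoys:
* music
* kittens
Hates:
* coffee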

Notes:

# MongoDB + PHP
Who needs an ORM when we can just throw our objects straight into the database?

## MongoDB vs MySQL
* MongoDB is a schemaless, "document" storage system.
* MongoDB is queried using a JSON superset / JS subset syntax
* MySQL is a schema'd, relational database management system
* MySQL is queried using a SQL dialect
* "Translation" between SQL and Mongo:
 * http://docs.mongodb.org/manual/reference/sql-comparison/
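
For example, SELECT * FROM people WHERE age > 28 would look like this with the PHP driver (a quick sketch reusing the $people_collection from the script above):

	$older = $people_collection->find(array('age' => array('$gt' => 28)));
	foreach ($older as $person) {
		echo "{$person['name']} is over 28\n";
	}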

## Install Mongo
* OS X
 * `brew update && brew install mongodb`
* LINUX
 * http://docs.mongodb.org/manual/installation/

## Install PHP Mongo Client
* http://www.php.net/manual/en/mongo.installation.php
* *NIX
 * sudo pecl install mongo

## Using the CLI Interface
By default there are no database credentials, and the server only listens on localhost.

	$ mongo							# Connect
	> show databases				# Get list of databases
	> use DB_NAME					# Pick a DB to work with
	> show collections				# Get a list of collections (tables)
	> db.COLLECTION.find()			# Get items in that collection (SELECT * FROM table)
	> db.COLLECTION.insert({"name": "steve", "age": 28}); # Insert
	> db.COLLECTION.remove({"_id": ObjectId("518e654c8f9196b5abf973e3")}); # Delete

## Why use MySQL?
* Your data fits the relational database paradigm
* You need guaranteed data storage
* You know how to use MySQL

## Why use MongoDB?
* Your schema changes frequently
* You work with tons of JOINs for small pieces of data (topics, categories)
* You want super fast writes, might not care about a few missing records

Posted by & filed under Personal.

If you know me, you know that I’m not a big fan of recruiters. Particularly, recruiters who take the shotgun approach to finding candidates by sending the same copied-and-pasted email to hundreds of potential applicants. I know that these are copied-and-pasted, because my various email accounts will get the exact same email sent minutes apart.

[Screenshots: two identical recruiter emails, sent minutes apart to different accounts]

I was getting some more recruiter spam today, so I asked the recruiter where he had gotten my information (since I had deleted my LinkedIn account a week earlier). He sent me a screenshot of this page on Dice:

[Screenshot: my aggregated profile page on Dice]

Dice used to be a huge name in the hiring market. And I suppose it still is (look at all those tabs!), but it has since fallen in popularity thanks to LinkedIn. Anyway, it turns out Dice scrapes popular social media networks, runs some heuristics on them, and figures out which profiles on various sites belong together (many of the links between sites could have been determined from what I had entered into the sites, but not all). Most of the information above came from my LinkedIn profile, and was cached (i.e. scraped and stored in their database), since the LinkedIn profile no longer exists.

I’m fine with people being able to find my information on various sites, but I’m not really a big fan of Dice tying it all together (also, I’m not sure whose MySpace profile my account is linked to on Dice…). I figured the next thing to do was to ask Dice to delete this page about me. I sure didn’t ask Dice to aggregate this data on my behalf.

[Screenshot: my removal request sent to Dice]

It’s almost creepy if you think about it: sites cyber-stalking you and aggregating the results in one place. What if this site had information about my forum posts, buying habits, and dating site profiles?

I haven’t heard back from Dice yet, but we’ll see.