Cracks mar the surface
Of your pristine face;
Neglect and solitude
Left acrid distaste.
Fear, anger, loathing
Guiding every step.
Like the Phoenix: rise above,
Or perish with regret.
One of the neatest things I ever wrote.
I’ve been using Linux Mint 15 on my Thinkpad Carbon X1 since about when it came out. It’s been running pretty well, but I finally got to the point where I’m doing more ./configure than I am apt-get install as the repository is so outdated.
Linux Mint 17 XFCE recently came out, and is a “Long Term Support” release (a concept I believe Mint is borrowing from Ubuntu). This means the repository should stay up to date for about five years, instead of Mint 15’s 2-ish years.
At any rate, here is the procedure to follow to get Linux Mint 17 (or Debian or Ubuntu really) using the most recent Kernel (as of this writing it is 3.15.3). The stock Kernel in Mint 17 is 3.13.0. The Kernel in my existing Mint 15 is 3.8.0.
First, download the appropriate packages. I’m assuming you’re using a 64-bit machine (and have been for the last 4 years), so the instructions are for that. To adapt for 32-bit, manually browse around the download URLs until you find the sibling 32-bit package.
mkdir -p ~/Downloads/kernel && cd ~/Downloads/kernel
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.15.3-utopic/linux-headers-3.15.3-031503-generic_3.15.3-031503.201407010040_amd64.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.15.3-utopic/linux-headers-3.15.3-031503_3.15.3-031503.201407010040_all.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.15.3-utopic/linux-image-3.15.3-031503-generic_3.15.3-031503.201407010040_amd64.deb
sudo dpkg -i *.deb
And poof, you’ve got the latest kernel.
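Once you’ve rebooted into the new kernel, you can confirm which version is actually running:

```shell
# Print the release string of the currently running kernel
uname -r
```

If it still shows the old version, double-check that GRUB picked up the new kernel entry.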
You may be wondering: why do I want to upgrade my kernel at all? Surely if Mint recommends an older kernel, there must be a reason to use it? And you’re probably right. Each new kernel brings new stability and security updates, but it also brings new problems and potential version mismatches. Upgrade at your own risk.
I’m writing this post from within Mint 15. I’ve done these upgrades in a VM. I’ll update this post once my Thinkpad has been updated with any issues I find.
Normally when I host my Node.js-based applications, I’ll SSH into my server, open up a screen or tmux session, run node ./server.js, detach, and call it a day. Of course, if you’re reading this article, you’re fully aware that this is a horrible solution and are looking for an alternative.
One thing that is going to change between the hacky method of hosting and this new method is that it won’t be YOU that is executing the process, but instead the SYSTEM. So, we’ll be taking a few extra steps here and there to enforce that concept.
For starters, we don’t want to execute Node directly (even once we set up our service). Instead, we want to run it behind some sort of service that will keep the process alive in the unfortunate event of a crash. There are many tools for this, such as monit, PM2, or nodemon but the one I’m most familiar with is called forever. Feel free to use an alternative if you’d like.
First, we’ll want to install forever on the server. Run the following command to take care of that:
sudo npm install -g forever
Once you’ve got forever installed, it’s a good idea to have it throw pid files and log files somewhere. I threw a directory into /var/run for just this purpose (although I’m not sure if this is technically the best place for such a thing):
sudo mkdir /var/run/forever
If you’re used to storing your Node.js projects in your home directory (like I was…), you need to stop! Instead, store them somewhere which makes more sense as far as the entire server is concerned. The directory /var is pretty good for doing this, and if your application serves up HTML, throwing it in /var/www is probably a good idea.
I host a lot of applications and websites on my server, so I put my sites in directories like /var/www/example.org.
There are a few changes you may want to make to your application to make it server-friendly. One thing I always find myself needing to do is pass a port number that I want my process to listen on. My server has a few IP addresses (network interfaces) and sometimes I’ll also need to pass in which interface I want to bind to.
A common solution for this is to pass along command line arguments. A different approach that I’ve been liking lately is to set environment variables (environment variables are to named parameters what CLI arguments are to normal positional function arguments).
A quick note on Node.js server listening conventions: Whether you’re using the built-in http module, or going with express or other similar frameworks, the convention is that you call a .listen() method on the main http object you’re working with. The first argument is a port number, and the second argument is a hostname. If you don’t provide a hostname or pass in null, it defaults to listening on all interfaces (e.g. ‘0.0.0.0’). If you pass in the string ‘localhost’ or ‘127.0.0.1’, the port can only be accessed from the local machine. If you pass in the IP address of one of your interfaces, it will only listen on that interface.
Here’s an example of how you might implement both of these methods in your scripts:
./server.js 9000 "localhost"
#!/usr/bin/env node
var app = require('express')();
var port = parseInt(process.argv[2], 10) || 80;
var interface = process.argv[3] || null;
app.listen(port, interface);
SERVER_PORT=9000 SERVER_IFACE="localhost" ./server.js
#!/usr/bin/env node
var app = require('express')();
var port = process.env.SERVER_PORT || 80;
var interface = process.env.SERVER_IFACE || null;
app.listen(port, interface);
Now for the fun part! First, create yourself an empty init script, substituting the word SERVICE for the name you want to use for the service:
sudo touch /etc/init.d/SERVICE
sudo chmod a+x /etc/init.d/SERVICE
sudo update-rc.d SERVICE defaults
Once that’s done, paste the following simple service template into the file, swapping out SERVICE to whatever you’d like to use:
#!/bin/sh
export PATH=$PATH:/usr/local/bin
export NODE_PATH=$NODE_PATH:/usr/local/lib/node_modules
export SERVER_PORT=80
export SERVER_IFACE='0.0.0.0'

case "$1" in
  start)
    exec forever --sourceDir=/var/www/SERVICE -p /var/run/forever start server.js
    ;;
  stop)
    exec forever stop --sourceDir=/var/www/SERVICE server.js
    ;;
esac

exit 0
This script went with the environment variable method of configuration. Since we’re dealing with a bash script, I threw the variables at the top of the script instead of on the same line as the command we executed for added readability. Of course, if you adopted the command line argument method, omit the two export lines and add your arguments to the end of your command.
If you’d like to start (or even stop) the service, you can run the same old commands that you’re likely used to:
sudo service SERVICE start
sudo service SERVICE stop
Now that you’ve got everything set up, your service should be able to survive a reboot of your machine! Go ahead and run sudo init 6 right now just to be sure. Just kidding.
If you ever want a list of your currently running applications, run sudo forever list. Read up on the forever documentation to see what else you can do (hint: log reading).
That Debian service script we wrote is a bit lacking! If you check out the contents of /etc/init.d/skeleton, you can get an idea of a more robust script.
The posts to this blog may have subsided, but my exuberance (and GitHub commits) have not.
My next book is titled A Consumer-Centric Approach to RESTful API Design and unlike my first book will be self published. I’m a lot more excited about this book than the previous one, Backbone.js Application Development. With the first book, the publisher approached me with a topic, a table of contents, and a title, and asked me to do the rest. Backbone.js, despite being a technology that I used at the YC startup I co-founded and my day job, just wasn’t something I was passionate about. This next book regarding API design is something I’ve been really excited about for a while.
I feel much more confident in my abilities to market this book, as well as the quality of the content. My technical reviewers are well known in the industry and are helping me create an awesome book aimed at a typical web developer who is interested in building an easy to consume API. The target audience is web developers with at least a year’s experience.
A bunch of friends and myself competed in Ann Arbor Startup Weekend 2014. The project we built is called Finch, and while we’re still solidifying what the app will do, it’s essentially an event image aggregator. We’ve got an iOS app, (mobile friendly) website, some websockets, and an API. It’s still under massive development but we were able to pull a lot off in one weekend.
My role in the project has been a bit different than what I’ve done in the past. I’ve been a programmer for over 8 years now, but for this project I took on more of a Product Manager role, acting as a liaison between teams as well as doing some mentoring and architectural decision making. I must admit, doing people interaction work has been a lot of fun. At my day job, I’ve been slowly transitioning from hardcore programming to people interaction by becoming a Developer Advocate, and I can see this being a pivotal point in my career path.
I actually put in my notice over a month ago and have been doing a lot of work on the side during this sabbatical. Of course I’ve been working on the book as well as Finch. I’ve also been working a lot with some technologies I’ve wanted to get more proficient with, such as PostgreSQL and MongoDB. I’ve used MongoDB with previous projects, but I’ve been wanting to get more experience. And I keep hearing good news regarding PgSQL vs MySQL, so it’s been fun to learn another SQL dialect.
And finally the Pièce de résistance is that I’m moving to San Francisco, CA. The reasons for this are plentiful: SF is the mecca for our industry; Ann Arbor, MI is currently 17 degrees and buried under a foot of snow; There are considerably more opportunities out west; I’m currently a mere 2 hours from my hometown.
The part that’s been fun explaining to my family and friends is that I don’t have a job lined up. I’ve got plenty of savings though, and a buddy of mine accepted a position at a company which provides temporary housing, so we’ll have a month to hunt for a real apartment and I’ll be living rent-free during this time. There are actually going to be five of us total making the move from A2 to SF (turns out none of us like the snow that much).
Some friends and I are working on a project called CodePlanet.IO. It’ll be a high-quality tutorials website, and we plan on eventually releasing screencasts of full-stack development using various web-related technologies.
Our first big post is an article of mine on the Principles of good RESTful API Design. You might not have realized it, but I’ve been working as a Developer Advocate / API Architect for the last couple months at my current employer.
Check the site out and be sure to keep an eye out for our upcoming screencasts!
I recently read a copy of Debian 7: System Administration Best Practices, written by Rich Pinkall Pollei and published by Packt. Full disclosure: I’ve published a book through Packt, and they sent me a free copy of the book to do this review.
Relevant background: Debian has been my preferred distribution for a few years, with apt-get being the package management system I’m most familiar with. My website, as well as several others I control, are hosted on a Debian 7 Linode VPS, and I perform all maintenance via SSH. Even my current development machine is running Linux Mint, which is Debian under the hood.
This book is an ambitious attempt to cover many facets of Debian Linux sysadmin within the confines of 100 pages. There is a lot of material to cover, and the 100 pages of this book fall a bit short (on several occasions the author mentions a touched-upon topic as being outside the scope of the book). The ideal audience is a narrow set of intermediate users, perhaps having used Linux for at least one year and no more than two. While some of the information is catered towards beginners, such as the overview of Linux in the beginning and the different filesystems, much of the content requires intermediate knowledge of Linux.
In the later sections of the book, the focus switches slightly to the LAMP stack, with Apache, PHP, and some commonly used tools for headless web server administration. I’ll be honest, when I picked up the book I assumed it was on the topic of server administration, but with earlier sections covering desktop-focused topics such as full-disk encryption as well as Window Managers, the book covers the full gamut of Debian environments.
The first chapter, Debian Basics for Administrators, is a short history and overview of Linux and how Debian fits within the Linux ecosystem. While this chapter doesn’t contain any information which would warrant the for Administrators part of the title, it is good information for anyone running a Linux-based Operating System.
I found the section on Filesystems much appreciated. While the default FS selection presented during installation is usually fine for most installations, as a beginner it’s easy to get caught up and wonder what the differences are.
Considering the brevity of the book, there is a decent amount of information provided on the concept of Disk Encryption, a topic many books on this subject would have left out. In light of recent government surveillance revelations this topic is quite worthy of being covered by more books.
The book overall is a quick read, one which the reader can get through in a dedicated afternoon (I read my copy over a few hours yesterday). Plenty of the topics are high-level, and don’t necessarily get the reader’s hands too dirty.
The Package Management section covers the basics of dpkg, and mentions a tool called Alien for installing non-Debian packages. There’s even a sub-section on performing manual builds of software. While it’s nice to cover these tools, I’ve personally never had to use Alien, with apt-get being 95% of the package management I do, and building from source being the last 5%.
Considering the audience, I would like to have seen the author guide the reader through an actual build process they may encounter in the wild. For example:
sudo apt-get install build-essential
wget http://nodejs.org/dist/v0.10.24/node-v0.10.24.tar.gz
tar -zxvf node-v0.10.24.tar.gz
cd node-v0.10.24
./configure
make
sudo make install
Many topics are mentioned which require intermediate knowledge of Linux, but a quick explanation could have made them digestible by beginners. For example, on page 24, the author talks about the swap partition, and how it is used for paging memory to disk. What does paging memory mean? By rewording this to say “If the computer runs out of RAM, of which it might have about 8GB, it can temporarily store data on the slower hard drive, which may be 1TB.”, the audience becomes much larger.
On page 43, the author mentions config files having been changed between the package maintainer’s version and the sysadmin’s version, and how there are tools for showing diffs and choosing which version to go with. I would like to have seen more emphasis on this section, as it is the biggest cause of a nuked Debian installation (for me personally, that is).
Also on page 43, the author mentions how PHP can change and how “re-coding web pages” may need to occur. While much of the software has automatic dependency checking, like phpMyAdmin installed via apt-get, scripts installed by a user will be unaware of said version change and can break. A bigger distinction on this would have been beneficial for many a beginner.
On page 53 the author talks about services and how to control and configure them. As an example, he covers Apache 2 at some length. However, he covers Apache 2-only commands such as apache2ctl. While it is nice that the author chose a service which exemplifies the many different configuration options Debian services have (config includes, -enabled vs -available), I really wish he would have covered the built-in service command, which can be applied to all services (e.g. sudo service mysql restart). The service command also could have been covered in the System Management chapter when talking about init scripts.
The topic of Linux Clusters is mentioned a few times throughout the book, but it feels artificial, and after reading the book, I am no closer to understanding how to build my own Linux Cluster.
On page 43, the reader is told that they can read email sent to the root mail account to get information about upgrade notes. It would be great if the author covered how to actually do this.
Also on page 43, in the After the Upgrade section, a nice tip for the reader would be to reboot their machine if it is a development machine. I’ve often found myself rebooting a Linux laptop weeks after I’ve run a few upgrades, only to find X Windows not starting, and wishing I had done it sooner while the changes were still fresh in my mind.
On page 41, the author tells the reader to read the Debian release notes, but doesn’t mention where to find them.
Neither of the bum commands mentioned on page 60 exist on my Mint Linux laptop nor my headless Debian 7 server. I looked into it some, and one must first sudo apt-get install sysv-rc-conf to get those programs. The same goes for the parted command mentioned on page 67. Whenever covering a command which doesn’t exist in the base installation, installation of said program should be mentioned beforehand.
The System Management > Filesystem section on page 66 might have been better merged into the Filesystem Layout chapter.
On page 80 the author mentions editing the /etc/sudoers file for giving users the ability to run sudo. However, the preferred “safe” method for editing this file is the visudo command (which is even mentioned within the file itself). This command provides syntax checking, file locking, and other niceties.
Page 10: “leading-edge” should be “bleeding edge”.
apt-get dist-upgrade and aptitude full-upgrade could have been highlighted as code entirely, but that’s a matter of opinion considering the context.
Page 52: The image of text would have been better served by using text. The config parent/children relationship seems off due to how config files are loaded.
Page 53: There are two spaces instead of one in “These are the files that are part of”.
Page 53: Could have mentioned that normal files in sites-enabled, which aren’t symlink’d to sites-available, still load as normal.
Page 63: The text says the interfaces file was generated automatically, but I was under the impression Debian only auto generates DHCP interfaces (it is displayed as static).
Page 67: The note that EXT4 is 2-20 times faster than EXT3 for FSCK would be good to know in the earlier Filesystem chapter when selecting a Filesystem.
Page 68: The paragraph at the bottom beginning with “Note” would have been a good candidate for the bracketed “note” paragraph style.
Page 68 & 69: The notes about gparted and live systems seem redundant with each other.
Page 71: The tangent about a NAS device introduces many new concepts and may leave the reader confused.
Page 74: The phrase “Straight servers” is unfamiliar to me; perhaps “Headless servers” would have been better?
Page 74: The claim that European users prefer KDE and Americans prefer Gnome should have a citation.
Page 74: The term “home sites” should be “websites”.
Page 75: The gdm3setup command needs to be entirely highlighted as code.
Page 75: The note at the bottom of the page should be clarified by stating Linus Torvalds is the creator of Linux.
Page 79: The tip at the bottom of the page should say root account password, not root account login. It’s still possible to become the root user; the account just doesn’t have a password which can be used for login (e.g. run sudo su and you’re root).
Page 96: The Installing Webmin file content should be bold, and the second line is missing the trailing backslash.
Page 98: The comment on using Webmin to make changes, then manually checking the file to make sure the configuration is legit, leaves the reader wondering why they would want to use Webmin at all.
The game is really simple. You progress through different levels by getting a specified goal to turn on. This goal is always a 1×1 grid location and is highlighted in blue. You’re allowed to change the area of the level highlighted in pink. Changing that area is as simple as clicking a grid location to toggle the state between on and off.
Conway’s Game of Life is a rather simple simulation. It is a state machine, exemplified by a limitless 2D grid (mine is just 64×64). This thing falls into a category of similar systems called “Cellular Automata”. You can read more about it on Wikipedia; however, the four simple rules are copied here for your convenience:

1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
2. Any live cell with two or three live neighbours lives on to the next generation.
3. Any live cell with more than three live neighbours dies, as if by overcrowding.
4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
Each frame of the playing animation represents a single generation.
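As a sketch of how those rules translate into code, here’s a single-generation step function in JavaScript (the grid is a plain array of 0s and 1s; cells outside the grid are treated as dead, unlike a truly limitless board):

```javascript
// Compute one generation of Conway's Game of Life on a fixed-size grid.
function step(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  const next = grid.map(row => row.slice());
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      // Count the eight neighbors, skipping out-of-bounds cells.
      let n = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dy === 0 && dx === 0) continue;
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < rows && nx >= 0 && nx < cols && grid[ny][nx]) n++;
        }
      }
      // Survival with 2 or 3 neighbors, birth with exactly 3, death otherwise.
      next[y][x] = grid[y][x] ? (n === 2 || n === 3 ? 1 : 0) : (n === 3 ? 1 : 0);
    }
  }
  return next;
}

// A "blinker" oscillates between horizontal and vertical each generation.
const blinker = [
  [0, 0, 0],
  [1, 1, 1],
  [0, 0, 0],
];
console.log(JSON.stringify(step(blinker))); // → [[0,1,0],[0,1,0],[0,1,0]]
```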
If you throw some SSL onto your NGINX hosted website (as you’ve likely noticed thomashunter.name is now doing), you may notice a few hard-to-diagnose issues. Many PHP scripts look for the presence of a certain server variable, namely $_SERVER['HTTPS'], to determine if it is behind an SSL connection.
To fix this, you need to add the following line to your server block:
fastcgi_param HTTPS On;
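For context, here’s roughly where that line lives; a sketch of a hypothetical SSL server block (the domain, certificate paths, and PHP-FPM socket are placeholders, so adapt them to your setup):

```nginx
server {
    listen 443 ssl;
    server_name example.org;

    ssl_certificate     /etc/nginx/ssl/example.org.crt;
    ssl_certificate_key /etc/nginx/ssl/example.org.key;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        # Tell PHP it's behind SSL so $_SERVER['HTTPS'] gets populated
        fastcgi_param HTTPS On;
    }
}
```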
Interestingly, it is quite hard to find documentation on this topic, and I have no idea why. I’m not sure if the HTTPS server variable is that common, but I do know that Apache always provides it, and many PHP scripts rely on it. Honestly, it isn’t a bad idea to manually set this to Off if you know that your website isn’t behind SSL, as I’ve seen some code do silly things.
Check out the following crazy logic some common PHP systems use for checking if the current site is secure, all of which rely on the presence of this parameter. Most importantly, notice how every single one of these common PHP systems do it differently:
Chromium is the entirely free version of Google Chrome. What makes it entirely free? Well, it doesn’t include license-restricted code, such as the PDF viewer. If you’re like me, you’re a stickler for installing software using your distribution’s package manager, and prefer doing so over installing packages outside of it. And honestly, I can’t think of a single other reason I keep using Chromium instead of Chrome. But I digress!
To get the PDF viewer working in Chromium, so that you can click a PDF link and view it in your browser instead of requiring it to be downloaded, just do the following.
First, download the Chrome .deb package: https://www.google.com/intl/en/chrome/browser/
Extract the file opt/google/chrome/libpdf.so from the package, and save it to /usr/lib/chromium-browser.
Once you’ve done that, restart the browser (close all windows), and then attempt to view a PDF file.
You can also visit chrome://plugins/ to confirm that the plugin is listed.
Everyone knows that script kiddies are constantly bombarding servers with login requests, attempting to get access to an account which you might have secured with a stupid password. I was curious to find out which accounts they were attempting to login as, and more importantly, if any of these accounts were actual accounts I knew of.
I couldn’t find anything on the internets, but I was able to cobble together the following (overly) complex command:
sudo cat /var/log/auth.log | grep -oEi "Invalid user ([a-zA-Z0-9]+)" | colrm 1 13 | sort | uniq -c | sort -h
If you’d like an explanation, check out the command breakdown on Explain Shell.
Here are some of the more popular accounts people attempt to login as:
 30 ftpuser
 33 astrid
 33 autumn
 33 bailey
 36 avalon
 36 testuser
 39 git
 42 bezhan
 42 test
 45 admin
 45 asuka
 45 auction
 45 bar
 45 bella
 48 bbs
 54 bandit
 57 bind
 57 oracle
 63 nagios
 69 au
 78 ben
 87 ftp
 93 bill
864 ftptest
If you know of a better way to format this command (I have a feeling the length can be cut in half) leave a comment!
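For what it’s worth, here’s one shorter equivalent I’ve toyed with (assuming GNU grep built with PCRE support; the \K escape discards the matched prefix, which makes the colrm step unnecessary), demonstrated against a small sample log:

```shell
# Sample lines in the same format sshd writes to auth.log
cat > sample-auth.log <<'EOF'
sshd[123]: Invalid user admin from 1.2.3.4
sshd[124]: Invalid user admin from 1.2.3.4
sshd[125]: Invalid user git from 5.6.7.8
EOF

# \K makes grep print only the username, so no column trimming is needed
grep -oP 'Invalid user \K\S+' sample-auth.log | sort | uniq -c | sort -n
```

Against the real log, swap sample-auth.log for /var/log/auth.log (with sudo as needed).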