Docker in dev and in production – a complete and DIY guide

Docker is an amazing Linux containerization tool. At Burke Software, we moved our development environment to Fig months ago and are now using Docker in production as well. This guide should give you ideas. I’m going to cover a lot of technologies not related to Docker to give you an idea of how we do things. My example project is on GitHub for learning purposes. You should be able to follow along and run the website in Docker! Talk about self promotion – did I mention we are available for hire?

Docker in development

In development we use Fig, a tool that makes Docker a bit easier to use. It’s great whether you’re a Linux admin, software engineer, or graphic designer with minimal command line experience. The Fig documentation is pretty good so I won’t go into running it. With Fig everything gets standardized and mimics production. It would be a lot to ask a designer to run solr, redis, postgres, and celery by hand; Fig lets you do it in one command. If your production environment runs a worker queue like celery, so should development. The more differences between development and production, the more opportunities for bugs.
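If you haven’t seen Fig before, a development fig.yml for a Django project looks something like this (the service names, ports, and project layout here are illustrative, not our actual file):

```yaml
web:
  build: .
  command: python runserver
  ports:
    - "8000:8000"
  volumes:
    - .:/code   # mount the source tree so edits show up without a rebuild
  links:
    - db
    - redis
db:
  image: postgres
redis:
  image: redis
```

One `fig up` then brings the web server, postgres, and redis up together.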

Current state of Docker in production

Docker itself is stable and past the 1.0 release. However, the tools around it are not. For nontrivial deployments you need a little more than basic Docker commands or ‘fig up’. Flynn and Deis look REALLY cool but are not stable yet. There are also Mesos, Shipyard, and lots more. Running Docker by hand can be a bit daunting. This guide will focus on the by-hand approach – with some tools to support it.


Let’s start with a basic server. I’m using DigitalOcean. If you like this post and start a DigitalOcean account please consider using this affiliate link. The cool thing about Docker is you aren’t tied to any one service as long as that service runs Linux. AWS? Microsoft? Your decade-old desktop lying around? Anything you want.

I use Ansible to provision my server. The idea here is that it’s somewhat self-documenting, and I could throw out my DigitalOcean account and start it up on EC2 on a whim. Here is my Ansible YML file. It’s my file and not intended to be copied verbatim; it’s there so you can see how to use Ansible and get some ideas. I will refer to it often. Basically, any task I would normally do by hand I do via Ansible so it’s reproducible. I’m using a private Git repo, so I am actually adding secrets here.
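To give a flavor of what such a playbook contains, here is a stripped-down sketch in that spirit (package names assume Ubuntu 14.04, and the repo path is a placeholder, not my real file):

```yaml
- hosts: webservers
  sudo: yes
  tasks:
    - name: install system packages
      apt: name={{ item }} state=present update_cache=yes
      with_items:
        - docker.io
        - nginx
        - supervisor
    - name: create a bare git repo for deploys
      command: git init --bare /opt/git/myapp.git creates=/opt/git/myapp.git
```

The `creates=` argument makes the task idempotent – rerunning the playbook is always safe.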

Docker in Production

Docker itself is installed via Ansible. I’ll follow the order that an incoming request to the server would take.

  1. An incoming request hits nginx installed on the host. Nginx proxies it to a port on localhost that a Docker instance is listening on. The following Nginx configuration routes a request to port 8002. Port 8002 was arbitrarily assigned by me.
        server {
            listen 80;
            access_log  /var/log/nginx/access.log;
            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://localhost:8002;
            }
        }
  2. Supervisor – We need Docker to be running and answering on port 8002, so we need an init system to run it under and handle respawns, restarts, etc. Here is my supervisor conf file. WTF, Fig in production???
    command = fig -f /opt/fig/ up
    stdout_logfile = /var/log/webapps/
    redirect_stderr = true
  3. Fig in production – This Docker blog post provides a basic overview of using Fig in production. While I prefer the Fig YML syntax over writing plain Docker commands, I still recommend taking some time to get familiar with Docker. You should know how to build, create, and remove Docker containers before going forward, because Fig won’t help you if things blow up. Once you have an understanding of Docker, though, you’ll find that Fig does make it very easy to connect and run various Docker containers. Here is my Fig production file:
      redis:
        image: dockerfile/redis
      web:
        build: /opt/
        command: gunicorn bsc_website.wsgi --log-file - -b -n
        volumes:
          - /opt/
        ports:
          - "8002:8000"
        environment:
          - USE_S3=Yup
        mem_limit: 1000m
        links:
          - redis

    I’m saving the environment variables in the Fig file, which is in a private Git repo. I like tracking them in Git over something like Heroku, where you just enter them without version control. Notice that port number again. I’m redirecting port 8002 to the container’s port 8000 – which is just the port I always use in Fig. It could be anything, but I want to change as little as possible from dev. The mem_limit will prevent the container from eating all system RAM. Look how easy it is to get redis up! I can just as easily run celery workers and other junk, just like Fig in development.

  4. Persistent data – At this point our request has hit the Docker Gunicorn server, which will respond. Cool. However, what happens when the container is restarted or even destroyed? Fig can deal with this itself and make databases persist; however, I don’t trust it in production. I’d like to be able to destroy the container fully and create a new one without losing my important data. You could use Docker volumes to mount the persistent data. I’m just going to run Postgres on my host. You could also use Amazon’s RDS or an isolated database server. I feed in the Postgres credentials via environment variables, as seen in the fig file. I’m storing my user-uploaded files in S3. In my Ansible YML file you can see I’m backing up my entire Postgres database to S3 using the python package S3-backups. The important thing here is that I can stop and rm all my Docker containers, rebuild them, and it’s not a big deal.
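    If you did go the Docker-volumes route instead, it’s just a volumes entry on a database service in the fig file – something like this sketch (the host path is made up):

```yaml
db:
  image: postgres
  volumes:
    # host directory : path inside the container -- survives `docker rm`
    - /srv/pgdata:/var/lib/postgresql/data
```

    A host directory mounted this way outlives any individual container.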

  5. Updating production – I’m using Git hooks to update the server. I have staging servers here too. It’s nice to give your developers easy access to push to staging and production with just Git. Notice I started a bare Git repo in the Ansible YML file. I’ll use a post-receive hook to check out the master branch, run Fig build, collectstatic (Django specific), and migrate my database (also Django specific). Finally it will restart Docker using supervisor. The set -x will ensure whoever does the Git push sees everything in their terminal window. It’s a lot like Heroku or, more accurately, Heroku is a lot like a Git hook, because it is a Git hook. Unlike Heroku, I can install packages and run anything I want. 🙂

    set -x
    git --work-tree=/opt/$NAME/ checkout -f master
    fig -f /opt/fig/$NAME/fig.yml build
    fig -f /opt/fig/$NAME/fig.yml run --rm web ./ collectstatic --noinput
    fig -f /opt/fig/$NAME/fig.yml run --rm web ./ migrate
    supervisorctl restart $NAME

Hopefully all that gives you some idea of how we run Docker in production. Feel free to comment with questions!

django, rest, and angularjs – a Don’t Repeat Yourself approach

I’m a django developer. When I started working with angular I wanted to keep using the DRY principles I’m used to with Django Forms – for example, defining validation, verbose_name, etc., in your models. This guide should give you an overview of building a system with django-rest-framework (DRF) and angular. It should also give you some ideas on using the rest OPTIONS method to pull in some data about your fields.

Disclaimer – I’m an angular noob; criticism of this approach is much appreciated.

The entire project is on github so please follow along. The guide assumes you understand angular basics.

Creating a rest api in django

I won’t go into detail because their documentation is great. Run the project above and check out the OPTIONS method if you haven’t already. We get some great info here, like help_text, (some) validation, and verbose_name (now called label).
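To give an idea of the shape, an OPTIONS response carries per-field metadata roughly like this (hand-written for illustration – the exact structure varies by DRF version):

```json
{
    "name": "Poll Instance",
    "actions": {
        "PUT": {
            "int_field": {
                "type": "integer",
                "required": true,
                "label": "Int field",
                "help_text": "How many?"
            }
        }
    }
}
```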

Screenshot from 2014-06-13 14:43:40

Consuming the api in angular

Let’s create a django admin like form that will save on blur.

Screenshot from 2014-07-05 15:41:42

Most examples of angular forms I’ve seen are highly repetitive. This feels wrong when you’re used to the Django Forms framework. Luckily we can ask the OPTIONS method for metadata about our forms. Here’s an interesting post about the “almost unused” OPTIONS method. Our client-side app still retains its decoupled nature from the server. The client doesn’t care whether DRF or typing monkeys are providing the OPTIONS method. It just cares about what label to use and whether the field is required.

<span class="help-block" ng-if="fieldOptions.help_text">{{ fieldOptions.help_text }}</span>

In this example we see a help_text span that shows only when help text is available. Now our fields are looking more generic, and generic, repetitive tasks can be automated. Let’s make a directive to automate what we can. (Notice I’m using coffeescript; js is also provided in the github project.)

app.directive "bscField", ->
    scope:
        fieldOptions: "="
        fieldForm: "="
    templateUrl: "/static/app/partials/field.html"
    transclude: true

bscField can accept a few attributes and uses transclude to allow customization of input itself. It’s a great strategy for including css framework gunk too. Check out the field.html partial. We can use it like this.

<div bsc-field field-options="pollOptions.int_field" field-form="form.int_field">
  <input class="form-control" name="int_field" ng-required="pollOptions.int_field.required" type="text" ng-model="poll.int_field" ng-blur="savePoll('int_field')" />
</div>

Notice I am still repeating myself a good bit. Consider it a work in progress. The input itself actually can’t be done in the partial and still work with ng-forms. Details here.

The RestfulModel factory will handle all of our interactions with the rest api. It uses restangular. I chose restangular over ngResource because it seemed a little easier to work with. It supports PATCH out of the box, which will be nice for our edit-one-field-at-a-time approach. I’ve also introduced an isSaving property on the forms so we can indicate to the user when a form is being saved. You can use RestfulModel in a controller like this:

    pollModel = new RestfulModel.Instance("polls")
    pollModel.getOptions().then (options) ->
        $scope.pollOptions = options
    pollModel.getOne($routeParams.poll_id, $scope.form).then (poll) ->
        $scope.poll = poll
        $scope.savePoll = poll.saveForm

Notice we are really just tying a model to our scope so we can access our options (rest options method) and the poll itself. We’re also adding a save function to the scope that we can have trigger on blur. ngRoute is being used to determine the id of the poll we are on.

$routeProvider.when "/poll/:poll_id/",
    controller: "PollController"
    templateUrl: "/static/app/partials/polls.html"

It’s probably best to just play with the github project and ask any questions you have in comments. Or perhaps tell me why I’m insane for doing things this way.

Saving and Error Handling

DRF will return a 400 (BAD REQUEST) when you attempt to save something invalid. Best of all it returns a reason why!

{"int_field": ["Enter a whole number."]}

Our directive can show this (and other states) to the user.

Screenshot from 2014-07-05 16:07:21

Next Steps

This is an experiment of mine that I hope triggers some discussion on DRY principles and javascript frameworks. Hopefully it gives you some ideas, and hope that you won’t have to redefine all your server-side models again in javascript. I’ll be putting the concept into production in django sis this summer. If it goes well I may try releasing RestfulModel as a standalone angular project.

Too many id’s in django admin actions

The Django docs suggest sending queryset IDs as GET variables for admin actions. This works until you get tens of thousands of IDs in a queryset, at which point your URL becomes too large.
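To see the scale of the problem, here’s a quick back-of-the-envelope sketch (the ct value and id range are made up for the demo):

```python
# A rough illustration of why huge id lists break GET URLs.
ids = ",".join(str(i) for i in range(100000, 150000))  # 50,000 six-digit IDs
url = "/admin_export/export_to_xls/?ct=8&ids=" + ids
print(len(url))  # hundreds of KB -- far beyond typical ~8 KB server URL limits
```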

I’m working around this with sessions. It’s not quite as nice, since your URL is no longer copy/pasteable, so I decided to indicate as much to the user.

Here’s my action to export data.

from django.contrib.contenttypes.models import ContentType
from django.http import HttpResponseRedirect

def export_simple_selected_objects(modeladmin, request, queryset):
    selected_int = queryset.values_list('id', flat=True)
    selected = []
    for s in selected_int:
        selected.append(str(s))
    ct = ContentType.objects.get_for_model(queryset.model)
    if len(selected) > 10000:
        # Too many ids for a GET variable -- stash them in the session instead
        request.session['selected_ids'] = selected
        return HttpResponseRedirect("/admin_export/export_to_xls/?ct=%s&ids=IN_SESSION" % (,))
    return HttpResponseRedirect("/admin_export/export_to_xls/?ct=%s&ids=%s" % (, ",".join(selected)))
export_simple_selected_objects.short_description = "Export selected items to XLS"

Then in my action view

field_name = self.request.GET.get('field', '')
model_class = ContentType.objects.get(id=self.request.GET['ct']).model_class()
if self.request.GET['ids'] == "IN_SESSION":
    queryset = model_class.objects.filter(pk__in=self.request.session['selected_ids'])
else:
    queryset = model_class.objects.filter(pk__in=self.request.GET['ids'].split(','))

At least now the user will know the data is in the session. Note the 10,000 limit is just made up! And if your users don’t know what sessions are, their action will still just work instead of doing nothing 🙂

python – convert documents (doc, docx, odt, pdf) to plain text without Libreoffice

I recently needed to convert some resumes to plain text. There are any number of use cases for wanting to extract readable text from binary formats, so here is a code snippet to do just that. I’m using some non-python Linux programs as well as python libs. Notably absent is Libreoffice, which would take care of a ton of formats but is heavyweight and clunky to script; these programs will convert much faster. First let’s get some dependencies.

Notice – new versions of python-docx removed this function. Make sure to pip install docx and not the newer python-docx.
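The approach boils down to dispatching on file extension to the right converter. A sketch (the tool choices are the usual Linux suspects – antiword, odt2txt, and pdftotext from poppler-utils – plus the old docx package; error handling is minimal):

```python
import os
import subprocess

def document_to_text(path):
    """Return the plain text of a document, shelling out where needed."""
    ext = os.path.splitext(path)[1].lower()
    if ext == '.doc':
        return subprocess.check_output(['antiword', path]).decode('utf-8', 'replace')
    if ext == '.odt':
        return subprocess.check_output(['odt2txt', path]).decode('utf-8', 'replace')
    if ext == '.pdf':
        # '-' tells pdftotext (poppler-utils) to write to stdout
        return subprocess.check_output(['pdftotext', path, '-']).decode('utf-8', 'replace')
    if ext == '.docx':
        # the old `docx` package, per the notice above -- not python-docx
        from docx import opendocx, getdocumenttext
        return '\n'.join(getdocumenttext(opendocx(path)))
    raise ValueError('unsupported format: %s' % ext)
```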

Fedora 20 on Chromebook Pixel

I wrote about the Pixel before. This review will be about using Fedora 20 on the Pixel.


ChromeOS is nice. I was using a lot of vim in an ubuntu chroot, which is of course limited. I was also having suspend issues, which takes away from the stability of ChromeOS – really its main feature. That might have been related to using crouton. Ubuntu installs on the Pixel, but I can’t recommend it: Unity has poor high dpi support, and I had a nightmare of crashes and bugs. Fedora 20 comes by default with gnome 3.10, which I heard supports high dpi. So let’s try it out.


The fedora installer is bad. An installer should target two groups of users – technical and casual. Ubuntu does a great job of this, giving you fast routes to what either user is looking for.

Install perfection – Do it for me or send me to the partition editor.

Fedora on the other hand is just odd. The partition editor is horrid. After finding it, one must type in the size of any created partition. No slider, really? One more complaint – why is there a five second delay in grub? I only have one OS.

Booting to gnome 3.10

The boot animation works – another +1 over Ubuntu, whose boot animation is lots of flickering and random text. I’ve heard it described by teachers as “programming talk”. Ok, now to hack on gnome to make this readable….holy fuck, it detected my high dpi screen automatically. Wow…I’m speechless. Unity can’t resize the top bar. XFCE requires editing themes to make larger borders. KDE can work but requires tweaks (and still looks ugly IMO). Gnome 3.10 just works.

I like the option to add accounts right away on first boot. I added my Google account with 2-factor auth, no problem. The mini browser-like window was scaled perfectly too. Too bad I don’t like evolution, as it’s ready to go without any more configuration. I did install gnome-tweak-tool to decrease the text scaling factor just a tiny bit. The only other tweak I had to do was install a dpi support plugin for Firefox. For the most part things size correctly. Google Chrome has a notable lack of high dpi Linux support, despite ChromeOS being Linux, but you can set the default zoom to compensate.

[Screenshot from 2013-12-18 23:39:19]

Fedora vs Ubuntu

Fedora has been mostly stable so far. The bug report tool hides in the Gnome messaging center, unlike Ubuntu, which displays it more prominently – leading to much panic from some users I’ve encountered.

[Screenshot from 2013-12-18 23:45:59]

I like Gnome 3.10 over Unity in other ways. While I hate the defaults, it’s easy to install extensions; here’s what I use. Unity has equally annoying defaults, but doing something as simple as changing the default tab behavior to the way God intended is difficult and even buggy!

So far I would say I’ve experienced fewer bugs than with Ubuntu, but using Fedora for a few days is hardly a scientific study.

Touch Support

When I say touch support I mean gestures that are intuitive and functional. Sliding a finger to scroll for instance. Emulating a mouse is not touch support.

Gnome has the best touch support I’ve seen in (non-google) Linux, and it still sucks. Some, but not all, of the gnome apps actually support touch out of the box. I can scroll around in Nautilus to my heart’s content. I can launch applications too. But no browsers support it – making it 90% useless. Not even Gnome’s own browser, epiphany, supports it.

Touchegg is a poor man’s touch support. It works sometimes…if you can manage to install it. I had to install from source and hunt down outdated rpms. If you aren’t an expert, I would plan at least an hour or more to get it running. Anyway, I can now scroll with two fingers in Firefox and Chrome. I disabled the other gestures because they don’t work.

Resizing windows using touch is still impossible.

Bad Stuff

  • Headphones autodetect is disabled by default. Fix – Run alsamixer, select sound card, HDA Intel, press right until selecting HP/Speaker Auto Detect, enable by pressing “m”
  • Suspend fails sometimes. I have no fix. In 8 years of using Linux I’ve never had a laptop that suspends reliably. Sometimes the mouse gets stuck in a specific location. Sometimes it just turns off.
  • Wifi – Ubuntu and Fedora both have issues connecting to the tether wifi on my phone. ChromeOS connects fine. Sadness.
  • Most people who “support Linux” really mean they support Ubuntu, and maybe it works elsewhere. Take Dropbox for example – download the rpm from their site and it will break updating because it installs a non-existent repo. Easy fix for someone experienced; for the casual user I could see this just bricking updates.

Should you install it?

Are you a casual Chromebook user looking for extra functionality? Go crouton – it’s a lot easier to set up and you keep the more stable ChromeOS around.

If you’re an experienced Linux user and thinking about which distribution to try – Fedora makes a fine choice. Especially if you want the latest Gnome version.

Django 1.6 urls.defaults compatibility work-around

ImportError: No module named defaults

Getting this error from your under-maintained third-party django apps? First go bug the maintainer with a pull request removing the defaults import:

from django.conf.urls.defaults import *
# should be
from django.conf.urls import *

After that you can work around the issue if you don’t feel like forking the app. Just copy their file into your application then import that urls file instead.

For example, let’s say you want to use django-ckeditor. Copy their urls file into your project. Maybe call it and remove “defaults” from the import.

Now in your project’s URL configuration, change:

(r'^ckeditor/', include('ckeditor.urls')),
# change to
(r'^ckeditor/', include('mystuff.ckeditor_urls')),

Once the 3rd party app fixes their app you can just revert this change.

Using the Gumby Framework in Django Applications

The Gumby Framework is a nice css (and more) framework, like bootstrap. Django isn’t coupled to any particular frontend; that said, I wanted to share my experiences and best practices for using Gumby in Django. The goal here is maintainability and ease of use. If you are just toying with gumby, this how-to is more than you need. If it’s an integral part of your project, do read on.

Where should Gumby go?

For a non-trivial project I suggest installing Gumby and other css, js, etc. with Bower. Bower is a package management system for web libraries – think apt-get for your js and css files. django-bower is a small django app that integrates bower into Django static files. Just follow the django-bower instructions to wire it into your project.

Notice the use of the static files framework – Django magically knows those files are in /components/bower_components/gumby.
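The wiring amounts to a few settings; here is a minimal sketch (the paths and app list are my guesses at a typical layout, not this project’s actual file):

```python
#, minimal django-bower wiring

INSTALLED_APPS = [
    # ... your apps ...
    'djangobower',
]

STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'djangobower.finders.BowerFinder',  # serves the files bower installs
)

# where `./ bower_install` puts things
BOWER_COMPONENTS_ROOT = '/path/to/project/components'
BOWER_INSTALLED_APPS = (
    'gumby',
    'jquery',
)
```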

Let’s talk about file structure. components is a folder I named and told bower to install things to; components/bower_components is where bower installs files. What’s important is that you don’t edit anything in bower_components, because this area gets overwritten on upgrades! That’s the whole point of bower – now you don’t have to keep track of every jquery minor update. components itself, however, is fair game.

Customizing Gumby without cp

Next you probably want to customize gumby. But when you update gumby you don’t want it to overwrite your work! And you are too cool to copy things around. Instead we will set up gumby in components (where we are allowed to edit) and it will refer to bower_components (up-to-date files) as needed. Gumby has a tool called claymate to help automate this. Notice the bower_components reference in their documentation. At this point you should try editing _custom.scss and compiling the css. This is explained in the Gumby documentation so I won’t repeat it.

What goes in version control

You now have a lot of crap in components and you only reference a few files in your actual project. You could just copy these files into version control. Or you could cite them as dependencies and tell users to run ./ bower_install to get the needed files; that adds a lot of overhead to anyone developing your project, however. I suggest just adding the entire components folder to version control. You might be adding a MB or two….who cares. You don’t want to scare away potential contributors by making them install npm just to render your html templates.

But I want to use {{ form }} and it just works

Initially, on adopting any css framework, you probably can’t rely on just throwing normal Django forms into templates – and writing forms out long-form in html takes a long time 🙁 Luckily django-floppyforms lets us edit how forms render. And now, introducing the awesomely named django-floppy-gumby: a collection of Gumby-friendly form templates for django-floppyforms. It lets you use {{ form }} and it just works. Note I took some liberties in how to render the templates to my taste. Feel free to fork and submit pull requests 🙂

Letter from the moon

I wrote this fictional letter a little while ago. My friend made an illustration of it and I decided to post it here for fun.

Date: Moon year 2013, Moon day 58.

From: David Matthew
547 Rock Drive
Crater #54
The Moon

Hello Shirley,

I’m glad you have decided to be my new pen pal. It’s difficult for me to find pen pals on Earth. Can you believe those stupid pen pal websites don’t even list the Moon? Much less our language – Moonese. No one wants to learn Moonese anymore, not since the Earth’s cold war ended. We on the moon sure hope there is a new war soon. You earthlings pay much more attention to us during a war.

You’ve noticed my English is perfect by now. With all your English radio and TV transmissions, of course it’s perfect! Some of my children don’t even speak Moonese anymore! Can you believe it? What a stupid fad English is. Same with Russian! You Earthlings hardly ever visit anyway. Did you know us Moonians gave your Astronauts our most expensive moon rocks? All your people gave us back was a stupid flag. We had never seen a flag before, so you can see why we liked it so much at first. Then we realized you can’t eat a flag! What’s the point? It doesn’t even smell nice!

Let me tell you about my life on the moon. I have 2 long term mates and 43 spawn. We live in a small middle class Moon House in Crater #54. Crater #54 is your name for us. We call our town Sqyuzz. We name your geographic regions too! This is because we started naming them before you started sending us your radio messages (which we enjoy so much, thank you!) You live in Lake #567, region C5. Why do you call it Lake Charles now? Why not Lake Shirley? Haha! I guess Charles owns the lake and not you.

At work I convert sun light into small entertainment products. I work at a big “exchange” factory on what you call the “Dark side of the moon.” Now you are thinking, why is there a sunlight exchange on the Dark side of the moon! Well let me tell you – your government isn’t the only inefficient one! That’s right, our United Moon government built a sunlight exchange on the dark side of the moon! Did the government admit their mistake? No! They built a network of mirrors to bring sunlight to us! Can you believe it? I’m sure you can, since your government is always building those sun + carbon exchanges that make calorie-dense “foodstuffs” as you call it. Only then to employ people to exchange it back to carbon! I bet you joke about your governments just as much as we do!

I’m very curious about your life on Earth. What types of rocks do you enjoy most? I like ones high in iron myself. You said you studied Chemistry so I’m sure you know a lot about good food! Can you tell me why humans differentiate Chemists from chefs? In Moonese we use the same word for both. I never see Earthlings eating on the TV shows. I’m not even sure where your mouths are! You have to admit you Earthlings are odd creatures. How do you even tell each other apart? On the moon we track Earthlings (for scientific study) by their carbon to salt ratios.

Shirley – what do Earthlings do when not at work – that is, making TV shows? We know a lot about your actors. We don’t understand what your pastimes are. When humans aren’t making mildly entertaining jokes about sex or talking about some minor war – what is it they do? Do you have a spawn yourself? Maybe a family, as your TV shows talk about? Is your family 2-d or 3-d? I don’t mean to offend, but do you look like one of those people who seems like a drawing, or one that looks like a photograph? I’m sure Earthlings don’t describe each other that way – I am just so ignorant of your Earthling culture!

I hope you will write me back soon. I know how expensive Earth to Moon letters are. I’ll understand if you just send me a TV show back.

Hope to hear from you soon.

David Matthew

Review and how to install Koha Integrated Library System

My needs

A 400 person school wants a small and simple library system. We aren’t a real library and don’t need any advanced features. Check in and out books via barcode. Let students search online for books.

Installing Koha

Koha is easy to install in Ubuntu. They provide a PPA; just follow the guide here. Upgrading versions even seemed to work for me with a simple apt-get dist-upgrade.

I’ll need ldap authentication, but it’s broken for Active Directory right now. They seem responsive to the bug report, so hopefully that gets fixed. I’ll update this post once it’s working without hacks.

Koha features CAS authentication, but I can’t make it work. Even if it did work, it requires clicking on a special CAS link and doesn’t seem to be an option for the administrative backend. Since 99.99% of the world doesn’t know what CAS means – they won’t click on it.

Using Koha

Koha is vastly overcomplicated for my use case. I don’t care what a MARC record is, and I sure don’t need 10 pages of information on each cataloged book. I’m sure a real library might like this, but for me the defaults cause considerable pain. Let’s look at importing a book.

  1. Log into the “intra” section of the website. 
  2. Click More, Cataloging, then Z39.50 search.
  3. I can now search for my book using resources like the Library of Congress. This is nice, but I wish I could search by, say, Amazon, as I’ve seen some other systems allow.
  4. Click import after selecting a result.
  5. Edit fields if needed and import. This is a serious trouble spot; read more below about confusing MARC fields.
  6. Add each item (physical book)

10 pages of fields for a book? I really just want the title, author, year, and subject – that’s it. To make it worse, there are two required fields that by default don’t get imported from the search. This presents a major usability issue. Even the title is scary: a high school volunteer isn’t going to know what a MARC record is.

Screenshot from 2013-06-03 14:00:08

If I were a nicer person I’d remove every field except maybe 5. Since I’m lazy, let’s just get rid of the scary required-but-not-imported fields, which you can edit in Koha’s MARC framework settings. Go ahead and delete Control Number Identifier. Make “Koha [default] item type” default to whatever you want – maybe Books? The fields are still scary in volume, but now the import process should be just hitting save and ignoring this mess.

Next we need to add items which are physical books you have. I don’t care about these fields and just want barcodes. Luckily you can enable auto barcodes in Administration. Just search for “barcode” under Global system preferences and turn it on. Now I can just count the number of physical copies I have and enter it in!

Print Barcodes

To print barcodes just go to Tools, Labels. You can follow the documentation, which is a little too confusing. You can add a batch of books, which you can search by date added. What’s important to me is that I don’t ever need to give a damn about barcode numbers.

Thoughts on Koha

Koha is too complicated for my use case and is going to scare away some less technical administrators. However, after the initial setup it’s very easy to create barcodes, check books in and out, and search for books. The check-in and check-out system really deserves praise. To check out, just enter someone’s name in the checkout search box (which is highlighted by default). It does an ajax search for quick selection. You could also use a library card with a barcode. Next, scan a book’s barcode that you printed earlier. The book is now checked out. Notice I didn’t say click here, check this, etc. It just works, and the defaults are probably what you want. Nice!

Checking in is even easier. Click Check in, then scan the book. Done.

I’m making an internal user’s guide to Koha for librarians at my school. Stay tuned – I’ll publish it here too.

Koha also has paid hosting, however it’s vastly over my budget for a school. It’s probably fine if you are running a full library and need Koha’s many features. If you want cheaper hosting for your school, contact Burke Software for options – I’d be happy to host it for you.

My review of the Chromebook Pixel

I purchased a Chromebook Pixel two weeks ago. My goal is to replace both my lightweight travel laptop and my hulky System 76 with dedicated GPU. Thought I would share my experience.


It’s the only Linux computer I’ve used that successfully suspends and auto updates itself reliably. I stay in ChromeOS for most leisure use and light sys admin. ChromeOS comes with a simple terminal and you can install ssh apps for more (but still limited) features. I find myself using touch more than I thought I would. Usually I stick with the keyboard – I hate using a mouse.

For terminal use I suggest the Crosh Window app. You can ssh into a foreign server or enter an Ubuntu chroot (more about that later). Here you can see me using Vim and a Django development server. Everything is running locally, no Internet required. Note this all requires Developer mode to be enabled.

Screenshot 2013-04-21 at 20.34.45

But ChromeOS isn’t fully open source!

Oh noes! Neither is Ubuntu, thanks to binary blobs. As for the locked-down nature of Chromebooks – I see nothing wrong with a walled garden as long as you have a key out. That key is developer mode, which is very simple to enable. One complaint – you have to press ctrl-D to boot every time after enabling developer mode. Annoying! The Pixel (unlike older chromebook models) can also boot directly into Linux like a normal computer. Other models require more creativity.

Full Linux

ChromeOS is cute – but sometimes I need a real work environment. I went with crouton and KDE. Crouton has some oddities, such as not working with upstart. That means services like mysql won’t start on “boot” but can be started manually. There are a few other quirks, but overall I’m pleased. It’s certainly easy to install, and you can just keep using the ChromeOS Linux kernel. I don’t like worrying about kernel updates, so now I don’t have to.

What is crouton? It’s a script to set up an Ubuntu chroot for ChromeOS. One can think of it as virtualization in that it lets you use multiple Linux environments at the same time. Unlike virtualization it uses only the original (ChromeOS) Linux kernel and therefore has very low overhead.

To start mysql automatically in crouton, edit /etc/rc.local and add the following before the exit 0 line:

export HOME=/etc/mysql
umask 007
[ -d /var/run/mysqld ] || install -m 755 -o mysql -g root -d /var/run/mysqld
exec /usr/sbin/mysqld&

I’m using KDE, a first for me, because it supports DPI changes better than other environments. The Pixel has such a high resolution that UI elements need to scale up to be usable. I found XFCE to be pretty bad at this – changing the title bars in XFCE requires changing the image files. Yuck! KDE looks very nice after just a few tweaks. Here is what I did.

  1. Click the KDE button
  2. Search for dpi, which reveals the fonts settings
  3. Change it

Not bad, KDE! There are a few more steps, like making the task bar and window borders larger. But by far KDE is the most customizable and intuitive desktop environment I’ve seen. To compare with Ubuntu’s Unity – all you can do is change the text size in “Accessibility”, and there are only 4 choices! It’s worth noting that Unity and Gnome 3 currently don’t run in Crouton 🙁

Touch with Crouton and KDE works surprisingly well. It’s not as good as ChromeOS, but you can scroll and use multi-touch features such as changing workspaces.

Even with KDE many UI elements are off. It’s livable and worth the wonderful resolution. I like editing code with such nicely rendered fonts. If you are going to spend 8 hours a day staring at text in a terminal – why not have really great text? You can see what scales and doesn’t scale on this screenshot.


Gaming on the Pixel

Occasionally I enjoy video games, especially now that Steam for Linux is out. It does work but has some issues. The Intel HD 4000 GPU is minimally capable of playing newer high-end games. I don’t care much about graphics, so this suits me fine. I was able to get Serious Sam 3 running at a very low resolution. I had to enable the X SWAT PPA and run the beta version of Serious Sam 3.

Storage space is a big problem with gaming. I store my steam library on a USB disk. This is fine, but USB 3.0 would have been nice for a $1300 computer! Come on Google! So it loads slowly, but it does work.

Don’t plan on upgrading it

The Pixel can be taken apart fairly easily. However, the memory is soldered on. It seems likely the battery could be replaced, but not nearly as easily as in a traditional laptop. I don’t see the RAM requirements of vim exploding any time soon, so this is ok.

Final Thoughts

The Pixel has some issues for sure. The selling point for me is the wonderful display and it’s the only high end, metal body, lightweight Linux computer out there. The only computer that comes close would be from Apple. Since I hate OSX and Apple’s entire business model, I’m sure not going to buy from them. I haven’t used another computer in two weeks so I’m pretty happy.

Would I recommend it? For most people – no. Despite this good review I can’t think of many people who need a Pixel. If you are new to Linux, the Pixel is a little too hard to get traditional Linux running well on. You would be better off with System 76. Oh well, I love it!