RxJS check-as-you-type validation

rxjs has a steep learning curve, but can do some really cool things. Let’s say you want an input form to do “as you type” async validation. Perhaps it’s checking if the username is taken or not. Another use case could be checking if some url is valid. I implemented this with ngrx-effects (after failing a lot!) and thought I would share.

  asyncServerUrlCheck$ = this.store.select(fromAccount.getLoginForm).pipe(
    filter(form => form.value.showUrl),
    distinctUntilChanged(
      (first, second) => first.value.url === second.value.url
    ),
    switchMap(form =>
      concat(
        timer(500).pipe(
          map(() => new StartAsyncValidationAction(form.controls.url.id, "exists"))
        ),
        // the service call that checks whether the url exists
        this.service.checkUrl(form.value.url).pipe(
          map(() => new ClearAsyncErrorAction(form.controls.url.id, "exists")),
          catchError(() => [
            new SetAsyncErrorAction(form.controls.url.id, "exists", form.value.url)
          ])
        )
      )
    )
  );

Lots going on here and it sums up both my love and hate of rxjs. It’s unreadable garbage code until you understand it – then it’s fantastic. Let’s try to break this mess down.

First off – notice the asyncServerUrlCheck observable (All ngrx effects are just observables) is watching state instead of actions$. This is done because I’m watching the form field’s state instead of waiting for an action. Then I filter out changes that are not to the form field in question and I make sure it’s a real change.

Now the magic starts – next in my pipe is switchMap, which is important because I want to cancel any previous observable. If the user types google.co and then google.com I probably don’t want to check if google.co exists. switchMap throws out any previous work and starts over. Of note – if I didn’t use switchMap I would see a LOT more network requests.

Next up is concat. concat is what is going to allow me to return multiple actions. Without it, I would just get the first start async validation and nothing else. concat is a static function and not an operator (Oh but it was an operator in rxjs 5 – which actually makes me hate rxjs a little because it’s so much mental overhead!). We’ll pass observables as parameters to concat. Our first concat observable is a timer. Timer is what implements the logic to wait until the user stops typing. Because we use it with switchMap earlier – it will get canceled if the user types something else! We could stop right now and have a start async validation action dispatched when the user stops typing. Cool. I do suggest trying this out piece by piece if you want to understand it rather than copying the entire snippet.
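The timer-plus-switchMap behavior can be sketched without rxjs at all. This is only an illustration of the idea, not the effect itself – the `makeAsYouTypeChecker` helper and its delay are hypothetical:

```typescript
// Illustrative sketch of the timer + switchMap idea, without rxjs.
// Each keystroke starts a new "attempt"; any older attempt is
// invalidated by bumping a shared token (what switchMap does for us).
type Validator = (url: string) => Promise<boolean>;

function makeAsYouTypeChecker(check: Validator, delayMs: number) {
  let latest = 0; // token identifying the most recent keystroke
  return async (url: string): Promise<boolean | undefined> => {
    const token = ++latest;
    await new Promise(resolve => setTimeout(resolve, delayMs)); // the "timer"
    if (token !== latest) return undefined; // superseded, like switchMap cancelling
    const ok = await check(url); // the async server call
    if (token !== latest) return undefined; // superseded while awaiting
    return ok;
  };
}
```

Typing google.co and then google.com quickly means only the second attempt actually runs the check – the first resolves to undefined.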

Now I need to add success and failure actions after the async call is made. I’ll add a second parameter to concat, which is my service function call. The function will return an Observable (once again, remember that concat accepts a list of observables). I pipe this into a map and catchError. That logic should look familiar if you’ve used effects before, so I won’t go into detail.

Screenshot from 2018-07-11 10-01-35

This is how it looks in redux devtools. I get lots of set value actions from each user character input. But I don’t get a start async validation for each one (meaning I don’t excessively check the server!). Then I get either a set async error or a clear async error (success) action depending on whether the server url is valid.

I’ve found this pattern hard to grok initially – but now it’s easy to apply elsewhere. Making async validation easy means I’ll be more likely to use it and give users a more interactive experience. Try it yourself at https://passit.io and download the Chrome or Firefox extension (the web version asks for the server url). And if you aren’t a regular visitor to my blog – Passit is an open source, online password manager that my company built, so please give it a try.

Forms with ngrx, NativeScript, and Angular

There are many ways to make forms in Angular. There’s template driven, reactive, and the question of syncing with ngrx state or keeping it local to the component. When making a NativeScript app it’s not always obvious how to reuse these forms. For example, template driven forms in Angular might use the DOM’s “required” attribute. NativeScript doesn’t have a DOM or input component at all, so the required logic would have to be remade, perhaps using a required directive. Redux/ngrx driven forms offer a significant advantage when we have multiple platforms, as ngrx is platform agnostic and we can perform the validation logic in the reducer instead of the component or a directive.

As a case study, I recently rewrote Passit’s login form with the fantastic ngrx-forms package. ngrx-forms takes care of common use cases while providing a blueprint and examples of how to make state-driven forms.

The validation logic is in the reducer; here two different platforms show the errors that get pulled from state.

Using ngrx-forms on the web is straightforward – just follow the docs. For NativeScript you’ll have to make a few changes:

  • There is no “form” component in NativeScript, so you’ll have to manage isSubmitted yourself. You could modify the submit state in the reducer itself or in the component with MarkAsSubmittedAction.
  • ngrx-forms comes with directives to keep the form and ngrx data in sync. But these won’t work out of the box with NativeScript components. Here is a NgrxTextFieldViewAdapter for a TextField. Just add the directive like [ngrxFormControlState]="form.controls.yourField" and the TextField state will sync with the form state, just like on the web.
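The shape of such an adapter can be sketched framework-free. The interfaces below are simplified stand-ins – the real ngrx-forms view adapter interface and NativeScript TextField API differ, so treat this as a rough outline of the bridging idea only:

```typescript
// Rough sketch: bridge a NativeScript-style TextField to ngrx-forms state.
// FakeTextField is a stand-in for the real NativeScript component.
interface FakeTextField {
  text: string;
  on(event: string, handler: (args: { value: string }) => void): void;
}

class NgrxTextFieldViewAdapterSketch {
  private onChange: (value: string) => void = () => {};

  constructor(private field: FakeTextField) {
    // Push native text changes back into ngrx form state
    this.field.on("textChange", args => this.onChange(args.value));
  }

  // Called by the forms library when state changes: update the native view
  setViewValue(value: string): void {
    this.field.text = value;
  }

  // Called by the forms library to register its change listener
  setOnChangeCallback(fn: (value: string) => void): void {
    this.onChange = fn;
  }
}
```

The real adapter does the same two jobs: write state into the native widget, and forward the widget’s change events back to the store.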

Now I can reuse all of my form validation logic in both platforms. The only difference is the presentational components for app and web.

Overall I think ngrx-forms offers a straightforward, redux-friendly, and platform-agnostic solution to forms. Please feel free to take a look at my Android preview release of Passit, the open source password manager. As I create more forms on both platforms I’m looking forward to having a single strategy for building them. Be sure to report bugs on gitlab.

Does your password manager really need permissions to do anything ever?

Screenshot from 2018-03-03 12-01-59

Almost all password managers’ browser extensions have permission to “Read and change all your data on the websites you visit”. If that sounds scary, it’s because it is. That’s the “<all_urls>” permission. It means the extension is allowed to execute arbitrary JavaScript at any time on any website without warning. Here are some examples of what I could do if you installed an extension I made with <all_urls>.

  • Run a keylogger on every webpage you visit
  • Inject extra ads into every website
  • Run a password manager that autofills every login form even when you do not ask it to – this is in fact common. This includes malicious forms that might have been injected into a victim’s website to steal your password.
  • Run a password manager that checks each and every domain you ever visit and forgets to sanitize the domain url, making it vulnerable to code injection attacks that could let a rogue website capture all of your passwords.
  • If I’m feeling only slightly evil, I simply record each domain you ever visit and sell that data to advertisers.
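For context, this permission lives in the extension’s manifest. A sketch of the difference, using Chrome’s manifest.json format (the names and exact permission lists here are illustrative, not Passit’s actual manifest):

```json
{
  "name": "scary-extension",
  "permissions": ["<all_urls>"]
}
```

versus an extension that only acts on the current tab when you invoke it:

```json
{
  "name": "less-scary-extension",
  "permissions": ["activeTab", "storage"]
}
```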

Passit does not require the <all_urls> permission. This doesn’t make us invulnerable to all extension-based attacks, but it greatly mitigates them. Let’s consider some:

  • “I sell you out (perhaps to another company) and start making my extension serve ads or other garbage” – you’d get a notification about the increased permissions and hopefully you’d check our blog to see why we want scary permissions.
  • “My sloppy code is vulnerable to JS injection attacks” – but because Passit doesn’t run until you invoke it, that probably only happens on websites you already know and trust at least somewhat.

Forget security vs convenience

(Sort of)

Passit has easy to use shortcuts for autofill, so you don’t give up much convenience. I personally don’t find pressing a shortcut key to log in to be a big burden, especially with such nice security gains. Our strategy also means Passit will never have Clippy-esque “Would you like to save this password??” forms because Passit will never bother you until you activate it.

That said, Passit is positioned to be a web-based, easy-to-sync, and share/organization-friendly password manager. I think password managers like pass still offer a benefit for personal use when you don’t care about sharing or autofill. The most secure password manager would be a piece of paper in a safe in a fort. At some point, we have to pick where we are comfortable between security and privacy vs convenience. I hope Passit makes an appealing choice that is nicer to use than programs like pass or KeePass while providing better security and privacy than LastPass or 1Password.

Try Passit out today. Use our free hosted service or run it yourself. If you like it, please star us on Gitlab and report some feature requests or issues.

Review: Dell XPS 13 9370 Developer Edition

As the owner of Burke Software and Consulting I get to play with a few more Linux laptops than I would as an individual. I recently picked up Dell’s latest XPS 13 (9370) Developer Edition. Here’s my review as a developer.

Comparing Laptops

I compared the current 9370 model with the 2016 9350 model. I also compared a couple benchmarks with the Galago Pro 2 from System76, which is another laptop with Linux preloaded.

Both XPS computers are top of the line with Intel i7 branded CPUs. The 9370 has an 8th generation Intel i7-8550U CPU while the 9350 has a 6th generation i7-6560U. The Galago Pro was configured with an i5 processor – so while it’s interesting to throw in, it’s not a totally fair comparison.

Hardware and Appearance

The 9370 model is noticeably slimmer as seen in these photos. The other dimensions are the same. It’s a little lighter too. I tested it with a friend, having them close their eyes and pick the lighter laptop to make sure it wasn’t a placebo effect. The Dell USB-C charger is also a little bit smaller, which is nice.

The camera is still unusable due to its placement. Dell moved it to the center in the 9370 model which is maybe a slight improvement, but all you will see is your fingers on the keyboard and up your nose with this camera. I can’t think of any circumstance in which I would use this horribly placed camera.


I benchmarked some typical developer tasks. I tested by building the open source password manager Passit. This has been my passion project for the past couple of years and if you aren’t using a password manager yet and like open source I encourage you to try it out (and give me feedback).

I tested a Webpack bundle from the passit-frontend Gitlab repo. This runs angular-cli’s serve command.

yarn start (webpack)
9370: 11173ms
9350: 12322ms
Galago: 13945ms

It appears the 9370 is only slightly faster, saving about one full second. That is not impressive for an upgrade of two CPU generations.

Ubuntu Disk Utility Benchmark
Average Read Rate
9370: 2.7 GB/s
9350: 1.6 GB/s
Average Access Time (lower is better)
9370: 0.04 msec
9350: 0.22 msec

Looks like the 9370 has a faster SSD. That’s always a good thing. It also shows how benchmarks can look impressive while real-world results don’t reflect them.

time tns run android
9370: 1m 49s
9350: 2m 9s

This test uses the passit-mobile repo. There’s a lot going on. It needs to compile the TypeScript into JavaScript, build the apk file, start an Android emulator, and load the apk file on the device. I did my test using the Unix “time” command. I manually stopped it when I saw the Passit app fully loaded in the emulator. I like this test because it does SO much. Some of the Android compilers take advantage of multiple cores while the node based tools can use only one core. A big change on the i7-8550U CPU is having 4 cores and 8 threads (previous comparable U series CPUs had 2 cores). We can see a decent speed boost on the new XPS here. Nothing earth-shattering. If your workload makes heavier use of multiple cores you might see bigger jumps in performance.

All tests were done on battery.


I was pleased to see a good boost in battery life on the new XPS 9370 despite it having a smaller battery and a higher resolution display of 3840 x 2160 (the 9350 topped out at 3200 x 1800). Battery testing is hard and your results will vary. In the type of work I do (web development, vim, browsing) I got about 4-5 hours on the 9350. I seem to get around 7-8 on the 9370. I’ve only been using it a couple of days, so I’ll update this post if I find it varies. On both setups I do not have powertop or TLP installed – I find they can make Linux too unstable for me.

If you opt for the lower screen resolution on any XPS model, you will get vastly better battery life. Unfortunately, Dell does not offer the lower resolution with 16GB of RAM, which made it not an option for me.


The XPS 13 line is not for gaming so I’ll keep this short. The 9350 had an Intel Iris integrated GPU option while the 9370 offers only the less powerful UHD 620 integrated GPU. At first I was worried this would mean no more Cities Skylines for me – but that is not the case. Cities Skylines runs just fine on very low settings on the 9370. Dell says the heat dissipation is improved on their new laptop so it may be one reason for decent performance even without the better Iris GPU. If you want some light gaming, the XPS 13 dev edition is a solid choice.


The new Dell XPS developer edition is a modest improvement. I think the better battery life, even on the higher end model, is the biggest improvement. It’s probably not worth upgrading from the 9350 model but it might be from anything earlier. Performance gains are minimal, possibly due to Intel CPUs having only minor performance upgrades the past couple of years.

Building Nativescript apps on Gitlab CI

I’m building a Passit Android app through Gitlab CI. My goal is to produce a signed apk file in passit-mobile’s public Gitlab repo without exposing my signing key. This post will outline what I did and act as a starting point to anyone wishing to do the same. It should also be relevant for anyone using React Native or another Docker based CI solution.

Screenshot from 2018-01-27 18-31-52

At a high level: In order to generate the APK file automatically in Gitlab CI, I need to build my project in Docker. That means I need the Android SDK and the Node environment for Nativescript in Docker. I also need to get my key file and password to the CI runner in a secure manner. Finally, I need to place the file somewhere where it can be downloaded.

Here’s my complete .gitlab-ci.yml file.

Local Docker

Local docker isn’t CI! OK, but we need to run everything locally inside Docker first to make sure it works. It’s no fun waiting on slow CI when debugging. I’ll create a Dockerfile for my project. I found runmymind/docker-android-sdk:ubuntu-standalone to be the most popular and still-updated Android SDK Docker image at the time of this writing. However, the image doesn’t have node installed, since Android is Java-based. So my Dockerfile is going to install node, Nativescript, other packages, and the Android platform version I want. Here are the relevant lines:

# Installs Node.js
RUN cd && \
 wget -q https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz && \
 tar -xzf node-v${NODE_VERSION}-linux-x64.tar.gz && \
 mv node-v${NODE_VERSION}-linux-x64 /opt/node && \
 rm node-v${NODE_VERSION}-linux-x64.tar.gz
ENV PATH ${PATH}:/opt/node/bin

# Installs nativescript
RUN npm install -g nativescript --unsafe-perm
# installs android platform
RUN $ANDROID_HOME/tools/bin/sdkmanager "tools" "platform-tools" "platforms;android-25" "build-tools;25.0.2" "extras;android;m2repository" "extras;google;m2repository"

Neat. Notice the --unsafe-perm in the Nativescript install. That’s right from the Nativescript advanced install steps. At this time platform version 25 works with Nativescript, but you may need to change it.

I also made a tiny docker compose file. I probably don’t need to, but I like its easy-to-remember CLI syntax. To build and run the docker image:

  1. Install Docker if you haven’t already
  2. docker-compose build
  3. docker-compose run --rm android bash

The last step gets you a bash terminal inside the container. I suggest trying a tns command to see if it works – it may not connect to any emulators or devices, but it should be able to build.


Before continuing, I suggest you read https://docs.nativescript.org/publishing/publishing-android-apps if you haven’t already.

Generally I build with webpack, aot, and snapshot but not uglify (can’t get it to work yet). This looks like:

tns build android --release --bundle --env.snapshot --env.aot --key-store-path your-key.jks --key-store-password <your-password> --key-store-alias-password <your-password> --key-store-alias your-alias

I’d suggest ensuring this command works outside of Docker first, then within your Docker container, and finally on CI. The reason to use all of this and not just tns build android is to make app performance and startup better. Nativescript + Angular is quite slow otherwise. If you aren’t using Angular, it will be a little different but you can still use snapshot.

Gitlab CI

At this point you have verified that you can build an APK file inside of Docker. Next up is the .gitlab-ci.yml file. I basically just mimic my local Dockerfile. You could actually use the same image if you wanted and use Docker in Docker. It seems a little simpler to me to keep them separate. There are a few extra commands I’ll call out.

echo $ANDROID_KEY_BASE64 | base64 -d > key.jks

I need to get my key file into gitlab CI without committing to the public repo. I can use Gitlab CI’s protected secret variables for this. Unfortunately, Gitlab doesn’t support private files at this time. But I can base64 encode bytes – and what is a file if not a series of bytes?

cat your-key.jks | base64 -w 0

This will convert your key file to base64 without newlines (which will be more gitlab CI variable friendly). Now paste this base64 into a secret and protected variable. I called mine ANDROID_KEY_BASE64. I also set my password for the key to ANDROID_KEY_PASSWORD (doesn’t need to be base64). Now I can regenerate my key file on the fly in the CI runner. Protected secret variables will only show up in a protected branch – and I can limit who has access to protected branches. So no one can get my info by, say, submitting a merge request with echo $ANDROID_KEY_PASSWORD.
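The same round trip can be sketched in Node (TypeScript), with any byte buffer standing in for the key file:

```typescript
// Sketch: base64-encode arbitrary bytes (a key file) so they fit in a
// CI variable, then decode them back to the original bytes.
function encodeForCiVariable(fileBytes: Buffer): string {
  // Single line, no newlines -- equivalent to `base64 -w 0`
  return fileBytes.toString("base64");
}

function decodeFromCiVariable(b64: string): Buffer {
  // Equivalent to `echo $ANDROID_KEY_BASE64 | base64 -d > key.jks`
  return Buffer.from(b64, "base64");
}
```

Because base64 is lossless, the decoded bytes in the CI runner are identical to the original key file.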

Even if your repo is private, it’s not a bad idea to keep secrets hidden so that others can contribute without seeing them.

The last thing I’ll note is my artifacts section, which will store my APK file after each build. This is great for daily build users or if you want to just manually upload the APK to Google Play and other stores. Ensuring the app builds with production settings is also a fantastic test in itself.

Final Thoughts

That’s it – fully automated APK builds. This method will not work for iOS, as Apple does not allow you to build iOS apps on non-Apple operating systems such as Linux. And Docker runs on Linux.

(Side rant: Screw you, Apple! I sure hope the U.S. gets an administration that cares about anti-trust again – then maybe your development process wouldn’t be two decades in the past. Why would you make a human build something when it could just be automated?)

Improvements and/or the next levels of this process:

  • Running unit and integration tests. Unit testing comes with nativescript and integration testing could be done with Appium. You can actually run an Android emulator in Docker.
  • Automating the deploy based on branch or tags to publish to any stores.
  • It would be great if someone made a Nativescript app-building Docker image – that would let me simplify many steps here. I could imagine even tagging each supported Android platform version so the only steps to do per project would be sending over the key file, npm install, and tns build.
  • You could try iOS builds by hooking up a physical Apple computer to gitlab CI – you can keep it in your garage right next to your horse and buggy!

Feel free to comment if any of these steps aren’t working right!

Server side tracking with piwik and Django

Business owners want to track usage to gain insights on how users actually use their sites and apps. However, tracking can raise privacy concerns, degrade site performance, and create security risks by inviting third-party javascript to run.

For Passit, an open source password manager, we wanted to track how people use our app and view our passit.io marketing site. However, we serve a privacy-sensitive market. Letting a company like Google snoop on your password manager feels very wrong. Our solution is to use the open source and self-hosted piwik analytics application with server side tracking.

Traditional client side tracking for our marketing site

passit.io uses the piwik javascript tracker. It runs on the same domain (piwik.passit.io) and doesn’t get flagged by Privacy Badger as a tracking tool. It won’t track your entire web history like Google Analytics or Facebook like buttons do.

Nice green 0 from privacy badger!

To respect privacy we keep the default piwik settings to anonymize IP addresses and respect the Do Not Track header.

Server side tracking for app.passit.io

We’d like to have some idea of how people use our app as well. Sign ups, log ins, groups usage, etc. However, injecting client side code feels wrong here. It would be a waste of your computer’s resources to report your movements to our piwik server, and it provides an attack vector. What if someone hijacked our piwik server and tried to inject random js into the passit app?

We can track usage of the app.passit.io api on the server side instead. We can simply track how many people use different api endpoints to get a good indication of user activity.

Django and piwik

Presenting django-server-side-piwik – a drop in Django app that uses middleware and Celery to record server side analytics. Let’s talk about how it’s built.

server_side_piwik uses the python piwikapi package to track server side usage. Their quickstart section shows how. We can implement it as Django middleware. Every request will have some data serialized and sent to a celery task for further processing. This means our main request thread isn’t blocked and we don’t slow down the app just to run analytics.

from django.conf import settings
from ipware.ip import get_ip  # django-ipware

from .tasks import record_analytic  # the Celery task (location illustrative)


class PiwikMiddleware(object):
    """ Record every request to piwik """
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)

        SITE_ID = getattr(settings, 'PIWIK_SITE_ID', None)
        if SITE_ID:
            ip = get_ip(request)
            # META keys worth sending along (representative list)
            keys_to_serialize = [
                'HTTP_USER_AGENT',
                'REMOTE_ADDR',
                'HTTP_REFERER',
                'HTTP_ACCEPT_LANGUAGE',
                'SERVER_NAME',
                'PATH_INFO',
                'QUERY_STRING',
            ]
            data = {
                'HTTPS': request.is_secure()
            }
            for key in keys_to_serialize:
                if key in request.META:
                    data[key] = request.META[key]
            # offload to Celery so the request thread isn't blocked
            record_analytic.delay(data, ip)
        return response


Now you can track usage from the backend which better respects user privacy. No javascript and no Google Analytics involved!

Feel free to check out the project on gitlab and let me know any comments or issues. Passit’s source is also on gitlab.

Using libsodium in Android and NativeScript

Libsodium is a fantastic crypto library, however its documentation can be lacking. Libsodium supports many languages via native wrappers and javascript via asm.js. I’ll document here how I got it working in Android, both in Java and NativeScript. The target audience is someone who knows web development but not Android development.

Java + Android

We’ll use libsodium-jni. The install instructions didn’t make sense to me at first because I’m not an experienced Android developer. There is no need to manually compile it – they publish builds (AAR files) to the Sonatype OSS repository. This is analogous to publishing packages to the NPM repository if you’re more familiar with javascript. Just like how there are a million ways to install javascript dependencies, there are a million and one ways to install java dependencies. This can make documentation appear confusing or contradictory. Here’s how I did it.

Make a new Android Studio project (I suggest starting simple instead of using your existing project). Edit app/build.gradle and in the dependencies section add:

compile 'com.github.joshjdevl.libsodiumjni:libsodium-jni-aar:1.0.6'

Note the latest version might change. The format of that string is ‘groupId:artifactId:version’, which you can find in libsodium-jni’s readme. That’s actually all you need to do to install it – gradle will find the package for you. Now try to run the app; you may get errors about android:allowBackup and the minimum Android version supported.

I don’t understand why allowBackup could have any connection to libsodium – native dev kind of sucks if you are used to something nicer like Python or web development. We can fix it by editing app/src/main/AndroidManifest.xml and adding this attribute to the application tag:

tools:replace="android:allowBackup"
If you run again, it will complain about the “tools” attribute. Android XML configuration is fucking awful – go add this attribute to the manifest tag:

xmlns:tools="http://schemas.android.com/tools"
Next we may need to fix the version number – this issue actually makes sense. Both your application and libsodium-jni declare the minimum android version supported. Just edit app/build.gradle and change minSdkVersion to whatever libsodium is asking for – at the time of this writing it’s minSdkVersion: 16.

It should now compile. Next let’s add some test code. Open MainActivity.java and we’ll generate a key pair to prove libsodium works. Add imports:

import android.util.Log;

import org.libsodium.jni.SodiumConstants;
import org.libsodium.jni.crypto.Random;
import org.libsodium.jni.keys.KeyPair;

Modify the onCreate function to generate and print out the key pair’s public key:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    byte[] seed = new Random().randomBytes(SodiumConstants.SECRETKEY_BYTES);
    KeyPair encryptionKeyPair = new KeyPair(seed);
    // print the public key to logcat to prove libsodium works
    Log.d("libsodium", encryptionKeyPair.getPublicKey().toString());
}
Now the app should run and the key should print out in logging (logcat). Success!


Screw Java and Android XML and gradle configuration – let’s use javascript. A few years ago this might have been a sarcastic remark…

Amazingly, you can generate typescript definitions using this. It works great – but I’d actually suggest getting a little familiar with the Java library in Java first. Java and javascript use very different conventions and the generated type definitions might not be perfect. Here’s a generated libsodium-jni type def. https://gitlab.com/snippets/1665527

To install, make the same changes you made in the pure Java app. The gradle and XML files are in platforms/android/. Drop the libsodium type def file into the root of your project. You should now have code completion and type checking. Now let’s add that keypair generation test again to make sure it’s working.

let rand = new org.libsodium.jni.crypto.Random().randomBytes(org.libsodium.jni.SodiumConstants.SECRETKEY_BYTES);
let key = new org.libsodium.jni.keys.KeyPair(rand);
console.log(key.getPublicKey().toString());

Looks weird right? But it should work. You probably want to make a wrapper around this in either javascript or Java. Mixing Java calls into your javascript will look strange and be hard to read and debug.

Hopefully you can run the project as usual with tns run android. You may want to build the project or run gradlew clean first to avoid any cached builds. If you are successful you should again get the public key printed out in the log. Excellent! Now you can make a web/native hybrid app that uses libsodium’s cryptography and can run on the web and as an app!

One more gotcha

If you have used libsodium in another language like Python or Javascript you may be spoiled with higher level function wrappers. For example, in javascript libsodium you can run “let keypair = sodium.crypto_box_keypair();” to generate a keypair. libsodium-jni has very few high level wrappers, so you’ll need to write C-like code in Java to mimic the low level functions. To make matters worse, libsodium-jni isn’t documented. I found searching the code for unit tests to be the only way to find the right syntax. If you just start calling functions at will you’ll probably get a missing implementation error. If this happens, you may want to see how it’s done in a unit test. Here is a link to the earlier key pair example but done in a low level way. Now you are writing C-like code in Java in Javascript. This is your life now.

Building a continuous integration workflow with gitlab and openshift 3

In this post I’ll go over building and testing a Docker image with gitlab CI and then pushing that image to Openshift 3. It should be somewhat helpful for people using other Docker solutions like Kubernetes, or other CI solutions like Jenkins. I’m using Django for the project with some front end assets built in node.

Our goal is to have one docker environment used in development, CI, staging, and production. We’ll avoid repeating ourselves with image building.

At a high level my workflow looks like

Screenshot from 2016-04-01 15-12-12

Local development

All local development happens with docker compose. There is plenty of info on the matter so I’ll skip most of this. I will point out that I want to use the same python based docker image for development and later in production.

Continuous Integration

I’m using gitlab and gitlab CI runner to do testing and build a docker image. Gitlab has some docs on how to build a docker image. The choices are shell and docker-in-docker. I found docker-in-docker to be slow, complex, and error prone. In theory it would be better since the environments are more isolated.

Gitlab CI building images with shell executor

Here is my full .gitlab-ci.yml file for reference.

The goal here is to build the image, run unit tests, deploy on success, and always clean up. I had to run gitlab ci runner as root otherwise I would get permission errors. 😦

In a non-trivial CI system, shell can get messy too. We need to be concerned about building too many images and filling all disk space, exhausting the number of docker network subnet pools, and ensuring concurrency works (if you need that).

Disk space – I suggest using a service that lets you attach a large volume that is formatted with a lot of inodes and using Docker’s overlayfs storage engine. I used AWS’s EC2 with a 120GB mount for /var/docker. See this blog post for details. Pay attention to the part where you define inodes. I went with 16568256.

Docker clean up – Gitlab has a docker image that can help clean up docker images and containers for you here. I’d also consider restarting the server at night and running your own clean up scripts too. I also place a CI cleanup stage like

stages:
  - build
  - test
  - deploy
  - clean

variables:
  COMPOSE: docker-compose -p myproject$CI_BUILD_REF

clean_up:  # job name illustrative
  stage: clean
  when: always
  script:
    - $COMPOSE stop
    - $COMPOSE down
    - $COMPOSE rm -f
I’ve been using docker since 1.0 and I’m still always amazed by how it finds new ways of breaking itself. You may need to add your own hacks to seek and destroy docker images and containers that will want to build up forever.

The $CI_BUILD_REF is to ensure each docker image is unique – this allows us to run multiple builds and have some certainty the image being tested is the one being pushed to docker hub.
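The per-build uniqueness trick can be sketched as a tiny helper (illustrative only – in .gitlab-ci.yml it’s just shell string interpolation):

```typescript
// Sketch: compose a per-build docker-compose project name so concurrent
// CI jobs don't collide. CI_BUILD_REF is gitlab's commit SHA variable.
function composeProjectName(project: string, buildRef: string): string {
  // docker-compose project names should be lowercase alphanumerics
  const ref = buildRef.toLowerCase().replace(/[^a-z0-9]/g, "");
  return `${project}${ref}`;
}
```

Two pipelines running at once then get distinct project names, so their containers and networks never clash.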

The test stages are rather django/node specific. Just place whatever code needs to execute to run tests here. If it gets a success exit code gitlab CI will know it passed.

Pushing to docker hub – I’m tagging my tested image, pushing it to docker hub, and running a webhook to notify openshift to automatically pull the image and deploy it to staging.

deploy_staging:  # job name illustrative
  stage: deploy
  only:
    - qa
  script:
    - echo "Tag and push ${PROJECT_NAME}_web"
    - docker tag ${PROJECT_NAME}_web ${IMAGE_NAME}:qa
    - docker push ${IMAGE_NAME}:qa
    - "./bin/send-deploy-webhook.sh qa $DEPLOY_WEBHOOK_STAGING"

Notice how I’m only running this on the qa branch and that I’m tagging the image as “qa”. I’m using docker tags so that I can have one image that has different development stages – dev, staging, and production.

Openshift with docker build strategy

Openshift lets you build using a source-to-image strategy or a Docker strategy. Source-to-image would mean rebuilding a docker image – which we already did in CI – so let’s not use that. The Docker strategy was a bit confusing to me, however. I ended up with a very minimal build stage using Openshift’s Docker build strategy. Here is a snippet from my build yaml.

  source:
    type: Dockerfile
    dockerfile: "FROM thelab/tsi-cocoon:dev\nRUN ./manage.py collectstatic --noinput"
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: DockerImage
        name: 'docker.io/user/image:dev'
      pullSecret:
        name: dockerhub
      forcePull: true

Notice the source type Dockerfile with a very minimal inline dockerfile that just pulls the right image and collects static files (collectstatic is Django specific – this could really just be FROM image-name).

The strategy is set to type: Docker and includes my docker image and the “secret” needed to pull the image from my private repo. Note that you must specify the full docker registry (docker.io/etc.) or else it will not work with a private registry. You need to add the secret using oc secrets new dockerhub .dockercfg=dockercfg, where dockercfg is the file that might be under ~/.dockercfg.

forcePull is set to true so that openshift does a docker pull each time.

You’ll need to define deployments, services, etc. in Openshift too – but that’s outside the scope of this post. I switched a source-to-image build to docker-based without having to touch anything else.

That’s it – the same docker image you used with compose locally should be on openshift. I set up a workflow where git commits on specific branches automatically deploy on openshift staging environments. Then I manually trigger the production deploy using the same image as staging.

Extending Openshift’s source to image

Openshift 3 introduces a new concept, “source to image” or s2i. It’s a way to create a Docker image out of some source code plus a builder Docker image – for example a Python s2i image. It makes Openshift 3’s docker-based workflow feel more like Openshift 2 or Heroku.

Image from http://blog.octo.com/en/openshift-3-private-paas-with-docker/

One problem I ran into using s2i-python was a lack of binary packages installed that are needed to build things like Pillow. For this we need to extend the base image to add what we want. You can see the end result on github.

Let’s review the Dockerfile I built. We start by extending centos’s s2i image. Note I’m using Python 3.4. You can review the upstream project here to see what I’m extending.

FROM centos/python-34-centos7

Next I mimic how centos installs packages in the upstream file – compare upstream and my version.

You can see I installed postgresql client tools and node too, which I need. I also install a couple of pip packages I know I’ll need on all my Python projects – this is done just to speed up build times. Do as much or as little as you want.
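As a rough sketch of what such an extension looks like (the exact package names here are assumptions, not copied from my upstream file), the Dockerfile follows the usual pattern of switching to root, layering on system and pip packages, and dropping back to the unprivileged s2i user:

```dockerfile
FROM centos/python-34-centos7

# System packages need root; the upstream s2i image runs as user 1001
USER root

# Hypothetical package set: postgresql client tools, node, and the
# headers needed to build binary packages like Pillow and psycopg2
RUN yum install -y --setopt=tsflags=nodocs \
        postgresql postgresql-devel \
        nodejs npm \
        zlib-devel libjpeg-turbo-devel && \
    yum clean all -y

# Pre-install pip packages shared across projects to speed up builds
RUN pip install --upgrade pip && pip install psycopg2 Pillow

# Drop back to the unprivileged user the s2i scripts expect
USER 1001
```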

Next I’ll post it on Docker hub with automated builds. See here. I added centos/python-34-centos7 as a linked repository. This way my docker image rebuilds any time the upstream image builds – ensuring I get security updates.

Screenshot from 2016-03-02 15-40-55

Finally, I just use the image as I would the original s2i Python image in Openshift. I can add it as I would any image with oc import-image s2i-python --from="thelab/s2i-python:latest" --confirm

Next post I’ll describe how to reuse a customized s2i python image on local development with Docker compose.

Finding near locations with GeoDjango and Postgis Part II

This is a continuation of part I.

Converting phrases into coordinates

Let’s test out Nominatim – the Open Street Maps tool for searching their map data. We basically want to turn whatever the user types into map coordinates.

from geopy.geocoders import Nominatim
from django.contrib.gis.geos import Point

def words_to_point(q):
    """Geocode a free-form query like "New York" into a GEOS Point."""
    geolocator = Nominatim()
    location = geolocator.geocode(q)
    # Point takes (x, y), i.e. (longitude, latitude)
    point = Point(location.longitude, location.latitude)
    return point

We can take vague phrases like New York and get the coordinates of New York City. So easy!
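One gotcha worth calling out: Point takes (x, y), which means (longitude, latitude) – the reverse of the “lat, lon” order most people say out loud. A small sketch with stand-in types (the classes and coordinates here are hypothetical, so it runs without GEOS or a network call to Nominatim):

```python
from collections import namedtuple

# Stand-ins for geopy's Location and GeoDjango's Point
Location = namedtuple("Location", ["longitude", "latitude"])
Point = namedtuple("Point", ["x", "y"])

# Hypothetical geocode result for "New York"
nyc = Location(longitude=-73.98, latitude=40.75)

# Point is (x, y) == (longitude, latitude), not (lat, lon)
point = Point(nyc.longitude, nyc.latitude)
print(point.x, point.y)
```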

Putting things together

I’ll use Django Rest Framework to make an API where we can query this data and return JSON. I’m leaving out some scaffolding code – so make sure you are familiar with Django Rest Framework first, or just make your own view without it.

from django.contrib.gis.measure import D
from rest_framework import mixins, viewsets


class LocationViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):
    """ Searchable list of locations

    GET params:

    - q - Location search parameter (New York or 10001)
    - distance - Distance away in miles. Defaults to 50
    """
    serializer_class = LocationSerializer
    queryset = Location.objects.all()

    def get_queryset(self, *args, **kwargs):
        queryset = self.queryset
        q = self.request.GET.get('q')
        distance = self.request.GET.get('distance', 50)
        if q:
            point = words_to_point(q)
            # D lets us express the search radius in miles
            distance = D(mi=distance)
            queryset = queryset.filter(point__distance_lte=(point, distance))
        return queryset

My Central Park location shows up when I search for New York, but not London. Cool.
Screenshot from 2015-10-02 14-55-45

At this point we have a usable backend. Next steps would be to pick a client side solution to present this data. I might post more on that later.
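A client queries the endpoint by building a URL with the two GET params. A quick sketch, assuming the viewset is routed at /api/locations/ (the actual prefix depends on your DRF router setup):

```python
from urllib.parse import urlencode

# Hypothetical route prefix for the LocationViewSet
base = "/api/locations/"

params = {"q": "New York", "distance": 25}
url = base + "?" + urlencode(params)
print(url)  # /api/locations/?q=New+York&distance=25
```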