Wagtail Single Page App Integration news

wagtail-spa-integration is a Python package I started with coworkers at thelab.

Version 2.0

Wagtail SPA Integration 2.0 is released! The release is maintenance only, but it now requires Wagtail 1.8+, making it a potentially breaking change.

Coming soon version 3.0 with wagtail-headless-preview

A major feature of Wagtail SPA Integration is preview support. Torchbox (the creators of Wagtail) developed their own solution called wagtail-headless-preview. We’ll be migrating to this and it will be a significant breaking change. Aligning with Torchbox’s implementation will reduce our maintenance burden and allow for a generally more “normal” experience. It will also remove a feature/quirk in Wagtail SPA that generated links that could be used for up to one day without authentication.

NextJS Support coming soon!

We have a proof of concept for NextJS support. Preview/contribute it here. This package utilizes NextJS’s dynamic routing and dynamic components, making it possible to base NextJS pages on Wagtail page types. One difference from the Angular Wagtail implementation is that all communication is handled via the NextJS Node server instead of direct Wagtail REST API calls. This results in simpler code, but slightly worse performance.

Angular Wagtail will also receive a minor update to work with 3.0.

Shameless advertising

Looking for an open source error monitoring solution for your Django, Angular, or NextJS apps? Try out glitchtip.com. GlitchTip is compatible with Sentry’s open source SDK but unlike Sentry is 100% open source. We offer paid SaaS hosting as an option. Your support gives us time to continue work on these various open source projects!

Deploy Django with helm to Kubernetes

This guide attempts to document how to deploy a Django application to Kubernetes while using continuous integration. It assumes basic knowledge of Docker and of running Kubernetes, and will instead focus on using Helm with CI. Goals:

  • Must be entirely automated and deploy on git pushes
  • Must run database migrations once and only once per deploy
    • Must revert deployment if migrations fail
  • Must allow easy management of secrets via environment variables

My need for this is to deploy GlitchTip staging builds automatically. GlitchTip is an open source error tracking platform that is compatible with Sentry. You can find the finished helm chart and Gitlab CI script here. I’m using DigitalOcean and Gitlab CI, but this guide should generally work with any Kubernetes provider or Docker-based CI tool.

Building Docker

This guide assumes you have basic familiarity with running Django in Docker. If not, consider a local build first using docker compose. I prefer using compose for local development because it’s very simple and easy to install.
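If you don’t already have a compose setup, a minimal sketch for local Django development might look like this (service names, ports, and the Postgres version are illustrative, not prescriptive):

```yaml
# docker-compose.yml sketch for local development only
version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"  # dev only, never in production
  web:
    build: .
    command: ./manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code  # live-reload local code changes
    ports:
      - "8000:8000"
    depends_on:
      - db
```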

Build a Docker image and tag it with the git short hash. This will allow us to specify an exact image build later on and will ensure code builds are tied to specific helm deployments. If we used “latest” instead, we may end up accidentally upgrading the Docker image. Using Gitlab CI the script may look like this:

docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA} .

This uses -t to tag the new build with the Gitlab CI environment variables to specify the docker registry and tags. It uses “ref name” which is the tag or branch name. This will result in a tag such as “1.3” or branch such as “dev”. This tagging is intended for users who may just want a specific named version or branch. The second -t tags it with the git short hash. This tag will be referenced later on by helm.
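Wrapped into a complete Gitlab CI job with registry login and push, this might look like the following (the job name and docker-in-docker service are a common pattern, shown here as a sketch; the registry variables are Gitlab’s predefined ones):

```yaml
# .gitlab-ci.yml build job sketch
build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME} -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA} .
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}
```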

Before moving on, make sure you can docker pull your CI-built image and run it. Make sure to set the Dockerfile CMD to use gunicorn, uwsgi, or another production-ready server. We’ll deal with Django migrations later using Helm.
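A minimal Dockerfile along those lines could look like this (the Python version and “myproject.wsgi” module path are placeholders for your own project):

```dockerfile
# Dockerfile sketch with a production-ready server as CMD
FROM python:3.8
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Bind to 8080 to match the load balancer targetPort used later in this guide
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "myproject.wsgi"]
```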

Setting up Kubernetes

This guide assumes you know how to set up Kubernetes. I chose DigitalOcean because they provide managed Kubernetes, it’s reasonably priced, and I like supporting smaller companies. DigitalOcean limits choice, which makes it easier to use for average-looking projects; it doesn’t offer the level of customization and services AWS does. If you decide to use DigitalOcean and want to help offset the cost of my open source projects, consider using this affiliate link. My goals for a hosting platform are:

  • Easy to use
  • Able to be managed via terraform
  • Managed Postgres
  • Managed Kubernetes
  • Able to restrict network access for internal services such as the database

Whichever platform you are using, make sure you have a database, know its connection string, and can authenticate to Kubernetes. If you are new to Kubernetes, I suggest deploying any Docker image manually (without tooling like helm) to get a little more familiar. Technically, you could also run your database in Kubernetes and Helm. However, I prefer managed stateful services and will not cover running the database in Kubernetes in this guide.

Deploy to Kubernetes with Helm in Gitlab CI

Now that you have a Docker image and Kubernetes infrastructure, it’s time to write a Helm chart and deploy your image automatically from CI. A Helm chart allows you to write Kubernetes yaml configuration templates using variables. The chart I use for GlitchTip should be a good starting point for most Django apps. At a minimum, read the getting started section of Helm’s documentation. The GlitchTip chart includes one web server deployment and a Django migration job with a Helm lifecycle hook. You may need to set up an additional deployment if you use a worker such as Celery. The steps are the same, just override the container’s start command to run Celery instead of your web server.
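The migration job relies on Helm hook annotations so that migrations run once per deploy and a failure aborts the release. A stripped-down sketch (names and values are illustrative; the real GlitchTip chart is more complete):

```yaml
# templates/migrate-job.yaml sketch -- runs ./manage.py migrate before
# each install/upgrade; if the Job fails, the helm release fails
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["./manage.py", "migrate"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}
                  key: databaseURL
```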

Run the initial helm install locally. This is necessary to set initial values, such as the database connection, that don’t need to be set in CI on each deploy. Reference each value to override in your chart’s values.yaml. If following my GlitchTip example, those are databaseURL and secretKey. databaseURL is the database connection string; I use django-environ to read it. You could also define separate databaseUser, databasePassword, etc. if you like making more work for yourself. The key to making this work is to ensure that, one way or another, the database credentials and other configuration get passed in as environment variables that are read by your settings.py file. Ensure your CI server has built at least one Docker image, and place your chart files in a directory named “chart” in the same git repo as your Django project.
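For reference, a minimal values.yaml for such a chart might look like this (the repository path is a placeholder; the chart’s templates can wrap databaseURL in Helm’s required function so a missing value errors loudly):

```yaml
# values.yaml sketch -- image.tag is overridden by CI on each deploy;
# databaseURL and secretKey are set once during the initial install
image:
  repository: registry.gitlab.com/your-group/your-app
  tag: latest
databaseURL: ""
secretKey: ""
```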

Run helm install your-app-name ./chart --set databaseURL=string --set secretKey=random_string --set image.tag=git_short_hash

If you use GlitchTip’s chart, it will not set up a load balancer, but it will show output explaining how to connect locally to test that everything is working. The Django migration job should also run and migrate your database. This guide will not cover the many options you have for load balancing. I chose to use DigitalOcean’s load balancer and have it directly select the deployment’s pods. Note that in Kubernetes, a Service of type LoadBalancer may provision a service provider’s load balancer and allow you to configure it through Kubernetes config yaml. This will vary between providers. Here’s a sample load balancer that can be applied with kubectl --namespace your-namespace apply -f load-balancer.yaml. Note that it uses a selector to send traffic from the load balancer directly to pods. It also contains DigitalOcean-specific annotations, which is why I can’t document a universal way to do this.

apiVersion: v1
kind: Service
metadata:
  name: your-app-staging
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: long-id
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: /
    service.beta.kubernetes.io/do-loadbalancer-protocol: http
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: your-app-staging
    app.kubernetes.io/name: your-app

At this point you should have a fully working Django application.

Updating in CI using Helm

Now set up CI to upgrade your app on git pushes (or other criteria). While technically optional, I suggest making separate namespaces and service accounts for each environment. Unfortunately this process can feel obtuse at first; I felt it was the hardest part of this project. For each environment, we need the following:

  • Service Account
  • Role Binding
  • Secret with CA Cert and token

As a rough analogy, the service account is a “user”, but for a bot instead of a human. A role binding defines the permissions that something (say, a service account) has. The role binding should grant the “edit” permission for the namespace. The secret is like the “password”, but is actually a certificate and token. Read more in the Kubernetes documentation.
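Put together, the per-environment deploy bot might be defined like this (names are illustrative; depending on your cluster version, a token Secret may be created for the ServiceAccount automatically):

```yaml
# deploy-bot.yaml sketch -- apply with:
#   kubectl --namespace your-namespace apply -f deploy-bot.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-deploy
  namespace: your-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deploy-edit
  namespace: your-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit  # the built-in "edit" role mentioned above
subjects:
  - kind: ServiceAccount
    name: gitlab-deploy
    namespace: your-namespace
```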

Once this is set up locally, test it out. For example, use the new service account auth in your ~/.kube/config and run kubectl get pods --namespace=your-namespace. The CA cert and token from your recently created secret should be what is in your kube config file. I found no sane manner of editing multiple Kubernetes configurations and resorted to manually editing the config file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: big-long-base64 
    server: https://stuff.k8s.ondigitalocean.com
  name: some-name

...

users:
- name: default
  user:
    token: big-long-token-from-secret

Notice I used certificate-authority-data so I could reference the cert inline as base64. Next, save the entire config file in Gitlab CI under Settings, CI/CD, Variables.

Screenshot from 2020-01-24 10-59-53

There’s actually a lot happening in this little bit of configuration. The “File” type in Gitlab CI causes the value to be saved into a random tmp file, and the key “KUBECONFIG” is set to that file’s location. KUBECONFIG is also the environment variable helm uses to locate the kube config file. “Protected” makes the variable available only to protected git branches/tags. If we didn’t set protected, someone with only limited git access could make their own branch that runs cat $KUBECONFIG and view the very confidential data! If set up right, you should now be able to run helm with authentication that just works.

Finally add the deploy step to Gitlab CI’s yaml file.

deploy-staging:
  stage: deploy
  image: lwolf/helm-kubectl-docker
  script:
    - helm upgrade your-app-staging ./chart --set image.tag=${CI_COMMIT_SHORT_SHA} --reuse-values
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master

The stage ensures it runs after the docker build. For the image, use lwolf/helm-kubectl-docker, which has helm preinstalled. The script is amazingly just one line thanks to the authentication and Gitlab CI variable tricks done earlier. It runs helm upgrade with --set image.tag set to the new git short hash, and --reuse-values allows it to set this new value without overriding previous values. Using helm this way allows you to keep database secrets out of Gitlab. Do note, however, that anyone with helm access can read these values. If you need a more robust system, you’ll need something like Vault. But even without Vault, we can separate basic git users who can create branches from admin users who have access to helm and the master branch.

The environment section is optional and lets Gitlab track deploys. “only” causes the script to run only on the master branch; alternatively it could be set for other branches or tags.

If you need to change an environment variable, run the same upgrade command locally and --set as many variables as needed. Keep the --reuse-values flag. Because the databaseURL value is marked as required, helm will error instead of erasing previous values should you forget the important --reuse-values.

Conclusion

I like Kubernetes for its reliability, but I find it creates a large amount of decision fatigue. I hope this guide provides one way to do things that I find works. If you have a better way, let me know by commenting here or even opening an issue on GlitchTip. I’m sure there’s room for improvement. For example, I’d rather generate the Django secret key automatically, but helm’s random function doesn’t let you store it persistently.

I don’t like Kubernetes’ at-times-maddening complexity. Kubernetes is almost never a solution by itself and requires additional tools to make it work for even very basic use cases. I found Openshift handles a lot of common use cases, like deploy hooks and user/service management, much more easily. Openshift “routes” are also defined in standard yaml config rather than forcing the user to deal with proprietary annotations on a load balancer. However, I’m leery of using Openshift Online considering it hasn’t been updated to version 4 and no roadmap seems to exist. It’s also quite a bit more expensive (not that it’s bad to pay more for good open source software).

Finally if you need error tracking for your Django app and prefer open source solutions – give GlitchTip a try. Contributors are preferred, but you can also support the project by using the DigitalOcean affiliate link or donating. Burke Software also offers paid consulting services for open source software hosting and software development.

Django Rest Framework ModelViewSets with natural key lookup

DRF’s ModelViewSet can easily support detail views by slug via the lookup_field attribute. But what if you have compound keys (aka natural keys)? For example, a url structure like

/api/computers/<organization-slug>/<computer-slug>/

A computer slug may only be unique per organization: different organizations may have computers with the same slug, but no two computers within one organization may share a slug. By using both slugs, we can look up a specific computer. We can use the lookup_value_regex attribute for this.

from rest_framework import viewsets
from rest_framework.generics import get_object_or_404


class ComputerViewSet(viewsets.ModelViewSet):
    queryset = Computer.objects.all()
    serializer_class = ComputerSerializer
    lookup_value_regex = r"(?P<org_slug>[^/.]+)/(?P<slug>[-\w]+)"

    def get_object(self):
        queryset = self.filter_queryset(self.get_queryset())
        obj = get_object_or_404(
            queryset,
            slug=self.kwargs["slug"],
            organization__slug=self.kwargs["org_slug"],
        )

        # May raise a permission denied
        self.check_object_permissions(self.request, obj)

        return obj
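As a quick sanity check, the compound pattern can be exercised on its own with Python’s re module (the URL prefix below is illustrative of what the router generates, not copied from DRF):

```python
import re

# The same pattern used as lookup_value_regex above.
LOOKUP = r"(?P<org_slug>[^/.]+)/(?P<slug>[-\w]+)"

# Roughly the detail URL the router builds for the viewset.
detail_route = re.compile(r"^api/computers/" + LOOKUP + r"/$")

match = detail_route.match("api/computers/acme/laptop-42/")
print(match.group("org_slug"), match.group("slug"))  # acme laptop-42
```

Both named groups land in self.kwargs, which is why get_object can filter on org_slug and slug together.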

This works with drf-nested-routers. For example, we could add a nested /hard_drives viewset. The url values are in self.kwargs.

class HardDriveViewSet(viewsets.ModelViewSet):
    queryset = HardDrive.objects.all()
    serializer_class = HardDriveSerializer

    def get_queryset(self):
        return (
            super()
            .get_queryset()
            .filter(
                computer__slug=self.kwargs["slug"],
                computer__organization__slug=self.kwargs["org_slug"],
            )
        )

Angular Wagtail 1.0 and getting started

Angular Wagtail and Wagtail Single Page App Integration are officially 1.0 and stable. It’s time for a more complete getting started guide. Let’s build a new app together. Our goal will be to make a multi-site enabled Wagtail CMS with a separate Angular front-end.  When done, we’ll be set up for features such as

  • Map Angular components to Wagtail page types to build any website tree we want from the CMS
  • All the typical Wagtail features we expect: drafts, redirects, etc. No compromises.
  • SEO best practices including server side rendering with Angular Universal, canonical urls, and meta tags.
  • Correct status codes for redirects and 404 not found
  • Lazy loaded modules
  • High performance, cache friendly, small JS bundle size (In my experience 100kb – 270kb gzipped for large scale apps)
  • Absolutely no jank. None. When a page loads we get the full page. Nothing “pops in” unless we want it to. No needless dom redraws that you may see with some single page apps.
  • Scalable – add more sites, add translations, keep just one “headless” Wagtail instance to manage it all.

Start with a Wagtail project that has wagtail-spa-integration added. For demonstration purposes, I will use the sandbox project in wagtail-spa-integration with Docker. Feel free to use your own Wagtail app instead.

  1. git clone https://gitlab.com/thelabnyc/wagtail-spa-integration.git
  2. Install docker and docker-compose
  3. docker-compose up
  4. docker-compose run --rm web ./manage.py migrate
  5. docker-compose run --rm web ./manage.py createsuperuser
  6. Go to http://localhost:8000/admin/ and log in.

Set up Wagtail Sites. We will make one root page and multiple homepages, one representing each site.
Screenshot from 2019-10-20 12-08-46

You may want to rename the “Welcome to Wagtail” default page to “API Root” just for clarity. Then create two child pages of any type to act as homepages. If you don’t need multi-site support, just add one instead. Wagtail requires the Sites app to be enabled even if only one site is present. The API Root will still be important later on for distinguishing the Django API server from the front-end Node server.

Next head over to Settings, Sites. Keep the default Site attached to the API Root page. Add another Site for each homepage. If you intend to have two websites, you should have three Wagtail Sites (API Root, Site A, Site B). Each hostname + port combination must be unique. For local development, it doesn’t matter much. For production you may have something like api.example.com, www.example.com, and intranet.example.com.

Screenshot from 2019-10-20 15-13-39

Next let’s set up the Wagtail API. This is already done for you in the sandbox project but when integrating your own app, you may follow the docs here. Then follow Wagtail SPA Integration docs to set up the extended Pages API. Make sure to set WAGTAILAPI_BASE_URL to localhost:8000 if you want to run the site locally on port 8000. Here’s an example of setting up routes.

api.py

from wagtail.api.v2.router import WagtailAPIRouter
from wagtail_spa_integration.views import SPAExtendedPagesAPIEndpoint

api_router = WagtailAPIRouter('wagtailapi')
api_router.register_endpoint('pages', SPAExtendedPagesAPIEndpoint)

urls.py

from django.conf.urls import include, url
from wagtail.core import urls as wagtail_urls
from wagtail_spa_integration.views import RedirectViewSet
from rest_framework.routers import DefaultRouter
from .api import api_router

router = DefaultRouter()
router.register(r'redirects', RedirectViewSet, basename='redirects')

urlpatterns = [
    url(r'^api/v2/', api_router.urls),
    url(r'^api/', include(router.urls)),
    url(r'', include(wagtail_urls)),
]
Test this out by going to localhost:8000/api/ and localhost:8000/api/v2/pages/

If you’d like to enable the Wagtail draft feature, set PREVIEW_DRAFT_CODE in settings.py to any random string. Note this feature will generate special one-time, expiring links that do not require authentication to view drafts. This is great for sharing, and the codes expire in one day. However, if your drafts contain more sensitive data, you may want to add authentication to the Pages API. This is out of scope for Wagtail SPA Integration, but consider using any standard Django Rest Framework authentication such as tokens or JWT. You may want to check whether a draft code is present and only require authentication then, so that the normal pages API stays public.

Angular Front-end

Now let’s add a new Angular app (or modify an existing one).

  1. ng new angular-wagtail-demo
  2. cd angular-wagtail-demo
  3. npm i angular-wagtail --save

In app.module.ts add

import { WagtailModule } from 'angular-wagtail';
WagtailModule.forRoot({
  pageTypes: [],
  wagtailSiteDomain: 'http://localhost:8000',
  wagtailSiteId: 2,
}),

In app-routing.module.ts add

import { CMSLoaderGuard, CMSLoaderComponent } from 'angular-wagtail';
const routes: Routes = [{ path: '**', component: CMSLoaderComponent, canActivate: [CMSLoaderGuard] }];

This is the minimal configuration. Notice the domain and site ID are set explicitly. This is not strictly required, as Wagtail can determine the appropriate site based on the domain. However, it’s much easier to set it explicitly so that we don’t have to set up multiple hostnames for local development. Next, let’s add a lazy loaded homepage module. Making even the homepage lazy loaded will get us in the habit of making everything a lazy loaded module, which improves performance for users who might not visit the homepage first (such as arriving from an ad or search result pointing to a specific page).

ng generate module home --routing
ng generate component home

In app.module.ts add a “page type”. An Angular Wagtail page type is a link between Wagtail Page Types and Angular components. If we make a Wagtail page type “cms_django_app.HomePage” we can link it to an Angular component “HomeComponent”. Page types closely follow the Angular Router, so any router features like resolvers will just work with exactly the same syntax. In fact, angular-wagtail uses the Angular router behind the scenes.

pageTypes: [
  {
    type: 'sandbox.BarPage',
    loadChildren: () => import('./home/home.module').then(m => m.HomeModule)
  },
]

This maps sandbox.BarPage from the wagtail-spa-integration sandbox to the HomeModule. “sandbox” is the Django app name while BarPage is the model name. This is the same syntax seen in the Wagtail Pages API and many other places in Django to refer to a model (app_label.model). “loadChildren” uses the same syntax as the Angular Router. I could set component instead of loadChildren if I didn’t want lazy loading.

Next, edit home/home-routing.module.ts. Since our homepage has only one component, set it to always load that component.

home-routing.module.ts

const routes: Routes = [{
  path: '',
  component: HomeComponent
}];

To test that everything is working, run npm start and go to localhost:4200.

Screenshot from 2019-10-20 14-47-23

We now have a home page! However, it doesn’t contain any actual CMS data. Let’s start by adding the page’s title. We could get this data in ngOnInit; however, that would load the data asynchronously after the route completes. This can lead to jank: any static content would load immediately on route completion, but async data would pop in later. To fix this, we’ll use a resolver. Resolvers can get async data before the route completes.

Edit home-routing.module.ts

import { GetPageDataResolverService } from 'angular-wagtail';
const routes: Routes = [{
  path: '',
  component: HomeComponent,
  resolve: { cmsData: GetPageDataResolverService }
}];

This resolver service will assign an Observable with the CMS data for use in the component. We can use it in our component:

home.component.ts

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';
import { IWagtailPageDetail } from 'angular-wagtail';

interface IHomeDetails extends IWagtailPageDetail {
  extra_field: string;
}

@Component({
  selector: 'app-home',
  template: `
    <p>Home Works!</p>
    <p>{{ (cmsData$ | async).title }}</p>
  `,
})
export class HomeComponent implements OnInit {
  public cmsData$: Observable<IHomeDetails>;

  constructor(private route: ActivatedRoute) { }

  ngOnInit() {
    this.cmsData$ = this.route.data.pipe(map(dat => dat.cmsData));
  }
}

Going top to bottom, notice how IHomeDetails extends IWagtailPageDetail and adds page specific fields. This should mimic the fields you added when defining the Wagtail Page model. Default Wagtail fields like “title” are included in IWagtailPageDetail.

The template references the variable cmsData$ which is an Observable with all page data as given by the Wagtail Pages API detail view.

ngOnInit is where we set this variable, using route.data. Notice how cmsData is available from the resolver service. When you load the page, you should notice “Home Works!” and the title you set in the CMS load at the same time. Nothing “pops in” which can look bad.

Screenshot from 2019-10-20 15-15-59.png

At this point, you have learned the basics of using Angular Wagtail!

Adding a lazy loaded module with multiple routes

Sometimes it’s preferable to have one module with multiple components. For example, there may be five components, two of which represent route-able pages. Keeping them grouped in a module increases code readability, and it makes sense to lazy load the components together. To enable this, make use of WagtailModule.forFeature. Let’s make a “FooModule” example to demonstrate.

ng generate module foo
ng generate component foo

Edit foo.module.ts

import { NgModule, ComponentFactoryResolver } from '@angular/core';
import { CommonModule } from '@angular/common';
import { WagtailModule, CoalescingComponentFactoryResolver } from 'angular-wagtail';
import { FooComponent } from './foo.component';

@NgModule({
  declarations: [FooComponent],
  entryComponents: [FooComponent],
  imports: [
    CommonModule,
    WagtailModule.forFeature([
      {
        type: 'sandbox.FooPage',
        component: FooComponent
      }
    ])
  ]
})

export class FooModule {
  constructor(
    coalescingResolver: CoalescingComponentFactoryResolver,
    localResolver: ComponentFactoryResolver
  ) {
    coalescingResolver.registerResolver(localResolver);
  }
}

FooComponent is added to both declarations and entryComponents as it’s not directly added to the router. WagtailModule.forFeature will link the Wagtail page type with a component. You can also add a resolver here if needed. Lastly, the constructor registers the coalescingResolver. This enables dynamic component routing between modules and likely won’t be needed in Angular 9 with Ivy and future versions of Angular Wagtail.

Add as many page types as desired.

Angular Universal

Angular Universal can generate pages in Node (or prerender them). This is nice for SEO and general performance. The effect is to generate a minimalist static view of the page that runs without JS enabled. Later the JS bundle is loaded and any dynamic content (shopping carts, user account info) is loaded in. Because the server side rendered static page is always the same for all users, it works great with a CDN. I’ve found even complex pages will be around 50kb of data for the first dom paint. Installation is easy.

ng add @nguniversal/express-engine --clientProject angular-wagtail-demo

Compile with npm run build:ssr and serve with npm run serve:ssr. Angular Wagtail supports a few environment variables we can set in Node. Setting the API server domain and site per deployment is possible:

export WAGTAIL_SITE_ID=2
export CMS_DOMAIN=http://localhost:8000

Confirm it’s working by disabling JavaScript in your browser.

Angular Wagtail provides a few extras for Angular Universal when run in Node (serve:ssr). You can return 404, 302, and 301 status codes by editing server.ts as documented. You can also add the Wagtail-generated sitemap. Not directly related to Wagtail, but I found helmet and adding a robots.txt pretty helpful too. Angular Universal just runs Express, so anything possible in Express is possible in Angular Universal.

Bells and whistles – not found and more SEO

For a real site, consider adding a 404 not found component and setting page meta tags and a canonical URL. Edit the WagtailModule.forRoot configuration to modify this however you wish. If you followed the server setup from above, then Wagtail redirects and drafts should “just work”. Any time Angular Wagtail can’t match a url path to a component, it will query the Wagtail SPA Integration redirects API and redirect if it finds a match. If not, Angular Wagtail will show the 404 not found component to the user.

You can find the full angular wagtail demo source on gitlab.

Controlling a ceiling fan with Simple Fan Control

I released Simple Fan Control today on Google Play, web, and source on Gitlab. This project’s genesis was the purchase of a Hunter Advocate fan with Internet connectivity. Its app doesn’t work, which I wrote about recently.

Screenshot from 2019-06-30 11-41-05
Simple Fan Control’s web version

The app is built with NativeScript and works by interacting with Ayla Networks’ Internet of Things (IoT) service. If there were interest, I would explore communicating with the fan directly instead of through Ayla Networks. The IoT world scares me a bit because users transmit personal data to a third-party service they may not even be aware exists. Ayla’s service collects name, address, email, and GPS coordinates. It’s scary to think what this data could be used for, or about it being leaked. There’s also the concern that the fan control app becomes useless if the internet is down or should the company shut the service down.

If you are using an Ayla Networks based device or want to collaborate on using the code for other IoT projects, please let me know by opening a Gitlab issue. I’m charging $4 for the app, but you can of course build it yourself from source. By purchasing the app, you’d support further development. Ayla Networks dev boards aren’t free, and one would let me test out other configurations and device wifi connectivity.

I do consulting work if you are an IoT company looking to improve your software. Get in touch with info at burkesoftware.com if you’d like to know more.

How to set up a Hunter fan Wifi control by decompiling the app

I got a Hunter fan recently that was supposed to be controllable via an app called SimpleConnect. Judging by the reviews, it doesn’t work: it gets stuck on the email verification step. You get an email link that opens in the app and does nothing.

I decided to inspect the apk file with dex2jar and JD-GUI. All the confirm account step actually has to do is send a PUT request to a url with a token from the email. No need for an app at all really.

Confirming the account

To confirm the email sign up, find the link in the sign-up email. It should look something like https://app-launcher.aylanetworks.com/launch?custom_url=aylacontrol://user_sign_up_token?token=XXXXXXXX

All we need is the token. The app is supposed to then make a PUT request to https://user.aylanetworks.com/users/confirmation.json with a payload of

{
	"confirmation_token": "XXXXXXXX"
}

You can do this yourself in Postman. Just enter the payload in the Body tab as raw JSON.
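If you’d rather script it than use Postman, here’s a minimal Python sketch using only the standard library. The URL and payload are the ones described above; the token value is a placeholder you must replace with the one from your email link:

```python
import json
import urllib.request

# Placeholder -- substitute the token from your email link.
token = "XXXXXXXX"
payload = json.dumps({"confirmation_token": token}).encode()

req = urllib.request.Request(
    "https://user.aylanetworks.com/users/confirmation.json",
    data=payload,
    method="PUT",
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually send the confirmation request:
# urllib.request.urlopen(req)
print(req.get_method())  # PUT
```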

fan_confirm

You should get a response with the same personal data you entered before. It should include approved: true.

Adding the fan

The next issue you’ll face is that the QR code is printed too small and at very poor quality. The app also seems to set the camera to some sort of low-quality mode. I tried a couple of devices and eventually got the code to scan with a Pixel C tablet.

Pretty amazing that Ayla Networks made such a worthless app. No testing at all. But what do you expect from internet of things devices.

Google Assistant Integration

It’s not that hard to set up, but it’s not well documented. You just go here to set it up after setting up the SimpleConnect wifi app. It’s a bit clunky saying “tell Simple Connect to do something”, but it works.

Using Angular with a “headless” Wagtail CMS

Wagtail is a great Django-based content management system. Angular is a full-featured JavaScript framework. I wanted to use them together, so I made some helper libraries. Below, I explain how I did it.

Goals:

  • Enable Wagtail features like preview and redirects.
  • Allow routing to be defined (mostly) in Wagtail
  • Maintain great performance through
    • Lazy loading JS modules
    • Compatible with Angular Universal for server side rendering
  • Ensure Wagtail Multi-site functionality works

wagtail and angular

Setting up Wagtail

Install wagtail_spa_integration from PyPI using the instructions here. Since there is nothing Angular-specific about it, you could also use this with other front-end solutions. This package provides an extended Wagtail V2 Pages Endpoint.

Setting up Angular

Install angular-wagtail and follow the instructions. At a high level, instead of mapping routes to components, you will map Wagtail page types to either components or modules. For example, the Wagtail page type “foo.MyPage” might map to MyPageComponent in Angular. I will call this dynamic routing, as opposed to the Angular router’s fixed routes. This is all that’s needed for simple websites. However, angular-wagtail works by having the Angular project request page data for every page, which is a problem if your site has thousands of blog pages. Your Angular app may not need to know every blog URL up front; it just needs to know that they follow a schema like “blog/blog-post-slug”. You can make a lazy-loaded module for the blog and set the route like:

{
  type: "cool_blog.BlogIndexPage",
  loadChildren: "./blog/blog.module#BlogModule"
}

There are some limitations: loadChildren won’t work with nested routes. If you have two components in BlogModule, you can’t both lazy load the module and use the dynamic Wagtail-driven routes. There are two workarounds: ensure each lazy-loaded module has only one route, or keep its routes in Angular’s routing instead of WagtailModule’s page type mapping. In the blog example, you may have a blog index page with blog post pages nested under it. As long as you can assume the routes are always /blog and /blog/post-slug, you don’t really need the dynamic routing that WagtailModule provides.
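To make the second workaround concrete, here is a sketch of what the lazy-loaded blog module’s own routing might look like. The component and file names are hypothetical; the point is that these are fixed Angular routes, so WagtailModule only needs the single cool_blog.BlogIndexPage mapping.

```typescript
// Routing inside the hypothetical lazy-loaded BlogModule: plain Angular
// router routes, no Wagtail-driven dynamic routing needed here.
import { Routes } from "@angular/router";
import { BlogIndexComponent } from "./blog-index.component";
import { BlogPostComponent } from "./blog-post.component";

const routes: Routes = [
  { path: "", component: BlogIndexComponent },     // matches /blog
  { path: ":slug", component: BlogPostComponent }, // matches /blog/post-slug
];
```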

Finally, you need to gather page detail data in your components. Inject WagtailService and, in ngOnInit, add something like:

constructor(private wagtail: WagtailService) {}

ngOnInit() {
  this.cmsData$ = this.wagtail.getPageForCurrentUrl<IMyCoolPage>();
}

IMyCoolPage is the interface for the data you expect to receive from Wagtail for this page. This works with both fixed routes in the Angular router and dynamic routes in WagtailModule.
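The interface itself is just the shape of the page’s API fields. For example (the field names here are hypothetical and depend on your Wagtail page model and API configuration):

```typescript
// Hypothetical typing for the page data returned by the Wagtail API.
interface IMyCoolPage {
  id: number;
  title: string;
  body: string; // e.g. a RichTextField rendered to HTML
}
```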

These functions will also automatically check for redirects if a page is not found.

Closing thoughts

I really enjoy having all view logic in Angular instead of attempting to mix Django templates with a JS framework’s view layer (JSX, Angular templates, etc). Previously, this meant giving up a lot of features that “just work” in Wagtail. Using these packages, I can quickly bootstrap a headless Wagtail server with a separate Node/JavaScript front-end and keep all the features I’m used to. Please consider contributing some code or unit tests, or making a JS integration with your favorite framework.

Review: Pixel Slate for Linux and Web Development

The Pixel Slate (i7 model) can be a decent computer for web development, including Docker, Node, and Android development. My workplace recently got me one so I decided to review it for anyone curious about using it for Linux-based development.

Performance

I’m reviewing the highest-end version with an i7-8500Y CPU. Let’s break that down:

  • Y series is the 5 watt low power offering (not to be confused with the 15 watt U series which is for “Ultrabooks”).  This allows the Slate to not have any vents or fans, making it perfectly quiet.
  • The 8 stands for 8th generation which is the newest generation for the Y series.
  • The “i7” means it’s both more expensive and faster than the same class i5. But that doesn’t mean an i7 Y series is going to be faster than a very old i3 desktop K series CPU. It’s essentially the same chip with more cores enabled and a higher clock frequency.

The i7-8500Y is considerably slower than a roughly equivalent i7-8550U as seen in the XPS 13 9370. (See my review of the XPS 13 developer edition here). Since I have both, I’ll do a few comparisons. All tests on the XPS 13 are run on Ubuntu 18.10.

Basemark Web 3.0
Pixel Slate – 500.4
XPS 13 – 365.8

Wow – the Slate beats the XPS here – this is surprising! Both are running Chrome. My guess is that Chrome on the Slate has far better optimized drivers than stock Ubuntu on the XPS. This probably allowed the GPU to do more of the work, resulting in a higher score.

Webpack

I tested building Passit, the open source password manager I’m working on. Passit is built with angular-cli and uses webpack to build bundles. See the repo here if you want to compare. I ran a development build with “npm run build”

Pixel Slate – 16 seconds
XPS 13 – 11 seconds

CPU benchmark

I ran “sysbench --test=cpu --cpu-max-prime=20000 --num-threads=8 run”

Pixel Slate – 5.8 seconds
XPS 13 – 10.0 seconds

Lower is better – and the Slate wins. I don’t understand this. It should be a simple CPU test, and the XPS 13 has a faster CPU with more cores. Since this test had odd results, I ran “stress-ng --cpu 6 --cpu-method matrixprod --metrics-brief --perf -t 60” too.

6 cores:
Pixel Slate: 22839 ops
XPS 13: 46106 ops

2 cores:
Pixel Slate: 25464 ops
XPS 13: 33069 ops

This time the XPS got more than twice as many operations done in the 6 core test – presumably due to its extra cores. Even with just 2 cores, the XPS is still faster.

Docker and Django

As an example of back-end development, I’ll run the passit-backend (Django) tests in Docker. This shows the time required for creating a PostgreSQL database and running the Python tests. I ran:

  • docker-compose up db
  • time docker-compose run --rm web ./manage.py test

Pixel Slate – 38 seconds
XPS 13 – 26 seconds

This test involves a mix of CPU and I/O bound operations. It’s not surprising that the XPS wins.

Linux Apps

Just a typical day in Chrome OS running Firefox, VS Code, and Docker

I installed Firefox within five minutes of opening the Pixel Slate – because why not? Linux apps run mostly well on the Slate. Setting them up is easy – just enable that option in settings. Installing apps is easy for someone experienced with the Linux command line, but harder for someone new to Linux. For example, on most Linux distros, you can double-click a package file (such as a .deb file) to install it. Not so on Chrome OS: you’ll need to use apt and dpkg to install programs like VS Code and Firefox.

Linux in Chrome OS (called Crostini) runs Debian Stretch in a container-based environment. That means it’s more efficient than a virtual machine and more secure than just executing Linux programs directly. It does add some inconveniences, such as having a separate file storage area (similar to Android).

Most things work just fine, but an exception was Docker. I followed the comments here to get it working. I ran into another minor kink when installing gnome-terminal: no shortcut was created (every other app I installed created one and “just worked”). Crostini doesn’t support GPU acceleration at this time, so Steam gaming on the Slate isn’t going to be a great experience. Actual virtualization doesn’t work at all, although Wine does.

One perk of using the Slate as a developer is that you can develop Android apps and run them right on the device without an emulator. This does require enabling developer mode, which leads you to a rather annoying startup screen that must be bypassed by pressing CTRL-D or waiting 10 seconds. It’s actually really handy running Android apps directly in Chrome OS and not taking the typical performance hit from full virtualization.

Mobility, Battery Life, and Other Features

The Slate weighs 1.6 lbs by itself; with the keyboard it’s 2.9 lbs. For comparison, the XPS 13 weighs 2.67 lbs – so the Slate as a laptop substitute is not a lighter option.

I get 4-6 hours of battery life on the XPS 13 when actually working. The Pixel Slate does better – more like 6-12 hours. (It’s hard to estimate because I’m typically not continuously coding/compiling things for more than 6 hours at a stretch.) This is no surprise given the lower power requirements of the Slate’s CPU.

The Slate easily goes into a suspend mode when inactive, just like an Android phone or tablet would. Ubuntu on the XPS is more finicky – it mostly works, but consumes more power when suspended and occasionally has glitches when waking. I would feel comfortable simply suspending the Slate when I step away from my work, whereas I often shut down my XPS 13 to avoid the issues just mentioned.

The Slate doesn’t have a headphone jack, and only has two USB-C ports. If I want to charge it, listen to music (through an adapter), and plug in a second monitor at the same time, then I need a USB-C dock. Google doesn’t provide much guidance on what adapters or docks are supported. I found USB-C to DisplayPort to work fine with a 4K monitor at 60 Hz, while a USB-HDMI adapter I use for my XPS didn’t work at all with the Slate. USB-C docks don’t support 4K at 60 Hz, and the ports appear not to be Thunderbolt-compatible. I found this whole connection process confusing and annoying – but in the end I got what I wanted using a USB-C dock (for power and audio) and a separate cable for DisplayPort.

The official Slate keyboard works as well as any device in this tablet-to-computer product class. It’s usable on your lap, but not good. It’s perfectly fine on a table. The round keys are a little odd, but I got used to them. It’s almost a full keyboard, including escape and F row keys – meaning I can use vim with it.

This may be a matter of personal taste, but I find the Slate too large and burdensome for reading an e-book. One advantage of the size, though, is that I can read full-size magazine articles without having to zoom or use the lite version.

The Slate’s magnets seem to be weaker than the Pixel-C’s, or maybe they’re the same but not strong enough for the increased weight. The Slate wouldn’t stay up when I tried sticking it to the fridge like I do with my Pixel-C. At the Slate’s vastly elevated price point, however, I probably wouldn’t trust it in the kitchen anyway!

Conclusion

As a developer, I’d feel confident using the Pixel Slate as a replacement for my tablet and laptop. I’d still want a faster desktop alongside this setup, and as a backup just in case Docker or something didn’t work right. As something I got from work and didn’t pay for myself – it’s great!

Pros

  • Great battery life
  • Fast web performance
  • A good way to run Linux with a solid, stable base OS that runs without glitches
  • Running Android apps next to Linux apps all inside Chrome OS is really cool

Cons

  • Expensive – I could buy both an XPS 13 and a small tablet for less money
  • CPU performance is slower than an “ultrabook”
  • No headphone jack and not enough USB-C ports

rxjs check-as-you-type validation

rxjs has a steep learning curve, but can do some really cool things. Let’s say you want an input form to do “as you type” async validation. Perhaps it’s checking whether a username is taken. Another use case could be checking whether some URL is valid. I implemented this with ngrx-effects (after failing a lot!) and thought I would share.

  // Imports, assuming rxjs 6 and ngrx-forms for the async validation actions:
  // import { Effect } from "@ngrx/effects";
  // import { concat, timer } from "rxjs";
  // import { catchError, distinctUntilChanged, filter, map, switchMap } from "rxjs/operators";
  // import { ClearAsyncErrorAction, SetAsyncErrorAction, StartAsyncValidationAction } from "ngrx-forms";

  @Effect()
  asyncServerUrlCheck$ = this.store.select(fromAccount.getLoginForm).pipe(
    filter(form => form.value.showUrl),
    distinctUntilChanged(
      (first, second) => first.value.url === second.value.url
    ),
    switchMap(form =>
      concat(
        timer(300).pipe(
          map(
            () => new StartAsyncValidationAction(form.controls.url.id, "exists")
          )
        ),
        this.userService.checkUrl(form.value.url).pipe(
          map(() => new ClearAsyncErrorAction(form.controls.url.id, "exists")),
          catchError(() => [
            new SetAsyncErrorAction(
              form.controls.url.id,
              "exists",
              form.value.url
            )
          ])
        )
      )
    )
  );


There’s a lot going on here, and it sums up both my love and hatred of rxjs. It’s unreadable garbage code until you understand it; then it’s fantastic. Let’s try to break this mess down.

First off, notice that the asyncServerUrlCheck$ observable (all ngrx effects are just observables) watches state instead of actions$. I do this because I care about the form field’s state rather than waiting for a particular action. Then I filter out changes that are not to the form field in question, and distinctUntilChanged makes sure the URL actually changed.

Now the magic starts: next in my pipe is switchMap, which is important because I want to cancel any previous observable. If the user types google.co and then google.com, I probably don’t want to check whether google.co exists. switchMap throws out any previous work and starts over. Of note: if I didn’t use switchMap, I would see a LOT more network requests.

Next up is concat. concat is what allows me to return multiple actions; without it, I would just get the first start-async-validation action and nothing else. concat is a static function and not an operator (though it was an operator in rxjs 5, which actually makes me hate rxjs a little because it’s so much mental overhead!). We pass observables as parameters to concat. Our first observable is a timer, which implements the logic of waiting until the user stops typing. Because we used switchMap earlier, it gets canceled if the user types something else! We could stop right now and have a start-async-validation action dispatch when the user stops typing. Cool. I do suggest trying this out piece by piece if you want to understand it, rather than copying the entire snippet.
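If the rxjs version is hard to picture, here is the same cancel-and-restart idea in plain TypeScript (no rxjs; the names are mine, not from the snippet): each new value throws away the pending timer, which is roughly what switchMap plus timer are doing above.

```typescript
// Each call cancels the previously scheduled check ("switchMap") and starts
// a fresh delay ("timer"), so only the value typed before a pause is checked.
function makeDebouncedChecker(
  check: (url: string) => void,
  delayMs = 300
): (url: string) => void {
  let pending: ReturnType<typeof setTimeout> | undefined;
  return (url: string) => {
    if (pending !== undefined) clearTimeout(pending); // drop previous work
    pending = setTimeout(() => check(url), delayMs);  // wait for a typing pause
  };
}
```

Call it with “google.co” and then “google.com” within the delay window, and only “google.com” ever reaches the check function.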

Now I need to add success and failure actions after the async call is made. I’ll add a second parameter to concat: my service function call, which returns an observable (once again, concat accepts a list of observables). I pipe this into map and catchError. That logic should look familiar if you’ve used effects before, so I won’t go into detail.

This is how it looks in redux devtools. I get lots of set-value actions, one per character the user types, but I don’t get a start-async-validation action for each one (meaning I don’t excessively query the server!). Then I get either a set-async-error or a clear-async-error (success) action depending on whether the server URL is valid.

I found this pattern hard to grok initially, but now it’s easy to apply elsewhere. Making async validation easy means I’ll be more likely to use it and give users a more interactive experience. Try it yourself at https://passit.io and download the Chrome or Firefox extension (the web version asks for a server URL). And if you aren’t a regular visitor to my blog: Passit is an open source, online password manager that my company built, so please give it a try.