Chinese takeaway

I don’t know what I accidentally dragged into the Firefox awesomebar to trigger this Google search, but I’m sure it’s pretty interesting…

[screenshot of the accidental Google search]

Yes, that totally makes sense (not)

I’m developing a (rather cool, I think) website/app (it’s going to go live sometime in the next few weeks, depending on how fast AAPL approves it) and as part of that I wanted to build a circular countdown today. A what? Well, essentially we needed a circular area that gradually ‘fills up’ clockwise until a certain time is reached. In plain English: I was making a clock.

Back in the stone ages this would have involved 360 images each depicting the clock in a different ‘position’, but this is 2016 so I was sure I could do better.

I initially figured I’d use a canvas element for this, but quickly found out I could do much better: SVG with a clip path. SVG lets you define a clipping region using an arc. The syntax is pretty arcane (involving its own mini-language), but I managed to get it working quickly enough. It looks something like this:


<svg xmlns="http://www.w3.org/2000/svg" height="400" width="400">
    <defs>
        <clipPath id="wtf">
            <path fill="#109618" stroke-width="1" stroke="#ffffff" d="M200 0 A 200 200 0 1 1 27 100 L 200 200 L 200 0"/>
        </clipPath>
    </defs>
    <foreignObject height="400" width="400" clip-path="url(#wtf)">
        <!-- Inside foreignObject we're back in XHTML land, hence the namespace: -->
        <img xmlns="http://www.w3.org/1999/xhtml" width="400" height="400" src="/css/clock/light.svg"/>
    </foreignObject>
</svg>


The exact syntax here isn’t important (Google it if you’re curious), but M(ove), A(rc) and L(ine) are commands, each followed by a bunch of parameters. Yes it’s illegible, but it works. We then import an HTML image using <foreignObject> and reference the path we defined via a clip-path attribute (you’re going to understand why it’s called #wtf in a minute ;)). This simply refers to the ID we put on the clipPath SVG element, since the standard says “if you omit an actual URL, #id refers to something in the same document”.
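
To make the clock actually tick, you have to regenerate that d attribute for every new progress value. A minimal sketch of the math (my own illustration – clockPath is a made-up name, and the radius of 200 matches the 400×400 example above):

// Build the `d` attribute for a pie slice covering `fraction` (0..1) of a
// clock face with the given radius, starting at 12 o'clock, sweeping clockwise.
function clockPath(fraction, radius = 200) {
    let angle = fraction * 2 * Math.PI;
    // End point of the arc, measured from the centre at (radius, radius):
    let x = radius + radius * Math.sin(angle);
    let y = radius - radius * Math.cos(angle);
    // The large-arc flag must flip once we pass the halfway point:
    let largeArc = fraction > 0.5 ? 1 : 0;
    // M(ove) to 12 o'clock, A(rc) to the end point, L(ine) back via the centre.
    // (For fraction exactly 0 or 1 the arc degenerates; clamp in real code.)
    return `M${radius} 0 A ${radius} ${radius} 0 ${largeArc} 1 ${x} ${y} ` +
        `L ${radius} ${radius} L ${radius} 0`;
}

For fraction ≈ 0.83 this produces, give or take some rounding, the ugly path in the snippet above.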

(Actually, writing SVG in Angular involves a few more workarounds, but they’re not relevant for this problem. They were easily solved using mostly ng-attr- prefixes.)
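
For example – illustrative, assuming something like the clockPath sketch above lives on the scope – an interpolated path attribute needs the ng-attr- prefix so the SVG parser doesn’t choke on the raw {{…}} before Angular has interpolated it:

<path ng-attr-d="{{clockPath(fraction)}}"/>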

So yay, I have this working. It looks roughly like this:

[screenshot of the clock, partially filled]

As you can imagine, the lighter area gradually progresses over time (in this random test case, 24 hours).

This is an Angular app, so I created a component just to display the clock so I could reuse it like a good programmer boy. That’s when things started getting weird…

My test case was working, and the directive worked fine where I first used it (which happened to be the app’s home page). I then wanted to use it somewhere else (e.g. /foo/) and the clip path (and actual SVG filters, for that matter) refused to apply. Okay, wtf?

I literally spent a few hours debugging this. I copied the SVG verbatim to my local test page: it worked fine. Okay, so not an issue with the SVG being generated. Maybe some CSS causes it not to work? (I had a flashback to Explorer days of yore, where some things would only work if an element “had layout”, whatever that was. We just used to give it zoom: 1 to force IE to behave. If you have no idea what I’m talking about: consider yourself blessed.) Copied everything to an empty page on the same server, loaded the CSS and Javascript the actual site also loads: yup, still works. So no CSS issue.

Okay, next test. I whipped up a quick empty page in the actual application and tried to display the raw SVG (which by now I knew with 100,000,000% certainty was supposed to work). Now it fails again.

Did I make an error copy/pasting somewhere? Nope, the SVG is exactly the same, yet it fails on one URL but not on the other.

Then finally I saw it. There was one other thing that differed between my raw HTML test page and the page generated from the application. The application has <base href="/"> in the head. And indeed, the ‘failing’ page was also attempting to load the home page in the background.

For those not familiar with Angular: it needs a <base> tag for HTML5-mode routing to work. Normally not a big deal, but having this tag causes #id to no longer refer to the current document, but to the document specified in the tag. So my clipping directive was trying to load something from /index.html (where of course it couldn’t find it, since it was defined on the page at /foo/).

I’m sure there’s a reason this works like it does, since I could reproduce it in Firefox, Safari and Chrome (at least they were consistent :)). But seriously, why? Okay, I get that <base> tells the browser to “consider every link in this page to be prepended with the href value unless declared more explicitly”, and from that perspective it sort of makes sense, but it conflicts with the definition of “if omitted, refers to an element in the current document”. I mean, what’s it gonna be, people? The current document, or whatever’s specified in <base>? Also, I tried working around this with stuff like url(./#wtf), but that didn’t work.

Anywho, I ended up just prefixing the references with essentially window.location.href, and that worked everywhere. But I’m frankly surprised such a major pitfall isn’t better documented anywhere. Hope this saves someone a few gray hairs someday 🙂
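
In code, the workaround boils down to something like this (a sketch – elem being whatever DOM node carries the clip path, and #wtf the id from the example above):

// Make the reference absolute so <base href="/"> can no longer hijack it.
// Strip any existing hash first, then re-append our fragment:
let url = window.location.href.replace(/#.*$/, '');
elem.setAttribute('clip-path', `url(${url}#wtf)`);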

Why I ffing *hate* mobile development

Okay, maybe not exactly *hate*, but it comes pretty darn close. I just need to rant about this, so stay with me.

I don’t often do mobile apps (due to the whole “stick with what you know” principle) but every so often I can’t avoid it and, being a web guy, I choose Cordova as my platform. I’m sure there are many advantages to building native apps, but I seriously don’t want to spend the time learning Objective-C, Swift or Android’s take on Java. Spare me. This last week was one of those times, so not having touched Cordova in a fair while I decided I’d give it a spin again.

First, the good news: the project has come a long way. Almost everything can be handled elegantly from the command line, which of course I as a Linux nerd highly applaud. Most things “just work” out of the box, the plugin system has matured, and the fact that you need AAPL hardware to even test an iPhone version isn’t their fault at all. So far so good.

So what I normally do is this: I start by laying out the basics in just a local HTML file. Firefox has a great “mobile view” built in, so you can quickly lay out your app in general without going through tiresome compile/deploy cycles – just write and let Grunt live-reload your testing page. Sweet.

Of course, after a while you need to work with something resembling an actual mobile environment, e.g. because your app is using Cordova plugins. Even sweeter: Firefox still has you covered! Recent versions come with a built-in WebIDE that offers – among a few other things I’ll probably never use – a simulator of a mobile device running FirefoxOS. I don’t think anybody actually uses FirefoxOS in the real world, but at least I get a simulator with familiar debugging tools at zero cost, and deployment is easy enough too (FirefoxOS apps are essentially just zipped local sites). My install was acting up a bit, but that’s prolly due to the fact that I insist on running Debian Unstable on my desktop; I managed to get it working reliably by trying a few different versions.

But then I get to the “actual” version, which should run on iOS and Android (sorry WinPhone, Blackberry and whatnot lovers: this project doesn’t at the moment care about you ;)). Which brings me to the reason I find mobile development excruciatingly frustrating:

If you want developers to love your platform, for Pete’s sake make sure your tools don’t suck.

I can’t stress this enough. Let me explain:

I started with Android, since at least I can run it on my Linux machine. The Android SDK comes with an emulator too. Okay, sweet. Only, it’s slower than a turtle with three broken legs and the fourth amputated. Seriously, Google? This is a machine that happily runs multiple virtual instances of Windows next to each other, and your emulator is so slow it makes me want to chew my balls off. The included package manager seems to download a lot of updates too, but to be fair I hadn’t touched it in months, so there were bound to be quite a few – and it did finish in under an hour. It’s not getting my speed-of-update prize, but it’s doable.

But then I turned to Apple. Oh boy.

Right off: people who know me will know I’m not a Mac fan, quite the contrary. Yes I own an iPhone and an iPad and a Mac mini, but that doesn’t mean I’m particularly fond of them. It just means I think they suck less than Android. In fact, if any carrier supports a FirefoxOS phone any time soon, I’m game. But anyway, that’s no excuse to make your tools suck…

So I remote into my Mac mini via VNC. I used to use KRDC for that, but it kept dropping the connection for no apparent reason, so I switched to TightVNCViewer. Whatever, VNC is VNC. I could basically deploy to the iOS simulator using CLI tools (which was awesome, since Xcode also makes me lose the will to live), so everything seemed fine. I had my iPhone hooked up to the Mac for charging and was just about to deploy the test version to it when I noticed there was a new iOS version out, so I figured I’d install that first while I was at it. Y’know, updates are there for a reason, and it said “security”.

This was a big mistake.

After the upgrade had completed, the phone refused to charge via the Mac. I checked, and it was charging fine on AC. Weird. But I had other stuff to do, so I ignored it for a bit. Note: this was yesterday around noon. Around three, I did a bit of Googling, and apparently making sure your Mac is also up to date could help solve this. It’s a device connected via USB just to charge up – seriously, Apple, who cares about versions? But whatever, the Mac could use an update anyway. So I open the App Store and check for upgrades, and sure enough, there’s some Xcode stuff and an OS update. Fine, I select “upgrade all” and go do something useful.

I got back a few hours later, and the update seemed to have finished. Nice. I connect the iPhone and get a weird message about a missing developer disk image. WTF, I just updated, amirite???? A bit of Googling told me it was indeed a version mismatch (though why the message didn’t just say that and offer to download the correct version is anyone’s guess), so I check the App Store again. Oh, an upgrade for Xcode. Okay, let’s get it over with. Note: these upgrades are a few gigabytes each, and my Mac mini is on wifi. It’s not a particularly fast process. If you’re keeping count: this is the second upgrade to Xcode within 24 hours.

So I let it finish, hook up the phone: same error. I’m about to throw the entire machine through the window when I check the App Store again. Oh, now there’s another OSX update, because it’s obviously too much trouble to just package this crap together. Fine. I’m getting really annoyed by now; the update is another 7 gigs, but apparently it needs to be done. Pet peeve: when it had finally downloaded and started installing, it warned that it might have to reboot a few times during the upgrade. What the f*ck do you mean, “might”? You don’t know? Why the hell not?

Sooooo that eventually finished, I hook up my phone, and you’ve guessed it: same error. Streetwise by now, I check the App Store and lo and behold: another Xcode update! Of 2 frigging gigabytes. Come on, Apple.

Well, nothing to be done, so I download and install it. After what seems like an eternity it’s done, I restart Xcode and… it needs to download required components! Why the fsck weren’t these in the download in the first place????

It’s now been chugging away at “installing documentation” (isn’t that on your, you know, website?) for over an hour. I think I’m just going to call it a day, but seriously: what the hell are these people thinking? I just want to deploy something to a perfectly fine device, why should the OS need to care about its version? And why, in 2016, can’t you use an efficient package manager like Linux has been doing for over a decade?

Argh.

(Ab)using ngRoute to build a hybrid app

I love AngularJS. It makes building rich internet applications easy and does so in a structured manner. These days, whenever I need to fall back to jQuery (due to client requirements, for instance) it feels like going back to the Dark Ages. But enough praise.

I also love server side programming. Single page apps are fine and dandy and all, but there’s also a lot to be said for having a URI map to a specific blob of HTML consistently. It makes search engines happy (yes there are ways around that, but they’re clunky) and screen readers too. Besides, a simple wget should get the page I request, not some placeholder which is supposed to run heaps of Javascript before it renders in a usable way.

So for a while now I’ve been taking the following approach: web applications are written “old skool” in a server side language (PHP in my case, usually). The emitted HTML should return a usable page. Then, for all the interactive bits I use Angular directives. In short, I get the goodies from Angular but can still use “normal” routing via the server and it gracefully degrades if the Javascript barfs for whatever reason.

But, I was wondering, what if we could combine the two? What if the server responded with a full page when a URI gets requested, but Angular took it from there? Without, of course, having to duplicate our routing table? Then we can have the best of both worlds: a fluid experience for the user due to HTML5 push state URL handling, and HTML rendering logic on the server.

Turns out we can! And it’s actually a lot simpler than I thought.

Setting it up

First we need to include ngRoute so we can leverage Angular’s built-in URL handling. We could handle this manually via window.history.pushState, but why bother if the hard work has already been done.

let app = angular.module('foo', ['ngRoute']);
app.config(['$locationProvider', $locationProvider => {
    if (!!(window.history && window.history.pushState)) {
        $locationProvider.html5Mode(true);
    }
}]);
app.config(['$routeProvider', $routeProvider => {
    // We don't actually _have_ routes, but just define these dummy routes
    // so the ngRoute logic will kick in and make our site HTML5 history
    // compatible.
    $routeProvider.when('/', {});
    $routeProvider.otherwise({});
}]);

The first config block conditionally sets html5Mode on the $locationProvider, so that older browsers just fall back to “normal” routing, period. The ‘Graceful routing fallback in hybrid AngularJS apps’ post below explains this further.

Next, we tell Angular that yes, / is handled by the $routeProvider, and we have no other routes. It’s imperative that you add both of these statements! If either is left out, ngRoute seems to give up. For this particular project, / will redirect to a language (/en/ for instance) anyway so it was perfectly safe. YMMV though – you might have to pick some dummy URL not otherwise used.

When we now reload the page all anchors will be “handled” by ngRoute, but won’t do anything yet. On to the next part!

Handling the route changes

ngRoute helpfully fires Angular events when it does something, so we’re going to tap into those:

let initial = true;
app.run(['$http', '$rootScope', '$cacheFactory', ($http, $rootScope, $cacheFactory) => {
    let cache = $cacheFactory('fooTemplate');
    $rootScope.$on('$routeChangeSuccess', (...args) => {
        if (initial) {
            initial = false;
            return;
        }
        let target = location.href.replace(location.origin, '');
        // Note: the cache must be passed inside $http's config object:
        $http.get(target, {cache}).success(html => {
            $rootScope.$broadcast('fooTemplate', angular.element(html));
        });
    });
}]);

The exact naming doesn’t matter, but what happens here is:
1. When the $routeChangeSuccess event fires, we use the $http service to fetch the requested page from the server in the background.
2. Once it returns, we broadcast an event with the HTML wrapped in a jqLite element as a parameter.
3. The calls are cached for efficiency.

Note that we also have an initial variable. This is because the route events also fire on page load, and since we already have our HTML we want to ignore the initial call.

Now, for each click you’ll see the page being requested in your browser console.

Changing the HTML

Define a directive (e.g. fooTemplate) that we’re going to apply to all elements that expect such a dynamic update:


app.directive('fooTemplate', ['$rootScope', '$compile', ($rootScope, $compile) => ({
    restrict: 'A',
    link: (scope, elem, attrs) => {
        $rootScope.$on('$routeChangeStart', () => {
            if (!initial) {
                elem.addClass('loading');
            }
        });
        $rootScope.$on('fooTemplate', (event, parsed) => {
            elem.removeClass('loading');
            angular.forEach(parsed, el => {
                if (!el.tagName) {
                    return;
                }
                // Replace & recompile the content if:
                // - the tagname matches
                // - the classnames match (if specified)
                // - the id matches (if specified)
                if (el.tagName.toLowerCase() != elem[0].tagName.toLowerCase()) {
                    return;
                }
                if (el.id && elem.attr('id') && el.id != elem.attr('id')) {
                    return;
                }
                let c1 = el.className.split(' ').sort((a, b) => a < b ? -1 : 1);
                let c2 = elem[0].className.split(' ').sort((a, b) => a < b ? -1 : 1);
                if (!angular.equals(c1, c2)) {
                    return;
                }
                elem.html(el.innerHTML);
                if (elem[0].tagName.toLowerCase() != 'title') {
                    $compile(elem.contents())(scope);
                }
            });
        });
    }
})]);

First we add a “loading” class to the element in question on $routeChangeStart. This is just visual feedback to the user that something is loading in the background, e.g. to show a spinner. Again, we only do that if we’re not in the initial run.

We then listen for our “template event” and traverse the returned HTML. If an element seems to match the element we put our foo-template directive on, we replace the HTML and – and this is important – run it through the $compile service. This allows any returned Angular components to also work after the page is updated.

Usage in HTML

Just apply the directive to any elements that contain “page specific content”:

<article id="content" foo-template>This is for a specific page and will be replaced when the URL changes.</article>

Todos/exercises

I’m going to tinker with this some more and might turn it into an actual package. In particular, it would need to:
– gracefully handle errors the $http call returns (e.g. a redirect to /login/ due to session expiration)
– do some intelligent pre-parsing on the returned HTML so it only contains elements actually tagged as templates (for efficiency)
– also handle form submissions in a similar manner (unless of course Angular should handle them – probably check for the action attribute)
– clear the cache when needed, since POSTing stuff might invalidate certain HTML pages outright

Caveats

The HTML snippet the directive is applied to should be uniquely identifiable, so either give it an ID or make sure the combination of class/tagName is unique in the document.
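
To illustrate (a made-up example): the first snippet below is safe, but in the second both elements match the same tag/class combination, so both would end up being replaced with the same content:

<!-- Fine: the id makes it unique -->
<article id="content" foo-template>…</article>

<!-- Risky: identical tag + class combination -->
<section class="panel" foo-template>…</section>
<section class="panel" foo-template>…</section>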

Update

This is now an actual Angular module y’all can use – see the documentation on GitHub. Obviously the directives have been renamed, but check out the documentation; it’s actually really easy to use (I’ve used it in a number of live projects already).

Note to self: next time, look at the source first

A while ago I blogged about configuring Apache to handle web sockets. Of course, haters all be like “why use Apache for this”. Well, simple really: I had neither the time nor the inclination to switch our entire environment to something like nginx.

This weekend I got fed up though, and bit the bullet. Most of it was actually surprisingly easy – there are already a gazillion tutorials on the web on how to configure nginx properly (they even have an official section in their docs on websockets).

One thing got me stuck though: my socket.io.js was loading fine, but the websocket calls themselves were returning “Bad request” errors. Upon inspection, it was complaining about unknown session IDs. I assumed, of course, that I had missed something in the setup (like passing on query strings during proxying or something), since I could see in the logs that socket.io was responding to the polling requests perfectly fine.

Of course, assumption is the mother of all fuck-ups.

After a good night’s sleep and some digging into socket.io’s internals, I found out that the query parameter socket.io currently uses to convey session ids is called “sid”. (In pre-1.0 versions it was called something different, which is why things had worked for me before – but I desperately wanted to upgrade to 1.4.)

And you’ve probably already guessed it: I was also passing a PHP session ID to handle authentication in the backend. And it was called “sid” too. And of course had a different, incompatible value.
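
Once diagnosed, the fix was trivial: rename your own parameter to anything that isn’t “sid” (a sketch, reusing the connect call from the websockets post below – phpsid is a made-up name):

// Anything that doesn't clash with socket.io's internal 'sid' will do:
let ioSocket = window.io.connect('', {query: 'phpsid=' + my_session_id});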

Duh. So to anyone running into mysterious “bad request” errors (and it gives quite a few results on the Googlezzz): I very much recommend sticking a few console.logs in socket.io itself to see at what point it’s failing. Chances are your setup is fine, you’re just sending it something it doesn’t expect.

Websockets with Angular, socket.io and Apache

Seeing as I just spent the better part of the afternoon trying to figure this out, I thought I’d write a quick blog post on how I eventually got this working – both for other people struggling with this and for my own archive since I’ll probably forget again and want to look it up next time I do such a project 🙂

socket.io is a pretty awesome implementation of HTML5 websockets with transparent fallbacks (polyfills) for non-supporting browsers. However, its documentation can be…sketchy. For instance, their examples all assume everything is handled by nodeJS (including your HTML pages), which no doubt some people actually do but if you’re anything like me you’ll prefer to use Apache or nginx or the like in combination with a server side language like PHP for serving up your HTML pages. That’s not documented, ehm, extensively (to use an understatement), and also stuff like authentication isn’t really covered. So this is my solution, many thanks to Google, Stackoverflow and the rest of the interwebz:

1. The node server

socket.io is in the end a node module, so there’s no escaping that. Get it installed:

$ npm install --save socket.io express

The most basic server script would look something like this:


let http = require('http'); // built-in, no install needed
let express = require('express');
let app = express();
let server = http.createServer(app);
let io = require('socket.io').listen(server);
io.on('connection', socket => {
    socket.on('someEvent', () => {
        socket.emit('anotherEvent', {msg: 'Hi!'});
    });
});
server.listen(8080);

Port 8080 is arbitrary; anything not already in use will do. I usually go for something in a higher range, but this is good enough for the example.

To wrap your head around this: We have an HTTP server created using an Express app with socket.io plugged in, and it listens on port 8080. We can now run it using node path/to/server.js (in real life you’ll want to use something like PM2 for running it, but that’s not the issue at hand here).

2. The Angular client

There’s a bunch of Angular modules for tying sockets into the Angular “digest flow”. I chose this one: https://github.com/btford/angular-socket-io, but others should work too. The client script is auto-hosted under /socket.io/socket.io.js (let’s ignore our port 8080 for now), so make sure that and the Angular module are loaded in your HTML (in that order, I might add). Adding an Angular factory for your socket is then as simple as:


app.factory('Socket', ['socketFactory', socketFactory => {
    if (window.io) {
        let ioSocket = window.io.connect('', {query: ''});
        return socketFactory({ioSocket});
    }
    // I'm sure you can handle this more gracefully:
    return {};
}]);

This uses the defaults (the name Socket is arbitrary, you can call it Gregory for all I care), and sets up the query parameter. This is going to come in handy later on.

That’s really all there is to it; e.g. in your controller you can now do something like this:


export default class Controller {
    constructor(Socket) {
        Socket.on('anotherEvent', data => {
            console.log(data); // {msg: 'Hi!'}
        });
    }
}

Controller.$inject = ['Socket'];

3. Configuring the actual web server

This is the part that mainly had me banging my head. Of course, we could just instruct socket.io to use our “special” port 8080, but that leads to all sorts of problems with proxies, company networks, crappy clients etc. We just want to tunnel everything through port 80 (or 443 for secure sites). An important thing to understand here: A web socket connection is just a regular HTTP connection, but upgraded. That means we can pipe it through regular HTTP ports, as long as the server behind it handles the upgrade.
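
For illustration, an upgrade handshake is just plain HTTP until the server answers 101 Switching Protocols (abbreviated – the path and keys here are illustrative):

GET /socket.io/1/websocket/abc123 HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: ...
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Accept: ...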

Apache comes with two modules (well, as of v2.4, but it’s been around for a while) to handle this: mod_proxy and mod_proxy_wstunnel. “ws” stands for “Web Socket” of course, so yay! Only, the documentation didn’t go much farther than “you can turn this on if you need it”. In any case, make sure both modules are enabled.

This is the configuration that finally got it working for me (the actual rules vary slightly between socket.io versions, but this works for 1.3.6):


RewriteEngine On
RewriteCond %{REQUEST_URI}  ^/socket.io/1/websocket  [NC]
RewriteRule /(.*)           ws://localhost:8080/$1 [P,L]

ProxyPass        /socket.io http://localhost:8080/socket.io
ProxyPassReverse /socket.io http://localhost:8080/socket.io

A breakdown:

  1. First, we check if ‘websocket’ is in the request_uri. socket.io provides fallbacks like JSON polling for older clients – which is cool – and it does that by requesting different URLs and seeing which one works. I think my version tries /socket.io/1/xhr-polling next if websockets fail. Anyway, the point is: if Apache gets a request for the websocket URL, we rewrite to the ws:// scheme (on localhost, which is fine for now – if you’re running a gazillion apps this way you’ll probably want to handle this differently ;)) and just pass on the URL including query parameters (the P flag) and end it there (the L flag). If that works, the request was successful and our client supports actual sockets. If not, the rewrite returns an invalid result and we move on to the next rule…
  2. For anything else under /socket.io, we now proxy to localhost:8080 via regular http and let the fallbacks handle it. This rule also makes sure we can safely serve socket.io.js (since the URL doesn’t contain /websocket, it just gets forwarded).

If you’re using something other than Apache (e.g. nginx), similar rules and rewrites are available, but I’m not experienced enough with those to offer them here 🙂 The principle will be the same though.

4. Bonus: authentication

I rarely build apps without some form of authentication involved, and I’m guessing I’m not the only one. Regular authentication in a web app is usually something with cookies and sessions, but since socket.io is actually running on a different domain (localhost) – and besides, doesn’t know about cookies – this won’t work. That’s where the ‘query’ option we set when connecting earlier comes in.

The query parameter is just something that gets appended to the socket.io requests as a regular GET parameter, and which can be read on the server. Exactly how you implement this is up to you, but a very simple (and by the way not extremely secure) option would be to just pass the session ID:


let ioSocket = window.io.connect('', {query: 'session=' + my_session_id});

And then in your node app on the server you can do something like this:


io.on('connection', socket => {
    socket.session = socket.handshake.query.session;
    // perform some validation, perhaps including a query to a database that stores sessions?
});

Server side languages like PHP store their sessions in their own flat-file format by default, so setting things up so your node script can reach them depends on your platform. At least in PHP it’s pretty trivial to use a database like MySQL for that, as sketched below.
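
For instance, with sessions moved into MySQL, the server-side validation could look roughly like this. A sketch only: the sessions table, its columns and the credentials are all made up, and it assumes the mysql npm package:

let mysql = require('mysql');
// Connection details are placeholders:
let db = mysql.createConnection({
    host: 'localhost',
    user: 'foo',
    password: 'secret',
    database: 'myapp'
});

io.on('connection', socket => {
    let session = socket.handshake.query.session;
    // Hypothetical schema: a `sessions` table keyed on the PHP session id.
    db.query('SELECT data FROM sessions WHERE id = ?', [session], (err, rows) => {
        if (err || !rows.length) {
            // Unknown or expired session: drop the client.
            socket.disconnect();
            return;
        }
        socket.session = session;
        // rows[0].data holds the (PHP-serialized) session payload.
    });
});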

And that’s it really: you can now start building a real time application!

On software pricing

I’ve been giving some thought lately to software pricing, and I think I’ve come up with an analogy that works (no, it does not involve cars). The analogy concerns music, since that’s also something I feel comfortable writing about.

The central issue at hand is this: whenever I quote a client for (complicated project), their initial reaction is usually “SO MANY ZEROES AFTER THE COMMA???”. Sure, dude, you’ve just asked me to commit to 3 or 6 or whatever months of fulltime work, what did you think? Still, for some reason non-technical people tend to think of “programming” as “well, you just change a few things in an Excel file, right?” Ehm, wrong.

Let’s rewrite that in terms of music. Your 50th birthday is coming up – you’re giving an awesome party, and you need a band to play there. Okay, can do. You go to an agency and state your budget. It’s EUR500,- (which is sort-of reasonable for a not-too-awesome band, but really pushing it already). The agency shows you some samples, and one cover band plays a lot of Queen. You happen to like Queen, but this particular singer isn’t a very good Freddie Mercury impersonator.

“You got a better one in the Queen department?” you ask.

“Sure,” says the agency, “we have AwesomeQueenCoverbandAlmostCantDistinguishFromTheRealThing – but they charge ten times as much.” Ouch. Okay, that’s WAY out of your budget.

“Ehm, anything in between?” you ask. “Well,” the agency tells you, “we also book The Kings, who are reasonable but only have two or three Queen songs in their repertoire.”

My point is this: you can’t expect Freddie Mercury to play your birthday bash unless you’re willing to pay top dollar. It’s that simple. You also can’t expect a perfect impersonator to play there unless you’re willing to pay serious money – not millions, but still a serious amount. These people have skills and are in demand. It’s the same for good programmers – they won’t code your local webshop for peanuts (unless maybe they owe you a favour). These people have serious skills and would rather think about encryption schemes than your petty HTTPS connection. Learning and mastering an instrument takes a lot of time. The same goes for code.

I’ve seen McCartney a few times (which was awesome, by the way) and I’ve also seen good Beatles cover bands. I’m sure Macca got paid more, but the cover bands also didn’t come free. If you don’t care about quality – I’m a crappy singer – I’ll come and play the whole Beatles catalog at your party in exchange for a few beers. Pay peanuts, get monkeys – it’s as simple as that.

Nota bene: this is about custom proprietary software. I’ll write later on what I think is wrong with selling “software packages” a.k.a. Word, Photoshop etc.

The poor man’s PHP daemon

We have this project lying around from a while back that’s based on PHP/AngularJS and also sports a socket.io server for (among other things) a real time chat. Pretty nifty (no, we didn’t do the design :)). The socket part is powered by a NodeJS process, and since we didn’t feel like (nor did the client have budget for) rewriting all our PHP code to Javascript, we used dNode-PHP (well, actually a fork with a few small project-specific adjustments) to let the Javascript code in the NodeJS process communicate with our existing PHP libraries. So now we had two processes which is suboptimal but worked at the time.

As a poor man’s solution (time pressure, limited budget, etc.) these processes were simply kept running in a permanent screen session on the server. In practice that worked well enough at the time – the idea was that once the client found new budget, we’d finally properly daemonize them. And our server doesn’t reboot that often anyway, so remembering to restart the screen and processes on those occasions wasn’t a big deal.

Of course, that day never came.

There was however one annoying issue: PHP’s database resource would go stale after a while. According to the docs it should automatically reconnect; only it didn’t. This meant we had to manually restart those processes every once in a while (we set the MySQL timeout to a week to alleviate the burden, but the exact moment remained random, depending on the last moment of activity). Of course, one could argue that a site that’s regularly completely inactive for more than 7 days isn’t worth the effort, but a) that wasn’t our problem and b) the client was still working on his marketing plan. Fair enough.
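
(For the record, that timeout tweak is a one-liner in MySQL – assuming you have the privileges:)

-- One week in seconds; MySQL's default is 28800 (8 hours).
SET GLOBAL wait_timeout = 604800;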

Today I got fed up with it, had an hour to spare, and the marketing plan was ready as well. Time to bite the bullet; here’s what I came up with.

1. The NodeJS process

That part was easy; essentially I followed one of the many online tutorials on turning a NodeJS process into a proper service. We don’t use Ubuntu but rather Debian, but it should be similar on all *nix systems.

2. The dNode-PHP process

This is where it got interesting, and what had been the actual bullet I’d been putting off biting. PHP isn’t very well suited to running as a daemon. It’s possible, but that doesn’t mean it’s desirable – in the same sense that writing out all your CSS in <script>document.write('<style>...<' + '/style>')</script> tags is only a theoretical option. But in this case it was still better than duplicating code in Javascript.

Now, there are ways to turn a PHP script into an actual daemon, but that was still overkill for our purposes, so I simply went with a cronjob that kills any existing process and restarts it every hour, as shown below. If the client ever needs an actual daemon, we’ll get to that then 🙂
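
The crontab entry then looks something like this (illustrative – the script name and path are made up):

# Kill and restart the dNode-PHP bridge at the top of every hour:
0 * * * * pkill -f dnode-server.php; nohup php /path/to/dnode-server.php >/dev/null 2>&1 &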

Yes, this “daemon” is a beggar, but as they say: beggars can’t be choosers…

Graceful routing fallback in hybrid AngularJS apps

My current favourite way of working with the otherwise-awesome AngularJS framework is to only apply it to those sections of a page that actually need Angular behaviour, and let plain PHP handle the rest (what I call a “hybrid Angular app”, for lack of a better term). While Angular does offer stuff like ngRoute and uiRouter to handle “HTML5-mode routing”, this comes with other problems, the most notable being search engine indexing. While there are solutions for that, they have their own drawbacks, so today I’d rather just have a regular site with some <element my-awesome-directive/> where needed.

I did however run into a problem: IE<10. (Yes, it’s Explorer again…) IE9 and before don’t support the HTML5 history API of course, so Angular has a fallback using hashed URLs. Fine. I assumed that as long as you’re not actually using routes, Angular would leave your URLs alone.

I was wrong.

For any regular URL /foo/, Angular insists on redirecting to /#/foo/ in older Explorers, which of course just renders the homepage with a useless hash URL, since I’m not actually letting Angular handle the routing. Bummer. The solution turned out to be simple enough though: don’t force $location.html5Mode(true) if the browser doesn’t even support it. This makes IE<10 treat all URLs as regular links, lets Angular handle HTML5 URLs where you do define them (e.g. an image gallery) in supporting browsers, and still allows old IEs to run all other Angular code – the perfect graceful fallback.

Hence:

angular.module('mythingy', []).config(['$locationProvider', $locationProvider => {
    if (!!(window.history && window.history.pushState)) {
        $locationProvider.html5Mode(true);
    }
}]);