It’s super simple – it generates a different password for each web site, and it does so without a central server or any personal details from you. I think its approach, used carefully, can offer one of the strongest tools in your password strategy – I’ve written more about why here – so it felt high time to bring Android & iOS a modern (hybrid) native app for the job.

You can download the app for Android and iOS now, and I’d recommend reading its dedicated page for more about the approach, available settings and limitations.

Happy passwording!

Ionic e2e tests – scroll to your goal (28 January 2019)

Another slightly niche tip on end-to-end testing with the latest Ionic – 4.x stable as of this post. Maybe it’s obvious to more seasoned JS testers, but I struggled to find all the info in one place to successfully implement a test that needs to scroll content in an Ionic app. (I would assume a similar approach will also work when using Protractor with any other modern Angular app.)

The specific scenario? You’ve got a ‘thing’ your test needs to click, but it’s a scroll / drag away. It might technically be in the viewport, but below something else, like Ionic’s tab bar. Seems like a relatively common occurrence, and one that should be simple to deal with in your test.

I was surprised to find this so fiddly, and for a while my workaround was simply to make the window so big that nothing needed scrolling. Not great, since huge windows are exactly the opposite of what you need to accurately simulate the experience of users on mobile phones.

Here’s what finally worked:

1. Used Protractor’s browser.executeScript() to scroll the viewport to the button I needed to click, as suggested here. This requires the browser running your tests to support scrollIntoView(), but if you use the latest Chrome, as is the Ionic default, you should be all set.

2. Used browser.sleep() to explicitly give the browser 500ms to catch up. I have seen advice against this, but when the preceding action has to happen outside of events Protractor handles natively, it’s not always easy to see what expected conditions you could wait for instead – and this does work. (Alternatives and pull requests always welcome!)

3. Made the test helper method that performs the button click in my app.po.ts return a Promise, resolving to boolean true only after the click()’s promise resolved. This can only happen after the sleep() above.

4. Made the actual tests in app.e2e-spec.ts verify the value to be true with an expect() call. I think this is the cleanest way to ensure the tests wait exactly as long as necessary for the button click to have been fully processed, which is when our promise resolves.

Here’s what my save() helper method now looks like:

public save(): Promise<boolean> {
  const ionicSaveButton = element(by.css(`ion-button[name="save"]`));

  return new Promise(resolve => {
    // Scroll to the bottom of the content, so we don't have to make the browser viewport
    // huge to avoid the tab bar stealing focus when clicking Save. https://stackoverflow.com/a/47580259/2803757
    const ionicSaveButtonWebElement = ionicSaveButton.getWebElement();
    browser.executeScript(
      `arguments[0].scrollIntoView({behavior: "smooth", block: "end"});`,
      ionicSaveButtonWebElement,
    );

    // As the scrolling happens with custom JS and 'outside' the normal E2E event flow, a
    // short explicit sleep seems to be needed for the scroll to complete before we try to
    // click() the button. Waiting for `executeScript()`'s promise resolution wasn't sufficient.
    browser.sleep(500);

    ionicSaveButton.click().then(() => resolve(true));
  });
}
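And here’s roughly how a test in app.e2e-spec.ts consumes it. A minimal sketch – the page object and test names are illustrative, and it assumes Protractor’s Jasmine integration, which resolves a promise passed to expect():

import { AppPage } from './app.po';

describe('settings page', () => {
  const page = new AppPage();

  it('saves changes via the Save button', () => {
    // expect() unwraps the promise, so the test waits until save() has
    // fully resolved – and fails right here if the click never completed.
    expect(page.save()).toBe(true);
  });
});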

Ionic 4 – cannot find webpack

I’d also installed and used v4-migration-tslint, the linting package designed to highlight likely issues when upgrading to Ionic 4. Unfortunately this turned out to be my downfall when also using the very latest versions of other packages, including moving from @ionic/ng-toolkit + the Ionic schematics helper to the new @ionic/angular-toolkit as described here.

It wasn’t clear to me why Webpack was relevant, but I suspected a custom Webpack loader in my previous tests was somehow interfering with the normal (non-test) build toolchain.

Because of this suspicion I also tried explicitly adding Webpack as a dependency, but this led to a different ContextElementDependency error, resulting from an apparent clash between multiple Webpack binaries.

I now think my own Webpack use was a red herring. Upon removing the dev dependency on v4-migration-tslint, everything worked! So as useful as it is in initially checking your code for required markup & TypeScript updates, it seems this is definitely not a package to keep around for longer than necessary.

A broken promise; your e2e test is not honest (3 November 2018)

It’s comforting to believe you can rely on promises. But anonymous JavaScript test functions… yeesh, flaking is literally part of their DNA.

With apologies to any Placebo fans, this post is actually about automated tests that rely on asynchronous behaviour, and therefore on Promise fulfillment. I’m working with TypeScript, Ionic & Angular and the Protractor test runner, and will give an example based on these. But I think there’s a range of software stacks where async tests could hit the same issues.

Some posts introducing end-to-end testing in new Ionic versions implicitly acknowledge that to work with some components, you need your test to work through a chain of promises and nested then()s. And some older writing sensibly suggests writing custom helpers that return their own promises, to avoid nesting deeper and deeper for complex tests, reduce repetition and keep things readable.

However neither of these seems to flag the killer gotcha I found with promises in Protractor tests: if your assertions occur within the success callback (then(() => {...})), as they must in order to happen after the async part is done, you will never find out if the promise fails. The assertions will simply not be executed and your test will pass, even if everything is broken.

This is a really good way to write a big, reassuring-looking test suite that does absolutely nothing. Terrifying! Once you spot it, it’s obvious, but I believe this should be in huge writing at the top of every introduction to testing with Protractor.

The solution is simple once you notice the problem, especially if you already abstract async component interactions to helper test methods. Using TypeScript and native Promises, here’s an example of the pattern I’ve used to fix this for Webful PasswordMaker.
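In outline it looks like this – a sketch with illustrative names and selectors rather than the app’s exact code:

import { by, element } from 'protractor';

// In the page object (app.po.ts): resolve to true only once the toggle is
// really checked; any failure rejects, which surfaces as a failed assertion.
public enableToggle(): Promise<boolean> {
  const toggle = element(by.css('ion-toggle[name="myOption"]'));
  return new Promise((resolve, reject) => {
    toggle.click()
      .then(() => toggle.getAttribute('aria-checked'))
      .then(checked => resolve(checked === 'true'))
      .catch(reject);
  });
}

// In the spec: flat, and the assertion always runs.
it('enables my option', () => {
  expect(page.enableToggle()).toBe(true);
});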

We’ve fixed a few things here, compared to nested then()s in the test itself:

1. The main test case now has no nesting from promises.

2. Protractor automatically waits exactly long enough for the toggle to be checked, and verifies that it’s actually happened rather than leaving a later step to fail as a side effect. If the promise fails, we’ll see a failed assertion at the specific step that went wrong.

3. All subsequent assertions are now unconditional, and guaranteed to actually be evaluated.

Better test readability, and no more broken promises!

Enabling FileVault encryption on a restored-from-backup Mac (15 May 2018)

This might be a niche one. It’s potentially relevant for those restoring a Mac from a Time Machine backup, especially onto a Mac which started life running something older than macOS 10.13.

I just wanted to quickly document my experience with this for future searchers, including myself in 5 years.

Background

Yesterday I went through the following steps:

1. Upgrade my Macbook’s main SSD, in a mid-2012 15″ Pro Retina model. I largely followed this and this, with some small ‘modifications’ that became necessary with my choice of parts – which I don’t want to admit to in public, so let’s call them outside the scope of this post.

2. Boot into recovery – ⌘+R during startup.

3. Use Disk Utility to create a single partition on the new drive with the most suitable available format. I think this was Mac OS Extended (Journaled). Notably it was not APFS, because the version of the recovery tools I was looking at dated from the era when Apple hadn’t yet run out of big cat breeds. (When scoping out the option of doing a clean install first, it offered me OS X 10.8.)

4. Restore everything from my Time Machine backup.

So far, so kind-of-OK. I had forgotten about APFS at this stage, although I knew the recovery tools were outdated. But the restore eventually completed and I seemed to have a working system.

Where’s my encryption?

As a moderately paranoid developer with important Client Stuff among the files I just restored, my next step was to look at getting FileVault (2) full disk encryption, the pretty-solid offering built into macOS, enabled on my new primary disk partition.

As expected, FileVault showed the new drive’s status as not yet encrypted.

Slightly less expected: when I tried to enable it (after it made me write down the recovery code – thanks, Apple), I was told my drive format was not compatible, without much detail as to why.

My restored system was claiming to be on macOS 10.13.4 at this point, but the upgrade helpers from its installer had never actually run on this disk yet. This turns out to matter.

Back to ‘real’ macOS 10.13 and APFS

My next plan was to wipe the system, do a clean install from the newer macOS image and then restore on top of that.

Fortunately the first step – just downloading macOS again – offered me an in-place (re-)install of what I’d downloaded. This turned out to be the much simpler and quicker solution!

1. Go to the App Store and find macOS. Download it, even if the version is exactly the same as the version your Mac says it’s running.

2. An install process will ensue, with unspecified upgrade & optimise steps which you might remember from the ‘real’ upgrade from 10.12 to 10.13. One of these steps is actually converting your filesystem to APFS. There might be other system changes required for FileVault, like updating or generating synthesised recovery volumes – it’s pretty opaque what the scope of these steps actually is. All I know for sure is that they’re magic and they make FileVault usable.

3. Allow time for a reboot and a fairly slow OS upgrade process.

Warning: one piece of less exciting magic is that the process resets some system config like Apache virtual hosts, so you may need to merge the new stock configs with any custom changes you made, retrieving those from your backups. This is behaviour I’ve seen with normal macOS upgrades before, and it’s a pretty good argument for using Homebrew in preference to the built-in development tools wherever that works for you.

Once you get back in, Disk Utility should show your primary internal disk is now an APFS Volume. And FileVault should now let you go ahead and set up disk encryption. Easy.

In defence of deterministic password managers (22 October 2017)

[Update 30 January 2019 – I made an Ionic mobile app that does just this. Read all about it!]

If there’s one thing two decades on the internet has taught me, it’s that techies enjoy binary battles between opposing and equivalent technologies.

iOS vs Android. Vi vs Emacs. Tabs vs spaces. Didactic and fundamentalist exposition is kind of the norm when it comes to these discussions.

One topic I had not expected to see alongside these traditional battlegrounds is the nature of the tool one uses to manage distinct passwords across web sites. Until now.

While the post that kicked off this debate highlights some trade-offs that potential new users should definitely consider, I wanted to sum up why none of them are a big deal for my use case, and why I still consider the alternative drastically worse.
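For anyone new to the idea: a deterministic manager derives each site’s password on the fly from a master secret plus the site’s name, storing nothing. Here’s the concept in miniature – a toy sketch using Node’s built-in crypto, with made-up salt, iteration count and alphabet, and emphatically not PasswordMaker’s actual algorithm:

import { pbkdf2Sync } from 'crypto';

function derivePassword(master: string, site: string, length = 20): string {
  // Same inputs always give the same output, so there is no vault to sync or steal.
  const key = pbkdf2Sync(master, 'example-salt:' + site, 100000, 32, 'sha256');
  const alphabet = 'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789!@#$%';
  let out = '';
  for (let i = 0; i < length; i++) {
    // Modulo mapping is slightly biased; real tools are more careful here.
    out += alphabet[key[i] % alphabet.length];
  }
  return out;
}

derivePassword('correct horse battery staple', 'example.com');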

The trade-offs

Password policies

Some sites will always disallow passwords of the default format you choose. Yes, this is annoying. But in 2017, it’s also not that common. Perhaps 10% of sites I use these days will disallow my strong generated passwords due to length or symbols, and they don’t tend to be the ones where I consider security the most critical (with the slightly outrageous exceptions of two UK banks).

In most cases there’s an easy fallback. Tweak the generated password as required by the overly specific rules. If the site isn’t a critical one, saving the password with a browser sync account is fine – if they get hacked it’s still unique. And if the sync account ever breaks, it’s pretty rare for sites to lack a reset function. Not ideal, but hardly a deal breaker.

Replacing compromised passwords

Again, I’ll agree the user experience here isn’t optimal, but I also don’t think it’s a big deal. You can apply a workaround like the above, or if the leaked credential was for a site you consider very important and therefore don’t want to use browser sync, keep a note locally of the one-off tweak to the input for password generation. And perhaps reconsider your choice of provider for this security-critical service.

Existing secrets can’t be imported

This is mostly a matter of opinion on the appropriate scope of these tools, but I don’t really see this as a problem. Before I used a password manager at all, I mostly shared a few passwords across many sites.

If I’d had a convenient way to keep using them, I would have unique-ified fewer of my important credentials and consequently remained more vulnerable to attacks on leaked databases. If anything, the absence of this feature has made me safer!

I would consider tools like Keybase for other encrypted or signed secret sharing use cases, which I consider to be outside the natural scope of a browser-centric password manager.

A leaked master password is really bad

Of course this is completely true, and rotation for all sites would be a nightmare. However I would argue that in practice an attacker who manages to get you to share this by accident is probably quite on the ball. In the event that you inadvertently sent them your LastPass master password, their bot would probably have a copy of your data within a second or so – they’re unlikely to be waiting around long enough for you to realise what happened and re-encrypt your vault.

So I’d argue that in practical terms, this is a nil-nil draw and is the major drawback of all types of password manager. If your main secret is leaked you’re in huge trouble.

It would be enough to make me consider all types of manager to be terrible options, if it wasn’t for the absence of any better alternative.

Why accept a trade-off?

So I believe the user experience trade-offs are relatively small for me, but still, why would you accept them? Well…

The incentive to attack centralised password managers

There is one huge drawback to a centralised service as I see it. While we can all agree that security by obscurity isn’t security etc., there is inevitably an increase in risk when you use a popular service that is such a valuable target for attacks. Arguably the most valuable out there. Dozens of important credentials for millions of users, all reversibly encrypted in one convenient place.

These commercial services need to support different devices and browsers, which typically means numerous different software products in the wild, each with the power to unlock all of a user’s secrets. Each has a distinct codebase and the potential to carry its own critical security holes. The chances of there being no way into these systems are vanishingly small.

This is not a hypothetical threat. LastPass has had a pretty bad run of attacks against it – some are summarised on Wikipedia. They’ve paid out for 172 reports on their bug bounty programme – and while of course running such a programme is to be commended, the two most serious breaches in the last two years should definitely at least merit discussion of the strength of a centralised approach.

While 1Password seems to have a better track record – not appearing to have been directly targeted in a public attack, only vulnerable as a side effect of an Apple exploit – the 25 payouts to date on their bounty are a reminder that no software is bug-free. The generous offers to white-hat security researchers are an absolute necessity for these companies rather than a ‘nice to have’, given how incredibly lucrative a target their tools are to nefarious attackers.

While acknowledging these issues and running bounty programmes is crucial, it does also raise questions around whether the risks inherent in these services are worth it. Yes, these companies invest much more in security than you do individually. But you have two major strengths:

1. You are probably not, on your own, a massively valuable target.

2. If you’re handling your local security competently, you likely don’t expose lots of services on the public internet that potentially have access to all your secrets.

I find the comments in the other post about marketing the security of deterministic managers a bit strange. Not that it’s super relevant to their security policies, but in practice the current landscape consists mostly of presumably quite profitable paid subscription non-deterministic services, and a handful of free or donation-ware single-purpose, single-platform deterministic ones. If anybody has sales teams and an incentive to exaggerate the safety of their approach, it seems odd to assume it’s the PasswordMakers of the world and not LastPass and 1Password.

Finally, I should add that this post led me to read several others on Tony Arcieri’s blog, which I found thoughtful and interesting and almost invariably agreed with. I think the lesson here is that over-generalising is never helpful and that the best choice of password tool for you depends greatly on your appetite for risk, level of technical know-how, and attention to your local system’s security.

Threads and sorting in PHP

It started when I went to a PHP meetup where a talk presented some common sorting implementations, comparing them to the speedy native PHP implementation of sort() and related functions.

I figured that while efficient sorting isn’t the most common use case for PHP – and in a web context you often don’t want to let each user spin up numerous threads – for command line cases this approach should be able to provide a real performance boost.

In particular, the already-PHP-favoured quicksort algorithm works with a divide and conquer approach that should be perfectly suited to divvying up work between threads.

Like all great half-formed ideas, the impetus to actually try this out was brought on by insomnia. The results were (forgive me, clickbait pioneers)… underwhelming.

Most of the info on why is summed up on the GitHub repository for my experiment, but the headline is that while threads seem to help my method very marginally, it’s not enough to stop it being 100 times slower than PHP’s approach, for 1-million item arrays.

I went for a Promise abstraction from ReactPHP (confusingly, no relation to React).

My main questions so far (detailed further on GitHub) are:

Was this library the right choice for threads here?

Is there something obvious we could steal from PHP’s own logic to make this better?

rclone to S3 Glacier

Whether you consider it a success will probably depend on your quantity of files, and how much money you have lying around for month #1.

Previous efforts

I’ve been looking for my optimal off-site backup system for years now. I even spent some time building a cross-platform GUI wrapping rsync, with a view to offering a managed service bundled with storage like rsync.net (which I’ve also used happily before). The idea was to benefit from buying server space at scale, and lower the barrier to making backups this way. This might have been a worthwhile endeavour several years ago, but after seeing the progress of similar front ends and the plummeting cost of large scale cloud cold storage, I’ve now decided it’s not a great solo project!

To the cloud!

Other clouds are available, but up front AWS appeared to have the cheapest cold storage option out there in Glacier, so that’s where I started. Somewhat confusingly, there are two only tangentially-related Amazon entities carrying that name – the standalone Glacier service, and the Glacier S3 storage class.

I tried out a cross-platform desktop GUI app intending to use it for Glacier-the-service on macOS, before realising it only supported that mode on Windows. I also tried it with S3 using my own transition-to-Glacier lifecycle rules, but found it very unreliable.

However by this point I was more or less sold on using the much more widely supported S3 protocol with some tool or other, and saving storage costs by transitioning to the Glacier storage class with rules I could configure and understand myself. This may cost more than Glacier-the-service, but it means wider compatibility and allows AWS’s versioning system to do its job, without building in extra layers of backup tool-specific abstraction for this purpose.

After looking at a couple more tools I decided to try rclone, along with manually-configured Glacier transitions.

What’s the setup?

So what do my backups actually look like?

S3 lifecycle transitions

I’m sure there’s a nicer way to get this configured for all your buckets, but I did it manually in the UI for each one. This wasn’t one of the properties copied over if you use one bucket as a template for another.
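If you’d rather script it, the AWS CLI can apply the same rule to each bucket. A sketch – the bucket name, rule ID and 14-day threshold are all made up:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration file://lifecycle.json

…where lifecycle.json contains:

{
  "Rules": [
    {
      "ID": "to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 14, "StorageClass": "GLACIER" } ]
    }
  ]
}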

rclone options

My main source of errors with rclone, after getting the first sync right, was modification times being amended on files with no other changes. S3 won’t let you do that once an object has transitioned to the Glacier class. In these cases I don’t care about mod times, so for me --no-update-modtime was the perfect option.

I have 4 backups going to separate buckets, and a typical one now looks something like this (the paths and bucket name are placeholders):
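rclone sync --quiet --no-update-modtime /home/me/documents s3:my-backup-documents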

I trigger this with a terribly old-fashioned cron job with my email set in the MAILTO. And because it’s got quiet mode on, I get an email only if it hits an error. I also use flock to make sure the same job’s not running twice at once.
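In crontab terms, something like this – schedule, lock file and address all illustrative:

MAILTO=me@example.com
# Weekly; flock -n skips this run if the previous one is still going.
17 3 * * 1  flock -n /tmp/rclone-documents.lock rclone sync --quiet --no-update-modtime /home/me/documents s3:my-backup-documents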

$£€!

After setting up all my jobs initially, I was a little surprised to receive a billing alert much, much sooner than expected, having set a limit which I thought I might approach towards the end of the month.

I knew that retrieving Glacier data was relatively slow and expensive – it’s designed to be accessed rarely. The main thing I missed – and not a crazy charge if you’re expecting it – was that S3 bills each lifecycle transition into the Glacier class as a request, at (then) $0.05 per 1,000 objects.

Racking up a little over 2 million new backed-up objects in my initial sync, this added $114 USD to my month’s costs (about 2.28 million transitions × $0.05 per 1,000) – pretty much fully explaining the order of magnitude difference vs. my very rough cost estimate.

I think the main take-away here is to start small if you don’t yet fully understand the costs of this stuff. For all that the big cloud providers like to diss each other’s pricing models, none of them are particularly simple and there is always a chance you’ll miss stuff!

The glorious future

The good news is that now my files are there, and since most of them don’t change very often, this looks like a totally viable system cost-wise – if you ignore the initial outlay. Which I might as well, as it’s not coming back.

This may look like classic sunk costs rationalisation, but the average rate of change on these files is really low, and on a day with no changes I’m paying 4 US cents to keep my backups on Amazon’s Glacier tapes. Even with bigger changes, monthly costs look set to be dramatically lower than the £43 GBP I was paying to hire a dedicated server with lots of storage. (Even that price was only offered, back in the day, after putting down a large initial up-front payment.)

One point to keep in mind is that rclone’s GET requests to check the status of files actually make up about half the cost of my backups now, in a quiet month with few changes (the last of which cost me £9). So the frequency of your backups could make a real difference to the cost if you have a large number of files being checked, even if they change rarely. This is quite a big difference from a traditional rsync-to-server setup.

But while it looks like the price variation will be greater, I’ve found a backup frequency that works for me and should prove much cheaper for my use on average. I’m looking forward to retiring my old backup server very soon!

Jenkins 2 & Apache 503 Service Unavailable (28 July 2016)

Just a quick post in case anybody else has started having problems with their Jenkins recently. Perhaps it’s an obvious problem but it took me a minute!

AJP reverse proxy

If you’re anything like me, you may have set up Jenkins CI to use an AJP reverse proxy, because at the time you were on an old CentOS where the supported Apache didn’t yet work with the recommended HTTP reverse proxy option.

If you then upgraded to CentOS 7 or another new distro, and to Apache 2.4, you’ll probably have forgotten you even did this and it will have likely continued working.

For a while…

Until this month, that is, when Jenkins 2 became the LTS release and the version you get from an innocuous yum update (or similar).

At this point you might realise that dropping AJP reverse proxy support is one of the very few breaking changes in Jenkins 2.x. Apache can’t see your favourite butler any more and returns 503 Service Unavailable.

HTTP to the rescue

Fortunately if you’re already on Apache 2.4, once you establish the problem it’s very easy to fix. The current guide to Running Jenkins behind Apache shows you the right configuration for an HTTP reverse proxy.
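Based on that guide, the relevant virtual host lines end up looking something like this – a sketch assuming Jenkins on its default port 8080:

# Replaces the old ProxyPass / ajp://localhost:8009/ line
ProxyPass           /  http://localhost:8080/ nocanon
ProxyPassReverse    /  http://localhost:8080/
ProxyRequests       Off
AllowEncodedSlashes NoDecode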

For me it was just a couple of lines change from my AJP setup in the virtual host configuration. A quick service restart later, I’m happily running Jenkins 2 with no further config changes.

AngularJS and Symfony2 harmony (16 April 2015)

Symfony2’s a good server-side framework. Angular’s a good front-end framework. What happens if you want to use them together?

It’s certainly possible, but if you want them in one project there are two complications: templates and routes.

My project

I’ve been trying to find a clean combination of Angular & Symfony for an open source project. Right now it’s largely Angular and doesn’t even need a database. But I wanted Symfony involved for a couple of reasons.

Firstly it’s a handy way to get access to Assetic‘s snazzy asset management. For some projects you might be better off with a separate client app and Symfony server, but you’d lose this benefit and have to maintain two codebases.

Secondly we keep options open if we want to quickly add more admin management or a server API later.

However this combination inside one project does introduce a couple of interesting problems. Let’s fix them!

Software

I’ll be using AngularJS 1.3 with angular-ui-router, Symfony 2.6, and configuration that should work with Apache 2.2 or 2.4.

Templates

I’ve mostly followed the Symfony2 convention for an outer layout template, which lets us use Twig with Assetic magic in the usual way.

But AngularJS needs templates to be useful too. These are public-facing static template files, not Symfony templates, so let’s keep them in /web/partials.

Having done this, we can use angular-ui-router in the usual way. When we want to configure a template with its $stateProvider, we’ll give it a state definition with a property like:
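A sketch – the state name and URL are illustrative:

$stateProvider.state('home', {
  url: '/home',
  templateUrl: 'partials/home.html'
});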

One catch comes from interpolation: Twig and AngularJS both claim {{ ... }} by default, so Angular expressions inside a Twig-rendered layout get eaten server-side. Fortunately this is an easy fix on the Angular side. In your app’s config(), inject the service $interpolateProvider and then write, for example:

$interpolateProvider.startSymbol('{[').endSymbol(']}');

This will get you set up using {[ ... ]} for AngularJS interpolation instead of the usual syntax. You can choose any symbols that won’t clash with something else.
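A binding inside one of the partials then looks like this (the variable name is illustrative):

<p>Signed in as {[ user.name ]}</p>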

Routes

The more confusing problem for me was getting Apache, Symfony and AngularJS to play nicely together with respect to routing.

I settled on letting Apache’s mod_rewrite send most initial page requests through to Symfony’s index file as if they were all loading the home page – but not static files.

This lets Symfony pick up requests for all static files still, including generated & concatenated ones from Assetic. Meanwhile, other routes can have their values passed via a # in the URL. This lets Apache & Symfony ignore them while Angular can pick them up.

When combined with AngularJS’s HTML5Mode, the # is transparently removed again, so for most users the URLs look just like any other.

Apache setup

How do we know what to rewrite? I found some discussion that suggested letting Apache check for real files & directories with e.g. RewriteCond %{REQUEST_FILENAME}.

The trouble here is that Apache doesn’t really know about Symfony routes. To work with both frameworks I need matching that’s based only on a pattern and not on the filesystem.

What I ended up with was a check for a dot in the request. It’s crude but it works!
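In the virtual host it boils down to something like this – a sketch matching the explanation below, so check it against your own setup:

RewriteEngine On
# Anything after the initial slash with no dots: send it to Angular via the index route.
RewriteRule ^/([^.]+)$ /#!/$1 [R,NE]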

All requests that have no dot-extension (.js, .css, etc.) go to AngularJS via that index route, while the dotty ones go through Symfony.

N.B. in development you can do the same, but you probably want to look for /app_dev.php/ at the start, not just /.

What was that?

Firstly we match the start of the request after the hostname, with ^, and match the initial /.

Next we want to capture (with ()) anything that’s not a dot. As many not-dots as it takes. (But at least one, so the / route won’t match.) We then keep going to the end of the whole URL – $ is where the buck stops.

Because we insist on looking to the end, any dots will break the pattern. We’re adopting the convention that only static resources have dots – and since those don’t match the pattern, they pass through unimpeded, as in a normal Symfony app. Requests like .../bundles/.../asdf123.js are left alone.

So what happens to our matches? We redirect them all to our Symfony index file. Note that the redirect path uses #! – this means the web server won’t see the end bit, it’s just used by the browser. As far as Symfony’s concerned all these requests are going to the default / route.

Finally we append $1 – this is the bracket value we matched in the pattern. So now AngularJS has the URL piece it needs to do its own routing work.

[R] (Redirect) tells Apache this should be a browser redirect and not an internal one – this defaults to a temporary 302. This is essential! Without an HTTP redirect none of this would work.

[NE] (No Escape) leaves special characters alone – also vital. Without it the added # causes a crazy infinite redirect loop, as Apache morphs it to an ever-expanding list of %23s. We need Apache to keep its nose out of our hash, so browsers can just request / from the server and pass the # bit to AngularJS.

Angular HTML5 routes

If we want to have URLs appear without a # in modern browsers, we can do this too in the normal way.

Our AngularJS app needs a config() block with $locationProvider injected. Then we do:

$locationProvider.html5Mode(true).hashPrefix('!');

Easy!

Symfony routing

I mentioned that Symfony would see everything as going to /. But it does still need to route those correctly.

Aside from some bundle placeholders, the project’s only route is a single catch-all set up with an annotation. So routing.yml has:

app:
    resource: "@MyMainBundle/Controller/"
    type: annotation

And my only controller method carries this annotation block:

/**
 * @Route("/", name="Home")
 * @Template()
 */

Keeping Symfony in the loop

What if we wanted a Symfony admin panel involved here too?

Well, luckily mod_rewrite and regular expressions are really flexible. We could easily add a new rule which limits the times we do those crazy #! redirects. We could add one that excludes URLs starting, say, /admin.

We’d be free to use regular Symfony routing with that prefix, while letting AngularJS transparently pick up any dynamic routing without it. Perfect!

Another way?

I hope this is one of the simplest ways to make this work while mostly respecting both frameworks’ conventions. But is there a better way? Would love to hear your comments.