Author: hiburn8

Echosim.io is a nice experimental site which puts a virtual Amazon Echo in your browser!

You give the site access to your microphone and full control over your Alexa account (which it will keep indefinitely, as you are guaranteed to forget you did this), then speak your Alexa questions and commands to Echosim.io. It works pretty damn well, actually. Props to its developers at @iquariusmedia.

Well, it turns out there are no CSRF defences (the lord giveth props, and the lord taketh props away). This means you can trick users into uploading your own voice commands to Echosim.io and do whatever you damn well please on their Amazon/Alexa account.

Considering the site is predominantly for use by developers and in beta, I was reasonably forgiving… but then I took a few moments to look into what exactly you can do with full access to Alexa. Turns out, it's a lot.

To demonstrate, I've made my own version of the classic Rick-Roll, entitled the 'Rick (and Morty) Roll'. It's a simple HTML page which uploads audio samples of my voice to Echosim.io in succession to do the following:

If you have an Amazon Fire device, it plays Rick and Morty on Netflix (your TV turns on if it supports CEC).

Now, all Echosim.io needs to do is throw up a 'SameSite' cookie policy, and the issue goes away. But what I found quite interesting about making that page was learning that chained interactions (Alexa interactions which require more than one command… for example ordering a product from Amazon.com, which requires a 'yes' confirmation) do not have the level of security measures in place I had assumed they would.
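For what it's worth, the server-side fix really is about one cookie attribute. Here's a minimal sketch using Python's stdlib `http.cookies` (the cookie name and value are made up; Echosim.io's actual session cookie will differ):

```python
from http.cookies import SimpleCookie

# Build a session cookie the way a SameSite-aware server would.
cookie = SimpleCookie()
cookie["session"] = "abc123"           # hypothetical session token
cookie["session"]["httponly"] = True   # keep it away from page scripts
cookie["session"]["samesite"] = "Lax"  # browser withholds it on cross-site POSTs

# Emits: Set-Cookie: session=abc123; HttpOnly; SameSite=Lax
print(cookie.output())
```

With `SameSite=Lax` (or `Strict`), the browser simply doesn't attach the session cookie to requests initiated from my Rick (and Morty) Roll page, and the CSRF falls apart.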

I expected that chained events like ordering a product or paying a bill would be strung together with a series of tokens, each validating the next request. This would keep the order of events in check, avoid potential race conditions, and provide an additional level of security and integrity for more complex interactions. Well, apparently it doesn't work like that… third-party controlling apps, at least, authenticate via an API key, and that key can do anything. The end. There is no contextually aware security; asking the time has the same security as asking to disable your burglar alarm.

Ordering 54 rolls of toilet paper, as the experiment page does, is achieved by first uploading audio of me saying "Order Andrex Supreme Quilts Toilet Tissue, 54 Rolls", followed shortly after by my 'Yes' confirmation. I really expected this 'Yes' request to require some shared secret from the previous response, but no… the API key alone is fine. The order is placed.
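The whole chained interaction boils down to "replay clip, wait, replay next clip". A sketch of that loop, under heavy assumptions: the upload URL is a placeholder (the real Echosim.io endpoint and parameters differ), and `post` stands in for whatever function actually performs the upload:

```python
import time

# Hypothetical endpoint -- the real Echosim.io upload URL/params differ.
ECHOSIM_UPLOAD = "https://echosim.io/upload"

def chained_command(post, clips, delay=0.0):
    """Replay pre-recorded audio clips in order, pausing between each
    so Alexa has time to respond. `post(url, clip)` is whatever callable
    actually performs the upload (e.g. a wrapper around requests.post)."""
    responses = []
    for clip in clips:
        responses.append(post(ECHOSIM_UPLOAD, clip))
        time.sleep(delay)
    return responses

# The toilet-paper order is then just two clips:
#   chained_command(post, ["order_andrex.wav", "yes.wav"], delay=8.0)
```

No token from the first response is needed by the second request; ordering matters only in the sense that you must leave Alexa enough time to ask for confirmation.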

I think the whole authentication and authorisation model of smart home assistants is going to need a pretty good facelift in the next few years, or mine will be going out the window. It feels very much like the days of the WWW before the concept of the Same Origin Policy. In the meantime, be careful with those third-party apps!

As a penetration tester who specialises in mobile apps, I get good visibility of how the enterprise is adopting/using/misusing various iOS capabilities and MDM features. One trend I've seen increasingly is the use of 'Guided Access Mode' to lock down devices.

Guided Access Mode (GAM), for the unfamiliar, locks the device into a single app. It's typically considered a handy feature for parental guidance, but the official documentation also suggests it's effectively the same as the more enterprisey 'Single App Mode', which is true. 'Single App Mode' (SAM) does exactly the same thing, but can only be enabled via MDM on a device which is in 'Supervised' mode. So it makes sense that organisations which can't easily manage devices use GAM instead, although many just don't know that Single App Mode exists.

I’ve seen GAM in the medical, financial, industrial, and retail sectors now. I know that it’s being used to protect highly sensitive data from prying eyes and, in certain scenarios (industrial), I would not be surprised to hear that GAM is defending against life-threatening incidents. That’s a lot of pressure for a parental guidance feature.

I recently performed a penetration test for a prototype self-checkout kiosk/POS solution which used an iPad as the kiosk's display. Long story short, the solution used Guided Access Mode and, yet again, I was foiled in my attempts to get around it. The test finished a few weeks ago and, not one to give up, I've been testing Guided Access Mode in my own time. Here's the bypass 😉:

Finding and exploiting unique attacks on web applications is, of course, satisfying. But I find that performing the most basic of attacks as efficiently and effectively as possible can pose a decent mental challenge that's equally rewarding.

In this short post I'll show you how writing just a few lines of code can yield immense gains in web request brute-force attacks, versus using the tools you would probably reach for right now (let's be honest, it's Burp).
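To give a flavour of what those few lines look like, here's a hedged sketch: a thread pool firing attempts concurrently rather than one at a time. The `check` callable is an assumption of mine; in practice it would wrap an HTTP request and return True on a hit:

```python
from concurrent.futures import ThreadPoolExecutor

def brute_force(check, candidates, workers=32):
    """Run check() over every candidate concurrently and return
    the candidates that succeeded, preserving input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(check, candidates)
    return [c for c, ok in zip(candidates, hits) if ok]
```

With `check` doing something like `requests.get(base_url + c).status_code == 200` and a sensible worker count, this comfortably outpaces a throttled Intruder run.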

TL;DR: This post is about some late-90s-level hacking. But the fact is that there just doesn't exist a decent explanation of this vulnerability anywhere on the internet… and yesterday, in 2018, I found another application vulnerable to it (to quite serious effect). I'm afraid that was the straw that broke the camel's back. So now we're doing this… we're making the blog post that should have been made 20 years ago. There is a simple zipped-up MySQL/PHP lab at the bottom of this post; feel free to skip to that if you are so inclined.

This is just a short post about toying with the Badoo app for iOS, but it also touches on something ever-so-slightly useful about testing the app-upgrade mechanisms of mobile apps. "Urghh, more dating app hacking," I hear you say. I know, I know, this is getting old. At some point I'll get a real hobby, I promise.

Years ago, one of the first posts I ever wrote was about my experience scripting a bot for the dating site OKCupid. It was just a PoC bashed together over a few beers with a friend.
Since then (and since becoming single) I've scripted bits and bobs for virtually every major dating site/app… it's become a bit of a weird hobby.
A while ago I wrote a reasonably feature-filled script for managing a user account on the dating app Happn, imaginatively called "Happn.py". It was immediately spotted by a few Happn employees on my GitHub, who starred the project, but then prevented it from actually working by blocking the Python user-agent on the Happn servers. I made the repo private and updated it to work again, with the intention of spending some more time developing it. That time never really came, and I stopped using Happn a while ago, so I've made the tool public and this is just a quick post to share it.

I’ve been travelling on Virgin trains a lot recently and finally decided to take a look at their free movie-streaming app “BEAM”.
Super-excited to be about to watch Forrest Gump on my journey, I found that whenever I hit play, the app's custom video player would freeze and eventually crash the app on my device of choice: an iPhone 6s.
Determined to watch Hanks' award-winning performance, I figured out the problem and patched it in 12 minutes. Here's how.

Instead of doing my final-year project at University, I made (another) open-source CTF/lab framework, primarily for my own learning benefit during its development, but also because I realised how powerful a group learning environment like a CTF is, and I wanted to deploy one at my University. Keep reading to learn more…