I care about things. In the same way Anger does in Inside Out.

At the end of Kiwicon 10 the Crue decided they needed a break from organising the beast - a multimedia extravaganza that catered to a couple of thousand people. In light of that much-deserved rest, some public-spirited folks stepped up to organise B-Sides Wellington to give us a security conference in Wellington.

Communication - an underrated tool in the infosec revolution

Katie is on the internal security team for Rapid7. “We’re at an exciting time for infosec right now. We’re in the boardroom now. We may even get the funding to do our jobs properly.”

“If we are in the middle of an infosec revolution, then we are the revolutionaries. And I am here to suggest communication can be a useful tool in our arsenal.”

Invisible InfoSec Team

Katie did a survey of people across industries about their experience with infosec teams.

Most people had never directly interacted with their team; in big companies some people believed they didn’t have one.

Maybe you should make improving the quantity and quality of communication with your organisation a goal for this year.

Some ideas:

Hang out with your co-workers in real life? Katie noted the first time she suggested this, she literally got booed. So maybe, maybe not. Personally I’m not a fan, although not to the point of booing1.

Templates: do you have standard ways of communicating with the rest of the org for security alerts? If you work on templates, or improving templates, you can improve the quality as well as quantity of what you send.

Documentation: what do you want to make it easy for people to talk to you about? Is it easy for people outside your team to understand how to talk to your team?

Demos: at Rapid7 the security team are part of the product team, so are in the habit of Agile-style demos. Can you do something similar? Lunch sessions? Presentations?

Chat applications: are you on IRC/Slack/whatever? Does your team have a public channel to chat? Do you participate in other people’s channels?

Make sure you measure whether the things you’re trying are actually working: define what success looks like, and continually test what is and isn’t working, and adjust accordingly.

“We should be approaching communication like we approach anything else: vulnerability management for example.”

It’s also easier if you hire communicators.

Ye Olde Talent Gap

Infosec thought leaders - “I am a self-appointed thought leader” - like to talk about a talent gap. Katie posits that one of the reasons for the talent gap in infosec is that we have an overly-narrow idea of what a security professional looks like: we focus heavily on the technical aspect of the job.

This leads us to have security engineers trying to do things they may be bad at - like presenting a security policy to a wide audience, for example - while discounting the infosec value of people who are not hardcore techies.

Katie has some stats around this: recruiters are trying to hire communication and analytics skills, while people looking for infosec work focus on the hands-on skills. This is exacerbated by a tendency to mock people with broader skillsets (Katie cites, for example, the mockery of people with BAs); so long as people perpetuate a culture of hostility to people who don’t look like Mr Robot, it won’t get better.

Our Whole Lives?!

Katie cites work by Claire Tills, who has spent a lot of time working on security messaging: in particular, people in general, not merely in infosec, respond poorly to scare stories. Trying to frighten people into compliance doesn’t work that well. An example is encouraging people to use sunscreen: experimentally exposing one group of people to a scare campaign and another to positive messages resulted in higher use of sunscreen by the group with positive messages.

The blame culture, with the language of blame, is incredibly unhelpful.

When you attack individuals, you lose the chance to understand the context that leads people to behave in a particular way.

You should team up with people to understand why incidents have occurred.

It’s not “you fucked up”, it’s “something bad happened to us and we need to understand why so we can improve things.”

“Don’t git-blame, git-solutions”.

Listening: yes, try asking questions and actually listen to the answers instead of merely waiting for your turn to talk. When people tell you they don’t want to, or feel that they can’t, do a thing, you need to understand their point of view, their drivers. If the conversation leads them to the same conclusion as you, great! But you shouldn’t see this as just another way to get people to agree with you; you should be prepared to learn that someone has different risk profiles and drivers to you.

The more opinions you listen to, the more likely you are to understand the right answers.

Confessions of a Red Teamer

Pipes

After years of organising Kiwicon and being too busy behind the scenes to be part of the conference as such, “it’s pretty awesome to actually speak at a conference.” “This is a talk about how to make my job harder.”

Threat modelling is important. We love to talk about threat modelling and understanding risk, but we do it really badly. You need to be realistic about your threat2.

Red teams aren’t special forces, “even though we like to think we are.”

Fundamentally, attackers want to get creds and own stuff. The red team’s job is basically pinball: the attacker needs to get the ball on the table, and then flip it around the table, looking for ramps and multipliers and multiball. So if you’re a defender, your job is to make the table harder and harder.

And like pinball, the attacker has to spend something to play: using tools, time, money, techniques to get into the network. “Attackers have bosses and budgets too” - Phil Venables. If you can burn their time and money without them winning, they’ll probably move on to the next target. “Safety is achieved when attacker cost exceeds the value” - Dino Dai Zovi.

This goes well beyond patching.

Change the game - make people change their playbook. This drives up cost, time, and most importantly, risk. When attackers use riskier techniques - new, novel, poorly understood - they’re more likely to make mistakes, and mistakes are how attackers get caught.

Frustration is the thing you don’t hear about: attackers get frustrated when things don’t work right.

MFA

MFA all the things. Proper MFA: don’t MFA your on-prem and not your cloud (or vice-versa).

Non-phishable MFA is ideal; a YubiKey is better than SMS, for example. But pipes is a strong advocate for the idea that something is better than nothing. SMS is a no-brainer if that’s all you can roll out. If you do a number portability attack, for example, there is a non-zero chance the victim will quickly notice the compromise. The CEO ringing the helpdesk to find out why her phone suddenly stopped working should ring alarm bells.

MFA is a great distributed alerting system. Every user in the org might notice you trying to attack via their credentials and report it. pipes’ heart drops any time he launches an application and sees it hang, because he knows it’s hanging because somewhere, a user is looking at an MFA prompt and (hopefully) alerting their security team.

Restrict Operating Environments

Locking down endpoints is not the only concern - consider applications and so on3.

Sandboxing is good.

Restrict execution on the endpoints.

A VDI can be useful because the attacker doesn’t know what additional security is happening in the hypervisor.

Qubes is great.

Privileged access workstations are a pain in the neck.

Whitelisting can be very useful. Yes, there are bypasses, but they need to be tailored to the environment.
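As a rough illustration of the idea - not any specific product's mechanism - a hash-based allowlist check boils down to computing a digest of the binary and refusing anything not on the approved list:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for approved binaries.
# (The one entry here is the well-known digest of the empty file,
# purely for illustration.)
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    """Return True if the file's SHA-256 digest is on the allowlist."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in ALLOWED_HASHES
```

Bypassing this means either getting your payload's hash onto the list or abusing an already-allowed binary - which is exactly the "tailored to the environment" work that burns attacker time.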

Say Yes

Shadow IT is getting you owned.

If you think that you don’t have a Shadow IT problem, you probably do. You just don’t know about it.

The biggest gap for attackers is defenders who don’t know their own environment - and against sophisticated attackers it may be the case that the attacker has done more recon and analysis4.

Everything seems locked down beautifully, but then there’s something sitting off to the side.

Cloud to on-prem compromise is common.

If you say no, everyone works around you. If you say yes, you can make sure things are done right.

You get the controls you need and the insights that help you understand risk.

If the problem is something like a third-party box, then at least get some visibility of what it’s doing.

Distributed Alerting

Crowdsource your alerting.

Users know what’s up. They’ll tell you if something’s not right.

BUT ONLY IF YOU’RE APPROACHABLE.

This is an out-of-band monitoring system.

Restricting Privilege

Get on top of your AD controls.

Are we auditing the right things?

Attackers have tools like Bloodhound now.

e.g. how many people have separate accounts for their regular and domain admin profiles - and then re-use passwords across them?
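For illustration, auditing for that kind of reuse boils down to grouping accounts by password hash and flagging any hash shared by more than one account. The account names and hashes below are made up; run something like this only against an authorised audit dump:

```python
from collections import defaultdict

def find_reused_hashes(account_hashes: dict) -> list:
    """Group accounts that share the same password hash.

    account_hashes maps account name -> password hash (from an
    authorised audit dump). Any returned group has more than one
    member, meaning one password is reused across those accounts -
    e.g. a regular account and its matching domain admin account.
    """
    by_hash = defaultdict(set)
    for account, pw_hash in account_hashes.items():
        by_hash[pw_hash].add(account)
    return [accounts for accounts in by_hash.values() if len(accounts) > 1]
```

A reused hash between `alice` and `alice-adm` means compromising the low-privilege account hands over the admin one for free.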

Password management is awful. This is why people are working around it; for example:

Bless, Netflix’s SSH certificate authority.

Identity aware access controls (only able to reach things the endpoint user is allowed to access).

IAM delegation/firefighting attacks.

No direct production access!

Limit Macros

At the gateway. Kill ‘em all.

There are very few, if any, cases where there is a legitimate business case to accept untrusted documents with macros enabled.

If you do need to accept files from externals:

sign them.

don’t use email for sharing, use proper file transfers.

Situational Awareness

Know when the house is burning down!

You don’t need to read every log, but you need to be looking for signs.

Work out baselines and visualise them.

Avoid getting so overloaded with false positives that you ignore the real ones.

Use canaries on your shares, documents, DNS, and so on.

These keep attackers up at night.

These are simple honeypots. You can be sure that if someone tries to RDP to a box that is never used, they’re an attacker.
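A minimal sketch of such a canary - a listener on a port nothing legitimate should ever touch, where any connection at all triggers an alert. The alert callback is whatever pages your team; this is an illustration, not a product:

```python
import socket
import threading

def run_canary(port: int, on_alert, host: str = "0.0.0.0"):
    """Start a canary listener. Because no legitimate service lives
    here, any connection is treated as a likely attacker. on_alert
    is whatever raises the alarm (Slack message, pager, log line)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    actual_port = srv.getsockname()[1]  # resolves port 0 to the real port

    def loop():
        while True:
            conn, addr = srv.accept()
            on_alert(f"canary hit on port {actual_port} from {addr[0]}")
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv
```

Real deployments would use canary tokens, decoy shares, or DNS entries rather than a bare socket, but the principle is the same: zero legitimate traffic means zero false positives.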

To Summarise

If you do just one of these things, you’re better than most people. If you do two, you’re better than 80% of people out there.

Something is better than nothing. Even a half-arsed visualisation is better than nothing. SMS 2FA is better than no 2FA.

Saying yes, so that you stay aware of things, is the best thing you can do.

Beer, Bacon, and Blue Teaming

Chris Campbell

This talk is focused on cost-conscious blue teams.

Threat hunting is the art of finding needles in haystacks.

Hunting consists of collecting large datasets and identifying anomalies.

Pkit Finder

Working in a CERT, Casim sees a lot of phishing attacks in his day job - but most CERT work is simply seeing an attack, filing a takedown, and moving on. He is interested in getting a better understanding of where the attacks come from and how they work.

This involves delving into “the dark side”, since the information isn’t freely available.

Phishing kits - you can buy a payload quite cheaply, or you can buy the developer tools and help.

The people selling you the kits will walk you through how you need to modify the kit for your target. Customer service!

The earlier options are faster. That’s where you want to be. The normal advice is “watch your logs”. No-one likes to watch logs. But people like to watch fire.

logfire.io: You send us logs, we set them on fire.

(It implements an endpoint which you POST a log, and then adds it to a blockchain. Until it runs out of memory and the blockchain gets deleted.)
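For fun, the described behaviour fits in a few lines: a hash-chained, in-memory "blockchain" of log lines that grows until memory runs out and vanishes when the process dies. This is a sketch of the joke as described, not logfire.io's actual code:

```python
import hashlib

class LogChain:
    """In-memory 'blockchain' of log lines, in the spirit of the
    logfire.io joke: each block's hash covers the previous block,
    everything lives in RAM, and the whole chain is lost when the
    process exits."""

    def __init__(self):
        self.blocks = [("genesis", hashlib.sha256(b"genesis").hexdigest())]

    def append(self, log_line: str) -> str:
        """Add a log line; returns its block hash."""
        prev_hash = self.blocks[-1][1]
        block_hash = hashlib.sha256((prev_hash + log_line).encode()).hexdigest()
        self.blocks.append((log_line, block_hash))
        return block_hash

    def verify(self) -> bool:
        """Recompute the chain; any tampered line breaks verification."""
        prev_hash = self.blocks[0][1]
        for log_line, block_hash in self.blocks[1:]:
            expected = hashlib.sha256((prev_hash + log_line).encode()).hexdigest()
            if expected != block_hash:
                return False
            prev_hash = block_hash
        return True
```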

OK, so fire for logs is not realistic. Jeremy looked at a bunch of options and settled on StreamAlert:

Serverless.

Works on AWS.

Python rules.

Terraform samples.

Multiple imports (S3, SNS, Kinesis) and exports (Slack, SMS, etc).

Kinesis is quicker (~10 seconds), S3 is slower (10 minutes), but Kinesis is a lot of pain to use with CloudWatch cross-region.
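StreamAlert's rules are Python functions run over incoming records. The sketch below shows the general shape of that pattern rather than StreamAlert's actual API - the decorator, the example rule, and the record layout are all illustrative:

```python
# Registry of rule predicates; each is run over every log record.
RULES = []

def rule(func):
    """Register an alert rule (illustrative, not StreamAlert's API)."""
    RULES.append(func)
    return func

@rule
def root_console_login(record):
    # Hypothetical example: flag AWS root-account console logins.
    return (record.get("eventName") == "ConsoleLogin"
            and record.get("userIdentity", {}).get("type") == "Root")

def process(records, outputs):
    """Run every rule over every record; matches are sent to every
    output (a Slack poster, an SMS sender... or a balloon compressor)."""
    alerts = []
    for record in records:
        for r in RULES:
            if r(record):
                alert = {"rule": r.__name__, "record": record}
                alerts.append(alert)
                for out in outputs:
                    out(alert)
    return alerts
```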

(The AWS CIS Foundations Benchmark is a really useful starting point for understanding what sensible things to do are with CloudWatch.)

This is great and all: write alert snippets that trigger lambda functions and run across your alert stream. But Jeremy can’t stop thinking about lp0 on fire. How can he draw better attention to the presence of errors? Maybe you could trigger something else.

Like balloons.

Jeremy has managed to find an air compressor. That can be rigged for remote control. “I may have a problem,” he opines.

Maybe. But as I’m watching the cloud inflate a balloon I reflect that it’s a pretty awesome problem.

IOP: The Internet of Pancakes

Peter Jakowetz

Quantum Security and a background in electrical engineering.

While looking for a CNC machine, Peter found a pancake maker on TradeMe. It’s PancakeBot, and Peter wanted to put it onto the Internet. PancakeBot started as a home project that turned into a Kickstarter, and is now onto its third revision. It takes GCODE, like a CNC machine or 3D printer, and turns it into pancakes, all for the measly cost of $250.

Peter has identified the main codes that PancakeBot uses to send co-ordinates, control speed, and turn the pumps on and off. It’s built out of an ATmega2560 (basically an Arduino), stepper motors, and a pump. The PancakePainter and PancakeBot firmware are open source and available on GitHub; it all appears to be based off an older 3D printer and software stack.
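To get a feel for the GCODE involved, here's a sketch that emits the moves for a square pancake. G0/G1 moves and the F feed rate are standard GCODE; the pump on/off commands (M106/M107, borrowed from 3D-printer fan control) are placeholders - the PancakeBot firmware on GitHub has the real codes:

```python
def square_pancake(size_mm: float = 50.0, feed_rate: int = 2000) -> list:
    """Emit GCODE lines for a square pancake outline.

    G0 = travel move (pump off), G1 = drawing move (batter flowing),
    F = feed rate in mm/min. M106/M107 stand in for pump on/off and
    are assumptions, not PancakeBot's verified command set.
    """
    s = size_mm
    corners = [(0, 0), (s, 0), (s, s), (0, s), (0, 0)]
    gcode = [f"G0 X{corners[0][0]} Y{corners[0][1]} F{feed_rate}",  # travel to start
             "M106"]                                                # pump on (placeholder)
    gcode += [f"G1 X{x} Y{y} F{feed_rate}" for x, y in corners[1:]] # draw the outline
    gcode.append("M107")                                            # pump off (placeholder)
    return gcode
```

PancakePainter does essentially this at a larger scale: it turns drawings into toolpaths, which the firmware replays as pump and stepper movements.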

The biggest obstacle to using it on the Internet was the fact the USB port doesn’t work out of the box. You can’t put it on the Internet if you have to move stuff with an SD card every time. So Peter replaced the original controller (after bricking it) with an Arduino controller and shield to drive the pancake maker itself, paired up with a Raspberry Pi running OctoPrint to manage the printer.

Peter noted that he found a few hundred OctoPi instances available online. Which is less than ideal; apart from things like RCE, you can set printers on fire in a worst case scenario.

Investigation of recent targeted attacks on APAC countries

Noushin Shabab

Noushin is a Senior Researcher at Kaspersky, specialising in attack investigation and forensics.

Stuxnet was an early major, high-profile APT. Since then APTs have been growing, both in terms of their number, but also their spread: they are no longer limited to “four Middle Eastern countries”, but are happening all over the world and across many industries.

So what else could we do? nmap? Fuzzing? Spidering? Bear in mind there’s a 300 second limit, and it’s not a conventional execution environment. So ECS (the AWS container service) might be a good alternative, but there’s still a lot of scaffolding to run data in and out.

Another option is OpenFaaS, a project to allow FaaS on your own hardware; it makes use of Docker containers. It needs about 4 lines and one extra layer in your docker container to make all this work.

Some other tools:

Kubebot is a security testing Slackbot that deploys to the Google Cloud Platform.

UPX: The Ultimate Packer for eXecutables: packing Golang tools like gobuster can pull you down to a megabyte per image.

golang CLIs with Cobra - makes it easy to write good command line tools with Go.

Clone the repo, start a new project, you’ve got a CLI tool.

GopherBlazer

Replace a pile of shell script wrappers with a single golang tool.

Unfortunately there are too many ideas and rabbit holes, so it’s a bit stalled.

Some things to think about when evaluating container images:

Is it the official container?

How much is it starred? How often has it been downloaded?

Is the Dockerfile updated?

Is it an automated build?

Is it recently updated?

How large is it?

Takeaways: be curious. Play with new things to change your job. Don’t do things because that’s the way you’ve always done them. Share what you learn. Let’s bring everyone up together.

Secrets of a high performance security focused agile team

Kim Carter @binarymist

Writing “Holistic InfoSec for Web Developers.”

Purple teams: teams who are their own attackers and defenders.

How Development Teams Fail

“Hire code monkeys”. Ugh. Yeah, not loving this. Hi contempt culture.

Reward pumping out features at the cost of technical debt.

As debt mounts up, problems begin to mount up.

“Professional developer vs code monkey” is bullshit framing and frankly beneath the conf. Doubly so from a speaker who is obviously smart enough to describe how incentives create and encourage behaviours.

How to Succeed with Security as a Developer Team.

Security testing is part of sprints.

Security needs to be included in the definition of done.

You want to continually catch these defects as quickly and cheaply as possible; the further along the release pipeline you get, the more they cost.

But pen testing is expensive! How can we do this?

Define your security story. “Please refer to the first chapter of my book.”

Establish a security champion; not someone external to the team, but someone from the team who wants to adopt this role.

Hand crafted pen testing is much cheaper at the start of development. “There’s lots of guidance in my book.”

Automate security tools. “There is lots of guidance in my book.”

Consuming Free and Open Source. “This is addressed in my book.” “This is risky software created by amateurs.” Apparently not the same quality as commercial software.

“Don’t install node.js the official way.”

I stopped taking notes at this point, because this talk doesn’t deserve it. The only reason I didn’t leave early is because it would have meant walking over/through too many people to be polite. I would not normally be this negative about any talk, no matter how bad, but this was an abrasive sales pitch for a book that expressed bottomless contempt for so many people - developers, managers, customers, and, by running well over time, the other presenters and his audience - that I don’t feel compelled to be polite about it. Maybe Mr Carter is actually a nice person, but it sure as hell didn’t show here.

If you want to throw money at someone to help you with these sorts of problems, talk to safestack.io or Eiara, who will actually help you and won’t be dicks about it.

Lies, Damned Lies, and Security

Michael Shearer

What can CERT NZ do for you?

Incident response and analysis.

Advice, awareness, and education.

Co-ordinated vulnerability disclosure.

CERT want to know everything they can about what’s going on in New Zealand: they’d like to help you with ongoing incidents, but they’d also like to know about incidents that are done, too. Everything they can learn about what’s going on helps. However, CERT is not:

The Internet Police.

IT support/helpdesk.

A security agency - “we are 100% blue team.”

Problems we see:

“The workstation LAN is trustworthy.”

The edge/DMZ gets all the attention, while the inside is soft and delicious.

We need to be better at giving users the tools to do the right thing. Have password managers as part of your standard build, for example.

Perfect security is hard, but basic security isn’t.

Don’t try to eat the elephant in one bite.

Incremental improvement today, rather than perfect one day.

Accept that sometimes the new thing isn’t perfect, but is still better than the current state.

Running Linux or whatever doesn’t make you safe. It’s not about the OS, it’s how you configure it. Basic hygiene is still critical.

Michael gives a demo of how to use a YubiKey to secure your SSH keys. One of the things it demonstrates is that PGP is not transparent even to smart people!

End of Day 1

Off to a flying start. Things have been well-run and I’ve enjoyed (all but one of) the talks. I’m really impressed by what the team have put together, and I’d make special note of the abundance of blue team talks, which gives a different and valuable spin for a security conference. Many thanks to Erica, Kate, Skooch, and Chris.

I like many of the people I work with but I’m not wild about spending my personal time with them, especially when I’m so lousy at keeping up with my non-work friends. And, moving beyond the personal, a lot has been written about how this kind of encroachment into personal time can create unhealthy and discriminatory structures in the workplace: if the key to advancement is to spend evenings or weekends at happy hours, you build in a work environment that is hostile to, for example, people with childcare responsibilities.
↩

I have a whole rant about this. Just as cryptocurrency is rediscovering the principles of financial systems from first principles, infosec is rediscovering the principles of risk management from first principles. One of the most neglected points here is that security is not, in and of itself, an absolute virtue. It’s possible that the risk around a security breach may be lower than the cost of an outage for a patch going wrong.
↩

If you’re obsessed with locking down your domain admins while leaving open user accounts that pay money out of your company, you are probably doing the wrong thing.
↩

A thought that came to mind here: tying back to the keynote, when you are a punishment-oriented infosec team, you will experience this, because you’re a scary obstacle no-one talks to.
↩