--show-session-key can be used to reveal the session key of a single encrypted OpenPGP message, without giving out the private key. Proving that you've got access to the session key means that you've also probably got access to one of the private keys needed to decrypt the session key.
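A minimal sketch of how this could be automated. The exact format of the session key line in gpg's diagnostic output is an assumption here (recent GnuPG versions print something like `gpg: session key: '9:FE18...'` on stderr), so the parsing is hedged accordingly:

```python
import re
import subprocess

# Assumed format of the diagnostic line gpg prints when invoked with
# --show-session-key; verify against your GnuPG version before relying on it.
SESSION_KEY_RE = re.compile(r"session key: '([0-9]+:[0-9A-Fa-f]+)'")

def extract_session_key(gpg_stderr):
    """Pull the algorithm:key string out of gpg's diagnostic output."""
    match = SESSION_KEY_RE.search(gpg_stderr)
    return match.group(1) if match else None

def show_session_key(path):
    """Decrypt a message once and reveal its session key (sketch)."""
    result = subprocess.run(
        ["gpg", "--decrypt", "--show-session-key", path],
        capture_output=True, text=True,
    )
    return extract_session_key(result.stderr)
```

The revealed key can then be handed to someone else, who can decrypt that single message with `--override-session-key` without ever touching the private key.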

Producing so-called 'documentation' for the sake of documentation. There's too much documentation, it's inaccurate, and it doesn't reflect reality. But it has to exist, because they want documentation. Just the typical case; I've seen this happen over and over again. It's funny how project management is often a completely separate process, without any connection to the underlying reality. Maybe this is because project management professionals run the project the way they wish it would go. But the truth is that reality is very different. Maybe this is the reason why there are surprises later.

Creating overlapping documentation is a very common way to end up with conflicting documentation, which is extremely annoying. Another classic question is managing documentation scope and detail level. Some documents can be way too detailed while others lack basically all information. A true classic. Personally I prefer very short, technical and informative documents. Some people love high-level visualizations. But when you start accounting for the details, it's easy to notice that the visualizations are misleading and do not actually reflect reality.

The only good thing about this kind of documentation is usually that nobody's ever going to read it again. If you're lucky, it gets read once at the project start. It's also extremely easy to say that documentation is bad. But producing good documentation and/or a requirements specification is actually extremely hard and time-consuming. That's also why it's quite rare. It's a nice question how to keep documentation coherent, simple and clear while still including all the technical details. Most projects also badly lack the knowledge, time and resources required to produce anything which would make sense. But I guess that's also more the norm than the exception.

Doing things just so you can say things are done, even if they won't serve any other purpose. One of my favorite things. I especially loved one requirement from the customer: 'This documentation isn't enough, we need more documentation.' Excellent, just list a few things which you need additional documentation for. Guess what, they never delivered any extra requirements. So typical. 'We need at least 200 pages of documentation.' For what? - I don't know. It's nice to have documentation, right? - This is just where the clueless 'documentation department' can step in and produce 200 pages of nice-looking documentation. It doesn't matter whether it's accurate or not. It just needs to be nice documentation which might be somewhat related to the project. Could you add a few cool slides, plz? And maybe some hype words?

It's also a great question what information is essential: what can safely be assumed or left obvious, and which parts have to be documented in very fine detail.

Had a long discussion about Network Functions Virtualization (NFV) / Virtualized Network Functions (VNF, VNFI) with one service provider. This is all related to Management and Orchestration (MANO). Afaik, it's better to get a "private cloud from public cloud" with software isolation than to get a true hardware private cloud, which I've been talking about earlier. Now I've got two service providers which can provide this reliably.

It's amazing how persistent (RDP / RDC / RDS) aka Microsoft Windows Remote Desktop attacks are. The network banning application I wrote has now banned more than 1000 IP addresses and over 40 subnets (/24 for IPv4 and /48 for IPv6). The most interesting observation is that a large number of systems located in totally different data centers, networks and countries are being attacked by the same IP addresses. This means that some of the attackers are extremely active. This is also a great reason why centralized banning which protects the whole network works so well. If the server network in New York is attacked from some IP address, it makes perfect sense to ban the same IP address in London, Singapore and Helsinki. At first I thought it would be kind of overkill, but in reality it is highly beneficial. - After some questions I've received: I would describe this as a distributed fail2ban implemented at network level. - Yes, it occasionally produces a few false positives. Not nice, but it's still much better than letting all the attacks through.
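The core idea above can be sketched roughly like this. The thresholds and class names are made up for illustration, not taken from the actual application; the point is that a failure reported from any site counts globally, and enough banned addresses in one subnet escalate to a /24 (IPv4) or /48 (IPv6) ban:

```python
import ipaddress
from collections import Counter

# Hypothetical thresholds, not the real application's values.
IP_BAN_THRESHOLD = 5        # failed attempts before an address is banned
SUBNET_BAN_THRESHOLD = 3    # banned addresses before the whole subnet is banned

class CentralBanList:
    """Aggregate failed-auth reports from all sites into one global ban
    list, escalating to /24 (IPv4) or /48 (IPv6) subnet bans."""

    def __init__(self):
        self.failures = Counter()   # address -> failed attempt count
        self.banned_ips = set()
        self.banned_subnets = set()

    @staticmethod
    def subnet_of(address):
        ip = ipaddress.ip_address(address)
        prefix = 24 if ip.version == 4 else 48
        return ipaddress.ip_network(f"{address}/{prefix}", strict=False)

    def report_failure(self, address, site):
        # A report from any site counts globally: an attacker seen in
        # New York gets banned in London, Singapore and Helsinki too.
        self.failures[address] += 1
        if self.failures[address] >= IP_BAN_THRESHOLD:
            self.banned_ips.add(address)
            self._maybe_ban_subnet(address)

    def _maybe_ban_subnet(self, address):
        net = self.subnet_of(address)
        hits = sum(1 for ip in self.banned_ips
                   if ipaddress.ip_address(ip) in net)
        if hits >= SUBNET_BAN_THRESHOLD:
            self.banned_subnets.add(net)

    def is_banned(self, address):
        ip = ipaddress.ip_address(address)
        return (address in self.banned_ips or
                any(ip in net for net in self.banned_subnets))
```

In a real deployment the resulting ban list would of course be pushed out to the network edge at every site, rather than checked in application code.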

I've got many questions via social media channels. Here are some answers I wrote to social media: Q: Why use banning, why not use a whitelist alone? A: A whitelist is used for environments where it's possible to get the IP information required for whitelisting. Unfortunately there are tons of users and businesses which seem to prefer consumer-grade Internet connections due to costs. In practice this means that they end up using dynamic addresses, and the operator won't even provide an option for a static or permanent dynamic address. Another option would be using IP geolocation to build the blacklist. So the large national IP space would stay on the normal address space, foreign IP addresses would be blacklisted, and even national addresses could be banned on request. But managing that national IP list has proven to be problematic in practice. There are some environments which use these kinds of large whitelists. Let's say that company X is using cell phones from operator Y. Then we just block everything except the IP address space of operator Y's mobile data. Works, but often requires constant updates, especially now when IPv4 space is running low. With IPv6 these kinds of rules are much more manageable. Q: Technology being used? A: I'm using SaltStack to collect failed authorization attempt data (among many other things, of course), and custom cloud orchestration to manage network access globally. Technically this means updating ban access rules at three separate service providers, using their own APIs.
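The operator-whitelist approach described above is simple to express with Python's ipaddress module. The prefixes here are made-up documentation ranges standing in for "operator Y mobile data"; a real deployment would pull them from the operator's published address space, and the IPv6 case shows why such rules stay manageable, since one prefix can cover the whole operator:

```python
import ipaddress

# Made-up example prefixes standing in for operator Y's mobile data
# address space; not real operator ranges.
OPERATOR_Y_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("2001:db8:100::/40"),
]

def is_whitelisted(address):
    """Allow only addresses inside the operator's address space;
    everything else gets blocked."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in OPERATOR_Y_NETWORKS)
```

With IPv4, the list needs constant updating as the operator acquires scattered small blocks; with IPv6, a single large prefix usually suffices.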
