Topic mega dump 2014 (1 of 3)

Post date: Jan 3, 2015 7:41:47 AM

Year 2014 is ending, and it's time to simply dump the topics I've studied and read about but haven't had time to blog about.

PostgreSQL LOCK modes. Why? Because someone told me that I should acquire write locks on a table to get a consistent read snapshot. Well, at least with PostgreSQL that isn't true, though it might be with some non-MVCC databases. Naturally, locking a whole table just to get a simple read snapshot is a really bad idea which easily leads to lock contention.

About the Finnish Service Channel: I've seen it so many times. People claim that it's impossible, too complex, etc. Yet they're wrong. I've been doing that for ages. If engineers and programmers say it can't be done, then I do it myself. At the same time, they'll keep bringing it up. Even if the better solution is then built and technically working, it doesn't mean it will be used, because everyone is just so tied to the old methods and thinking.

Microservices are not a free lunch! - I completely agree. Many abstraction layers and separation can be nice in some ways, but they also add a lot of overhead, interfaces and such, all of which require maintenance and complicate things. - Yet I know a few systems like this which are in highly demanding, critical production use and work out great. Monitoring is also easy when it's all built in. But as said, it all makes the project more complex, even if it's cool. It's a good question whether customers are willing to pay for that overhead.

Hacker News discussion - Great comments about coding styles and project management. I've seen it over and over again: absolutely ridiculous hardware resources are required to complete simple tasks, because coders don't really understand how things technically work. They just write code that works, well, in the testing environment. In production, performance is worse than awful, basically causing a DoS with just a few percent of full production traffic. Yeah, like I said, been there, done that. And it seems to happen over and over again; developers never learn.

"The whole password hashing and salting business is pointless. What does matter is the fact that passwords aren't reused anyway. Do I really care if you know that my password for service X was c'EyqXnrq-bCyfF_dK67$j? I couldn't care less whether you get it hashed or not, really. If they owned the system and were after my data, they got it already. The password is just a minute and meaningless detail. I've always wondered about this pointless discussion around passwords. It just doesn't make any sense.

The first mistake is to let the user select an arbitrary username or password. Just provide them with secure ones; that's what I do. I've also always wondered what the point of having separate username and password fields is in the first place. Username: bjdndgEC2S4rHRZy7c8rdQ Password: 6TWe8EvxfRxxCvcyZTaBM6 - Isn't it enough to concatenate username and password? One field is just as secure as two. Btw, do many of these services pay enough attention to the potential weaknesses presented by possibly weaker session cookies?"

The password can be exchanged using a challenge protocol, or hashed with a salt, whatever the selected method is. But still, if there's a real security requirement, it's just silly to reuse the same passwords. Yes, I know it happens all the time, but it really should not.
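A minimal sketch of the two points above: (1) server-side salted password hashing, and (2) simply handing the user a generated high-entropy secret instead of letting them pick one. Pure stdlib; the function names and iteration count are illustrative, not from any particular service.

```python
import hashlib
import hmac
import secrets

def generate_credential() -> str:
    """Generate a random ~128-bit urlsafe secret, like the examples above."""
    return secrets.token_urlsafe(16)

def hash_password(password: str, salt=None):
    """Salted PBKDF2-HMAC-SHA256; store (salt, digest), never the password."""
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against the stored digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

secret = generate_credential()          # e.g. '6TWe8EvxfRxxCvcyZTaBM6'-style
salt, digest = hash_password(secret)
assert verify_password(secret, salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```

Note that with a generated 128-bit secret, whether the field is called "username", "password" or one concatenated blob makes little difference to the entropy, which is the point the quote makes.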

SQRL is technically exactly what I described: just a "random blob" of 256 bits (32 bytes), which is verified using a bidirectional, service-specific challenge.
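A rough sketch of that challenge-response idea: the client holds a random 256-bit secret and proves possession by answering a per-service, per-login challenge. Real SQRL derives per-site Ed25519 keypairs and uses signatures; since the stdlib has no Ed25519, this sketch substitutes HMAC, so it's the symmetric variant of the idea, not the actual SQRL protocol.

```python
import hashlib
import hmac
import secrets

client_secret = secrets.token_bytes(32)   # the 256-bit "random blob"

def respond(secret: bytes, service: str, challenge: bytes) -> bytes:
    """Answer = MAC over (service id || challenge), so a reply can't be
    replayed against another service or another login attempt."""
    return hmac.new(secret, service.encode() + challenge,
                    hashlib.sha256).digest()

# Server side: issue a fresh random challenge, then verify the answer.
challenge = secrets.token_bytes(32)
answer = respond(client_secret, "example.com", challenge)

# The verifier recomputes the expected answer with its copy of the secret.
assert hmac.compare_digest(
    answer, respond(client_secret, "example.com", challenge))
# The same answer must not validate for a different service.
assert not hmac.compare_digest(
    answer, respond(client_secret, "other.org", challenge))
```

Binding the service name into the response is what makes the challenge "service-specific": a phishing site can't forward your answer to the real site.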

"A few thoughts about Tribler - Good comments about the details of the protocol. But I'm wondering why nobody found anything to comment on at a higher level than the crypto. We all know(?) that Tor-style multihop data passing isn't an efficient way to implement 'anonymity' for distributed file sharing.

For that particular reason I was personally amazed that they selected Tor as the example. Tor wastes a lot of bandwidth and also allows easy traffic correlation attacks in cases where those are generally feasible. I really loved the Freenet and GNUnet designs, because those use really efficient caching, partitioning and routing compared to Tor. At least in theory, anonymous downloads could even be faster than non-anonymous ones, due to more efficient utilization of network resources through distribution and caching. When Tor is used as the base, all these benefits are lost, and on top of that there's a huge bandwidth overhead, causing roughly a 600% slowdown.

Does anyone agree with me? I was almost sure that someone would immediately comment on this aspect, but as far as I can see, nobody has noticed these facts(?) yet."

Another nice analysis of Tribler flaws, but from a totally different point of view. That post is about traditional security flaws.

A lot of discussion with friends about the VAT MESS the EU created. It will add substantial management overhead for small businesses.

Checked out Azure Pack. I actually didn't know about it, but I started looking for such a product when I noticed that several service providers besides Microsoft are offering Azure services.

Had long discussions about private vs public cloud. I currently don't see any benefits in private cloud. If the service provider is right, they can offer public cloud cheaper than what a private cloud would cost, even if many people are trying to push hybrid cloud forward. It's almost impossible to beat the scale benefits of public cloud. Some service providers are hosting well over one million servers, so they must have a clear scale advantage on costs on every mission-related front. Security could be a bit worse, but honestly, if the cloud platform is run by professionals, I wouldn't worry about it. I would worry about my own software and its configuration, which is almost infinitely more vulnerable than the hosting platform.

I still hate sites which require login. If I'm just going to read content, why would I need to log in? Sigh. - No thank you!

Checked out: LXD and Rocket. It's the more traditional combination to beat Docker. Actually I've been using LXC for years, mostly for security purposes and server process separation & segmentation at a more hardened level than user/process isolation alone.

Office 365 password change works slowly: if I change my password, services that are logged in remain logged in for quite a while, even if the clients are restarted. This could be a privacy & security issue, since a password change doesn't lock out other sessions immediately.

Tor uses TCP connections. I2P would be better for some services. Also, Tor clients do not automatically work as hidden services, which makes connecting back to the nodes of normal, non-expert users awkward in cases where it's required. OpenBazaar uses TCP for Tor compatibility, but TCP isn't good for a DHT, which requires quick, short-lived connections to multiple hosts. Using Bitcoin to pay for services like bandwidth, relays, etc. Proof of Stake, Proof of Work. P2P.

Peer-to-Peer (P2P) is bad for mobile: it consumes bandwidth, storage space, CPU time and power. The same problems also apply to mesh networking, which is just possibly indirectly routed or flooded peer-to-peer networking.

Some systems which were fully P2P and beautifully constructed have even dropped the original peer-to-peer design. One of the best-known examples is Skype.

I remember that when using the eMule Kad DHT implementation, if I manually changed my client ID to the largest or smallest ID on the network, I basically DDoSed myself out. Why? I guess the clients looked for "nearby nodes" as well as the "highest and lowest" IDs on the network. So the DHT implementation didn't, for some reason, implement a fully wrapped-around circular address space, causing the clients with the highest or lowest IDs to be seriously overloaded.
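A toy illustration of the wrap-around point: with a "flat" distance metric, the extreme IDs sit at the edges of the address space and only have neighbors on one side, while a circular metric treats all IDs symmetrically. Note this is NOT the real Kad metric (Kademlia-style DHTs use XOR distance); it's just a sketch of the address-space geometry described above.

```python
ID_BITS = 8                # tiny address space for the demo
ID_SPACE = 1 << ID_BITS    # 256 possible IDs

def flat_distance(a: int, b: int) -> int:
    """Distance on a line: IDs 0 and 255 sit at the edges."""
    return abs(a - b)

def ring_distance(a: int, b: int) -> int:
    """Distance on a circle: IDs 0 and 255 are adjacent."""
    d = abs(a - b)
    return min(d, ID_SPACE - d)

def nearest_nodes(target, nodes, metric, k=3):
    """Return the k nodes closest to `target` under the given metric."""
    return sorted(nodes, key=lambda n: metric(target, n))[:k]

nodes = list(range(0, ID_SPACE, 16))   # 16 evenly spread node IDs

# Under the flat metric, node 0 only "sees" neighbors on one side...
assert nearest_nodes(0, nodes, flat_distance) == [0, 16, 32]
# ...while the ring metric gives it neighbors on both sides of the wrap.
assert nearest_nodes(0, nodes, ring_distance) == [0, 16, 240]
```

Without the wrap, everyone searching for "nearby" keys at the low end of the space converges on the same few edge nodes, which matches the self-DDoS effect described above.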

Talked about WebRTC-based P2P networks which would run in the browser as an HTML5 app, just like any other code on a web page. This would allow CDN networks which scale on demand and so on, as well as building applications like OpenBazaar without traditional servers or client "software". I'm sure someone is already working on something awesome like this. Actually, I've been contacted by a few guys talking about these things, but I really can't tell you any details of their new projects.

It's interesting to see if MegaChat can cash in on these promises. Because Mega is already doing JS encryption and communicating with servers using an HTML5 client, it's probable that they'll use WebRTC and related technology to make a true in-browser chat client. There are also many in-browser video chat apps, so that's nothing new either. Unfortunately, browser support can be spotty at times.

Once upon a time, we had a problem. An application crashed repeatedly. Customers were upset. Developers said there was nothing wrong with the app. After investigating the problem, I found out that it was some kind of strange UI-related issue: if the application was the ONLY application running with a UI, then at times it would hang the whole system. I fixed this by starting Notepad with do-not-close.txt in the background when the app was launched. This "fixed" the problem. It took over two years before the same matter came up via someone else. I just told them what I did and that there's "nothing wrong with it"; it's just how things are. Lol.

HTTPS / SSL for everyone. Let's Encrypt - If you don't know what this is, it's better to check it out.

Is the web dying or not? - I personally love the web and hate apps, due to the many security, platform and performance problems and so on. So if there are two versions of the same application, I would prefer the one which runs in the browser. - Thanks

I've found that most programmers simply do not have any understanding, or even concept, of productivity. What are the things that should be done? How should they be done? What's the ROI for the project, and if systems should be run in-house or outsourced, how expensive can OPEX be compared to CAPEX, and so on. I just wrote about how they don't understand even how computers work, so how could they understand how business works? No self-guidance or self-directed thinking. What could I do today which would most benefit this organization?

I personally like giving people large, abstract tasks and then leaving them alone. If they're not actively getting it done, they won't get it done, and that's it. The task usually includes finding things out and thinking about what should be done and how. But as we know, that's all way too complex for many people. If you don't trust your employees, you shouldn't hire them in the first place?

Studied the environmental impact of future nanotechnology. This is something which remains to be seen in the years to come. I hope we don't make the same mistakes we made with radioactivity and many chemicals... Though I assume humanity is going to make just those mistakes, unfortunately. The future of nanotechnology http://en.m.wikipedia.org/wiki/Nanotechnology is almost limitless? Is it going to be a bigger or smaller change than genetic engineering? https://en.wikipedia.org/wiki/Genetic_engineering I don't know; in the short term GE/GMO could have the larger impact, but in the long term? Maybe nanotechnology has more possibilities.

Quantum Dot Display - This is exactly what we've been waiting for, even if people got scammed by "LED TVs" which aren't actually LED TVs at all; those are LCD TVs with an LED backlight. A real LED TV doesn't require an LCD at all. Actually, QDDs are just one way to make LEDs, known as QD-LED. Also checked out Quantum Dots.

Google isn't what it seems (?) - This is a good question; no comments. But as said, the cloud isn't inherently bad, though of course you have to think about what stuff you're directly pushing into some of these higher-level "mass surveillance" systems.

A nice post about software testing; I completely agree with it. It's just so common to see that a few of the most-used features have been lightly tested and whoa, all tests passed, even though most of the program hasn't been tested even lightly, not to mention thoroughly. A great example of why automated test suites should themselves be automatically checked for coverage.

I should write something about cryptography, P2P, distributed systems, etc. What could be done? What would be interesting, what would be beneficial to the whole world? - I don't know right now, but if you have some ideas, just contact me.

Reminded me about static program analysis. Unit testing, technology testing, system testing, mission/business testing. I currently do unit testing to test code at a small level. Then I usually have some system-level testing. And if problems are suspected, I do business-level full end-to-end testing: auditing the whole data path and the results from the original source to the final destination. Often these tests are based on carefully crafted test cases and checking the final end results on the output system, so all automated processing steps are covered, even if that involves multiple different systems and processes, even potential manual processing steps.

Often these high-level, complex tests are done when a system is introduced into production or when major changes are made. Or, if there are reliability or data corruption issues, additional automated production monitoring can be implemented in parallel. It skips all the internal details of the complex process and only compares what was fed in and what came out. Do those results match what we expected to get? This can be a hard task when multiple systems are tightly integrated and something goes wrong, without any errors, and nobody knows what the problem is. 1000000€ went in daily, and for two days of the year 999999,98€ came out. These sums can be formed from 100k+ individual transactions that went in. But where's the problem? All unit tests pass. Uhh, business as usual. Due to multiple different logics, it can be really hard to even find out which subsystem caused the skew.

Sometimes I just add debug code to the main project, and at times I'll write a completely separate "check program" which uses a different approach to get the same results. That way, if I've got some hidden logic problems, they won't affect the check program at all. Of course, it's preferable that the check program is written by a different coder, using a different programming language, and so on.
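A minimal sketch of that kind of independent "check program": instead of tracing internals, it only compares what was fed in with what came out, per day, using exact decimal arithmetic. The transaction format and numbers here are invented for the demo.

```python
from collections import defaultdict
from decimal import Decimal

def daily_totals(transactions):
    """Sum transaction amounts per day. Decimal avoids float rounding skew."""
    totals = defaultdict(Decimal)
    for day, amount in transactions:
        totals[day] += Decimal(amount)
    return totals

def reconcile(fed_in, came_out):
    """Return the days where the input and output totals don't match."""
    ins, outs = daily_totals(fed_in), daily_totals(came_out)
    return {day: (ins[day], outs.get(day, Decimal(0)))
            for day in ins if ins[day] != outs.get(day, Decimal(0))}

fed_in   = [("2014-12-30", "500000.00"), ("2014-12-30", "500000.00"),
            ("2014-12-31", "1000000.00")]
came_out = [("2014-12-30", "1000000.00"),
            ("2014-12-31", "999999.98")]   # two cents vanished somewhere

print(reconcile(fed_in, came_out))
# {'2014-12-31': (Decimal('1000000.00'), Decimal('999999.98'))}
```

The point is that this check shares no logic with the system under test, so a hidden bug in the main pipeline can't also hide in the checker.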

LXC Linux Container Security - A good post. I agree with them; like I said, I prefer LXC over full virtualization due to its much lower overhead.

It's just interesting to see how all this password and credit card mess ties back to the very same principles and fundamentals: password / credit card number reuse and protecting against it, authenticating parties, verifying the payment, etc. All of this could also easily be done using basic GnuPG. I'll receive your signed token to pay 5€, and I'll sign it with my key, and now you've got my token worth 5€. When you forward it to your (or my) bank, you'll get the money. Really simple, so why do these simple things seem almost impossibly complex at times? We could even get similar results using challenges, without public key crypto.
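A toy version of that signed-payment-token flow, in the "without public key crypto" variant: each party shares a secret with the bank and MACs the token. A real implementation would use GnuPG signatures as described above; the party names, key handling and token format here are all invented for illustration.

```python
import hashlib
import hmac
import json

def mac(key: bytes, data: bytes) -> str:
    return hmac.new(key, data, hashlib.sha256).hexdigest()

# Secrets each party shares with the bank (in reality: keys, not literals).
bank_keys = {"alice": b"alice-bank-secret", "bob": b"bob-bank-secret"}

# Bob creates a payment request for 5 EUR and MACs it with his bank secret.
request = json.dumps({"to": "bob", "amount": "5.00", "nonce": "n-1234"})
bob_tag = mac(bank_keys["bob"], request.encode())

# Alice approves: she MACs (request + Bob's tag) with her own bank secret.
alice_tag = mac(bank_keys["alice"], (request + bob_tag).encode())

def bank_accepts(request: str, bob_tag: str, alice_tag: str) -> bool:
    """The bank recomputes both MACs; it pays out only if both verify."""
    if mac(bank_keys["bob"], request.encode()) != bob_tag:
        return False
    return mac(bank_keys["alice"], (request + bob_tag).encode()) == alice_tag

assert bank_accepts(request, bob_tag, alice_tag)
assert not bank_accepts(request, bob_tag, "forged" + alice_tag)
```

The nonce in the request is what prevents the same 5€ token from being deposited twice, which is the reuse problem again.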

In some cases, hashing and encryption are basically the same thing, as with the Threefish cipher. If a hash is good enough, its output can be used directly for encryption without a separate cipher. Of course hashing and encryption serve different purposes, but at least in theory they're directly interchangeable: input, with key, produces something from which you can't easily derive the key or the input. Using sha256(key + ctr) XORed with the data might be a terrible cipher in reality, but at least in theory it should be OK. DISCLAIMER: Don't use it; consult professionals. This is just my random-thoughts blog.
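The sha256(key + ctr) idea above is small enough to sketch directly, keeping the author's own disclaimer: this is a thought experiment in using a hash as a stream cipher, NOT something to use; real systems should use an audited cipher.

```python
import hashlib

def keystream(key: bytes, nbytes: int) -> bytes:
    """Generate nbytes of keystream as concatenated sha256(key || counter)."""
    out = b""
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(b"secret-key", msg)
assert ct != msg
assert xor_cipher(b"secret-key", ct) == msg   # round-trips back to plaintext
```

As with any CTR-style stream construction, reusing the same key for two messages reuses the keystream and leaks the XOR of the plaintexts, one of the many reasons this stays a toy.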

SSLsplit http://www.roe.ch/SSLsplit - A nice tool for transparent SSL interception. Of course there have been devices and software doing this earlier, but this is a nice implementation and free to use.

LinkedIn Leaks - This is exactly why I've been asking browser vendors to provide PER-TAB security. My request to browser vendors: "Web browsers should allow an optional mode where each tab runs as its own isolated environment. It would efficiently prevent the data leakage which is way too common with all current browsers. Basically, data leakage is a basic feature of all current browsers, and there's not much you can do about it."

New kind? No, not really new at all. I've been very aware of this for as long as I've been using the net. Most people who claim that email encryption would help, or a secure chat app or something, mostly fail to consider the meaning of metadata. A sociogram of your contacts already tells a lot about you. Communication patterns are also very important, as everybody should know. At least the SIGINT people have known it for ages: who receives messages and when, and whether those messages are forwarded, gives you good visibility into the chain of command, even if you don't know the content of the messages.

How POODLE Happened - POODLE SSLv3 stuff, discussion about it, etc. It's just interesting to see how long it takes before TLSv1 is found to be leaky too.

My Ubuntu 64-bit / Nvidia + Intel display adapter driver setup still won't work. The issue has been getting "fixed" for months. The problem has changed a little during this time, but alas, it still doesn't work. First everything worked; then, after one upgrade, the xorg.conf file started getting deleted on every boot. Then they fixed that. Now the file's content just gets reset on every boot. This nice feature practically prevents using all four screens on the computer and drops me to a two-screen-only configuration using the Intel driver. So, so annoying. Ahh... Nobody knows when a working fix will be delivered.

Compared 10 cloud service providers very carefully, created complex comparison Excel (LibreOffice Calc) sheets, etc. The price differences are really staggering. How do you know you're getting an overpriced service? Well, if they're willing to negotiate the prices, you'll immediately know they're ripping you off. If prices are low and already visible on their pages, yet the services are large-scale and reliable, that's what you should choose. Many asked how the expensive companies can even survive when others are 75% cheaper. That's a good question. I guess many people don't bother to do a proper comparison between cloud services and buy from these heavily overpriced providers, because they might be local or offer "exceptionally good service". Afaik, 100% automation is what I want. I don't want "great service" from someone who isn't reachable 24/7, or is on vacation, or simply messes things up, duh. But it seems that some companies are much more old-fashioned than others on this spectrum.

Sorry if there are typos and stuff. I'm just trying to dump my backlog as soon as possible. Now there are "only" 899 items left.