
Everything posted by lattera

Due to some unfortunate incidents with the administrators of CryptIRC, BinRev has moved its official IRC server. You will need to re-register your handles and channels. Work is still being done on the IRC server, including stats and kwotes. Please be patient as we work to resolve any bugs and add necessary features. The new server is located at irc.binrev.net. You can join the same channel as always (#binrev). Please start making the transition off CryptIRC and onto irc.binrev.net. Thanks for your patience and for all you do to further the community.

I'll be getting in on the 3rd and leaving the 8th. I'm speaking on the 5th at 3pm about runtime process infection; the talk is called Runtime Process Insemination. On Twitter, I'm @lattera. My Google+ profile can be found at http://0xfeedface.org/+

What is it that you're trying to find? I'm willing to bet no one is going to give up the password for their MSDN account.

> I need an Admin MSDN account. I will buy if need be. Thanks.

Thread closed and account banned due to illegal activity and requests thereof.

Virtualization is great for developers. It allows us to test different scenarios, keep organized, and maintain a safe and sane environment. I only develop inside VMs; I hate cluttering my main OS install with non-production-ready code, especially when I'm dealing with touchy things like the kernel. Virtualization in the enterprise allows for server consolidation, cloud hosting, failsafes, etc. I use virtualization heavily at work: multiple computers, each running multiple VMs, make up my vuln-dev lab. If virtualization weren't an option, my employer would have to provide me with over ten physical servers. However, virtualization isn't the be-all and end-all. Sometimes you need to test your project on real hardware or in real-life situations. As with all decisions, evaluate your needs and see if virtualization is a good option.

I tend to use the OS that fits the job best. On my laptop, I run OSX. On my workstation at work, I use Solaris 11 Express. In my vuln-dev lab, I use a mixture of Linux, Windows, and Solaris. I'm partial to Solaris because of ZFS, DTrace, Xen, and Crossbow.

Hacking is very much alive. Take a look at full-disclosure. Take a look at the industry. I would be considered a whitehat hacker--I get paid to hack (legally, of course). I think you just need to know the scene. The scene is much broader these days, encompassing everyone from script kiddies who somehow get their hands on 0days to very talented individuals. You'll find varying degrees of expertise and maturity in all hacking communities. It's definitely hard to pinpoint a definition of hacking. Is it merely finding vulnerabilities and writing exploits? Is it using developed exploits against others for profit or fame? Is it limited to the digital world? I'll leave the definition up to you; but suffice it to say that whatever hacking is, it isn't dead.

My employer's flagship product generates thousands of PDFs. We have three copies of our product: a development copy for Quality Assurance testing of recently written code, a staging copy for testing the push of a new version, and a production copy that our users utilize. Each copy requires 42GB worth of PDFs, and new PDFs are generated every day. To maintain a sane development environment, we pull fresh copies of the PDFs from production every month: first from production to staging, then from production to development. The PDFs are stored on two separate servers that both run NTFS, and we use Microsoft SyncToy to sync the PDFs across environments. The process can take several hours for each environment, and the network load is high because the PDFs are stored on multiple servers.

I recently had an idea: what if we store the PDFs on our ZFS NAS? We could use ZFS snapshotting and rsync to refresh the environments, and we could do that on a regular basis via a cron job. ZFS snapshots take a few seconds, rsync is a really efficient tool, and no network traffic is involved since the synchronization takes place entirely on the same server. Here are the commands we would run:

DATE=`date '+%F_%T'`
zfs snapshot tank/site_data/prod/PDFs@$DATE
rsync -a /tank/site_data/prod/PDFs/.zfs/snapshot/$DATE/ /tank/site_data/dev/PDFs/

I really like this solution. Right now, we have to jump through a lot of hoops to sync up these PDFs. This will save us time, space, and internal bandwidth.

This article was originally posted on my tech blog.
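The commands above could be wrapped into a cron-able script that refreshes both non-production copies from one production snapshot. This is a minimal sketch, assuming the dataset layout from the post; the staging path (tank/site_data/staging/PDFs), the DRY_RUN guard, and the run helper are my additions for illustration, not part of the original workflow or of zfs/rsync.

```shell
#!/bin/sh
# Sketch of a monthly PDF refresh using a ZFS snapshot plus rsync.
# Defaults to dry-run (prints the commands) so it is safe to read and
# test as-is; set DRY_RUN=0 to actually execute. The DRY_RUN knob is a
# hypothetical convenience, not a zfs or rsync feature.
DRY_RUN="${DRY_RUN:-1}"

# Either execute a command or echo it, depending on DRY_RUN.
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Timestamped snapshot name, e.g. 2011-08-01_03:00:00.
DATE=$(date '+%F_%T')

# One snapshot of production serves both refresh targets.
run zfs snapshot "tank/site_data/prod/PDFs@$DATE"

# rsync out of the snapshot's hidden .zfs/snapshot directory into each
# non-production copy (staging dataset name is an assumption).
for env in dev staging; do
    run rsync -a "/tank/site_data/prod/PDFs/.zfs/snapshot/$DATE/" \
        "/tank/site_data/$env/PDFs/"
done
```

Snapshotting once and rsyncing twice keeps both environments consistent with the same point in time, and the trailing slash on the rsync source copies the snapshot's contents rather than the directory itself.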

It really depends. If the project is just a hobbyist tool meant to solve a little problem, I likely won't write documentation; I'll let the code document itself. However, if the project is meant to be more serious, I'll document it both inside the source, through comments, and through external API documentation. If the code is obscure but meant to be reused within a few years, I'll likely just comment the code.