Security in Industry and Academia

Welcome to my information security blog. I hope the information I publish and the comments I provide can offer some insight, for better or worse, into current industry trends, technologies, and innovations.
One of the purposes of this blog is to encourage creative and constructive dialogue, so feel free to comment. If you do, please provide your name.
If you have any feedback or would like to contact me offline, don't hesitate to email me: mike[@]cloppert[.]org

Let's Enable Cloud Computing (2010-11-22)

I've been thinking a lot about "cloud computing" over the past few months, and I keep coming back to the same conclusion every time: the InfoSec community is inhibiting IT innovation by throwing up weak, largely unsubstantiated concerns over the security risks of "cloud computing." Overall, our industry's reaction smacks of "fear of the unknown." [1]<br /><br />After some research[2][3][4][others], I've found that most security-related arguments against cloud computing qualitatively fall into one of the following risks, in no particular order:<br /><ol><li><span style="font-weight: bold;">Context-hopping.</span> A compromise of one virtual environment may facilitate access to another virtual environment. This is a technical risk.<br /></li><li><span style="font-weight: bold;">Supervisory control.</span> A compromise in a virtual environment may lead to an "escape" from that environment to the supervisory process that controls it and other environments. Together with #1, these are also called "VM Escapes." This is a technical risk.</li><li><span style="font-weight: bold;">Inferential data loss.</span> Others could make inferences about your environment by inspecting their own (resources available, etc.). This is a technical risk.<br /></li><li><span style="font-weight: bold;">Change management.</span> Virtual environments can be changed rapidly, meaning a possible loss of control. This is a procedural risk.</li><li><span style="font-weight: bold;">Role confusion.</span> Virtual environments, being controlled by different actors at different layers, may lead to confusion about important task execution (think: backups).
This is a procedural risk.</li><li><span style="font-weight: bold;">Forensics.</span> Virtual environments may complicate or limit forensic investigations and e-discovery. This is a technical risk.</li><li><span style="font-weight: bold;">*Control.</span> In outsourced situations, loss of control of the underlying hardware and supervisory process externalizes certain risk-introducing actions like misconfigurations. It may also inhibit validation of controls at lower levels of the software or hardware, and outsiders have administrative access to the underlying environment. This is an implementation risk.</li><li><span style="font-weight: bold;">*Data location.</span> In a virtual environment, the location of data at any given point is uncertain, with possible legal or export control implications. This is an implementation risk.</li><li><span style="font-weight: bold;">*Privacy.</span> In outsourced scenarios, another entity dictates the conditions and depth of law enforcement cooperation. This is an implementation risk.</li><li><span style="font-weight: bold;">*Continuity.</span> Hosting infrastructure on another company's servers could be at risk if that company folds or experiences other stability issues. This is an implementation risk.<br /></li></ol>I've marked the risks exclusive to outsourced cloud services with an asterisk.<br /><br />Let's focus on the risks that impact all implementations of cloud computing; that is, items 1-6. To be blunt, the only one that deserves special attention is [6] Forensics, because of the loss of the often-invaluable unallocated space on a disk or in memory. Every single one of the technical risks [1]-[3] is already accepted by organizations at the network layer: this includes VLANs, MPLS tagging, and other network abstractions we have been using for years. I've yet to hear an argument as to why we should treat virtualization on the host any differently than we do on the network for these risks.
Procedural risks [4] and [5] already exist in production environments, and should already be managed by established processes and organizational responsibility. If these are issues for cloud computing, they're issues for the broader IT organization; if nothing else, they are neither unique nor limited to the cloud.<br /><br />Looking at the other half of our risks, again we see risks that are either already accepted or not specific to cloud computing, with the exceptions of privacy and possibly data location. Organizations with this concern, however, can easily work with their provider to manage the privacy risk, and I'm not convinced that the data location issue is a problem - after all, packets are routinely routed around the world irrespective of the export status of their content. In any case, it's likely that this is easily addressed as well. [7] and [10] are already accepted risks at the network layer for any organization with a WAN managed by an ISP.<br /><br />In contrast, I'm going to provide a few reasons cloud computing could actually <span style="font-style: italic;">help</span> security, if properly implemented.<br /><ol><li><span style="font-weight: bold;">Intrusion detection.</span> The supervisory process is a place where all network and host activity can be monitored from a single vantage point. This holds great promise for intrusion detection and behavioral analysis by exposing far more data than could be afforded previously.</li><li><span style="font-weight: bold;">Compliance monitoring</span>. User activity could easily be monitored across multiple systems and applications. Restrictions on where data resides could similarly be implemented across systems easily (think: DRM).</li><li><span style="font-weight: bold;">Availability</span> (yes, it is a security concern). Redundancy and rapid recovery become far more affordable.</li></ol>That's just off the top of my head.
Of course, with some careful thought and collaboration with virtual machine vendors, other opportunities are likely to arise. However, if our industry takes a "no" stance in spite of the lack of any appreciable risk increase, we will be cut out of this evolution and lose valuable opportunities to turn cloud computing into a benefit rather than a cost from a security perspective.<br /><br />I find it appropriate that the iconic security object is a firewall, because this is how most security professionals think. The classic InfoSec mindset is that of a gateway: a veto-holding, non-voting member of the IT community. The correct role, in my opinion, is as an active participant in technical innovation, architecture, and the engineering process, making sure requirements are met in a way that balances risk with cost - not one that eliminates risk at extraordinary cost. Compliance and auditing are my key suspects in holding us back from this goal, but that's an argument I'll save for another day.<br /><br /><span style="font-weight: bold;">References<br /></span><ol><li>CNET - Risks outweigh rewards, according to most professionals: http://news.cnet.com/8301-1001_3-20001921-92.html<br /></li><li>Lenny Zeltser's blog: http://blog.zeltser.com/post/1525310925/top-ten-cloud-security-risks</li><li>InfoWorld, quoting Gartner: http://www.infoworld.com/d/security-central/gartner-seven-cloud-computing-security-risks-853</li><li>NYTimes op-ed by Jonathan Zittrain: http://www.nytimes.com/2009/07/20/opinion/20zittrain.html?_r=1<br /></li></ol>

Why there shouldn't be a dot-secure (2010-10-01)

A few days ago, Cyberwar Chief Gen. Alexander <a href="http://www.nytimes.com/2010/09/24/us/24cyber.html">proposed</a> building a separate, secure network for the nation's critical infrastructure.
By now, this has been widely derided by many security specialists, but I wanted to throw my hat in the ring with a few comments.<br /><br />Separation is an effective control in theory. But one chronic problem our industry suffers from is "ivory tower" syndrome, with decisions divorced from reality. This proposal is an example.<br /><br />SIPRnet is an example of where separation has effectively mitigated risk. The DoD's network is largely isolated and, as a result, has mitigated risk that internet-connected networks experience. Notice how I said "mitigated," not "prevented." Security is about risk management, not risk elimination.<br /><br />The problem with separation comes in the form of exceptions and enforcement. The more exceptions and the weaker the enforcement, the less effective the separation, and the less risk mitigation. The diminishing role of the firewall as an effective security device is a stark example of this.<br /><br />Think of this in terms of "meatspace": the Great Wall of China, the Berlin Wall, the Maginot Line - all were colossal failures at their stated goals. Additionally, the massive investment of resources in their construction and maintenance detracted from other, more effective strategies, amplifying their detrimental impact. Yet island nations such as Britain, with a <span style="font-weight: bold;">complete</span> water barrier, have enjoyed the security benefits of this isolation throughout their history.<br /><br />The general's proposal is a fool's errand. I would say the same about an isolation regime covering only the defense industrial base and the DoD, given the interconnectedness and overlap of those networks. What he proposes is a geometrically larger problem, with corresponding increases in the need for exceptions and the difficulty of enforcement. The exceptional cost of such an approach could not possibly justify the resultant risk mitigation, IMO.
That amount of money would go much further in mitigating risk by investing in broadly-adopted and linked authentication mechanisms, secure DNS, counterintelligence, and cross-industry, threat-focused network defense.

Why my Twitter Feed is Hilarious (2010-07-07)

...or, the yes-huh, nut-uh of "cyberwar":<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_yh9qMmyzuAU/TDU2wklecZI/AAAAAAAAAFo/ifwUhANhfEU/s1600/Picture+6.png"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 248px;" src="http://4.bp.blogspot.com/_yh9qMmyzuAU/TDU2wklecZI/AAAAAAAAAFo/ifwUhANhfEU/s400/Picture+6.png" alt="" id="BLOGGER_PHOTO_ID_5491355528730669458" border="0" /></a>

Security Academia: Stop Using Worthless Data (2010-06-06)

<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_yh9qMmyzuAU/TAwg-M2WGQI/AAAAAAAAAFY/xzdW9empYdY/s1600/bad_data.png"><img style="float: left; margin: 0pt 10px 10px 0pt; cursor: pointer; width: 200px; height: 154px;" src="http://1.bp.blogspot.com/_yh9qMmyzuAU/TAwg-M2WGQI/AAAAAAAAAFY/xzdW9empYdY/s200/bad_data.png" alt="" id="BLOGGER_PHOTO_ID_5479791099576195330" border="0" /></a>I have a new litmus test that I use to help me vet the many intrusion-detection-related academic papers that come across my desk. I call it the "relevant data test." If your approach does not study relevant data, I will not read it. You may indeed have found a new way to leverage Hidden Markov Models in some neat heuristic, layered approach. I do not care.
However novel or precise your approach may be, its applicability is predicated on the relevance of your data. You may as well have found a new way to model the spotting of a banana as it ripens if your data has nothing to do with intrusions in 2010.<br /><br />It's time to wake up, folks. A 10-year-old data set for intrusion detection is utterly worthless, as your conclusions will be if you use it. I will never again read further than "benchmark KDD '99 intrusion data set." There is no faster way to communicate to an informed audience that you just don't understand intrusions than by analyzing data that is this old. Such attacks are generations behind those that modern network defenders face today. Understand this: you are solving the problems exemplified by your data set. If your data is 11 years old, so is your problem, and your solution is only as effective as that problem is relevant. Few, if any, attacks from 1999 are relevant today.<br /><br />Make no mistake about it, I understand the researcher's lament! There is no modern pre-classified data set like those relics of careers gone by. Finding a good corpus is excruciatingly difficult. But in legitimate, scientific, empirical studies, this is absolutely no excuse for using irrelevant data. In fact, without first establishing the relevancy of ANY data set, even those used in the past, one's findings fall apart.<br /><br />To pick but one example, in the last two issues of IEEE Transactions on Dependable and Secure Computing, two of the three IDS-related articles based their findings on data sets that are 7 or more years old. This is emblematic of why so much research is ignored by industry, and why that which isn't often falls flat in practice. If I were an editor of that periodical, which I have been reading for quite some time, I would have rejected outright nearly every intrusion detection paper submitted in the last 3 years on this basis alone.<br /><br />The data commonly considered the "gold standard" by academics has not been relevant for at least half a decade. Research done in that period whose findings relied on data from 2001 and prior is not in any way conclusive, in my professional opinion.

Spy Museum opens FUD exhibit (2010-04-28)

<a style="" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_yh9qMmyzuAU/S9hWdxl_edI/AAAAAAAAAFM/xnYzx-zN8p8/s1600/spymuseum.PNG"><img style="float: left; margin: 0pt 10px 10px 0pt; cursor: pointer; width: 200px; height: 35px;" src="http://1.bp.blogspot.com/_yh9qMmyzuAU/S9hWdxl_edI/AAAAAAAAAFM/xnYzx-zN8p8/s200/spymuseum.PNG" alt="" id="BLOGGER_PHOTO_ID_5465213217342978514" border="0" /></a>It is really bothersome to see a museum as popular and, until recently, as esteemed as the <a href="http://www.spymuseum.org/">Spy Museum</a> open an <a href="http://www.spymuseum.org/weaponsofmassdisruption/">exhibit</a> pandering to fear. In the two-sentence description, a "cyber attack" is compared to Pearl Harbor, immediately discrediting anything that might be contained therein. Disturbingly, this analogy is made by Richard Clarke, someone with serious pull in matters of national policy. Such ludicrous hyperbole may make the museum some serious coin, but it sets back understanding of real-life CNA and CNE issues, the balance between them, and their practical use in modern society and warfare.
The result will be misplaced priorities among the decision-makers these visitors vote for, poorly-invested research and defense dollars, and, if left unchecked, economic, military, and intelligence disadvantages on the world stage. Like the CNN-broadcast "Cyber Shockwave," the only thing missing from this exhibit is an <a href="http://www.amazon.com/Live-Free-Die-Hard-Unrated/dp/B000VNMMR0/ref=sr_1_6?ie=UTF8&amp;s=dvd&amp;qid=1272468830&amp;sr=8-6">F-35, Bruce Willis, and the "I'm a Mac" guy</a>.<br /><br />An exhibit headline, visible on the museum's website, reads "If cyber spies break America's security codes, could power lines turn into battle lines?" A better question is "who is the curator, a 16-year-old World-of-Warcraft gamer?" On second thought, even a pizza-faced teen would probably know this doesn't make one bit of sense.<br /><br /><span style="font-weight: bold;">Update<br /></span>A description of the <a href="http://www.washingtonian.com/blogarticles/Arts%20&amp;%20Events/afterhours/13820.html">phear</a>. Sadly, it's recommended as something to do. And believe.<span style="font-weight: bold;"><br /></span><blockquote><span style="font-style: italic;">It’s a frightening thought—and an exhibit that, for better or worse, is designed to imbue its viewers with the <span style="font-weight: bold;">reality</span> of that <span style="font-weight: bold;">fear </span>as well as <span style="font-weight: bold;">educate </span>them.
This is the kind of thinking that led to an extra gift, tucked into the Spy Museum’s Field Guide to Asymmetrical Warfare and passed out at the reception: a flash drive.</span></blockquote><br />(Emphasis my own)

TL;DNT: Academia and industry are both failing (2009-12-31)

<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_yh9qMmyzuAU/Sz0nFd4jx1I/AAAAAAAAAFE/IuIiUnCu1kc/s1600-h/fail-owned-car-security-fail.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 200px; height: 150px;" src="http://4.bp.blogspot.com/_yh9qMmyzuAU/Sz0nFd4jx1I/AAAAAAAAAFE/IuIiUnCu1kc/s200/fail-owned-car-security-fail.jpg" alt="" id="BLOGGER_PHOTO_ID_5421532501298628434" border="0" /></a><span style="font-style: italic;">(Too long, did not tweet) I think this is more applicable to my personal blog on industry and academia anyway.</span><br /><br />On the cusp of 2010, the state of information security in our society can only be described as a mess. I've come to the conclusion that my career path will now and forever be an effort to bring more of the science of computing to security in practice (severely lacking now), and more of the reality of security to academia (also severely lacking now). This is at the heart of our mess, and will also be the solution to it. Few to no tenure-track professors at accredited universities have real-world experience.<br /><br />Academic papers are written around decade-old problems, using decade-old data sets, demonstrating a decade-old mindset and ignorance of the volatility of security in practice.
There are few models - even fewer that are relevant - and little agreement on terminology as fundamental as risk, threat, and vulnerability.<br /><br />Industry makes risk decisions with scant or no objective data, builds models on subjective criteria, suffers from <a href="http://www.businessweek.com/magazine/content/06_08/b3972034.htm">physics envy</a>, and is often totally incapable of performing analysis that adheres to the scientific method. In some cases, industry still fails to recognize that security is risk management, as evidenced by the all-too-common requests for ROI to justify security spending. I've seen nearly every word in the English language prefixed by "cyber-" in the last 24 months, simply because it's a buzzword. It's so overused that I cringe the few times I have to say it, and the hype risks an overcorrection in the coming years that will back-burner the issues at hand, or water them down with gimmicks and sales pitches to the point where serious concerns in need of resolution are met with the eye-rolling more appropriately reserved for notions such as "cyber Katrina" or "cyber 9/11."<br /><br />The US now has a "cyber security czar," virtually ensuring failure of public policy just as we've seen with most other "czars" (how's that war on drugs going?). Policymakers don't realize that electronic espionage is just as serious as, if not more serious than, traditional methods of espionage. No agreement has been reached on how conflicts (espionage and outright aggression) escalate beyond the internet into the real world, despite their having very serious real-world implications in and of themselves. We are not holding to account countries that tacitly or explicitly permit attacks against our critical infrastructure, ensuring those attacks continue for lack of any risk associated with them.
Open dialogue is taking place, but only on the most greatly exaggerated, dated, or unlikely risks, reducing national information security strategy to the same level of effectiveness as airline security.<br /><br />I normally don't like rants without solutions, so for that I apologize. Maybe I'm just in a bad mood. At the risk of reducing all these problems to one oversimplified solution, I strongly feel that bringing academia and industry closer together in how they approach information security issues is the only way to begin to fix most of these problems.

A song for the season (2009-12-17)

Enjoy. Thanks to my coworker Roger for the assist.<br /><blockquote>On the 12th day of Christmas, my CIRT did find for me...<br />12 users clicking<br />11 hackers hacking<br />10 sites cross-scripting<br />9 drives receiving<br />8 gigs a-taken<br />7 widgets stolen<br />6 passwords broken<br />5 forged emails,<br />4 PDFs,<br />3 word docs,<br />2 hyperlinks,<br />...
and a hole in Adobe new-Player<br /></blockquote>

Speaking at 2010 DC3 Cyber Crime Conference (2009-11-09)

<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_yh9qMmyzuAU/SvjVb-p8lPI/AAAAAAAAAE8/mnK0glMopBI/s1600-h/2010_dc3.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 200px; height: 44px;" src="http://2.bp.blogspot.com/_yh9qMmyzuAU/SvjVb-p8lPI/AAAAAAAAAE8/mnK0glMopBI/s200/2010_dc3.png" alt="" id="BLOGGER_PHOTO_ID_5402302429683029234" border="0" /></a>I'm happy to share that a presentation of mine has once again been selected for the DC3 Cyber Crime Conference, held in St. Louis at the end of January 2010. I'm very excited to be speaking again. You can read about my past presentations <a href="http://blog.cloppert.org/2006/10/2007-department-of-defense-cybercrime.html">here</a> and <a href="http://blog.cloppert.org/2007/12/2008-dod-cybercrime-convention.html">here</a>. If you're planning to attend, I'd love for you to drop by on Thursday from 1:30-3:30PM.<br /><br /><span style="font-weight: bold;">Intelligence-driven Response for Computer Network Defense<br /></span><span><span style="font-style: italic;">Abstract</span></span><span style="font-weight: bold;"><span style="font-weight: bold;"><br /></span></span><blockquote>Network defense against sophisticated adversaries requires a different approach from the one the information security industry typically prepares its analysts for. From the overarching incident response process down to the specific questions each analyst must be able to answer, classic incident response techniques and procedures are insufficient in the face of persistent and focused intrusion attempts.
A detailed understanding of one’s specific enemy is a concept overlooked in industry-standard information security pedagogy and mindset, yet one which can offer strategic, actionable insight into effective response. This presentation extends some information warfare concepts to discuss how intelligence-driven analysis and response can improve the defensive posture of organizations facing advanced persistent threat actors. Examples will be given at the micro and macro level; attendees should be technically well-versed as well as able to see the “big picture” of computer network defense.<br /></blockquote>

Speaking at SANS CDI (2009-11-09)

<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_yh9qMmyzuAU/SvjLkLUhgQI/AAAAAAAAAE0/at89S556CxY/s1600-h/sans_cdi.png"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 140px; height: 114px;" src="http://4.bp.blogspot.com/_yh9qMmyzuAU/SvjLkLUhgQI/AAAAAAAAAE0/at89S556CxY/s200/sans_cdi.png" alt="" id="BLOGGER_PHOTO_ID_5402291575405510914" border="0" /></a>I will be participating in four separate events at <a href="http://www.sans.org/cyber-defense-initiative-2009/?utm_source=web&amp;utm_medium=text-ad&amp;utm_content=text-link_Featured_Link_Home_Page&amp;utm_campaign=CDI_East_2009&amp;ref=45748">SANS CDI</a> this year. While the panels aren't yet listed on SANS's website, they should be soon, and Richard Bejtlich has a good overview on his <a href="http://taosecurity.blogspot.com/2009/11/tentative-speaker-list-for-sans.html">blog</a>.
Specifically, I will be involved with:<br /><ul><li><span style="font-style: italic;">Commercial Security Intelligence Service Providers</span> as a moderator</li><li><span style="font-style: italic;">Noncommercial Security Intelligence Service Providers</span> as a moderator</li><li><span style="font-style: italic;">Unix and Windows Tools and Techniques</span> as a panelist</li><li><span style="font-style: italic;">CIRTs and MSSPs</span> as a panelist</li></ul>If you have budget left for the year, you should definitely check it out. It's going to be a great few days of material, paired with the usual selection of great SANS training.

Fighting the cyberwar hyperbole (2009-10-24)

Is it possible that a mainstream media outlet is finally starting to interview people who have a clue when <a href="http://fcw.com/Articles/2009/10/19/FEAT-DOD-cyber-warfare.aspx?Page=2">talking about</a> "cyber warfare?"<br /><blockquote><p>The concerns are real, but the concept of a digital Hurricane Katrina and similar doomsday theories might be embellished, said Jim Lewis, director and senior fellow at the Technology and Public Policy Program at the Center for Strategic and International Studies. “It’s really hard to derail a large country that has a lot of infrastructure,” he said. “People tend to exaggerate. I love the Bruce Willis movies, but that’s just not the truth.”</p> <p>Lewis said less dramatic but equally dangerous espionage and crime represent the true perils.<br /></p> <p>“How would you feel about China getting our designs for the F-35" stealth fighter jet? he asked. “What about those who rob U.S. banks over the Internet from Russia, with no chance of prosecution?
[Hackers] that are breaking into our systems to steal military secrets or prepare for potential sabotage…these are the real threats.”</p></blockquote>Well said, sir.

Blackhat 2009 Round-Up (2009-07-30)

<a style="" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_yh9qMmyzuAU/SnIniJ2eHiI/AAAAAAAAAEs/7lp6j08kjvQ/s1600-h/blackhat.PNG"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 200px; height: 86px;" src="http://3.bp.blogspot.com/_yh9qMmyzuAU/SnIniJ2eHiI/AAAAAAAAAEs/7lp6j08kjvQ/s200/blackhat.PNG" alt="" id="BLOGGER_PHOTO_ID_5364393573865102882" border="0" /></a>This being my first BH, it was interesting to experience the event juxtaposed against roughly a decade of accumulated impressions of what it is like. No doubt, BH 2009 is quite different from what it was back then. Nevertheless, it was a fantastic educational experience.<br /><br />In terms of event attendance, I appreciated for the first time the value of Twitter as a social situational-awareness tool. Following #blackhat inspired me to switch presentations on at least two occasions to see something better, and kept me abreast of the dynamic nature of peripheral events like happy hour gatherings, etc. It also helped me keep track of and share my thoughts on presentations as they happened - notes I'm happy to share with the public, and which allow me to summarize the event here.<br /><br />On to the presentations.
Below, I'll summarize my notes only on presentations that I feel I attended enough of to speak intelligently on.<br /><br /><span><span style="font-weight: bold;">Rod Beckstrom: Beckstrom's Law</span><br />I won't attempt to recreate Rod's <a href="http://valuenetworks.com/public/item/230074">law</a> here, but the gist of it is that the value of a network to an individual is the difference in cost between that individual performing an action without the network and with the network. His example was buying a book: if one could buy a book at a brick-and-mortar store for $26, but buys it online for $16, the value of the network for that transaction is $10. Extrapolating, the value of a network is the cost savings of all actions for all users of that network. It's an interesting academic exercise, but I do not really see its applicability even in microcosms of the internet or limited-scope environments, for two reasons: first, the notion of <span style="font-style: italic;">value</span> seems to be subjective in nature, making any derivative metric itself subjective. Second, and more indisputably, computing this value is an exponential evaluation, severely limiting the size of a "network" (however you may define it) that could be valuated.<br /><br />One argument Rod made in his talk was that the best investment we can make in security is to improve internet protocols. I disagree. The threats we face in 2009 are so far up in the application layer that internet protocols really aren't a serious risk by comparison. If we want to invest money, we need to invest it in areas that reduce the profit margin for the adversary, or increase their risk when they attack. This means a major shift in public policy, lobbying Congress and the presidency for sane, threat-driven measures to go after perpetrators, and financial backing for investigations (local, federal) and prosecutions.
There also needs to be more accountability on the part of software manufacturers, something the government could assist with as well, possibly via FTC incentives. These are "softer," more ambiguous investments than re-architecting protocols, but they will go much further in their effect.<br /><br /><span style="font-weight: bold;">Nathan Hamiel &amp; Steve Davis: Weaponizing the Web</span><br />Nathan and Steve spent a lot of time building up and pontificating about proper web design, but when they got to the meat of their presentation the material was quite valuable. The primary focus was on different ways to leverage cross-site scripting, with a heavy emphasis on <a href="http://en.wikipedia.org/wiki/Cross-site_request_forgery">Cross-site Request Forgery</a>. This is a technique I was unfamiliar with until their talk, and their discussion definitely piqued my interest. A lot of good work has been done lately on browser-based trust exploitation. I suggest everyone check out the work done on this modern twist on the browser trust issues first exploited with XSS. I will add as a critique, though, that the material could have been presented more clearly. Even with a pretty strong understanding of related exploits, I found their presentation hard to follow at times.<br /><br /><span style="font-weight: bold;">Nitesh Dhanjani: Psychotronica</span><br />I was a little worried about this presentation given its name, but I certainly was not disappointed. Nitesh's presentation was one of the most insightful and effective I've seen in a long time. In it, he discussed his research, based on open-source intelligence, into the relative "happiness" of people, using various words and contexts to quantify the overall attitude of, say, a blog entry. Nitesh then builds this technique into a tool which can digest tokenized social network entries to illustrate how satisfied or happy a person is over time.
In one stunning demonstration of this tool, he mapped the long misery of a man, married with a child. At one point in the timeline, the nature of the man's language changed for the positive, rather unexpectedly. Days after this behavioral change, the man killed his wife, child, and then himself. It was a shockingly poignant example of how attitudes can, in retrospect, amplify understanding of the behavior of individuals. There are many possible applications of this technique to OSINT in terms of known threat actors in our field - perhaps not in the dimension of happiness, but maybe financial status, busy-ness, or stress level, to name a few. If patterns of open-source intelligence can be established prior to certain security "events," then perhaps detection can be pushed into the reconnaissance phase of an attack in a very new way.<br /><br /></span><span style="font-weight: bold;" class="wht">Steve Topletz, Jonathan Logan &amp; Kyle Williams: </span><span><span style="font-weight: bold;">Global Spying</span><br />My mother always said "if you don't have anything nice to say..." I'll make an exception here. This was a tinfoil-hat presentation that made sweeping generalizations and rattled off 'facts' without a single citation, all to sell fear to the audience that their every move is being monitored by the government - an attitude that conveniently maps to <a href="https://xerobank.com/">their company's</a> business model of protecting your privacy. The cherry on top was giving everyone a free trial of their company's software, because of course you can trust a for-profit entity so much more than a democracy...<br /><br /></span><span style="font-weight: bold;" class="wht">Alessandro Acquisti:</span><span style="font-weight: bold;"> I just found 10 Million SSNs</span><br />I don't need to say much on this, as the beans were effectively spilled weeks ago.
I will say this was a fantastic presentation that clearly followed the scientific method to present and defend a theory using statistically relevant conclusions with heavy - if somewhat unsurprising - social implications. I don't think I can personally pay a higher compliment to a presenter. Alessandro summed it up nicely when he pointed out that identity and authentication cannot be the same thing, but that is precisely what we've been doing with SSN's: the public identifier is also used as a private authenticator, and thus we have the identity theft problems of today. The contrast to the previous presentation in the same room (Global Spying) was truly amazing.<br /><br /><span style="font-weight: bold;" class="wht">Nick Harbour: </span><span style="font-weight: bold;">Win at Reversing</span><br />Nick always puts on a good show, and this was no exception as he illustrated an elegantly simple but brilliantly constructed tool to facilitate malware unpacking. I'll do my best to describe it here, begging your pardon if my memory isn't dead-on. Nick starts off by articulating the limitations of kernel-level API hooking when analyzing malware behavior. While certain common procedure calls used by malware (like GetHostByName) are executed in ring 0, many other common ones like GetProcAddress are strictly user-land. Makes sense. Nick then turns the user-land rootkit on its head by inline hooking the malicious code, opening access to all API calls by the code, not just those touching ring 0. From here, a procedure likely to be called <span style="font-style: italic;">after</span> the code has been unpacked in memory is identified. Replace this call with an infinite loop (only 2 bytes) prior to execution, and bam! Running the patched PE leaves the unpacked code idling in memory for extraction &amp; analysis. To take it to the next level, Nick then introduces Apithief, which automates much of the complexity of this process for the analyst.
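With the caveat that this is my reconstruction from memory and not Nick's actual code, the final patch step might be sketched like so (the offset and image bytes are placeholders; 0xEB 0xFE is the two-byte x86 jump-to-self):

```python
# Overwrite a chosen post-unpack call site with JMP $-2 (0xEB 0xFE), an
# infinite loop, so the running process idles there with its unpacked
# code still resident in memory for extraction.
INFINITE_LOOP = b"\xEB\xFE"  # x86 short jump to itself: exactly 2 bytes

def patch_idle_loop(image, call_site_offset):
    """Return a copy of the executable image with an idle loop at the call site."""
    patched = bytearray(image)
    patched[call_site_offset:call_site_offset + 2] = INFINITE_LOOP
    return bytes(patched)

# Placeholder "image": ten NOPs standing in for a real PE's code section
patched = patch_idle_loop(b"\x90" * 10, 4)
```

Run the patched binary, wait for it to spin at the loop, and dump its memory for analysis.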
The tool should soon be available on Mandiant's <a href="http://www.mandiant.com/software.htm">site</a>, according to Nick (it wasn't as of the writing of this entry).<br /><br /><span style="font-weight: bold;" class="wht">Bruce Schneier:</span><span style="font-weight: bold;"> Reconceptualizing Security</span><br />I can't really say anything here that you won't read on Bruce's blog, nor would I be so eloquent in doing so, but I will say this was my first time seeing him talk, and it was a pleasure to do so. A few take-aways I found particularly significant:<br /><ul><li>One underlying problem that facilitates the divergence between feeling secure and being secure is language: 'security' can apply to both states.</li><li>Everyone has their own model of reality from which they make decisions. This applies at the individual level as well as instinctually within our species. This is the first time in the history of humankind that our reality is changing faster than our individual and natural model of reality. Whether or not we will ever be able to catch up remains to be seen, but the gap seems to be accelerating.</li></ul>Unrelated to the subject of his talk, Bruce also discussed one of the recent <a href="http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf">problems</a> revealed in AES. My understanding is the issue lies in AES's key scheduling algorithm, for a reduced, 10-round variant of the 256-bit implementation. The shorter 128-bit key is not long enough to propagate the scheduling issue, and the full 14-round implementation, which is what we typically use, sufficiently dilutes the effect of the vulnerability.
Bruce's comment was that, while none of the recent AES vulnerabilities represent significant risk on their own, they are concerning as possibly a harbinger of improved attacks to come.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com1tag:blogger.com,1999:blog-9074318.post-71569403272172726352009-07-27T22:46:00.003-04:002009-07-27T22:49:03.988-04:00Blackhat 2009I will be tweeting BlackHat 2009 (my username is, you guessed it, <a href="http://twitter.com/mikecloppert">mikecloppert</a>). If we happen to be in the same place, drop by and say hi!<br /><br />Never been before, but looking forward to the chaos. I'm going to try to attend DefCon, but if I can arrange it, I'll only be there Friday. Due to some housing shenanigans, I must be back in DC for the weekend.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-20546482363980261282009-07-09T20:19:00.003-04:002009-07-09T21:19:03.104-04:00Dear Information Security Industry,<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_yh9qMmyzuAU/SlaW7J_XAKI/AAAAAAAAAEk/THEYp3EPFZs/s1600-h/HMLogo10s.gif"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 180px; height: 180px;" src="http://4.bp.blogspot.com/_yh9qMmyzuAU/SlaW7J_XAKI/AAAAAAAAAEk/THEYp3EPFZs/s200/HMLogo10s.gif" alt="" id="BLOGGER_PHOTO_ID_5356634749841899682" border="0" /></a>Stop exploiting current events by making dubious or outright false statements in order to advance your own agenda. 
You do nothing more than devalue yourselves and the credibility of the rest of us when you do so.<br /><br /><span style="font-weight: bold;">Case in point #1</span>: Alan Paller's <a href="http://www.nextgov.com/nextgov/ng_20090707_7972.php?oref=mostemailed">statements</a> on the recent (and long overdue) analysis of the <a href="http://www.pnas.org/content/early/2009/07/02/0904891106.full.pdf+html">predictability of SSN's</a>. To wit,<br /><span style="font-style: italic;"><blockquote>"I don't think this is a high priority, because it doesn't deliver a big enough payoff" for hackers, he said. "You do identity theft so you can steal money, but it's easier to steal money by taking over someone's computer."</blockquote></span>Are you serious? One compromises a computer to impersonate another. If you have an SSN, name, and other basic information like birthday, etc. (that's often publicly available on social networking sites), it's Game Over - impersonation can be achieved at a much deeper level than simply userid/password - never mind that more and more sites are implementing some sort of 2-factor authentication. This reeks of "look over here where I can make money," ignoring reality. SANS has a lot to offer the information security community, but when its leaders make such questionably accurate and profit-driven comments, it hurts all of our credibility (what professional doesn't have a cache of SANS certs these days) and devalues the institution as a whole.<br /><br /><span style="font-weight: bold;">Case in point #2</span>: The questionably accurate stories floating around about this alleged <a href="http://www.nytimes.com/2009/07/09/technology/09cyber.html?_r=1&amp;hp">North Korean-sourced DDoS</a> against a completely random set of targets. I don't know for sure, but it seems the source of this attributional rumor is the Korea Communications Commission.
Here's a sample of one of their statements:<br /><span style="font-style: italic;"><blockquote>“An aggressive distribution of vaccine programs against the attack has helped fight back,” the official, Shin Hwa-soo, said. “But we are not keeping our guard down. We are distributing the vaccine programs as widely as possible and monitoring the situations closely because there might be a new attack.”</blockquote></span>A vaccine? Really? Please tell me we're not taking these people seriously. It seems to be a fact that some sort of DDoS attempt took place, but keep in mind the attribution to DPRK is hinging on people who distribute "vaccine programs" against a DDoS - whatever the hell that means. Initially, the attacks were downplayed - until 24/7 news got a hold of it and realized that <a href="http://en.wikipedia.org/wiki/Computer_network_operations">CNA</a> can be sexy. Then the "cyber security professionals" realized there was a platform for advancing an agenda and poured fuel on the hype fire. There are plenty of examples. 
Below are a few.<br /><blockquote><br /><a href="http://www.google.com/hostednews/ap/article/ALeqM5iaaWwzg--SOmIz9Qjdju4UYFB5GgD99B7LNO0">Google hosted news:</a><br /><span style="font-style: italic;">"Just from looking at footprint, it was Bigfoot, not Bambi," said Charles Dodd, founder and chief technology officer for Nicor Cyber Security.</span><br /><br />What started off as "Cyber Attacks" on the east coast became "massive" by the time they got to <a href="http://www.sanfranciscosentinel.com/?p=34171">San Francisco</a>.<br /><span style="font-style: italic;">The US sites experienced a “massive outage”, according to Keynote Systems, a company which monitors 40 government sites in America.<br /><br /></span><span>Even Rod Beckstrom, whose comments were mostly well tuned, eventually <a href="http://www.foxnews.com/story/0,2933,530984,00.html">fell victim</a> to the hype cycle in a most spectacular way:</span><span style="font-style: italic;"><br /></span><span style="font-style: italic;" id="intelliTXT">"[It's] a little bit like launching some Scud missiles towards the U.S.," noted Beckstrom. "These are cyber-scuds, very low-tech, but a lot of them, and kind of annoying."<br /></span><span id="intelliTXT">No, Rod, it is nothing like this.</span><span style="font-style: italic;" id="intelliTXT"><br /></span></blockquote>All of this hype, yet when you ask<span style="font-style: italic;"> </span>the victims, they tell you that the impact was negligible [source: <span style="font-style: italic;">ABC World News Tonight</span>, 7/8/2009].
This underscores the classic properties of CNA that make it much less effective in terms of real economic impact than <a href="http://en.wikipedia.org/wiki/Computer_network_operations">CNE</a>:<br /><ol><li>Its effectiveness is often limited to the period over which it can be sustained - except when machine or software destruction is involved, in which case it simply becomes a DR exercise,<br /></li><li>It is difficult to sustain,<br /></li><li>It is open conflict and identifiable immediately, and<br /></li><li>It rarely maps to the intended strategic or tactical goals of the executor (what, for instance, was achieved here?)</li></ol>So, can we please stop participating in the hype and lend some credibility to our young and rapidly emerging field by focusing on factual and rigorous investigation? Exaggeration and misrepresentation in the media are inevitable, but we encourage them when we reinforce them with expert opinion.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-62984709265755475582009-07-09T20:10:00.003-04:002009-07-09T20:19:33.263-04:00Administrivia Jul 2009After a few months off, I'm resurrecting this blog. I've been busy with a variety of personal issues, like relaxing, over the past few months, as well as focusing what little time I have available on the SANS Forensics &amp; IR blog. I'd considered abandoning this blog altogether in favor of my contributions there, but have realized that I need an outlet for more spontaneous and opinionated entries that I feel do not belong there. Also, my criterion for contributing here is lower - I do not feel the need to positively contribute something new and meaningful with each entry, as I feel is appropriate for SANS.<br /><br />In any case, a quick update.
After many months of consideration, I decided it was in my best professional and personal interest to join <a href="http://www.facebook.com/michael.cloppert">Facebook</a> and <a href="http://twitter.com/mikecloppert">Twitter</a>. If I don't understand these communication and interaction technologies as I understand others, I will inevitably find myself falling behind and unable to exist at the forefront of security (whether I will ever get there is debatable as well, heh). I likely won't be very active with these accounts, but will likely tweet at BlackHat this year in an effort to keep in touch with all the folks I'll know there. It'll be my first BlackHat, and I'm looking forward to it!Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-49646939944764974582009-04-05T03:57:00.004-04:002009-04-05T04:22:16.310-04:00Security, DHS, and the NSA<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/wikipedia/en/thumb/f/f4/Jtf-gno.jpg/180px-Jtf-gno.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 180px; height: 180px;" src="http://upload.wikimedia.org/wikipedia/en/thumb/f/f4/Jtf-gno.jpg/180px-Jtf-gno.jpg" alt="" border="0" /></a>A number of people have asked me my opinion on the recent <a href="http://www.federaltimes.com/index.php?S=3988926">reports</a> that authority for "cyber security" at the national level is <a href="http://blog.wired.com/27bstroke6/2009/03/nsa-dominance-o.html">moving</a> from <a href="http://news.cnet.com/8301-13578_3-10191170-38.html">DHS</a> to the <a href="http://federaltimes.com/index.php?S=3988926">NSA</a>. I think the most concise analogy I can give is this: It's like taking one of your valuables from your younger brother who's irresponsible, and giving it to your older brother who's greedy. 
We're substituting one set of problems for another.<br /><br />Opinions aside, I think it's interesting that the job of computer network defense at a national level is being placed subordinate to its equivalent offensive arm. An insight into a fundamental policy shift? Time will tell...Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-53247812688514361002009-03-19T13:13:00.004-04:002009-03-19T13:26:33.955-04:00What passwords and condoms have in commonI just read my favorite blog post of the month, by Adam on Emergent Chaos <a href="http://www.emergentchaos.com/archives/2009/03/joseph_ratzinger_and_info.html">comparing</a> the Holy See's <a href="http://cnn.site.printthis.clickability.com/pt/cpt?action=cpt&amp;title=Pope+visits+Africa%2C+reaffirms+ban+on+condoms+-+CNN.com&amp;expire=-1&amp;urlID=34798082&amp;fb=Y&amp;url=http%3A%2F%2Fwww.cnn.com%2F2009%2FWORLD%2Fafrica%2F03%2F17%2Fcameroon.pope%2Findex.html%23cnnSTCText&amp;partnerID=211911">comments</a> on condoms in Africa to our often-dogmatic approach to Information Security. His comments:<br /><blockquote>In information security, we often keep saying the same thing over and over again, because we know it's right. We tell people to never write down their passwords, to always validate their input, and to run IDS systems. Deep in our hearts, we know they don't, and yet we keep saying those things. We tell them they "have to" fix all the security problems all the time. </blockquote>I'd like to go further, and do, in my reply to his post. At issue is our propensity to reflect all of the hardest problems in security today onto those who are least equipped or capable of handling them: end users. Nobody asks to get into the security business when they buy a computer; they want to entertain themselves, or positively contribute to some task, or fill an everyday need... yet we do.
We ask everyone who buys a computer to join us in our perverse universe of paranoia. This is a lazy, improper, and unsustainable approach. If anyone is looking for the hardest problems to solve in our industry, look no further than your parents' complaints about their computer, your friends' complaints about websites, or your coworkers' complaints about corporate policy. We've left them holding the bag on the hardest problems.<br /><br />My comment on Adam's post is reproduced below.<br /><blockquote><div class="comments-body"> <p>Adam,</p> <p>Fascinating and apt analogy. The "blame the user" fallback has bothered me for years... and it truly is a fallback.</p> <p>To follow on to your password example: Why do users write down their passwords? Because we insist they be complex, temporal, and different between systems. Why do we do this? So they're not easily guessable. Isn't, then, the authentication mechanism the problem? We have an obtuse, antiquated authentication mechanism that belies the nature of the beast using the system. We wouldn't ask a donkey to type on a keyboard - what we have built here is the psychological equivalent. We don't change it because it is hard - technologically, procedurally, institutionally - to do so. Therefore, we insist on a system poorly suited to today's computing realities, and blame the user.</p> <p>As you suggest, there are many manifestations of this, passwords being but one. Microsoft's sage advice to mitigate Office vulnerabilities ("don't click on attachments from people you don't know") is yet another of my favorites. 
But in the end, it seems many of these situations end up shifting the burden of blame to the end user, subjugating them to our whims of what is and isn't "easy," rather than facilitating their use of the equipment and letting them focus on what their real job is.</p> <p>It's going to be very, very hard for IT to break this very inviting habit...</p> <p>Michael Cloppert</p> </div></blockquote>I <a href="http://blog.cloppert.org/2008/04/on-blaming-user.html">write</a> on this topic <a href="http://blog.cloppert.org/2006/12/user-education-is-not-necessarily.html">frequently</a>... I can only hope more people begin to realize the seriousness of this problem, and that we <span style="font-weight: bold;">must </span>begin to make it a tractable one.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com1tag:blogger.com,1999:blog-9074318.post-59434577589019898252009-02-18T10:09:00.002-05:002009-02-18T10:16:49.731-05:00Speaking Engagement: CMU INII will be a <a href="http://www.ini.cmu.edu/events/2009/02/0227seminar.html">guest speaker</a> for <a href="http://www.cmu.edu">CMU</a> <a href="http://www.ini.cmu.edu/">INI</a> graduate students next Friday, 2/27/2009. The abstract of my presentation is below.<br /><span style="font-size:85%;"><span style="font-weight: bold;"></span></span><blockquote><span style="font-size:85%;"><span style="font-weight: bold;">Careers in Information Security and Tales from the Front Lines of Network Defense</span><br /><br />In this two-part presentation, Michael will introduce the field of information security from a career development perspective, giving attendees a broad view of the industry and how their various academic backgrounds may align. 
As the lecture progresses, Michael will give an insider's view into what it's like to defend a network used for the design of the next generation of national defense technologies.</span></blockquote>Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-45753609162536857042009-02-15T15:03:00.001-05:002009-02-15T15:04:28.744-05:00Rethinking the network perimeterDoes anyone remember <a href="http://www.sans.org/resources/idfaq/bastion.php">bastion hosts</a>? Marcus Ranum describes them in his 1993 <a href="http://www.vtcif.telstra.com.au/pub/docs/security/ThinkingFirewalls/ThinkingFirewalls.html">paper</a> on firewalls, just to give you an idea of how old the concept is. There was an obvious problem in the notion of a bastion host (as originally devised): having a "<span style="font-style: italic;">critical strong point in the network's security</span>" provides a single point of failure and <span style="font-weight: bold;">big</span> target for exploitation. Leaving a system exposed, regardless of how secure it is believed to be initially, will inevitably lead to failure. The principle of least privilege needs to be enforced at the network level. 
Thus, we created the notion of the DMZ.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/wikipedia/commons/thumb/6/6f/DMZ_network_diagram_1_firewall.svg/400px-DMZ_network_diagram_1_firewall.svg.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 256px;" src="http://upload.wikimedia.org/wikipedia/commons/thumb/6/6f/DMZ_network_diagram_1_firewall.svg/400px-DMZ_network_diagram_1_firewall.svg.png" alt="" border="0" /></a>Naturally, the notion of a bastion host evolved to be a not-so-exposed system, partially protected by firewalls and isolated from the internal network so as to mitigate the damage resulting from compromise. The crown jewels are, by this model, inside the LAN and isolation was paramount. And thus have we operated since...<br /><br />Over time, this model has undergone various evolutions. Initially, the focus on protection was outside-in. Various pressures - security, policy, and otherwise - necessitated greater control on network egress. If you want to make sure a compromised internal system can't arbitrarily funnel data outbound over some ephemeral port, you need to restrict what services can be accessed on the internet from clients on the LAN. If you want to keep your employees from surfing pr0n on the job, you need to be able to restrict what web sites they access. From this came proxied services: HTTP, DNS, email, and other services now must be funneled through a relay for greater control.<br /><br />Do you see what's happening here? Our control over our networks has slowly crept up the <a href="http://en.wikipedia.org/wiki/OSI_model">OSI model</a> as we realize the perils of a lack of control over the next layer up.
From the flat networks of the early 80's, to segmentation later in the 80's and early 90's, to control over the transport layer with firewalls, and finally up as far as the application layer with insistence on proxying all services in the most "secure" networks accessing the internet today, our defenses were pushed upward by adversaries who understood how to exploit the lack of control at higher layers.<br /><br />I've got bad news: even this isn't good enough. While we've definitely raised the bar for adversaries, they have nevertheless stepped up to the plate. How do you compromise systems and funnel data out of a protected network which insists upon protocol compliance and restricted connections? Obey the rules. Comply with the protocol. Repurpose the available communication points outside of the network. And this is precisely what adversaries are doing.<br /><br />If you didn't already know, I'm telling you now: protocol-compliant command-and-control channels that communicate with compromised websites are all the rage in sophisticated attacks today. How can one attack a computer? Use the inbound communication channel: email. How can one establish bi-directional control over a compromised host? Use the outbound data channels to initiate a connection, and proceed from there: HTTP, DNS, email, these all permit <span style="font-weight: bold;">bi-directional communication to every workstation in a protected network</span> connecting to the internet today.<br /><br />What does this mean? It means that <span style="font-weight: bold;">every host which can participate in these types of data transmission is an internet-facing host.</span> Bastion hosts, firewalls, proxied services, all exist in vain against these techniques. This is the very point of this whole post: your most exposed hosts are your workstations. And today, in 2009, you have as many internet-exposed hosts as you have workstations.
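As a concrete (and deliberately benign) illustration of why protocol enforcement doesn't stop such a channel, consider a tasking poll that is, byte for byte, an ordinary page fetch; the hostname and path below are placeholders:

```python
# A fully RFC-compliant HTTP/1.1 GET. An egress proxy that enforces
# protocol compliance will pass this without complaint; any tasking or
# exfiltrated data rides in equally compliant responses and requests.
def build_poll_request(host, path="/index.html"):
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "User-Agent: Mozilla/5.0\r\n"  # nothing distinguishes this from a browser
            "Connection: close\r\n"
            "\r\n").format(path, host).encode("ascii")

req = build_poll_request("compromised-site.example")
```

There is nothing in the request for a proxy to reject: the "malicious" part is entirely in who controls the far end.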
Considering that today, all work is done on workstations, this means your data is residing on the most vulnerable systems on your network - even if only temporarily while in active use or development. There are many implications here, which I won't go into, except to say that if you've been sleeping soundly because you believe your network controls are strong, I hope you've enjoyed it.<br /><br /><span style="font-weight: bold;">Update</span>: Somehow this got back-dated... fixing.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-45013906044507374732009-02-14T23:08:00.002-05:002009-02-14T23:31:49.984-05:00Irresponsible disclosureDid you know that last year, Heartland Payment Systems <a href="http://www.2008breach.com/">suffered a data breach</a> that "<a href="http://voices.washingtonpost.com/securityfix/2009/01/payment_processor_breach_may_b.html">may have compromised tens of millions of credit card transactions</a>?" Me neither, until I received a notice in the mail that my card may have been one of the ones compromised. Why hadn't we heard of this? Perhaps because Heartland decided to announce the data breach... wait for it... on inauguration day. Curious timing, don't you think, considering the breach happened last year?<br /><br />A few other confounding aspects of this breach:<br /><ul><li>The date of compromise is unknown</li><li>Heartland <a href="http://www.bankinfosecurity.com/articles.php?art_id=1168">had to be notified</a> of this by Visa and Mastercard.
They did not discover it on their own.</li><li>Transactions occur unencrypted, according to the bankinfosecurity.com report: '<span style="font-style: italic;">Data, including card transactions sent over Heartland's internal processing platform, is sent unencrypted, he explains, "As the transaction is being processed, it has to be in unencrypted form to get the authorization request out."'</span></li></ul>Heartland <a href="http://www.2008breach.com/Information20090127.asp">boasts their advocacy</a> for end-to-end encryption despite that last bullet:<br /><span style="font-style: italic;"></span><blockquote><span style="font-style: italic;">For the past year, Robert O. Carr, Heartland's chairman and chief executive officer, has been advocating for payments industry adoption of this technology — which will protect data at rest as well as data in motion — as an improvement for payment transaction security. </span><br /> </blockquote>Certainly this claim seems dubious. In any case, the data capture and exfiltration appears to be enabled by malware installed on hosts in their payment systems network. Disk, database, and transactional encryption won't prevent compromised hosts from having access to the data in clear-text form as it's processed - clearly, this data must be unencrypted at some point in the process in memory (at least).<br /><br />This is a whole bucket of fail right here.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-90799339787800758942009-01-15T08:36:00.003-05:002009-01-15T09:11:34.243-05:00Analysis or Synthesis?I have a new, formal job classification at work. Since security - more specifically, security intelligence - as a profession is "new", this is really our HR department coming to grips with that reality. The classification I now have is <span style="font-style: italic;">Cyber Intel Analyst</span>. 
I detest that the word "cyber" is in my title, but I'll save that for another day.<br /><br />Partially as a consequence of this change, I began thinking about the definition of the word "analysis" and how its use has been watered down in our industry. On one hand, to the extent to which my job encompasses computer and network forensic analysis, the word is most certainly applicable. Digging into the most nuanced details of the history of reads and writes to a hard disk, inspecting TCP sessions and packets to observe content, absolutely fits the definition of a word whose meaning is "to take apart." But security intelligence often represents an inflection point in vision, between re-creating the events that took place as a forensic task, and painting a broader picture - assembling the comparatively scant data offered by forensic investigation, monitoring tools, logs, and other artifact sources to develop a <span style="font-style: italic;">modus operandi</span>, discover other past or future actions perpetrated in the same vein, and possibly even discover the individuals behind the activity and their motives. In short, intrusion synthesis - the antonym of analysis.<br /><br />Of course, this is all very academic. I will be doing the job I've done in the past regardless of whether my title is <span style="font-style: italic;">Cyber Intel Analyst</span> or <span style="font-style: italic;">Banana Peeler</span>.
But as I've said in the past, <a href="http://blog.cloppert.org/2008/12/importance-of-vocabulary.html">vocabulary is important</a>, and it's an insightful exercise to see where such a description intersects and diverges from what one does, as that activity itself can yield insights into how to better do whatever it is we do.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-72822989781368583932008-12-26T12:12:00.004-05:002008-12-26T12:25:04.271-05:00The best foreword I've (yet) read<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://ecx.images-amazon.com/images/I/51eOWkui6EL._SL500_AA240_.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 240px; height: 240px;" src="http://ecx.images-amazon.com/images/I/51eOWkui6EL._SL500_AA240_.jpg" alt="" border="0" /></a>Being of a scientific and engineering mind, I love me some empirical data. This is why it's a crying shame that I've taken so long to get around to Andrew Jaquith's <a href="http://www.amazon.com/Security-Metrics-Replacing-Uncertainty-Doubt/dp/0321349989/ref=pd_bbs_sr_1?ie=UTF8&amp;s=books&amp;qid=1230311659&amp;sr=8-1"><span style="font-style: italic;">Security Metrics</span></a> [Addison-Wesley, 2007]. I have owned the book for a year, and have only now completed the foreword by Daniel E. Geer, Jr. Sc.D.<br /><br />This is the best foreword I've read to date. It <span style="font-style: italic;">alone </span>has changed how I think about metrics that measure security. If you never own this book or read it to completion, read the foreword. At only 4 pages, it is a concise and fundamental articulation of how to think about quantitatively measuring security. If you haven't read it, stop by a bookstore and check it out when you can spare 5 minutes. 
You'll be happy you did.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-69845417326091632992008-12-08T01:36:00.004-05:002008-12-08T02:34:18.359-05:00EWD on Information Security<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Edsger_Wybe_Dijkstra.jpg/225px-Edsger_Wybe_Dijkstra.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 225px; height: 300px;" src="http://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Edsger_Wybe_Dijkstra.jpg/225px-Edsger_Wybe_Dijkstra.jpg" alt="" border="0" /></a>Last week, <a href="http://www.slashdot.org/">Slashdot</a> <a href="http://news.slashdot.org/article.pl?sid=08/12/02/1410254">featured</a> <a href="http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF">EWD1036-11</a> (a handwritten manuscript by Edsger W. Dijkstra) titled <span style="font-style: italic;">On the cruelty of really teaching computer science</span>. Besides being fantastic reading for any computer scientist, this 1988 essay finds Dijkstra inadvertently making several points that are especially salient to the security field.<br /><blockquote><span style="font-style: italic;">[Lines of code] is a very costly measuring unit because it encourages the writing of insipid code, but today I am less interested in how foolish a unit it is from even a pure business point of view. My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.</span><br /></blockquote>Could it be that our software development process is fundamentally flawed? That vulnerabilities are merely an artifact, or symptom, of a problem that transcends all software engineering? 
In his essay, Dijkstra insists upon building code guided by formal mathematical proof, as such code is correct by design. Does this sound familiar? Perhaps like "secure by design?" This is a grave and pessimistic evaluation of the state of software development that still holds a great deal of merit two decades after it was written. Today, we see Dijkstra's diagnosis painfully manifested as viruses, worms, hackers, computer network exploitation, and the resultant loss of intellectual property.<br /><br />Later, Dijkstra enumerates the objections to his proposed approach of pairing development with formal mathematical proof. Again intersecting the security discipline, he writes:<br /><blockquote><span style="font-style: italic;">the business community, which, having been sold to the idea that computers would make life easier, is mentally unprepared to accept that <span style="font-weight: bold;">they only solve the easier problems at the price of creating much harder ones</span>.</span><br /></blockquote>And thus, on December 2, 1988 - almost exactly twenty years ago to the day as I write this - Edsger W. Dijkstra defines the source of computer security problems by reiterating the "law" of unintended consequences. Accepting this axiom, security practitioners focus on identifying the harder problems resulting from "easy," mathematically imprecise, logically dubious solutions upon which the bulk of our computing infrastructure operates. 
I feel very strongly that this one statement scopes our discipline better than any other that has yet been made - so strongly that it is worth re-evaluating what information security <span style="font-style: italic;">is</span>.<br /><br />Security is the identification and mitigation of the unintended consequences of computer system use that result in the compromise of the confidentiality, integrity, or availability of said system or its constituent data.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com2tag:blogger.com,1999:blog-9074318.post-13103766134778714402008-12-06T12:44:00.000-05:002008-12-06T16:17:44.419-05:00Life Imitating Computers: The Evolution of Human Thinking<span style="font-style: italic;">I have been working on this short essay for a long time, sorting out my thoughts on the issue and trying to convey what I'm thinking in a clear and concise manner. I sincerely hope that you enjoy it, find it insightful, and do not think of me as a ranting lunatic after reading it.</span><br /><br />For thousands of years, mankind has relied on oral history to pass along anecdotes, stories of our history, lessons learned, and any other bit of collective knowledge that societies felt necessary to preserve in order to facilitate the survival of the species - explicitly or otherwise. It is the recognition of this benefit that has largely enabled humans to thrive in societies which wisely chose the knowledge to pass along, and has led to the creation of such constructs as "conventional wisdom," "wives' tales," fables, stories, and even religion. Though initially feared as a challenge to this status quo of knowledge transfer, Gutenberg's invention of the printing press around 1439 was in fact an amplification of these constructs - an argument reinforced by the first book to be printed, the Bible, and proven correct over time. 
This invention was the mother of all evolutionary inventions in man's history at that time.<br /><br />While the pairing of the printing press and widespread literacy opened the door of knowledge to many more of our species, the spread of and access to this information was still spotty and slow. It had been, and still was, necessary for mankind to keep much of the knowledge needed to process information and analyze various aspects of one's own life, surroundings, and society in our collective heads for daily use. This drove the need for the continuity of our legacy constructs: while we could gain knowledge and share it far more easily, to leverage it in a practical sense we had to be able to keep that information in our heads. We had evolved through natural selection to easily store knowledge in terms of these constructs, and thus our conventional mechanisms for knowledge transfer between generations survived, and even thrived, under this new regime of recordation.<br /><br />Computer systems also have a problem of information access, which various components have been developed to address. ENIAC, and early computers like it, had to store the information being processed in the "processor" itself. It <a href="http://www.cs.umd.edu/class/fall2001/cmsc411/projects/ramguide/pastandfuture/pastandfuture.html">only had one type of memory</a> - essentially, a flip-flop.<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://common.ziffdavisinternet.com/util_get_image/0/0,1425,i=1030,00.gif"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 200px;" src="http://common.ziffdavisinternet.com/util_get_image/0/0,1425,i=1030,00.gif" alt="" border="0" /></a> It was this single mechanism that was available for all types of data. This limited computation to that which could be crammed into this expensive memory. Later, the concept of a slower "core" memory unit was developed. 
Data to be operated upon immediately had to exist in registers (memory) on the processor itself. That which did not need to be operated on immediately could be swapped out to the slower, larger "core" memory. Modern computers have many levels of memory, from registers that operate at the speed of the processor, to multi-layer on-chip cache from which the registers are populated, to RAM which holds necessary but less-immediately accessed data, to disk which holds infrequently accessed data. Along with evolutions in mechanisms for storing data have come evolutions in how to most effectively leverage them, including predictive algorithms for caching and swapping data from the slower to the faster storage devices to minimize execution delays due to memory access.<br /><br />Like the development of slower, larger memory to support data computation in our modern computers, we have collectively invented this revolutionary tool known as the internet. As the ready availability of data to mankind increases, we are forced to rely less and less on our conventional (less accurate) mental constructs, just as computers came to keep only smaller and smaller portions of the data and instructions being processed at ready access to the CPU. As a result of all of this, in the case of computers as well as mankind, the set of information available increased exponentially. When performing tasks, we now have a wealth of available information that doesn't have to be at our fingertips, or at the top of our minds, in order to be processed in a reasonable period of time. We read things on the internet, perform research in a few minutes, and - if necessary - remember it to perform a task more quickly the next time. We may "swap out", or forget, something that we previously needed on a regular basis with confidence that if we need it again later, we will be able to find it. 
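For the programmatically inclined, the caching-and-swapping behavior described above can be sketched as a least-recently-used (LRU) cache. This is a rough, purely illustrative sketch in Python - the class and names are invented for this example, not taken from any particular system:

```python
from collections import OrderedDict

class LRUCache:
    """A tiny least-recently-used cache: fast storage of limited size,
    backed by a slower authoritative store, mirroring the
    register/cache/RAM/disk hierarchy (and, loosely, human memory)."""

    def __init__(self, capacity, slow_store):
        self.capacity = capacity      # how much "fast memory" we have
        self.fast = OrderedDict()     # key -> value, kept in recency order
        self.slow_store = slow_store  # fallback, e.g. a dict standing in for disk

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)     # mark as recently used
            return self.fast[key]
        value = self.slow_store[key]       # slow fetch: "look it up again"
        self.fast[key] = value
        if len(self.fast) > self.capacity:
            self.fast.popitem(last=False)  # "forget" the least recently used item
        return value

# The slow store is always authoritative; the cache only speeds up access.
disk = {"a": 1, "b": 2, "c": 3}
cache = LRUCache(capacity=2, slow_store=disk)
cache.get("a")
cache.get("b")
cache.get("c")                # exceeds capacity: evicts "a" from fast memory
print("a" in cache.fast)      # False: swapped out, but recoverable from disk
```

Forgetting something is harmless here precisely because the slower store retains it - which is the confidence the internet now affords us.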
This is a rudimentary memory management algorithm, adapted to human nature.<br /><br />All of this raises some important questions that mankind needs to reckon with in the not-too-distant future. How might this revolution in the very essence of our thinking change our constructs? In what ways will fictional literature be impacted? Will we still tell our children stories? How will religion survive? Can computer memory management techniques be adapted by psychologists to train humans to more effectively leverage new tools like the internet? Is this evolution leaving us vulnerable should we somehow "lose" this tool through war or a regression in civilization like that which followed the fall of the Roman Empire? These questions will be answered, implicitly or explicitly, in coming generations. How we answer these questions and resolve the inevitable conflict between question and answer will shape no less than the future of our species. It is essential that we recognize the existence and significance of these questions now if we are to have any hope of answering them as a civilized society, rather than through war or the deterioration of our hard-won civilization.<br /><br /><span style="font-style: italic;">Research that recognizes how technology fundamentally changes ourselves and our society is now being highlighted by mainstream media outlets. Recently, USA Today published </span><a style="font-style: italic;" href="http://www.usatoday.com/tech/science/2008-12-03-digital-brain_N.htm">an article</a><span style="font-style: italic;"> that discusses technology's impact on our social interactions. Closer to the point I make above is </span><a style="font-style: italic;" href="http://www.reuters.com/article/technologyNews/idUSTRE49Q2YW20081027">this article</a><span style="font-style: italic;"> discussing how surfing the internet alters how one thinks. 
The latter seems to imply that this mode of cognition will be more efficient than our legacy constructs, suggesting that those who are able to leverage it will be ahead of others intellectually and socially in future generations.</span>Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-71150631365339327552008-12-02T16:34:00.004-05:002008-12-02T16:40:56.139-05:00The Importance of VocabularyA brief essay I wrote for the <a href="http://sansforensics.wordpress.com/">SANS Computer Forensics, Investigation, and Response blog</a> on language - let's see if they post it :-).<br /><p></p><blockquote><p>Over the past few days, a discussion has been forming on the GCFA mailing list regarding the use of the word <i>evidence</i>. Specifically, how appropriate is it to call a hard drive (or a more logical construct such as a file) "evidence" when it may turn out that the object will serve no purpose in conclusively resolving an investigation? Is it evidence, or is another word more apropos?</p> <p>Reading the dialogue reminded me once again of the importance of vocabulary, particularly in technical fields where clear, precise communication is an operational imperative rather than merely a creative expression or embellishment. While it may seem academic, mutual agreement on the use of these critical terms serves as the basis for communication in computer forensics. The more clearly defined our language is, the more effective and efficient our communications will be. Even in the first person, definitions carry great significance, influencing no less than the very way that we think. As George Orwell said, <i>if thought corrupts language, language can also corrupt thought. 
</i>The importance of this feedback loop cannot be overstated: clarity in language fosters a deeper clarity of thought.</p> <p>In fields of study like ours, still largely in their infancy relative to other scientific disciplines, disambiguation of terminology is a significant challenge. Various leading texts provide differing and sometimes conflicting word definitions &amp; usage - even with basics such as what an 'incident' is. Media coverage of security compromises often overlooks the significant differences between <a href="http://en.wikipedia.org/wiki/Computer_network_operations">CNA</a> ("taking out the DNS infrastructure") and <a href="http://en.wikipedia.org/wiki/Computer_network_operations">CNE</a> ("industrial espionage"). Our vendors are not exactly helping the situation either - as a high-profile example, see Microsoft's <a href="http://www.amazon.com/Threat-Modeling-Microsoft-Professional-Swiderski/dp/0735619913/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1228092371&amp;sr=1-1"><i>Threat Modeling</i></a>, which is really <a href="http://taosecurity.blogspot.com/2007/06/threat-model-vs-attack-model.html"><i>risk modeling</i></a>. It is easy to see that we, as professionals in our young field, wield great power in shaping the future through contributions to our common language where it is still unclear or improperly used. I encourage readers to participate in these discussions whenever they arise. 
Diversity in opinion and vigorous dialogue are necessary to solve these foundational problems and mature our industry.</p> <p>As to the definition of the word <i>evidence</i>, I'll leave that to a better discussion forum than a blog.</p></blockquote><p></p>Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0tag:blogger.com,1999:blog-9074318.post-33955299256520605002008-11-25T00:06:00.006-05:002008-11-25T00:42:14.674-05:00What security can learn from the recent financial crisis<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_yh9qMmyzuAU/SSuNz-Y7vqI/AAAAAAAAAEQ/jCPJVoYgGr8/s1600-h/after-the-crash_2.jpg"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 160px; height: 160px;" src="http://4.bp.blogspot.com/_yh9qMmyzuAU/SSuNz-Y7vqI/AAAAAAAAAEQ/jCPJVoYgGr8/s200/after-the-crash_2.jpg" alt="" id="BLOGGER_PHOTO_ID_5272463712827719330" border="0" /></a>In the most recent <a href="http://www.sciam.com/article.cfm?id=after-the-crash">Scientific American <span style="font-style: italic;">Perspectives</span></a>, the editors lament the state of our economic system and place a great deal of blame for it on software models. In their words, <span style="font-style: italic;">risk management models should serve only as aids, not substitutes, for the human factor.</span> While this is certainly not the only example of the perils of algorithms replacing analysts, it is perhaps the most poignant.<br /><br />In the security industry, software vendors and managers have been pushing hard for years to supplant analysts with software -- the theory is that automated software can do just as good a job, and after all, labor in this day and age is expensive. 
The danger, of course, is that the security field is far less mature than the study of capitalism. Where economics dangerously repurposed algorithms originally designed for unrelated fields of physics and mathematics, though, our industry employs algorithms that never had a connection to any causal relationship in the first place. Indeed, the end state of "security" has been elusive even in the most anecdotal of terms; we are a long way from quantitative methods to define the risk management that is our job. Yet software vendors are happy to hand-wave their way through a sale in an effort to provide what amounts to a false sense of security, riding on principles that are often far enough from empirically proven that they are better described as "faith" than "science," even though they are presented as the latter. Management, lacking either the requisite technical skill set or trust in their subordinates to identify the b.s., is too often eager to buy into the hype.<br /><br />The information security industry as a whole would be wise to learn from this painful lesson in economics: technologies are tools, to be used by skilled analysts to digest large and complicated data sets and produce actionable intelligence. Analysts should drive the tools; the tools should not drive the analysts. Otherwise, you find yourself dangerously reliant on inflexible tools incapable of identifying the larger systemic problems, and the only remaining means of identifying a problem is the collapse of the entire system - in our case, a catastrophic compromise of security.Michael Clopperthttp://www.blogger.com/profile/04478065709387726187noreply@blogger.com0