Jeremiah Grossman

Wednesday, August 29, 2018

Below is a working theory on the evolution of The Press in the United States as it relates to its relationship with the government and the people. I expect to continue refining the theory as new perspectives and competing ideas are discussed.

Phase 1) TL;DR: The press’s primary value in the system is transmitting a message from the government to the people. The press’s customers are their subscribers who purchase news.

Consider the early days of the United States of America throughout the late 1700s and 1800s. As elected officials governed and managed the business of a young country, operationally it was crucial that they had a way to broadly communicate with their citizens. They needed to let everyone know that a strong hand was on the tiller, that the people were safe, and that they could sleep well at night.

Imagine the government’s options to communicate across the country. Think about the technology that was available. How ideas and thoughts were recorded and how they were transmitted. There was no radio. There was no television. There certainly wasn’t an Internet. Ink and paper were the state of the art. While the government could physically write down their message, outside of standing at podiums surrounded by small local gatherings of people or leafletting, they did not have a scalable means of transmitting their message to the masses. So, the government and the country needed assistance. This need is where an entity called “The Press” established its value in the larger system — transmission of the government’s messages.

The press had journalists with the necessary tools to record the government’s message down on paper, who would perform some amount of fact checking, and then package the information as a cohesive and largely transcribed story. The press also had access to a new invention called the printing press, enabling them to productize the message as, say, a newspaper. And most importantly, the press created channels of distribution, such as horses, automobiles, and the telephone, to deliver the message to a variety of locations where it could be easily purchased. Put simply, the press would be invited in by the government to document their message, print a large number of copies of newspapers, and then make the materials widely available to the people, who had the opportunity to buy them.

This was predominantly the value the press provided to the system — transmission. Of course it was important for the press to be mindful about what they printed, particularly the accuracy and relevancy of the message; otherwise people might stop paying for it in favor of another newspaper. The people depended upon the credibility of the press to tell the story right. Let’s not forget this. This dynamic between the government, the press, and the people carried through until about the 1940s and 1950s, when radio and television began changing the paradigm.

Phase 2) TL;DR: The press’s value proposition is split between transmitting the government’s message to the people and telling them how to think about the message. The press’s customers are their subscribers and advertisers.

Over time communications technology advanced and became far more affordable. Radio became commonplace in society and television sets started appearing in the average U.S. household in the early 1950s. With these modern tools the government could transmit their message directly to the people across the country and cut out the middleman — the press. The government no longer exclusively needed the press to get its message out to the masses.

And since the government could bring their message directly to the people, and the country was in a more stable position, they didn’t necessarily have to always help people sleep at night. In fact, often the opposite was true. Causing some amount of fear actually helped the government further consolidate their power. As a result, the press needed to find a new way to provide value to the system, beyond just message transmission, in order to survive.

During this period the press began shifting their value proposition from solely message transmission to telling people how to think about the government’s message. The press would take the government’s message, create a compelling narrative to help people interpret the story, and transmit their product to the masses over the television and radio airwaves. As a product, this method of news packaging and delivery was attractive to people. There was now a significant increase in information to parse from a variety of sources, too much for any one individual to decide what was important to consume. The Ted Koppels and Tom Brokaws of the television news world became the credible sources of the press and filled a void left by the government to help the country sleep well at night.

There were a couple of problems the press needed to overcome, though. For example, it was not possible for the press to make money with electronically broadcast news in the same way they did with print media. It was not mechanically possible to charge viewers or listeners for news transmitted electronically. The press’s solution was sponsored advertising: news content accompanied by commercials. As such, the more people who watched and listened, and the longer they did so, the more valuable the advertising slots became. Another challenge the press needed to overcome with television and radio was that the physical time available to watch or listen to content was more limited. There is far more room for content in the pages of a daily newspaper than in a couple of hours of daily broadcast news spots.

Collectively, the new advertising-based business model and a limited amount of space for content changed how the press covered the government’s message in two profound ways. First, it shifted the priority away from the transmission and accuracy of the message as their main value proposition in favor of whatever kept people watching and listening. And secondly, the press had to be choosier about which messages and narratives filled the available time and which didn’t. Furthermore, the press had to cater more narrowly to a particular demographic with their content than was ever necessary with print. In television and radio, the more the news captures emotions and attention, the better the press does financially.

Fast forward several decades under these conditions and the people began to clearly see a lot of bias and agenda in the press. And while bias and agenda are certainly present (how could they NOT be?), in this context it’s best not to think the press is taking a principled stand. They’re not. Instead, think of their bias and agenda as simply the press’s way of focusing their product on a particular customer, like any business would. The press is drawing a circle around a suitable demographic for their product and value proposition, which again is to both transmit the government’s message and tell people how to think about it in a way that helps to maximize ears and eyeballs. For example, there effectively isn’t a principled left-wing or right-wing press. The exact opposite is true. There are left-wing and right-wing people, for whom the press tailor-makes a narrative based on the government’s message that is compelling to them.

Phase 3) TL;DR: The press’s value proposition is telling people how to think about the government’s message. The press’s customers are advertisers.

Enter the Internet in the early 1990s, where transmission of information had become easy and inexpensive for everyone, not just within the United States, but the entire modern world. The government no longer needed the press to transmit their message to the people at all. The government could transmit directly to the people, or the people could go directly to the government. No middleman required. Without anyone needing the press for message transmission, as a business, print media fell off a cliff in under two decades. For survival’s sake, the press had to complete the transition away from transmission of the government’s message as a value proposition to nearly exclusively telling people how to think about it. That’s all the value they offer, and in doing so, message accuracy can be sacrificed whenever necessary. And of course the press’s content is heavily layered with advertisements.

As it turns out, the best way to attract more viewers for longer is to connect on a deep emotional level. Do whatever you can to rile up your viewers and they’ll continue coming back for more, even share the content forward to others in their social group, where even more ads can be lucratively served. Press outlets such as Fox News, CNN, MSNBC, and more, all across the political spectrum, have strongly adopted this approach. The press outlets that didn’t adapt died.

As a product, these sources offer people a compelling and packaged way to validate their worldview — and THAT’s what ultimately keeps the press credible and trustworthy in their minds. As evidence, notice how the Ted Koppels and Tom Brokaws of the press have been replaced by the Alex Joneses, Bill O’Reillys, Keith Olbermanns, and Don Lemons. Is this change of the starting lineup designed to give viewers access to more accurate news, or instead to get people emotionally invested? Even when the press is demonstrably biased or factually incorrect, call it ‘Fake News’ if you like, it’s extremely difficult for people to suddenly distrust the press they decided to loyally watch for so long and find another compelling source. Perception becomes reality and exists long after the occasional and quietly posted retraction.

Phase 4) TL;DR: If, via the Internet, people once again adopt a direct paid-for news model, the press’s primary value becomes providing people with an individually relevant, timely, and accurate source of the government’s message.

Going forward into the future, many feel there is a demand for relevant, timely, and accurate news sources. News that’s devoid of the influences of advertisements and paid for directly by the people. Several press outlets have set up paywalls and the business model is showing signs of success. All people have to do is register an account on a website or mobile application and supply a credit card online to become a subscriber. Another business model is micro-payments, where viewers pay for their content a la carte — by the article. A relatively new web browser named Brave, which includes ad blocking, offers native push-button micro-payment functionality that supports participating content publishers.

Here’s the thing: if any transition back to directly paid-for news truly starts gaining enough traction to threaten the ad-based model, fierce resistance from the advertising industry is sure to follow. Google and Facebook, which dominate the online advertising industry, alongside many others who make their billions annually off ‘free’ content, will do everything they can to prevent the transition. Their livelihoods depend on it. Regardless, if it so happens that the paid-for model once again takes hold, many positive externalities may come with it. Fake news goes away. Click-bait headlines go away. Online spam goes away. Privacy-invading ads go away. All of these shady practices found on the Internet depend wholly on advertisements to function. The adoption of ad blockers, which now stand at over 20% market share, indicates that people are making a choice, even if they aren’t yet paying for their content. Broad access to new technology is once again causing a shift in the press and how the government communicates its message.

Tuesday, July 17, 2018

Over the last two decades the penetration-testing / vulnerability assessment market went through a series of evolutionary waves like this…

1st Wave: “You think we have vulnerabilities and want to hire an employee to find them? You’re out of your mind!"

The business got over it and InfoSec people were hired for the job.

2nd Wave: "You want us to contract with someone outside the company, a consultant, to come onsite and test our security? You’re out of your mind!"

The business got over it and consultant pen-testing took over.

3rd Wave: "You want us to hire a third-party company, a scanning service, to test our security and store the vulnerabilities off-site? You’re out of your mind!"

The business got over it and SaaS-based vulnerability assessments took over.

4th Wave: "You want us to allow anyone in the world to test our security, tell us about our vulnerabilities, and then reward them with money? You’re out of your mind!"

Businesses are getting over it and the crowd-sourcing model is taking over.

The evolution reminds us of how the market for ‘driving’ and ‘drivers’ changed over the last century. People first drove their own cars around, then many hired personal drivers, then came car-for-hire services (cabs / limos) with ‘professional’ drivers you didn’t personally know, and now Uber/Lyft, where you basically jump into a complete stranger’s car. Soon, we’ll jump into self-driving cars without a second thought.

As we see, each new wave doesn't necessarily replace the last -- it's additive. Provided there is an economically superior ROI and value proposition, people also typically get over their fears of the unknown and will adopt something new and better. It just takes time.

Monday, May 07, 2018

There is a serious misalignment of interests between Application Security vulnerability assessment vendors and their customers. Vendors are incentivized to report everything they possibly can, even issues that rarely matter. On the other hand, customers just want the vulnerability reports that are likely to get them hacked. Every finding beyond that is a waste of time, money, and energy, which is precisely what’s happening every day. Let’s begin exploring this with some context:

Every Application Security vulnerability statistics report published over the last 10 years states that the vast majority of websites contain one or more serious issues — typically dozens. To be clear, we’re NOT talking about websites infected with malvertisements or network-based vulnerabilities that can be trivially found via Shodan and the like. Those are separate problems. I’m talking exclusively about Web application vulnerabilities such as SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and several dozen more classes. The data shows only half of those reported vulnerabilities ever get fixed, and doing so takes many months. Pair this with Netcraft’s data stating there are over 1.7B sites on the Web. Simple multiplication tells us that’s A LOT of vulnerabilities in the ecosystem lying exposed.
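To make the scale concrete, here is a back-of-the-envelope sketch in Python. The 1.7B site count comes from the Netcraft figure above; the per-site vulnerability count, the 50% fix rate, and the assumption that only 1% of all sites resemble the assessed population are deliberately conservative placeholders, not measurements:

```python
# Back-of-the-envelope estimate of exposed Web vulnerabilities.
# All inputs except the Netcraft site count are illustrative assumptions.
sites = 1_700_000_000          # Netcraft: sites on the Web
assessed_like_fraction = 0.01  # assume only 1% resemble the assessed population
vulns_per_site = 12            # "typically dozens" of serious issues; use a dozen
fix_rate = 0.5                 # "only half ... ever get fixed"

exposed = sites * assessed_like_fraction * vulns_per_site * (1 - fix_rate)
print(f"{exposed:,.0f}")  # 102,000,000
```

Even under these deliberately conservative assumptions, the result lands above a hundred million exposed vulnerabilities.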

The most interesting and unexplored question to me these days is NOT the sheer size of the vulnerability problem, or why so many issues remain unresolved, but instead figuring out why all those ‘serious’ website vulnerabilities are NOT exploited. Don’t get me wrong, a lot of websites certainly do get exploited, perhaps on the order of millions per year, but it’s certainly not in the realm of tens or even hundreds of millions like the data suggests it could be. And the fact is, for some reason, the vast majority of plainly vulnerable websites with these exact issues remain unexploited for years upon years.

Some possible theories as to why are:

These ‘vulnerabilities’ are not really vulnerabilities in the directly exploitable sense.

The vulnerabilities are too difficult for the majority of attackers to find and exploit.

The vulnerabilities are only exploitable by insiders.

There aren’t enough attackers to exploit all or even most of the vulnerabilities.

There are more attractive targets or exploit vectors for attackers to focus on.

Other plausible theories?

As someone who worked for Application Security vulnerability assessment vendors for 15+ years, here is something to consider that speaks to theories #1 and #2 above.

During the typical sales process, ‘free’ competitive bakeoffs with multiple vendors are standard practice. 9 times out of 10, the vendor who produces the best results in terms of high-severity vulnerabilities with low false positives will win the deal. As such, every vendor is heavily incentivized to identify as many vulnerabilities as they can to demonstrate their skill and overall value. Predictably then, every little issue will be reported, from the most basic information disclosure issues to the extremely esoteric and difficult to exploit. No vendor wants to be the one who missed or didn’t report something that another vendor did and risk losing a deal. More is always better. As further evidence, ask any customer about the size and fluff of their assessment reports.

Understanding this, the top vulnerability assessment vendors invest millions upon millions of dollars each year in R&D to improve their scanning technology and assessment methodology to uncover every possible issue. And it makes sense because this is primarily how vendors win deals and grow their business.

Before going further, let’s briefly discuss why we do vulnerability assessments in the first place. When it comes to Dynamic Application Security Testing (DAST), specifically testing in production, the whole point is to find and fix vulnerabilities BEFORE an attacker finds and exploits them. It’s just that simple. And technically, it takes the exploitation of just one vulnerability for the attacker to succeed.

Here’s the thing: if attackers really aren’t finding, exploiting, or even caring about these vulnerabilities, as we can infer from the supplied data, the value in discovering them in the first place becomes questionable. The application security industry is heavily incentivized to find vulnerabilities that for one reason or another have little chance of actual exploitation. If that’s the case, then all those vulnerabilities that DAST is finding rarely matter much, and we’re collectively wasting precious time and resources focusing on them.

Let’s tackle Static Application Security Testing (SAST) next.

The primary purpose of SAST is to find vulnerabilities during the software development process BEFORE they land in production, where they’ll eventually be found by DAST and/or exploited by attackers. With this in mind, we must then ask what the overlap is between vulnerabilities found by SAST and DAST. If you ask someone who is an expert in both SAST and DAST, specifically those with experience in vulnerability correlation, they’ll tell you the overlap is around 5-15%. Let’s state that more clearly: somewhere between 5-15% of the vulnerabilities reported by SAST are found by DAST. And let’s remember, from an I-don’t-want-to-be-hacked perspective, DAST- or attacker-found vulnerabilities are really the only vulnerabilities that matter. Conceptually, SAST helps find those issues earlier. But does it really? I challenge anyone, particularly the vendors, to show actual broad field evidence.

Anyway, what then are all those OTHER vulnerabilities that SAST is finding, which DAST / attackers are not? Obviously, it’ll be some combination of theories #1 - #3 above. They’re not really vulnerabilities, they’re too difficult to remotely find and exploit, or attackers don’t care about them. In any case, what’s the real value of the other 85-95% of vulnerabilities reported by SAST? A: Not much. If you want to know why so many reported 'vulnerabilities' aren’t fixed, this is your long-winded answer.

This is also why cyber-insurance firms feel comfortable writing policies all day long, even knowing full well their clients are technically riddled with vulnerabilities, because statistically they know those issues are unlikely to be exploited or lead to claims. That last part is key — claims. Exploitation of a vulnerability does not automatically result in a ‘breach,’ which does not necessarily equate to a ‘material business loss,’ and loss is the only thing the business or their insurance carrier truly cares about. Many breaches do not result in losses. This is a crucial distinction that many InfoSec pros fail to make: breach and loss are NOT the same thing.

So far we’ve discussed the misalignment of interests between Application Security vulnerability assessment vendors and their customers. The net result is that we’re wasting huge amounts of time, money, and energy finding and fixing vulnerabilities that rarely matter. If so, the first thing we need to do is come up with a better way to prioritize and justify remediation, or not, of the vulnerabilities we already know exist and should care about. Secondly, we must more efficiently invest our resources in the application security testing process.

Let’s make up some completely bogus numbers to fill in the variables. In a given website we know there’s a vanilla SQL Injection vulnerability in a non-authenticated portion of the application, which has a 50% likelihood of being exploited over a one-year period. If exploitation results in a material breach, the expected loss is $1,000,000 for incident handling and clean-up. Applying our formula:
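The formula in question is the standard expected-loss calculation: annual likelihood of exploitation multiplied by the expected loss from a breach. A minimal sketch in Python using the bogus numbers above (the function name is mine, not an industry term):

```python
def expected_annual_loss(likelihood: float, impact: float) -> float:
    """Expected loss over one year: exploitation probability times breach cost."""
    return likelihood * impact

# 50% annual likelihood of exploitation, $1,000,000 expected breach loss
print(expected_annual_loss(0.50, 1_000_000))  # 500000.0
```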

In which case, it can be argued that if the SQL Injection vulnerability in question costs less than $500,000 to fix, then that’s the reasonable choice. And the sooner the better. If remediation costs more than $500,000, and I can’t imagine why, then leave it as is. The lesson is that the less a vulnerability costs to fix, the more sense it makes to do so. Next, let’s change the variables to the other extreme. We’ll cut the expected loss figure in half and reduce the likelihood of breach to 1% over a year.
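The same likelihood-times-loss calculation with the second set of example numbers:

```python
# 1% annual likelihood of exploitation, $500,000 expected breach loss
likelihood = 0.01
impact = 500_000
print(likelihood * impact)  # 5000.0
```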

Now, if remediation of the SQL Injection vulnerability costs less than $5,000, it makes sense to fix it. If more, or far more, then one could argue it makes business sense not to. This is the kind of decision that makes the vast majority of information security professionals extremely uncomfortable, and it’s why they instead like to ask the business to “accept the risk.” This way their hands are clean, they don’t have to expose their inability to do risk management, and they can safely pull an “I told you so” should an incident occur. Stated plainly, if your position is recommending that the business fix each and every vulnerability immediately regardless of the cost, then you’re really not on the side of the business and you will continue being ignored.

What’s needed to enable better decision-making, specifically how to decide which known vulnerabilities to fix or not fix, is a purpose-built risk matrix specifically for application security. A matrix that takes each vulnerability class, assigns a likelihood of actual exploitation using whatever data is available, and contains an expected loss range. Where things get far more complicated is that the matrix should take into account the authentication status of the vulnerability, any mitigating controls, the industry, resident data volume and type, insider vs. external threat actor, and a few other things to improve accuracy.
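A hypothetical sketch of the simplest version of such a matrix. Every vulnerability class, likelihood, and loss range below is an invented placeholder; a real matrix would be derived from field exploitation data and would account for the extra factors listed above:

```python
# Hypothetical application-security risk matrix (all values are placeholders).
# Maps vulnerability class -> (annual exploitation likelihood, loss range in $).
RISK_MATRIX = {
    "SQL Injection (unauthenticated)": (0.50, (100_000, 1_000_000)),
    "Stored XSS (authenticated)":      (0.05, (10_000, 100_000)),
    "Information Disclosure":          (0.01, (1_000, 10_000)),
}

def remediation_budget(vuln_class: str) -> float:
    """Rough upper bound worth spending on a fix:
    likelihood times the midpoint of the loss range."""
    likelihood, (low, high) = RISK_MATRIX[vuln_class]
    return likelihood * (low + high) / 2

print(remediation_budget("SQL Injection (unauthenticated)"))  # 275000.0
```

Even something this crude hands the business a dollar figure to weigh against a remediation quote, which is more than a HIGH/MEDIUM/LOW label ever does.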

While never perfect, as risk modeling never is, I’m certain we could begin with something incredibly simple that would far outperform the way we currently do things — HIGH, MEDIUM, LOW (BLEH!). When it comes to vulnerability remediation, how exactly is a business supposed to make good, informed decisions using traffic light signals? As we’ve seen, and as all previous data indicates, they don’t. Everyone just guesses and 50% of issues go unfixed.

InfoSec's version of the traffic light: this light is green because, in most places where we put this light, it makes sense to be green, but we're not taking into account anything about the current street’s situation, location, or traffic patterns. Should you trust that the light has your best interest at heart? No. Should you obey it anyway? Yes. Because once you install something like that, you end up having to follow it, no matter how stupid it is.

Assuming for a moment the aforementioned matrix is created, it suddenly fuels the solution to the lack of efficiency in the application security testing process. Since we’ll know exactly what types of vulnerabilities we care about in terms of actual business risk and financial loss, investment can be prioritized to look only for those and ignore all the other worthless junk. Those bulky vulnerability assessment reports would likely dramatically decrease in size and increase in value.

If we really want to push forward our collective understanding of application security and increase the value of our work, we need to completely change the way we think. We need to connect pools of data. Yes, we need to know what vulnerabilities websites currently have — that matter. We need to know what vulnerabilities various application security testing methodologies actually test for. Then we need to overlap this data set with what vulnerabilities attackers predominantly find and exploit. And finally, within that data set, which exploited vulnerabilities lead to the largest dollar losses.

If we can successfully do that, we’ll increase the remediation rates of the truly important vulnerabilities, decrease breaches AND losses, and more efficiently invest our vulnerability assessment dollars. Or, we can leave the status quo for the next 10 years and have the same conversations in 2028. We have work to do and a choice to make.

Tuesday, March 27, 2018

The biggest and most important unsolved problem in Information Security, arguably all of IT, is asset inventory. Rather, the lack of an up-to-date asset inventory that includes all websites, servers, databases, desktops, laptops, data, and so on. Strange as it sounds, the vast majority of organizations with more than even a handful of websites simply do not know what they are, where they are, what they do, or who is responsible for them. This is also strange because an asset inventory is the first step of every security standard and recommended by every expert.

After many years of research, it turns out the reason why is rather simple: there are currently no enterprise-grade products, or at least anything widely adopted, that solve this problem. This is important because it’s obviously impossible to secure what you don’t know you own. And, without an up-to-date asset inventory, the most basic and reasonable security questions simply can’t be answered:

What percentage of our websites have been tested for vulnerabilities?

Which of our websites have GDPR, PCI-DSS, or other compliance concerns?

Which of our websites are up-to-date on their patches, or not?

An organization has been acquired, what IT assets do they have?

As of today, with Bit Discovery, all of this is about to change. Bit Discovery is a website asset inventory solution designed to be lightning fast, super simple, and incredibly comprehensive.

While identifying the websites owned by a particular organization may sound simple at first blush, let me tell you, it’s not. In fact, asset inventory is probably the most challenging technical problem I’ve ever worked on in my entire career. As Robert ‘RSnake’ Hansen, a member of Bit Discovery’s founding team, describes in glorious detail, the variety of challenges is absolutely astounding. Just in terms of CPU, memory, disk, bandwidth, software, and scalability in general, we’re talking about a legitimate big data problem.

Then there are the challenges that websites may exist on different IP ranges, domains, and hosting providers, fall under a variety of marketing brands, be managed by various subsidiaries and partners, be confused with domain typo-squatters and phishing scams, and may come and go without warning. Historically, finding all of an organization’s websites is conducted through on-demand scanning seeded with a domain name or IP address range. Anyone who has ever tried this model knows it’s tedious, time consuming (hours, days, etc.), and prone to both false positives and false negatives. It became clear that solving the asset inventory problem required a completely different approach.

Bit Discovery, thanks to the acquisition and integration of OutsideIntel, is unique because we take routine snapshots of the entire Internet, organizing massive amounts of information (WHOIS, passive DNS, netblock info, port scans, web crawling, etc.), extracting metadata, and distilling it down to simple and elegant asset inventory tracking. As a completely web-based application, this is what gives Bit Discovery its incredible speed and comprehensiveness. Instead of waiting days or weeks for an asset discovery scan to complete, searches take just seconds or less.

After years of hard work and months of private beta product testing with dozens of Fortune 500 companies, we’re finally ready to officially announce Bit Discovery, and we’re just weeks away from our first full production release. I’m particularly proud and personally honored to be joined by an absolutely world-class founding team. As an entrepreneur you couldn’t ask for a better, more experienced, or more inspiring group of people. All of us have worked together for many years on a variety of projects, and we’re ready for our next adventure! Our vision is that every organization in the world needs an asset inventory, one that includes, as we like to say, “Every. Little. Bit.”

As you can see, our goals at Bit Discovery are extremely ambitious and we need strong financial backing to fully realize them. As part of the company launch, we’re also thrilled to announce a $2,700,000 early stage round led by Susan Mason (Managing Partner, Aligned Partners).

During our fundraising process, we interviewed well over a dozen exceptional venture capital firms, and we were very picky. Aligned’s experience, style, and investment approach matched with us perfectly. Their team specializes in experienced founding teams who have been-there-and-done-that, who operate companies in a capital-efficient manner, who know their market and customers well, and where the founders’ and investors’ interests are in alignment. That’s us, and we couldn’t be happier with the partnership.

Collectively, between Bit Discovery’s founding team and investor group, I’ve never seen or heard of a more experienced and accomplished team that brings everything together for a company launch. We have everything we need for a runaway success story. We have the right team, the right product, the right financial partners, and we’re at the right time in the market. All we have to do is put in the work, serve our customers well, and the rest will take care of itself.

Finally, the Bit Discovery team wants to personally thank all the many people who helped us along the way and behind the scenes. We sincerely appreciate everyone’s help. We couldn’t have gotten this far without you. Look out world, we’re ready to do this!

Friday, March 09, 2018

Two years ago, I joined SentinelOne as Chief of Security Strategy to help in the fight against malware and ransomware. I’d been following the evolution of ransomware for several years prior, and like a few others, saw that all the ingredients were in place for this area of cyber-crime to explode. We knew it was likely that a lot of people were going to get hurt, that significant damage could be inflicted, and that something needed to be done. The current anti-malware solutions, even the most popular, were ill-equipped to handle the onslaught. Unfortunately, we weren’t wrong, and that was about the time I was first introduced to SentinelOne.

When I met SentinelOne, it was just a tiny Silicon Valley start-up. It was quickly apparent to me that they had the right team, the right technology, and most importantly – the right vision necessary to make a meaningful difference in the world. SentinelOne is something special, a place poised for greatness, and an opportunity where I knew I could make a personal impact. The time was right for me, so I made the leap! Today, only a short while later, SentinelOne is a major player in endpoint protection with super high aspirations.

Since joining I have had a front row seat to several global ransomware outbreaks, including WannaCry, nPetya, and other lesser-known malware events, as the SentinelOne team "laid the hardcore smackdown" on all of them. One particularly memorable event was WannaCry launching at the exact moment I was on stage giving a keynote presentation to raise awareness about ransomware. Quite an experience, but also a proud moment, as all of our customers remained completely protected. One can't hope for better than that!

On SentinelOne's behalf, I have had the unique opportunity to participate in the global malware dialog, learn a ton more about the information security industry, continue helping protect hundreds of companies, and do something I’m personally proud of: launch the first ever product warranty against ransomware ($1,000,000). I contributed to some cutting-edge research alongside some truly brilliant and passionate people. It’s been a tremendous experience, one which I’m truly thankful for.

I wish I had all the time in the world to pursue my many interests, which, as an entrepreneur, is one of my greatest challenges. For me, it will soon be time to announce and launch my next adventure -- a new startup! I’ll share more details in a few weeks, but it’s something my co-founders and I have been quietly working on for years.

The best part is that I don’t have to say goodbye to SentinelOne. I’ll be moving into a company advisory role. This way I still get to remain connected and in-the-know, and continue helping SentinelOne achieve its full potential.

For now, a very special thank you to everyone at SentinelOne, especially Tomer Weingarten (Co-Founder, CEO) for leading the charge and allowing me to be a part of the journey.

Tuesday, April 18, 2017

How would you react if I told you that computer security experts are six times more likely to run just ad-blocking software on their PCs than just anti-malware? Would you be surprised?

That was the result of a Twitter poll I conducted last year, in which more than 1,000 self-identified computer security experts shared that they are more concerned about ads than malware. While social media polls are admittedly unscientific, I’d argue these numbers are actually pretty close to reality, which means that roughly three out of four computer security experts view ad blocking as a more indispensable part of their protection than anti-virus software. Let that sink in for a moment.

I understand the business model… really, I do. Publishers rely on their viewers seeing ads because that’s how they make their money. In return, they provide all of us with free content and services. If ads are blocked, publishers make less money, and the free content and services dry up. On the other hand, these same ads are one of the leading threats to personal security and privacy. So, what we have here is an online version of a Mexican standoff: neither side is able to proceed without exposing itself to danger.

So here we are without many technical options: the only thing internet users can do to protect themselves is install an ad blocker (as hundreds of millions of users have already done), and the only thing a publisher can do is use an ad-blocker detector on their website(s), which lets them decide to block content and/or issue a plea to whitelist their ads. Unfortunately, the technology model for publishers to ‘safely’ include third-party content such as ads in their pages is also lacking. There just isn’t a comprehensive and scalable way to check billions of ads daily to see if they’re safe to distribute, or whether the origin of an ad is reputable. Of course, publishers can also supplement or replace advertising revenue with a paid-for-content model, hosting conferences, asking for donations, and so on.
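For the curious, the ad-blocker detection mentioned above is commonly implemented with a "bait" element: the page inserts an element styled and named like an ad, then checks whether the browser (really, a blocking extension) has hidden it. Here is a minimal sketch in TypeScript; the class names and the `isLikelyBlocked` helper are illustrative assumptions, not taken from any particular detection library:

```typescript
// Pure decision logic: given what the page observes about its bait
// element, guess whether an ad blocker hid it. Blockers typically set
// display:none on matched elements or collapse them to zero height.
function isLikelyBlocked(baitHeight: number, baitDisplay: string): boolean {
  return baitHeight === 0 || baitDisplay === "none";
}

// In a browser, the bait would be created roughly like this (class
// names such as "ad-banner" are the kind filter lists match on):
//
//   const bait = document.createElement("div");
//   bait.className = "ad-banner adsbox";
//   bait.style.height = "1px";
//   document.body.appendChild(bait);
//   setTimeout(() => {
//     const style = getComputedStyle(bait);
//     if (isLikelyBlocked(bait.offsetHeight, style.display)) {
//       // blocker detected: withhold content or show a whitelist plea
//     }
//     bait.remove();
//   }, 100);

console.log(isLikelyBlocked(0, "none"));  // prints true  (bait hidden)
console.log(isLikelyBlocked(1, "block")); // prints false (bait rendered)
```

The delay before checking matters in practice, since some blockers hide elements only after the page finishes loading.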

Let's also be very clear: neither publishers, advertisers, nor the ad-tech industry that binds everything together takes on any liability for malvertising, infecting a user with malware, or the resultant damage. This also means they have zero incentive to meaningfully address the problem, and they never seem to want to talk about the security concerns that make ad blocking an essential security practice. They only want to talk about the money their side is losing, or how to make ads more visually tolerable. But even if ads magically become less obnoxious and less costly in terms of bandwidth, we still have the security problem. Until the advertising technology industry admits that its product, the ads themselves, is simply dangerous, there can be no real resolution.

Wednesday, February 01, 2017

As a long-time InfoSec veteran and entrepreneur, I’m often asked by company founders to join their advisory board and lend a hand. Sometimes the founders need someone with experience they can trust to bounce ideas off of, provide guidance on how to scale their business, point out the many pitfalls to avoid, make key introductions, and so on. I’ve been in this advisor role for many years, and through a startup incubator I’ve mentored more than fifty young businesses over the last five years alone. Making this contribution has been highly rewarding, both personally and professionally. It leverages the many successes and mistakes of my career to help others. Advising and mentoring is something I plan to continue doing for the foreseeable future. The only downside is that due to time constraints, I have to be extremely selective.

When I come across a hot new start-up, I fully research the company, try out the product, research their target market, meet the management team, speak with a handful of customers, and if I have something useful to offer, only then do I feel comfortable enough to get involved. Oh, another requirement is that none should be competitive with one another. Because I do my homework and have a deep understanding of the information security industry, I’m often asked by colleagues what companies I’d recommend in a particular space or a product to solve a particular enterprise problem. For those interested, below is where I’ve placed my bets and what I’m recommending.

Full Disclosure: I’ve a financial interest in most of these companies below, but not all of them. And if I don't have a stake, it doesn't mean I won't recommend them -- I can be just as impressed otherwise. I’ve also indicated where I serve in an official advisory capacity.

"The pioneer and innovator in crowdsourced security testing for the enterprise, Bugcrowd harnesses the power of tens of thousands security researchers to surface critical software vulnerabilities and level the playing field in cybersecurity. Bugcrowd also provides a range of responsible disclosure and managed service options that allow companies to commission a customized security testing program that fits their specific requirements. Bugcrowd’s proprietary vulnerability disclosure platform is deployed by Tesla, Pinterest, Western Union, Fitbit and many others."

"WhiteHat Security is the leading provider of website risk management solutions. Sentinel, WhiteHat's flagship product, is the most accurate, complete and cost-effective website vulnerability management solution available. It delivers the flexibility, simplicity and manageability that organizations need to take control of website security and prevent Web attacks. WhiteHat Sentinel is built on a Software-as-a-Service (SaaS) platform designed from the ground up to scale massively, support the largest enterprises and offer the most compelling business efficiencies, lowering your overall cost of ownership."

"Kenna is a software-as-a-service Risk and Vulnerability Intelligence platform that accurately measures risk and prioritizes remediation efforts before an attacker can exploit an organization’s weaknesses. Kenna automates the correlation of vulnerability data, threat data, and 0-day data, analyzing security vulnerabilities against active Internet breaches so that InfoSec teams can prioritize remediations and report on their overall risk posture."

"AsTech Consulting is a security consulting company which helps clients understand their risks and what to do about them. As independent security specialists, we employ very experienced security professionals, more than half of which have over 15 years of relevant experience."

Thursday, October 20, 2016

It’s common for long-time information security experts like myself to be asked what keeps us in the security industry. Some say it’s a good, stable job that nicely pays the bills. Others find the work interesting and enjoy the constant intellectual challenge. Some like the people, the community, the culture, and the exchange of ideas. Of course, for many, it’s some combination of all these things. For myself, while each of the above plays a part, I must admit those haven’t been my core reasons to stay for a long time now.

Like I’ve said many times in the past, the Internet is the single greatest invention we’re likely to witness in our lifetime. The Internet is a place that now connects over 2 billion people. The Internet is how we communicate and keep up with friends and family. It’s where we shop. It’s how we learn about ourselves and the world. It’s where we bank and pay bills. It’s what entertains us and how we get from place to place. It’s how we better ourselves. Entire economies are now dependent on the Internet. If you think about it, we’re often more open and honest about our most intimate secrets with the Google search box than with any of our closest confidants. There is not a single person among us, or perhaps anyone we know, who won’t be online today. Something this important, this vital to the world and to humanity, must be protected. The Internet.

The time each of us has in this life is limited and far too short. Every day is a gift. And in that time, few people ever get an opportunity to be a part of something greater than themselves. A chance to make an impact and to do something that truly matters. Internet security matters. So for me, playing even a small part in helping to protect the Internet and the billions of people connected to it feels like a good way to spend one’s lifetime. That’s why I’m still here.

In the immortal words of Dan Geer, “There is never enough time. Thank you for yours.”

Monday, June 06, 2016

Today is a big day for me. I’m joining a company called SentinelOne, but I really don’t think of it as a job. I’ve accepted an opportunity to work side by side with other brilliant and highly motivated people, all of us helping to solve important and challenging InfoSec problems; in this case, malware and ransomware. You see, more than anything, I want to make a positive impact on InfoSec. As I’ve said many times, we who work in InfoSec are responsible for protecting the greatest invention we’ll see in our lifetime: the Web, the Internet, and the billions of people using them every day. That’s our mission, our calling. As such, I’ve always kept an evolving list of our industry’s biggest challenges, which I include in most of my slide decks.

Intersection of security guarantees and cyber-insurance

Explosion of Ransomware

Vulnerability remediation

Industry skill shortage

Measuring the impact of SDLC security controls

The only problem on the list I haven’t gotten the chance to work on is ransomware, an incredibly effective and fast-growing form of malware that’s taking over. I’ve long railed hard about the crap antivirus products on the market and the billions of dollars people and companies spend annually to effectively make themselves less secure. Yes, that’s right, I said LESS secure. The FBI recently published that ransomware victims paid out $209 million in Q1 2016 compared to $24 million for ALL of 2015. Some non-trivial percentage of those ransom dollars will be used for R&D, so the smart money says ransomware will quickly get even more sophisticated and out of hand. And to that point, in recent and well publicized news, ransomware is also responsible for disrupting the care of patients in a few hospitals. This can’t be allowed — lives are at risk!

In my life after WhiteHat, I looked at a ton of companies and interesting opportunities where I could lend a helping hand, of which there was no shortage. My inbox was crushed with many worthy projects, but I knew I had to choose wisely. Then out pops a company with some super cool tech that few have heard of: SentinelOne. SentinelOne is right smack in the middle of the malware/ransomware war, in what Gartner calls next-generation endpoint protection (NG EPP). I met with the founders and the team, all super cool and passionate people. A real gem of a start-up. I felt strongly that I needed to join this fight. Plus, I’ll be working on some exciting stuff behind the scenes that I can’t wait to share with the world. Good things take time, so please, stand by!

About Me

Jeremiah Grossman's career spans nearly 20 years; he has lived a literal lifetime in computer security and become one of the industry's biggest names. He has received a number of industry awards and has been publicly thanked by Microsoft, Mozilla, Google, Facebook, and many others for his security research. Jeremiah has written hundreds of articles and white papers. As an industry veteran, he has been featured in hundreds of media outlets around the world. Jeremiah has been a guest speaker on six continents at hundreds of events, including at many top universities. All of this came after Jeremiah served as an information security officer at Yahoo!