Recently in Insurance Category

There was an interesting article on Wired.com recently that put a new twist on an old topic: What's the best way to make sure the internet, and all of the information that travels on it every day, is safe? How do you really make cybersecurity secure? After all, the safer the information, the more secure people will feel, and the use of the web, for everything from e-commerce to portable electronic healthcare records, will grow. The flip side is just as true: the more hacks, hackers, and data breaches, the slower the pace of progress. The good will be harder to come by if the bad is hard to avoid.

The CDC, otherwise known as the Centers for Disease Control and Prevention, has been much in the news recently. Chances are, if you've seen news stories about the Ebola outbreak in West Africa, or the MERS outbreak earlier this year, the CDC has come up in more than just passing. It's the clearinghouse for health-related information, combating communicable diseases the world over. An article by Betsy McKay, Nicholas Bariyo, and Drew Hinshaw, which appeared in the Review section of the August 23-24, 2014 Weekend Edition of the Wall Street Journal, talks about the invaluable help the CDC gave to a country that used to be at risk of virulent Ebola outbreaks. Uganda once sent blood samples to the CDC's facilities in Atlanta to be screened for Ebola. Now, thanks to technology and training the CDC provided, Ugandans do the screening themselves, in country, which lets them detect outbreaks of the deadly virus sooner, respond to them more quickly, and stop them before they do large-scale damage.

A central clearinghouse for ideas, both proven and proposed, to safeguard digital information seems like a good idea. Having a one-size-fits-all approach, in which a single government entity is the one upon which everyone fighting the problem relies, may not be. That's not really even the job the CDC is doing with Ebola.

Look at how the Federal Trade Commission is policing cybersecurity: the whole point of its Reasonable Precautions cybersecurity standard, and its enforcement and codification on a case-by-case basis, is that "Reasonable Precautions" become reasonable, or not, based on the particular facts of a given situation. What might be the right protection for digital information exchanged between wholesale distributors and retailers might not be sufficient to protect information between retailers and consumers, and that in turn might not be enough to safeguard patients' healthcare histories when they are exchanged among medical providers. What might be a commercially reasonable effort to safeguard information in one industry might not be in another.

The FTC encourages individual companies, and the industries in which they compete, to voluntarily join together to ensure data security. By making the terms Industry Standard Practices and Commercially Reasonable Efforts mean something substantive, companies can protect themselves against FTC enforcement actions for lax data security, as we've previously noted. Look no further than the April 7, 2014 decision of U.S.D.J. Esther Salas, in The Federal Trade Commission, Plaintiff, v. Wyndham Worldwide Corp., et al., Defendants, Civil Action No. 13-1887 (ES), United States District Court, D. New Jersey, to see why. If a company can't figure out what the FTC wants it to do to protect its customers' data, then it should create, and live by, Industry Standard Practices which will become Commercially Reasonable Efforts if all the major companies in the industry implement them. Many companies already say they do this anyway, right in their privacy policies. Instead of meaningless legal verbiage, make the terms mean something concrete; show they can work, and the FTC will have little to complain about, even if those efforts occasionally fail. Some of the most vulnerable industries, including retail, are banding together to do just that.

The Retail Industry Leaders Association, or RILA, as we previously noted, formed a voluntary clearinghouse, known as the Retail Cyber Intelligence Sharing Center, or R-CISC, to develop and share industry-leading practices in cybersecurity by communicating among themselves the information they learn about threats and defenses. The reported backers of the initiative have put in a lot of effort: they've conferred with cybersecurity experts and involved interested government agencies. They also have a lot at stake: credit cards and financial information are common targets; just ask the RILA members.

One main benefit of a CDC for the wired world, according to Peter W. Singer, is the trust and confidence it will bring to all those who rely on it. By bringing the best and brightest together under one centralized government-funded roof, it would allow users to know that independent experts, with their best interests in mind, were on the job, fighting off the bad guys. That's a good thing; but is that the only way to achieve it?

What if the businesses which hold their customers' information online were held accountable for not doing enough to protect that data? What if they faced the loss of business, and profits, as well as a government enforcement action, if they didn't do enough? What lengths would they go to in order to keep their customers' trust?

If you look at some quotes in the RILA press release, from the people involved in forming the R-CISC, you'll see that trust is a recurring theme there, too.

There are a few recent news stories that business owners, fraud investigators, and consumers should be aware of. Though not necessarily related, they point out the ever-growing need to protect digital information and the consequences for those who do not. Cybersecurity, it seems, is something that will affect everyone, eventually.

The topic of the first story, unfortunately, is common; the numbers, thankfully, are not, though we should all hope they stay that way. According to an article by Danny Yadron in the Wall Street Journal, last updated on the evening of August 5, 2014, a gang of Russian hackers has amassed 1.2 billion stolen user names and passwords from approximately 500 million unsuspecting people. According to the private security firm that discovered the theft, Hold Security of Milwaukee, the hackers obtained the information from 420,000 websites, allegedly ranging from leaders in major industries to small businesses and personal websites. No measurable harm evidently has come from the theft, at least not yet. The hackers reportedly are so far using the data only to send spam messages on social media accounts. That doesn't mean the people whose information was stolen are free and clear: there has been a growing trend in recent years, according to the report, of cybercriminals amassing online credentials for later use. While that later use isn't specified, it shouldn't be all that hard to guess. Consumers, according to the report, often use the same user names and passwords across various websites. If a hacker learns a user name and password for one account, it's not hard to imagine that the hacker could also gain access to the consumer's other accounts, including on websites that store, or have access to, the consumer's financial information, such as credit card numbers.
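The danger of password reuse is easy to demonstrate. As a hedged illustration (not something from the article, and no substitute for a real password manager), the sketch below uses Python's standard key-derivation function to show how one master secret can yield unrelated passwords for each site, so that a credential stolen in one breach is useless everywhere else. All names and values here are hypothetical.

```python
import base64
import hashlib

def site_password(master_secret, site):
    """Derive a distinct password for each site from one master secret.

    Because the site name is mixed in as the salt, a credential stolen
    from one site tells a thief nothing about any other site.
    """
    derived = hashlib.pbkdf2_hmac(
        "sha256",
        master_secret.encode("utf-8"),
        site.encode("utf-8"),   # per-site salt
        100_000,                # iterations slow down brute-force attempts
    )
    return base64.urlsafe_b64encode(derived)[:16].decode("ascii")

# The same secret yields unrelated passwords on different sites, so a
# breach at one site cannot be replayed against another.
pw_shop = site_password("correct horse battery staple", "shop.example.com")
pw_bank = site_password("correct horse battery staple", "bank.example.com")
```

The hackers' 1.2 billion credentials are valuable precisely because most people do the opposite of this: one password, reused everywhere.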

In order to see the harm that was done already, merely because the hackers have the user names and passwords, you have to remember that just exposing your customers' confidential information sometimes is enough to trigger an enforcement action by the Federal Trade Commission, which forces businesses to take reasonable precautions to protect their customers' digital information. If you remember the LabMD case, which we already spent some time discussing, the FTC's claims of unfair or deceptive acts or practices in, or affecting, commerce were directed against LabMD for allegedly, and inadvertently, posting the confidential information of fewer than 10,000 individuals on a file-sharing platform that was intended to share music files instead. During the FTC's administrative law trial against LabMD, the agency reportedly did not even plan to present any witnesses who were victims of the alleged ID theft; exposing the information, allegedly, was enough.

We're not comparing the theft of user names and passwords to exposing confidential health information, which allegedly is what occurred in the LabMD case. Allowing the theft of user names and passwords could lead to some real trouble, though, especially if it leads to the theft of user financial information, such as credit card numbers. That leads straight to the second news story.

What does investigating Insurance Fraud have in common with the FIFA World Cup currently taking place in Brazil? More than you might think, especially if you're a world-class goalie trying to stop a penalty kick.

The hardest job in all of soccer, or football as the rest of the world calls it, arguably is that of the goalkeeper facing a penalty kick. Think of how big that goal really is. Now think of how small that keeper actually is. There is no comparison between the two. Add in the fact that tied elimination games are decided on penalty kicks, and you'll understand the pressure involved, especially when you're playing for the World Cup and know that two World Cup Finals have been decided on penalty shootouts. Many people complain about how unfair it is to decide a game that way, especially when, as they see it, a goalie has to get lucky to stop a penalty kick. Just yesterday, Sunday, June 29, 2014, an article in the New York Times by Rob Hughes lamented the fact that Brazil had just beaten Chile on penalty kicks, especially because Chile's last attempt hit the goalpost and didn't go in.

How does a keeper have any chance at all to stop the open, unimpeded shot, from 12 yards away, when the penalty-taker has all that room to kick at? As it turns out, he does it in much the same way a fraud investigator detects a lie: He does his homework, knows what to look for, and then goes on instinct. Unlike a fraud investigator, though, not many people expect the keeper to get it right.

A study recently was conducted to see if there was any way to help the goalkeepers with their nearly impossible task. It came up with a few answers, which also, though inadvertently, may give some pointers on how to conduct a fraud investigation. Entitled "The development of a method for identifying penalty kick strategies in association football", it is authored by Benjamin Noël, Philip Furley, John van der Kamp, Matt Dicks and Daniel Memmert, and is published in the Journal of Sports Sciences.

Figuring out whether someone is lying or telling the truth isn't easy, as we've previously written.

Investigating Insurance Fraud isn't easy, either. Just ask anyone who works in SIU, and they'll tell you about the legwork involved: the interviews to take; the documents to get and go over; the data to analyze. And it all comes down to one thing: Is the person who's making the claim telling the truth or lying? That, as we've previously written, probably is the hardest question for the fraud investigator to answer.

If the insured is lying about something important, something material and relevant to the investigation of the claim, chances are here in New York he won't recover anything. If the insured claims he had a lot of expensive, scheduled, jewelry stolen, but it wasn't, chances are he's not going to recover anything under his policy. If the insured claims that, when his house burned down, he had a lot of costly new electronics and clothes destroyed, and he's telling the truth, he'll get what he's entitled to under his homeowner's policy. If he's lying, though, chances are he won't get a dime, even for the house.

It's not always easy, though, to know when somebody's lying. We've all heard the classic telltale signs: A person is lying when he blinks rapidly; looks away; looks up and to the side; has dry mouth. The only problem is, so has the liar. Ask yourself: is someone who is basically trying to steal money, and has to lie to get away with it, going to advertise that he's lying?

As we just talked about in our last article, in order for an insurance company to deny a first-party property claim in New York because of arson, and make that denial stand up in court, it has to prove that the insured intentionally caused the fire, and it has to do so by clear and convincing evidence. That is not always an easy burden of proof to meet. There reportedly is an exciting new tool being developed that might make proving arson, i.e., that a fire was intentionally set, easier and help arson investigators become even more effective in determining who caused the fire.

Researchers from the University of Alberta and the Royal Canadian Mounted Police, working in tandem, have developed a new computer program that can pinpoint the presence of gasoline in debris taken from a fire scene. What makes this so important is that gasoline, according to the researchers, is the most common accelerant found in arson fires; evidently preferred by arsonists everywhere. By making it easier to detect, and confirm, the presence of gasoline, you stand a good chance of making arson easier to prove and less profitable to attempt.

What makes the new tool so helpful is that it often is difficult to confirm the presence of an accelerant in debris taken from a fire scene. No two houses, buildings, or fire scenes are exactly alike; they contain different mixes of materials. Different materials leave behind different chemical compounds when burned, and these can mask the presence of an accelerant such as gasoline. The researchers, in effect, developed a computer filter that can bypass the background noise to pinpoint the telltale signs of gasoline. They developed their tool by examining data from 232 samples taken from fires across Canada; by using real-life debris rather than merely relying on simulations, the researchers say their tool is dependably accurate.
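The reporting doesn't describe the researchers' actual algorithm, but the idea of filtering out background noise can be sketched in miniature: subtract an estimated background profile from the sample, then measure how closely what remains matches a known gasoline signature. The peak intensities and threshold below are invented for illustration; real gas-chromatography data is far richer.

```python
import math

# Hypothetical gasoline "signature": relative intensities of the chemical
# peaks gasoline leaves behind (made-up numbers, for illustration only).
GASOLINE_SIGNATURE = [0.9, 0.1, 0.8, 0.2, 0.7]

def cosine(a, b):
    """Similarity between two peak-intensity profiles (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gasoline_likely(sample, background, threshold=0.95):
    """Subtract the burned-material background, then ask whether what
    remains looks like the gasoline signature."""
    residual = [max(s - b, 0.0) for s, b in zip(sample, background)]
    if not any(residual):  # nothing left once the background is filtered out
        return False
    return cosine(residual, GASOLINE_SIGNATURE) >= threshold

# A sample whose residue tracks the gasoline signature is flagged;
# a sample explained entirely by the background is not.
background = [0.2, 0.5, 0.1, 0.6, 0.1]
with_gasoline = [b + 0.5 * g for b, g in zip(background, GASOLINE_SIGNATURE)]
```

A real system would be calibrated on samples like the 232 the researchers used and would weigh many more compounds; the point here is only the subtract-then-compare shape of the approach.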

Currently, determining whether there are traces of an accelerant left behind at a fire scene is time-consuming work. According to the researchers, the Royal Canadian Mounted Police have two separate forensic scientists examine each sample to see if their findings agree; this can take several hours for each sample, and there normally are three to four samples per fire. The newly developed computer program shrinks this time substantially. The first scientist still will have to analyze the debris herself, but will be able to confirm her findings in seconds, rather than hours, by using the computer program. A second forensic scientist will not have to analyze the debris unless the computer program's findings disagree with those of the first scientist.

It takes a lot to deny a first-party property claim in New York because of arson. It is not much easier to make that denial hold up in court. As we've previously mentioned, when an insured seeks to recover for fire damage under his own policy of insurance, i.e., when he makes a first-party property claim, the burden of proof is on the insurer to establish the affirmative defense of arson, and it has to do so by clear and convincing evidence. Perhaps the best way to understand what that abstract legal rule means, though, is to see how it is applied to actual, real-life claims. There is a case, from not that long ago, Maier v. Allstate Ins. Co., 41 A.D.3d 1098, 838 N.Y.S.2d 715 (3rd Dept. 2007), that does a good job of showing just what type of evidence you need in order to establish an arson defense in a civil case.

The Plaintiff in Maier v Allstate, supra, owned a home in the Town of Sand Lake in Rensselaer County, in upstate New York. For a long time he lived half of the year in Sand Lake and the other half of the year he rented a home in Florida. The same day he was going to move to Florida permanently, a fire completely destroyed his Sand Lake house. The Insured tried to recover for the property damage under his homeowner's policy of insurance with Allstate; he submitted a sworn statement in proof of loss, making claim to recover a total of $240,000.00 for damage to the house, personal property, and debris removal. The insurance carrier paid off the $92,000.00 remaining on his mortgage, pursuant to the standard mortgagee clause in the policy, but denied the Insured's claim. When the Insured sued to recover under his policy, the insurance carrier asserted arson as an affirmative defense. After a bench trial, the carrier won and the complaint was dismissed. Not liking the verdict, the Insured appealed. The Appellate Division, Third Department, upheld the verdict. In other words, the carrier met its burden of establishing, by clear and convincing evidence, that the Insured intentionally caused the fire. The evidence the insurance company used, and the trial and appellate court relied on, shows how arson sometimes can be established through even conflicting, circumstantial evidence.

Arson means that the fire was intentionally set. One thing you normally look for to establish arson is the presence of an accelerant, which is a combustible material used to help start, or spread, the fire; think of a flammable liquid such as gasoline. If you find evidence that an accelerant was used, chances are the fire did not start accidentally. Here, there was conflicting evidence about whether or not an accelerant was used:

The County's fire investigator used a specially trained dog to determine that traces of accelerant were found near the entrance to a bedroom that had a burnt-out mattress. The Insured argued the dog's actions did not clearly confirm the presence of an accelerant; the court disagreed.

The insurance company's origin and cause investigator, based on his own inspection, determined that the fire began in the same location, on the burnt-out mattress. Presumably he determined this from the burn patterns on the mattress.

The lab analysis of the mattress, however, found no traces of an accelerant.

Most people by now have heard of the Heartbleed bug. It's a programming flaw in OpenSSL, one of the most widely used encryption libraries on the internet. It makes what should be secure websites, and the personal information they contain, vulnerable to hackers. It is more important, though, than just another internet threat. Every business should consider whether it can be held liable for depending on the vulnerable encryption software in the first place. This is especially important in light of the Federal Trade Commission's efforts to ensure that businesses take reasonable precautions to protect their customers' digital data.

The same day the Heartbleed bug was announced, April 7, 2014, Federal District Court Judge Esther Salas upheld the Federal Trade Commission's right to police corporate cybersecurity practices. As we previously mentioned, the court denied Wyndham Worldwide Corp.'s motion to dismiss a suit the FTC brought against it, which arose out of three separate alleged hacking incidents that occurred over a two-year period.

According to a story by Matt Egan published on April 8, 2014 on FoxBusiness.com, the FTC sued Wyndham Worldwide Corp. and three subsidiaries, alleging that Wyndham unreasonably and unnecessarily exposed consumers' personal data to unauthorized access and theft, which resulted in hundreds of thousands of customers having their payment card account information exported to a domain registered in Russia and a fraud loss of more than $10 million. The suit reportedly alleged, among other things, a series of specific security lapses on Wyndham's part.

Just in case anyone thinks that cybersecurity is nothing more than an esoteric exercise for computer geeks and technicians, of no importance to the average person or business, the Heartbleed bug has come along to show us all how wrong that is. It was only just discovered two weeks ago and its impact was felt around the world almost immediately.

According to an article in the April 9, 2014 Daily Mail, the Heartbleed bug bypasses the normal safety features of websites. It can affect many of the sites you might have noticed that begin with "https://" in front of their internet address, and that often display the symbol of a lock, both of which are supposed to mean they are safe. The bug, though, makes them vulnerable. It reportedly could affect more than 500,000 websites.

The bug reportedly allows hackers to bypass normal encryption safety measures to get at encrypted information, including the most profitable types such as credit card numbers, user names, and passwords. The unauthorized user can even obtain the digital keys to impersonate other servers or users and eavesdrop on communications.

It's not considered malicious software, or malware, because it is more of a programming flaw; but that really is not important. What is important is that the flaw, and the vulnerability, went undetected for more than two years until it recently was discovered, independently, by researchers at Google and the Finnish company Codenomicon. A fix is possible, and reportedly fairly easy to apply. The problem seems to be that the fix has to be applied manually by the people who run each individual site. That, unfortunately, will take time.
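For the curious, the flaw itself is simple to model. A TLS "heartbeat" message asks the server to echo a payload back, and the vulnerable code trusted the payload length the client claimed rather than the length it actually sent. This toy sketch (not OpenSSL's real code, and with invented data) shows how that over-read leaks whatever happens to sit next to the payload in memory:

```python
def heartbeat_response(memory, payload_start, claimed_length):
    """Toy model of the vulnerable behavior: the server echoes back as
    many bytes as the client CLAIMED to send, with no bounds check
    against the payload it actually received."""
    return memory[payload_start:payload_start + claimed_length]

def heartbeat_response_fixed(memory, payload_start, claimed_length, actual_length):
    """The fix amounts to refusing to copy more than was really sent."""
    length = min(claimed_length, actual_length)
    return memory[payload_start:payload_start + length]

# Hypothetical server memory: a 5-byte heartbeat payload sits right next
# to data that was never meant to leave the server.
memory = b"HELLO" + b"user=alice;password=hunter2"

# The client sent 5 bytes but claims 32; the buggy echo leaks the secret.
leak = heartbeat_response(memory, 0, claimed_length=32)
safe = heartbeat_response_fixed(memory, 0, claimed_length=32, actual_length=5)
```

That missing one-line bounds check is the whole bug, which is why the patch itself was easy; getting every site operator to apply it is the slow part.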

There are a few recent developments in the field of cybersecurity that businesses, individuals, and fraud investigators alike should take note of. One is a recent case which, if followed, could expand a business' liability for security breaches and the others are new tools businesses possibly could use to protect against that same liability.

Digital information, including how to protect it and prevent fraud, is always a fascinating topic. New advances in digital security go hand in hand with ingenious ways to steal digital information. It is fun to follow, in the same way it is fun to watch Wile E. Coyote chase the Roadrunner: the chase never really ends, they always come back for more, and they use bigger and better gadgets every time.

Cybersecurity, though, is more than just a fun read. It has real-world implications. According to a report published in the Wall Street Journal, Federal District Court Judge Esther Salas, on Monday, April 7, 2014, upheld the Federal Trade Commission's right to police corporate cybersecurity practices to ensure businesses take reasonable precautions to safeguard their customers' data. The FTC reportedly sued Wyndham Worldwide Corp. and three subsidiaries in 2012, after hackers broke into the company's corporate computer system and the systems at several individual hotels between 2008 and early 2010, and allegedly stole credit and debit card information from hundreds of thousands of customers. The FTC alleged that Wyndham did not take reasonable measures to protect its customers' information from theft. It cited what it alleged were wrongly configured software, weak passwords, and insecure computer servers. Wyndham argued that the FTC did not have the statutory authority to police corporate cybersecurity. The FTC argued that its authority came from its 100-year-old statutory power to protect consumers from businesses that engage in unfair or deceptive trade practices. There was no finding of liability, but the court reportedly upheld the FTC's right to bring the suit. The lawsuit reportedly seeks to have the court order Wyndham to improve its security measures and fix whatever harm its customers suffered.

With the possibility of federal enforcement of what amounts to a "reasonable-precautions" cybersecurity standard, businesses, not just fraud investigators, should pay attention to the potential tools at their disposal to protect their clients' information.

The technological advances in keeping things secret are ingenious. Much like the mythical jackalope, or my favorite, the basselope, they combine things that do not seem to have anything to do with each other to come up with something better: a more effective lock and key to turn prying eyes away from private information they should not see.

Insurance fraud, how it's committed and how it's solved, always is an interesting topic. It's like a crime drama. Whether it's Castle, The Mentalist, or NCIS, you get to see the end result and then figure out how it happened; and you inevitably learn about a couple of mistakes that help it along and a few more that eventually bring it to an end. Real-life examples are not always as compelling as highly-rated TV shows but they do illustrate the problem and show what investigators should, and should not, do to bring it to an end. The ones we will be talking about in this post are Rental Car Fraud, a smart-phone app, and, once again, the Target Data Breach. They have a lot more in common than you might think.

Rental Car Fraud, a subset of the ever-popular Auto Fraud, is growing at an alarming rate, according to an article by Denise Johnson in the March 12, 2014 edition of the Claims Journal. The concept is simple: rent a series of cars; use them to commit crimes and then dump, and maybe even burn, them when you're done; and conceal your identity by using fake or stolen IDs. The cars are hard to trace and the connections between them even more difficult to figure out. According to Kraig Palmer, an investigator with the California Highway Patrol who recently spoke at the Combined Claims Conference in Orange County, Calif., stolen IDs are not hard to come by and can be relatively cheap, at about $50 each. The fraud is not easy to solve. According to the article, Palmer said he worked on one case that involved 103 vehicles and resulted in 72 arrests. Another involved three main suspects who rented 42 cars from two different rental agencies. One of the suspects was a preferred customer, which evidently made it easier for him to rent the cars and harder for the companies to trace him. Those incentive programs reportedly often allow a customer to register online without even having to set foot in the rental agency.

There are certain things a claims adjuster or SIU rep should look for when faced with an auto claim for property damage or bodily injury that involves a rental car. Kraig Palmer, according to the Claims Journal story, suggested they look for unusual patterns, such as whether one person rented more than one vehicle involved in the occurrence. Howard J. Hirsch added a few more, which appeared in the January/February 2011 edition of Auto Rental News; though he directed the tips at auto rental counter agents, fraud investigators might be able to use them as well:

The customer owned a vehicle, but it is not being serviced or repaired [at the time he rents the car].

A recent news story caught my eye because it shows the importance of a win-win negotiation strategy and the need to accurately assess your BATNA, or best alternative to a negotiated agreement. Though it deals with personal injury claims in Kansas, it can teach a lot to businesses in New York and across the country.

The state legislature in Kansas is considering a few important changes to personal injury litigation: increasing the cap on non-economic damages while at the same time changing the rules of evidence to allow a jury to hear whether a plaintiff has had losses covered by other, or collateral, sources, including insurance, and to make it more difficult to use questionable expert testimony. To put it another way, the proposed rule changes would allow personal injury plaintiffs to collect more for pain and suffering while arguably making those damages harder to prove.

According to the story in the February 28, 2014, Claims Journal, Kansas has not raised its cap on damages for pain and suffering since the 1980s. Though the cap was found constitutional by the state's highest court in 2012, the decision disapprovingly noted the long delay in raising the cap. The warning evidently was heard loud and clear. The story notes that the chairman of the state senate judiciary committee, Jeff King, considers it only a matter of time before the current cap, of $250,000, is overturned as being too low. That is why the current bill would increase the cap, in stages, to $350,000.

There's an awful lot of data out there in the great big digital universe, and, as everyone should know by now, it can create a record of people's activities that they may not always fully appreciate. We've previously written about how metadata, when used the right way, can help investigate insurance fraud. As recent news stories point out, however, when used the wrong way by the wrong people, it can be used to steal and defraud innocent people and companies.

Everyone, every time they go online, leaves a digital footprint. Whether it's social media, where you just have to post your latest thought for all to see; e-commerce, where you browse, select and pay for everything on-line; or even shopping at the local brick and mortar store where you pay by credit card, there's a record created and information left behind. Cyber-security, which is just another name for at least trying to keep that digital information safe, was much in the news this Christmas Season. Unfortunately, for shoppers, retailers and broadcasters, alike, cyber-security often seems to be more of a goal than a reality.

By now, the security breach at Target stores may seem like old news, but it's not. On Friday, January 10, 2014, Target said that 70 million people had their names, addresses, and telephone numbers taken by cyber-thieves. This is in addition to the 40 million people who had their credit and debit card information, including Personal Identification Numbers, or PINs, hacked from Target's servers. Thankfully, a lot of the information, including the PINs, evidently was encrypted, which at least means it has to be cracked open before a thief can get at it. Whether that will be enough to protect the stolen information is something only time will tell. Unfortunately, even the loss of seemingly benign personal information, like your address, email address, and telephone number, can make you more susceptible to identity theft.
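To see why encryption alone is only a speed bump for a short PIN, consider how small the search space is: a four-digit PIN has just 10,000 possibilities. The sketch below is purely illustrative (it uses an unsalted hash rather than the keyed PIN-block encryption payment systems actually employ, and the PIN is invented), but it shows why a thief who strips away the outer protection can recover a PIN almost instantly:

```python
import hashlib

def hash_pin(pin):
    # Unsalted hash, purely to illustrate the tiny search space; real
    # payment systems encrypt PIN blocks under carefully managed keys.
    return hashlib.sha256(pin.encode("ascii")).hexdigest()

def brute_force(stolen_hash):
    # Only 10,000 candidates stand between a thief and a 4-digit PIN.
    for candidate in range(10_000):
        pin = f"{candidate:04d}"
        if hash_pin(pin) == stolen_hash:
            return pin
    return None  # not a 4-digit PIN after all

stolen = hash_pin("4831")        # hypothetical value recovered in a breach
recovered = brute_force(stolen)  # every possibility tried in milliseconds
```

The real safeguard, then, isn't the encryption by itself but keeping the keys out of the thieves' hands, which is exactly what Target had to reassure its customers about.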

Neiman Marcus, just this past Saturday, January 11, 2014, announced that it, too, had been a victim of a cyber-security attack, in which thieves stole some of its customers' credit card information and made unauthorized purchases during the holiday season.

On December 25, 2013, the BBC was hacked. Just so you don't think that retail customers are the only targets, or that retail sales are the only source of ill-gotten gains, communications companies, even staid government-run ones like the British Broadcasting Corporation, are vulnerable. The story broke because someone saw the thief trying to sell access to the BBC servers, online. That would be kind of like coming home from work and not realizing your house was broken into until you see a commercial trying to sell your heirloom jewelry on TV.

The supposed thief, according to the BBC story, is a notorious Russian hacker known by the names "HASH" and "Rev0lver". From the sound of it, it's not the first time he's done this, and it won't be the last time he'll try. He attempted to sell access on underground (that is, clandestine) marketplaces on the web. It was first noticed by the Milwaukee-based cyber-security firm Hold Security LLC, which reportedly makes a practice of monitoring such sites to locate people who try to deal in stolen information like this. HASH tried to convince buyers he had something worthwhile by showing them files which only someone with access to the servers would be able to get at.

Now you might think to yourself, what's the big deal about the BBC? After all, it's just information. It's not like anyone stole money directly out of your pocket.

Information and investigations go hand in hand. Whatever you investigate, whether it's insurance fraud; where that priceless, uninsured, artwork went after two rogues in police clothing strolled in late one night and took it from Boston's Gardner Museum; or who the one-armed man really was; you need information, lots of information, to figure it out. But does information always help?

When you investigate insurance fraud, you need information to confirm coverage for a given claim; to determine whether the claim really happened the way the insured said; to establish whether the insured submitted a fraudulent and/or exaggerated claim. You take his recorded statement and examination under oath. You interview witnesses and get corroborating documents. You get ... information.

Can you ever have too much information, though? Can an investigator, in effect, be buried under such an avalanche of facts, have so much information, that she doesn't know what she has and misses the answer? At least according to an article in Thursday's Wall Street Journal, the answer is an emphatic yes.

The article, on the front page of the December 26, 2013 Wall Street Journal, is about the NSA. Yes, it mentions Edward Snowden, but it really isn't about him. It features William Binney, a former NSA analyst who's been retired for a dozen years. It really is about Mr. Binney's claims that the NSA's spying, the collection of all the metadata, the who-to's and the where-froms, of all of the calls of all of the people the NSA is supposedly collecting, hurts more than helps. Not that it hurts me or you directly, but that it hurts the NSA itself and keeps it from completing its mission: tracking down the bad guys and preventing terrorist attacks.

The most telling line in the whole story is when it eloquently sums up Mr. Binney's complaints about the NSA: "It knows so much, he says, that it can't understand what it has." And that, to put it mildly, can be a problem. Any votes on what would be worse: not being able to figure it out because you don't know enough or because you know too much but don't realize what you have? It seems like a tie: either way you lose; you still don't have the answer.

Our last post was about how sometimes it's easier to tell a lie than others; scientific research suggests it depends upon what you're lying about. Well, there's another new study that says sometimes, just sometimes, people are honest about their lying. In other words, they'll admit it; not always, not under every condition, and definitely not everyone, but definitely sometimes.

Lying has been in the news a lot recently. For sports fans, there's the old tried and true theme of performance enhancing drugs: did he or didn't he use them? Think of Lance Armstrong. He long said he didn't and then admitted that he did. For news junkies, there's Bashar al-Assad. The Syrian president said he didn't use chemical weapons, then the United States said he did, and he's now giving up his chemical weapons stockpile, if only someone will find a suitable place to destroy it. It's hard to say you didn't use chemical weapons if you're giving them up so you can never use them again. Wouldn't it be a whole lot easier if the world could know, before things get out of hand, who's telling the truth and who is lying? Truth detection is more of an art than a science, but there is some science to help the effort along the way.

Lying and Insurance Fraud go together. Cheat, steal, get caught, admit it; which one doesn't belong? Better yet, be honest when you cheat. No, that doesn't work either. Most every time someone tries to get away with something he shouldn't, chances are he's going to lie about it somewhere along the line. Investigators need to know how to ask questions, elicit answers, and get at the truth; so, chances are, they should all know a good lie when they hear one.

Detecting, and exposing, lies is also a big part of trial work. A trial attorney wants to make certain that the jury at least will doubt, if not see right through, slanted testimony; will see the inconsistencies, understand the contradictions, and punish the lies; or even scoff at the willful forgetfulness. Cross-examination, impeachment in general, and artful closing arguments all can accomplish this. Knowing a lie is essential to ensuring that everyone else does, too.

Everyone thinks they know a lie when they hear one. They believe they can tell the difference between a deliberate falsehood and an innocent mistake. That's probably one reason lying is the focus of so much comedy. Think of the Jon Lovitz character from Saturday Night Live, with his blatant lies getting cackles from the studio audience. Then compare him to poor little Emily Litella who seemed to spend most of her time trapped in an endless game of telephone, never getting things quite right, wondering what all the commotion was about violins on TV. Or think about how funny it was to see Jim Carrey playing a slick attorney in the movie Liar Liar, who, for 24 hours straight, had to tell the truth and nothing but the truth. Some lawyers, I mean every lawyer, thought that was funny; yeah, that's the ticket.