Happeh Theory
http://www.happehtheory.com

Work Of Data Analyst Using AI Software Confirms The Main Claim of Happeh Theory, Which Is Facial Characteristics Can Determine If A Human Being Is Gay, And Other Personality Traits
Sun, 08 Jul 2018
http://www.happehtheory.com/2018/07/08/work-of-data-analyst-using-ai-software-confirms-that-facial-characteristics-can-be-used-to-determine-if-a-person-is-gay-and-other-personality-characteristics-which-is-the-main-claim-of-happeh-theory/

Ten or 15 years after the creation of Happeh Theory and the work found in this blog, a data analyst using AI software has confirmed the major claim of Happeh Theory: that facial characteristics can be used to determine if a human being is gay, or if they possess other personality characteristics. The AI software was trained on face pictures gleaned from the internet. If the same AI software were trained on pictures of people's entire bodies, its results would confirm the other major claim of Happeh Theory: that the physical characteristics displayed by a particular human body can also be used to determine if a person is gay, or if they possess other particular personality characteristics.

Since this scientific work confirms the claims of Happeh Theory, there is no further need to update this blog. Its mission has been accomplished. The blog will remain in place for historical reasons, but no further updates are likely to be made to it. This means the blog is no longer being maintained, and requests for contact, opinions or conversation will neither be seen nor responded to.

It was my pleasure to contribute to the scientific knowledge of the human race, and I must say that I do feel proud of this work and, more importantly, vindicated by its confirmation. The hate and denial I experienced from the internet during the 10 or 15 years of this blog's existence was unpleasant and quite difficult to tolerate and work through.

The article describing the work of the data analyst can be found below. It is quite large. I would recommend reading the entire article to ensure you encounter every nuance of it.

‘I was shocked it was so easy’: ​meet the professor who says facial recognition ​​can tell if you’re gay

Psychologist Michal Kosinski says artificial intelligence can detect your sexuality and politics just by looking at your face. What if he’s right?

Vladimir Putin was not in attendance, but his loyal lieutenants were. On 14 July last year, the Russian prime minister, Dmitry Medvedev, and several members of his cabinet convened in an office building on the outskirts of Moscow. On to the stage stepped a boyish-looking psychologist, Michal Kosinski, who had been flown from the city centre by helicopter to share his research. “There was Lavrov, in the first row,” he recalls several months later, referring to Russia’s foreign minister. “You know, a guy who starts wars and takes over countries.” Kosinski, a 36-year-old assistant professor of organisational behaviour at Stanford University, was flattered that the Russian cabinet would gather to listen to him talk. “Those guys strike me as one of the most competent and well-informed groups,” he tells me. “They did their homework. They read my stuff.”

Kosinski’s “stuff” includes groundbreaking research into technology, mass persuasion and artificial intelligence (AI) – research that inspired the creation of the political consultancy Cambridge Analytica. Five years ago, while a graduate student at Cambridge University, he showed how even benign activity on Facebook could reveal personality traits – a discovery that was later exploited by the data-analytics firm that helped put Donald Trump in the White House.

That would be enough to make Kosinski interesting to the Russian cabinet. But his audience would also have been intrigued by his work on the use of AI to detect psychological traits. Weeks after his trip to Moscow, Kosinski published a controversial paper in which he showed how face-analysing algorithms could distinguish between photographs of gay and straight people. As well as sexuality, he believes this technology could be used to detect emotions, IQ and even a predisposition to commit certain crimes. Kosinski has also used algorithms to distinguish between the faces of Republicans and Democrats, in an unpublished experiment he says was successful – although he admits the results can change “depending on whether I include beards or not”.

How did this 36-year-old academic, who has yet to write a book, attract the attention of the Russian cabinet? Over our several meetings in California and London, Kosinski styles himself as a taboo-busting thinker, someone who is prepared to delve into difficult territory concerning artificial intelligence and surveillance that other academics won’t. “I can be upset about us losing privacy,” he says. “But it won’t change the fact that we already lost our privacy, and there’s no going back without destroying this civilisation.”

The aim of his research, Kosinski says, is to highlight the dangers. Yet he is strikingly enthusiastic about some of the technologies he claims to be warning us about, talking excitedly about cameras that could detect people who are “lost, anxious, trafficked or potentially dangerous. You could imagine having those diagnostic tools monitoring public spaces for potential threats to themselves or to others,” he tells me. “There are different privacy issues with each of those approaches, but it can literally save lives.”

“Progress always makes people uncomfortable,” Kosinski adds. “Always has. Probably, when the first monkeys stopped hanging from the trees and started walking on the savannah, the monkeys in the trees were like, ‘This is outrageous! It makes us uncomfortable.’ It’s the same with any new technology.”

***

Kosinski has analysed thousands of people’s faces, but never run his own image through his personality-detecting models, so we cannot know what traits are indicated by his pale-grey eyes or the dimple in his chin. I ask him to describe his own personality. He says he’s a conscientious, extroverted and probably emotional person with an IQ that is “perhaps slightly above average.” He adds: “And I’m disagreeable.” What made him that way? “If you trust personality science, it seems that, to a large extent, you’re born this way.”

His friends, on the other hand, describe Kosinski as a brilliant, provocative and irrepressible data scientist who has an insatiable (some say naive) desire to push the boundaries of his research. “Michal is like a small boy with a hammer,” one of his academic friends tells me. “Suddenly everything looks like a nail.”

Born in 1982 in Warsaw, Kosinski inherited his aptitude for coding from his parents, both of whom trained as software engineers. Kosinski and his brother and sister had “a computer at home, potentially much earlier than western people of the same age”. By the late 1990s, as Poland’s post-Soviet economy was opening up, Kosinski was hiring his schoolmates to work for his own IT company. This business helped fund him through university, and in 2008 he enrolled in a PhD programme at Cambridge, where he was affiliated with the Psychometrics Centre, a facility specialising in measuring psychological traits.

It was around that time that he met David Stillwell, another graduate student, who had built a personality quiz and shared it with friends on Facebook. The app quickly went viral, as hundreds and then thousands of people took the survey to discover their scores according to the “Big Five” metrics: openness, conscientiousness, extraversion, agreeableness and neuroticism. When users completed the myPersonality tests, some of which also measured IQ and wellbeing, they were given an option to donate their results to academic research.

Kosinski came on board, using his digital skills to clean, anonymise and sort the data, and then make it available to other academics. By 2012, more than 6 million people had taken the tests – with about 40% donating their data, creating the largest dataset of its kind.

In May, New Scientist magazine revealed that the dataset’s username and password had been accidentally left on GitHub, a commonly used code-sharing website. For four years, anyone – not just authorised researchers – could have accessed the data. Before the magazine’s investigation, Kosinski had admitted to me that there were risks to their liberal approach. “We anonymised the data, and we made scientists sign a guarantee that they will not use it for any commercial reasons,” he had said. “But you just can’t really guarantee that this will not happen.” Much of the Facebook data, he added, was “de-anonymisable”. In the wake of the New Scientist story, Stillwell closed down the myPersonality project. Kosinski sent me a link to the announcement, complaining: “Twitter warriors and sensation-seeking writers made David shut down the myPersonality project.”

During the time the myPersonality data was accessible, about 280 researchers used it to publish more than 100 academic papers. The most talked-about was a 2013 study co-authored by Kosinski, Stillwell and another researcher, which explored the relationship between Facebook “Likes” and the psychological and demographic traits of 58,000 people. Some of the results were intuitive: the best predictors of introversion, for example, were Likes for pages such as “Video Games” and “Voltaire”. Other findings were more perplexing: among the best predictors of high IQ were Likes on the Facebook pages for “Thunderstorms” and “Morgan Freeman’s Voice”. People who Liked pages for “iPod” and “Gorillaz” were likely to be dissatisfied with life.

If an algorithm was fed with sufficient data about Facebook Likes, Kosinski and his colleagues found, it could make more accurate personality-based predictions than assessments made by real-life friends. In other research, Kosinski and others showed how Facebook data could be turned into what they described as “an effective approach to digital mass persuasion”.

Their research came to the attention of the SCL Group, the parent company of Cambridge Analytica. In 2014, SCL tried to enlist Stillwell and Kosinski, offering to buy the myPersonality data and their predictive models. When negotiations broke down, they relied on the help of another academic in Cambridge’s psychology department – Aleksandr Kogan, an assistant professor. Using his own Facebook personality quiz, and paying users (with SCL money) to take the tests, Kogan collected data on 320,000 Americans. Exploiting a loophole that allowed developers to harvest data belonging to the friends of Facebook app users (without their knowledge or consent), Kogan was able to hoover up additional data on as many as 87 million people.

Cambridge Analytica always denied using Facebook-based psychographic targeting during the Trump campaign, but the scandal over its data harvesting forced the company to close. The saga also proved highly damaging to Facebook, whose headquarters are less than four miles from Kosinski’s base at Stanford’s Business School in Silicon Valley. The first time I enter his office, I ask him about a painting beside his computer, depicting a protester armed with a Facebook logo in a holster instead of a gun. “People think I’m anti-Facebook,” Kosinski says. “But I think that, generally, it is just a wonderful technology”.

Still, he is disappointed in the Facebook CEO, Mark Zuckerberg, who, when he testified before US Congress in April, said he was trying to find out “whether there was something bad going on at Cambridge University”. Facebook, Kosinski says, was well aware of his research. He shows me emails he had with employees in 2011, in which they disclosed they were “using analysis of linguistic data to infer personality traits”. In 2012, the same employees filed a patent, showing how personality characteristics could be gleaned from Facebook messages and status updates.

Kosinski seems unperturbed by the furore over Cambridge Analytica, which he feels has unfairly maligned psychometric micro-targeting in politics. “There are negative aspects to it, but overall this is a great technology and great for democracy,” he says. “If you can target political messages to fit people’s interests, dreams, personality, you make those messages more relevant, which makes voters more engaged – and more engaged voters are great for democracy.” But you can also, I say, use those same techniques to discourage your opponent’s voters from turning out, which is bad for democracy. “Then every politician in the US is doing this,” Kosinski replies, with a shrug. “Whenever you target the voters of your opponent, this is a voter-suppression activity.”

Kosinski’s wider complaint about the Cambridge Analytica fallout, he says, is that it has created “an illusion” that governments can protect data and shore up their citizens’ privacy. “It is a lost war,” he says. “We should focus on organising our society in such a way as to make sure that the post-privacy era is a habitable and nice place to live.”

***

Kosinski says he never set out to prove that AI could predict a person’s sexuality. He describes it as a chance discovery, something he “stumbled upon”. The lightbulb moment came as he was sifting through Facebook profiles for another project and started to notice what he thought were patterns in people’s faces. “It suddenly struck me,” he says, “introverts and extroverts have completely different faces. I was like, ‘Wow, maybe there’s something there.’”

Physiognomy, the practice of determining a person’s character from their face, has a history that stretches back to ancient Greece. But its heyday came in the 19th century, when the Italian anthropologist Cesare Lombroso published his famous taxonomy, which declared that “nearly all criminals” have “jug ears, thick hair, thin beards, pronounced sinuses, protruding chins, and broad cheekbones”. The analysis was rooted in a deeply racist school of thought that held that criminals resembled “savages and apes”, although Lombroso presented his findings with the precision of a forensic scientist. Thieves were notable for their “small wandering eyes”, rapists their “swollen lips and eyelids”, while murderers had a nose that was “often hawklike and always large”.

Lombroso’s remains are still on display in a museum in Turin, beside the skulls of the hundreds of criminals he spent decades examining. Where Lombroso used calipers and craniographs, Kosinski has been using neural networks to find patterns in photos scraped from the internet.

Kosinski’s research dismisses physiognomy as “a mix of superstition and racism disguised as science” – but then argues it created a taboo around “studying or even discussing the links between facial features and character”. There is growing evidence, he insists, that links between faces and psychology exist, even if they are invisible to the human eye; now, with advances in machine learning, such links can be perceived. “We didn’t have algorithms 50 years ago that could spot patterns,” he says. “We only had human judges.”

In a paper published last year, Kosinski and a Stanford computer scientist, Yilun Wang, reported that a machine-learning system was able to distinguish between photos of gay and straight people with a high degree of accuracy. They used 35,326 photographs from dating websites and what Kosinski describes as “off-the-shelf” facial-recognition software.

Presented with two pictures – one of a gay person, the other straight – the algorithm was able to distinguish the two in 81% of cases involving images of men and 74% of cases involving photographs of women. Human judges, by contrast, were able to identify the straight and gay people in 61% and 54% of cases, respectively. When the algorithm was shown five facial images per person in the pair, its accuracy increased to 91% for men, 83% for women. “I was just shocked to discover that it is so easy for an algorithm to distinguish between gay and straight people,” Kosinski tells me. “I didn’t see why that would be possible.”
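The paired test described here is a standard pairwise-accuracy evaluation, which is equivalent to the AUC of the underlying classifier's scores. The following is a minimal illustrative sketch, not the study's actual code; the scores are made-up numbers standing in for whatever the facial-recognition model would output.

```python
def pairwise_accuracy(scores_a, scores_b):
    """Fraction of (a, b) pairs in which the group-a score exceeds the
    group-b score. When every a is paired with every b, this equals the
    AUC of the classifier that produced the scores."""
    pairs = [(a, b) for a in scores_a for b in scores_b]
    correct = sum(1 for a, b in pairs if a > b)
    return correct / len(pairs)

# Made-up classifier scores for two groups of photos (illustration only).
gay_scores = [0.9, 0.7, 0.8, 0.4]
straight_scores = [0.3, 0.5, 0.2, 0.6]

print(round(pairwise_accuracy(gay_scores, straight_scores), 3))  # prints 0.875
```

Under this protocol, 50% is chance level, which is why the human judges' 54% on women's photographs is only barely better than guessing.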

Neither did many other people, and there was an immediate backlash when the research – dubbed “AI gaydar” – was previewed in the Economist magazine. Two of America’s most prominent LGBTQ organisations demanded that Stanford distance itself from what they called its professor’s “dangerous and flawed research”. Kosinski received a deluge of emails, many from people who told him they were confused about their sexuality and hoped he would run their photo through his algorithm. (He declined.) There was also anger that Kosinski had conducted research on a technology that could be used to persecute gay people in countries such as Iran and Saudi Arabia, where homosexuality is punishable by death.

Kosinski says his critics missed the point. “This is the inherent paradox of warning people against potentially dangerous technology,” he says. “I stumbled upon those results, and I was actually close to putting them in a drawer and not publishing – because I had a very good life without this paper being out. But then a colleague asked me if I would be able to look myself in the mirror if, one day, a company or a government deployed a similar technique to hurt people.” It would, he says, have been “morally wrong” to bury his findings.

One vocal critic of that defence is the Princeton professor Alexander Todorov, who has conducted some of the most widely cited research into faces and psychology. He argues that Kosinski’s methods are deeply flawed: the patterns picked up by algorithms comparing thousands of photographs may have little to do with facial characteristics. In a mocking critique posted online, Todorov and two AI researchers at Google argued that Kosinski’s algorithm could have been responding to patterns in people’s makeup, beards or glasses, even the angle they held the camera at. Self-posted photos on dating websites, Todorov points out, project a number of non-facial clues.

Kosinski acknowledges that his machine learning system detects unrelated signals, but is adamant the software also distinguishes between facial structures. His findings are consistent with the prenatal hormone theory of sexual orientation, he says, which argues that the levels of androgens foetuses are exposed to in the womb help determine whether people are straight or gay. The same androgens, Kosinski argues, could also result in “gender-atypical facial morphology”. “Thus,” he writes in his paper, “gay men are predicted to have smaller jaws and chins, slimmer eyebrows, longer noses and larger foreheads… The opposite should be true for lesbians.”

This is where Kosinski’s work strays into biological determinism. While he does not deny the influence of social and environmental factors on our personalities, he plays them down. At times, what he says seems eerily reminiscent of Lombroso, who was critical of the idea that criminals had “free will”: they should be pitied rather than punished, the Italian argued, because – like monkeys, cats and cuckoos – they were “programmed to do harm”.

“I don’t believe in guilt, because I don’t believe in free will,” Kosinski tells me, explaining that a person’s thoughts and behaviour “are fully biological, because they originate in the biological computer that you have in your head”. On another occasion he tells me, “If you basically accept that we’re just computers, then computers are not guilty of crime. Computers can malfunction. But then you shouldn’t blame them for it.” The professor adds: “Very much like: you don’t, generally, blame dogs for misbehaving.”

Todorov believes Kosinski’s research is “incredibly ethically questionable”, as it could lend a veneer of credibility to governments that might want to use such technologies. He points to a paper that appeared online two years ago, in which Chinese AI researchers claimed they had trained a face-recognition algorithm to predict – with 90% accuracy – whether someone was a convicted criminal. The research, which used Chinese government identity photographs of hundreds of male criminals, was not peer-reviewed, and was torn to shreds by Todorov, who warned that “developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era”.

Kosinski has a different take. “The fact that the results were completely invalid and unfounded, doesn’t mean that what they propose is also wrong,” he says. “I can’t see why you would not be able to predict the propensity to commit a crime from someone’s face. We know, for instance, that testosterone levels are linked to the propensity to commit crime, and they’re also linked with facial features – and this is just one link. There are thousands or millions of others that we are unaware of, that computers could very easily detect.”

Would he ever undertake similar research? Kosinski hesitates, saying that “crime” is an overly blunt label. It would be more sensible, he says, to “look at whether we can detect traits or predispositions that are potentially dangerous to an individual or society – like aggressive behaviour”. He adds: “I think someone has to do it… Because if this is a risky technology, then governments and corporations are clearly already using it.”

***

But when I press Kosinski for examples of how psychology-detecting AI is being used by governments, he repeatedly falls back on an obscure Israeli startup, Faception. The company provides software that scans passports, visas and social-media profiles, before spitting out scores that categorise people according to several personality types. On its website, Faception lists eight such classifiers, including “White-Collar Offender”, “High IQ”, “Paedophile” and “Terrorist”. Kosinski describes the company as “dodgy” – a case study in why researchers who care about privacy should alert the public to the risks of AI. “Check what Faception are doing and what clients they have,” he tells me during an animated debate over the ethics of his research.

I call Faception’s chief executive, Shai Gilboa, who used to work in Israeli military intelligence. He tells me the company has contracts working on “homeland security and public safety” in Asia, the Middle East and Europe. To my surprise, he then tells me about a research collaboration he conducted two years ago. “When you look in the academia market you’re looking for the best researchers, who have very good databases and vast experience,” he says. “So this is the reason we approached Professor Kosinski.”

But when I put this connection to Kosinski, he plays it down: he claims to have met Faception to discuss the ethics of facial-recognition technologies. “They came [to Stanford] because they realised what they are doing has potentially huge negative implications, and huge risks.” Later, he concedes there was more to it. He met them “maybe three times” in Silicon Valley, and was offered equity in the company in exchange for becoming an adviser (he says he declined).

Kosinski denies having collaborated on research, but admits Faception gave him access to its facial-recognition software. He experimented with Facebook photos in the myPersonality dataset, he says, to determine how effective the Faception software was at detecting personality traits. He then suggested Gilboa talk to Stillwell about purchasing the myPersonality data. (Stillwell, Kosinski says, declined.)

He bristles at my suggestion that these conversations seem ethically dubious. “I will do a lot of this,” he says. “A lot of startup people come here and they don’t offer you any money, but they say, ‘Look, we have this project, can you advise us?’” Turning down such a request would have made him “an arrogant prick”.

He gives a similar explanation for his trip to Moscow, which he says was arranged by Sberbank Corporate University as an “educational day” for Russian government officials. The university is a subsidiary of Sberbank, a state-owned bank sanctioned by the EU; its chief executive, Russia’s former minister for economic development, is close to Putin. What was the purpose of the trip? “I didn’t really understand the context,” says Kosinski. “They put me on a helicopter, flew me to a place, I came on the stage. On the helicopter I was given a briefing about who was going to be in the room. Then I gave a talk, and we talked about how AI is changing society. And then they sent me off.”

The last time I see Kosinski, we meet in London. He becomes prickly when I press him on Russia, pointing to its dire record on gay rights. Did he talk about using facial-recognition technology to detect sexuality? Yes, he says – but this talk was no different from other presentations in which he discussed the same research. (A couple of days later, Kosinski tells me he has checked his slides; in fact, he says, he didn’t tell the Russians about his “AI gaydar”.)

Who else was in the audience, aside from Medvedev and Lavrov? Kosinski doesn’t know. Is it possible he was talking to a room full of Russian intelligence operatives? “That’s correct,” he says. “But I think that people who work for the surveillance state, more than anyone, deserve to know that what they are doing is creating real risk.” He tells me he is no fan of Russia, and stresses there was no discussion of spying or influencing elections. “As an academic, you have a duty to try to counter bad ideas and spread good ideas,” he says, adding that he would talk to “the most despicable dictator out there”.

I ask Kosinski if anyone has tried to recruit him as an intelligence asset. He hedges. “Do you think that if an intelligence agency approaches you they say: ‘Hi, I’m the CIA’?” he replies. “No, they say, ‘Hi, I’m a startup, and I’m interested in your work – would you be an adviser?’ That definitely happened in the UK. When I was at Cambridge, I had a minder.” He tells me about a British defence expert he suspected worked for the intelligence services who took a keen interest in his research, inviting him to seminars attended by officials in military uniforms.

In one of our final conversations, Kosinski tells me he shouldn’t have talked about his visit to Moscow, because his hosts asked him not to. It would not be “elegant” to mention it in the Guardian, he says, and besides, “it is an irrelevant fact”. I point out that he already left a fairly big clue on Facebook, where he posted an image of himself onboard a helicopter with the caption: “Taking off to give a talk for Prime Minister Medvedev.” He later changed his privacy settings: the photo was no longer public, but for “friends only”.

The study that this story is based on has revealed that taking Ibuprofen in the recommended doses causes male fertility problems. Which means all those men who are prescribed Ibuprofen for pain and a myriad of other health problems are actually being prescribed a drug which is going to lower their fertility or make them completely infertile. The stoopid scientists haven’t decided which yet. They need more time to study the situation.

Never trust a scientist or a doctor. They are stoopid. They do whatever idiotic thing a drug salesman or some book tells them they should do. They have no personal knowledge of what effects the drugs they prescribe have. So you can never know if the drug they give you for pain relief is going to end up killing your testicles and turning you into a woman or not.

The story about the study revealing Ibuprofen causes fertility problems is reprinted below.

Ibuprofen has a negative impact on the testicles of young men, a study published Monday in the journal Proceedings of the National Academy of Sciences found. When taking ibuprofen in doses commonly used by athletes, a small sample of young men developed a hormonal condition that typically begins, if at all, during middle age. This condition is linked to reduced fertility.

Advil and Motrin are two brand names for ibuprofen, an over-the-counter pain reliever. CNN has contacted Pfizer and Johnson & Johnson, the makers of both brands, for comment.

The Consumer Healthcare Products Association, a trade group that represents manufacturers of over-the-counter medications and supplements, “supports and encourages continued research and promotes ongoing consumer education to help ensure safe use of OTC medicines,” said Mike Tringale, a spokesman for the association. “The safety and efficacy of active ingredients in these products has been well documented and supported by decades of scientific study and real-world use.”

The new study is a continuation of research that began with pregnant women, explained Bernard Jégou, co-author and director of the Institute of Research in Environmental and Occupational Health in France.

Jégou and a team of French and Danish researchers had been exploring the health effects when a mother-to-be took any one of three mild pain relievers found in medicine chests around the globe: aspirin, acetaminophen (also known as paracetamol and sold under the brand name Tylenol) and ibuprofen.

Their early experiments, published in several papers, showed that when taken during pregnancy, all three of these mild medicines affected the testicles of male babies.

Testicles and testosterone

Testicles not only produce sperm, they secrete testosterone, the primary male sex hormone.

All three drugs then are “anti-androgenic,” meaning they disrupt male hormones, explained David M. Kristensen, study co-author and a senior scientist in the Department of Neurology at Copenhagen University Hospital.

The three drugs even increased the likelihood that male babies would be born with congenital malformations, Kristensen noted.

Knowing this, “we wondered what would happen in the adult,” he said. The team focused their investigation on ibuprofen, which had the strongest effects.

Tringale noted that pregnant and nursing women should always ask a health professional before using medicines.

A non-steroidal anti-inflammatory drug, ibuprofen is often taken by athletes, including Olympians and professional soccer players, before an event to prevent pain, Jégou said. Are there health consequences for the athletes who routinely use this NSAID?

The research team recruited 31 male volunteers between the ages of 18 and 35. Of these, 14 were given a daily dosage of ibuprofen that many professional and amateur athletes take: 600 milligrams twice a day, explained Jégou. (This 1200-mg-per-day dose is the maximum limit as directed by the labels of generic ibuprofen products.) The remaining 17 volunteers were given a placebo.

For the men taking ibuprofen, within 14 days, their luteinizing hormones — which are secreted by the pituitary gland and stimulate the testicles to produce testosterone — became correlated with the level of ibuprofen circulating in their blood. At the same time, the ratio of testosterone to luteinizing hormones decreased, a sign of dysfunctional testicles.

This hormonal imbalance produced compensated hypogonadism, a condition associated with impaired fertility, depression and increased risk for cardiovascular events, including heart failure and stroke.

For the small group of young study participants who used ibuprofen for only a short time, “it is sure that these effects are reversible,” Jégou said. However, it’s unknown whether the health effects of long-term ibuprofen use are reversible, he said.

After this randomized, controlled clinical trial, the research team experimented with “little bits of human testes” provided by organ donors and then conducted test tube experiments on the endocrine cells, called Leydig and Sertoli cells, which produce testosterone, explained Jégou.

The point was to demonstrate “in vivo, ex vivo and in vitro” — in the living body, outside the living body and in the test tube — that ibuprofen has a direct effect on the testicles, and thus on testosterone.

“We wanted to understand what happened after exposure (to ibuprofen) going from the global human physiology over to the specific organ (the testis) down to the endocrine cells producing testosterone,” Kristensen said.

More than idle curiosity prompted such an extensive investigation.

Questions around male fertility

The World Health Organization estimates that one in every four couples of reproductive age in developing countries experiences childlessness despite five years of attempting pregnancy.

A separate study estimated that more than 45 million couples, or about 15% of all couples worldwide, were infertile in 2010, while another unrelated study suggested that men were solely responsible for up to 30% of infertility cases and contributed to up to 50% of cases overall.

Meanwhile, a recent analysis published in the journal Human Reproduction Update found that sperm counts of men in North America, Europe, Australia and New Zealand are plunging. Researchers recorded a 52% decline in sperm concentration and a 59% decline in total sperm count over a nearly 40-year period ending in 2011.

Erma Z. Drobnis, an associate professional practice professor of reproductive medicine and fertility at the University of Missouri, Columbia, noted that most drugs are not evaluated for their effects on human male fertility before marketing. Drobnis, who was not involved in the new study, has done extensive research into sperm biology and fertility.

“There is evidence that some medications are particularly harmful to the male reproductive system, including testosterone, opioids, antidepressants, antipsychotics, immune modulators and even the over-the-counter antacid cimetidine (Tagamet),” she said. “However, prescribing providers rarely mention these adverse effects with patients when prescribing these medications.”

She believes the new study, though small, is “important” because ibuprofen is among the most commonly used medications.

Though the new research indicates that ibuprofen disrupts the reproductive hormones of healthy young men, she thinks it’s possible there is an even greater negative effect in men with low fertility. The other over-the-counter drugs of concern for potential fathers are cimetidine and acetaminophen. She recommends that men who are planning to father a child avoid these drugs for several months beforehand.

“Larger clinical trials are warranted,” she said. “This is timely work that should raise awareness of medication effects on men and potentially their offspring.”

Jégou agrees that more study is needed to answer many questions, including whether ibuprofen’s effects on male hormones are seen at low doses and whether long-term effects are reversible.

“But the alarm has been raised now,” he said. “If this serves to remind people that we are really dealing with medical drugs — not with things which are not dangerous — this would be a good thing.”

“We need to remember that it is a pharmaceutical compound that helps a lot of people worldwide,” Kristensen said. He noted, though, that of the three mild analgesics examined, ibuprofen had “the broadest endocrine-disturbing properties identified so far in men.”

A New AI Can Determine If A Person Is Gay Or Straight From A Facial Photograph. This Phenomenon Supports Both The Claims Made By Phrenology Decades Ago And The Claims Made By Happeh Theory Over The Last Decade
http://www.happehtheory.com/2017/09/08/a-new-ai-can-determine-if-a-person-is-gay-or-straight-from-a-facial-photograph-this-phenomenon-supports-both-the-claims-made-by-phrenology-decades-ago-and-the-claims-made-by-happeh-theory-over-the-las/
Fri, 08 Sep 2017 02:43:49 +0000

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research suggesting that machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.
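The pipeline described above, a deep network that converts each photo into a feature vector with a simple classifier trained on top, can be sketched in miniature. The snippet below is a generic illustration of that second stage only (logistic regression over precomputed feature vectors); it is not the authors’ code, and the toy numbers stand in for real network embeddings.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=200):
    """Logistic regression over fixed feature vectors.

    In the study, a deep neural network produces the features; here the
    vectors are simply given, which isolates the idea of a lightweight
    classifier trained on top of learned embeddings."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability
            g = p - yi                      # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Toy "embeddings": two separable clusters standing in for real features.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

The design point is that almost all of the difficulty lives in producing good features; once a photograph has been reduced to a vector, the classifier on top can be very simple.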

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
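The jump from 81% with one photo to 91% with five is roughly what independent-vote arithmetic predicts. The sketch below treats each photo as an independent guess that is correct with probability p and takes a majority vote over five; this independence assumption is a simplification of mine, not the paper’s actual method, but it shows why more images help.

```python
from math import comb

def majority_vote_accuracy(p, n=5):
    """Probability that a majority of n independent guesses, each correct
    with probability p, is correct (n odd)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

single = 0.81  # reported one-image accuracy for men
five = majority_vote_accuracy(single, 5)
# Under this toy model `five` comes out near 0.95, in the same ballpark
# as the 91% reported for five images (real photos of one person are
# correlated, so the true gain is smaller than the independent case).
```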

The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.

It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.

“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. … Now we know that we need protections.”

Kosinski was not available for an interview, according to a Stanford spokesperson. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.

In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”

Scientists Claim Phrenology Is Not A Valid Science. Now, In 2017, A Computer Application Used To Diagnose Genetic Diseases From Facial Pictures Proves That The Basic Principles Of Phrenology Are Correct.
http://www.happehtheory.com/2017/04/10/scientists-claim-phrenology-is-not-a-valid-science-now-in-2017-a-computer-application-used-to-diagnose-genetic-diseases-from-facial-pictures-proves-that-the-basic-principles-of-phrenology-are-correc/
Mon, 10 Apr 2017 21:44:42 +0000

Scientists have derided and ridiculed Phrenology, the science which claims health problems can be diagnosed by visual examination of the head, as a “pseudoscience” with no validity. That is because scientists always attack any other field of science that undercuts their own profit-driven methods of diagnosing and treating health problems. After all, if someone can obtain a health diagnosis from a person who only has to visually examine their head, why would that person pay thousands of dollars to go to a doctor or a hospital for the batteries of expensive tests that Western Medicine employs?

This post is about a news article describing a doctor who can diagnose genetic diseases by simple visual examination of a patient’s face, and a company that has developed a computer program to perform the same diagnosis.

This story proves that Phrenology is a valid science. Phrenology diagnoses illnesses by examining a patient’s head, and the doctor in the following story and the computer software described both diagnose genetic illnesses using a picture of a patient’s face.

The point of posting this story and others like it on this website is to prove that scientists and doctors are stupid. They can and do constantly make mistakes and outright lie about issues they have no comprehension of. Doctors and scientists are only concerned with making money, and will ridicule and attack anything or anybody that can possibly interfere with their ability to make money. Never ever trust a doctor or a scientist. Always question what they say and do, and keep an open mind to any other information that is available about the subject you are interested in.

The news story about facial diagnoses of genetic illness is reprinted below.

Maximilian Muenke has a superpower: He can diagnose disease just by looking at a person’s face.

Specifically, he can spot certain genetic disorders that make telltale impressions on facial features.

“Once you’ve done it for a certain amount of years, you walk into a room and it’s like oh, that child has Williams Syndrome,” he said, referring to a genetic disorder that can affect a person’s cognitive abilities and heart.

And that’s an incredibly useful skill, even as genetic sequencing becomes more widespread. For one thing, it can be the factor that sends someone to get a genetic test in the first place. For another, people in many parts of the world don’t have access to genetic tests at all.

That’s inspired years of effort to train a computer to do the same thing. Software that analyzes a patient’s face for signs of disease could help clinicians better diagnose and treat people with genetic syndromes.

Some older attempts at facial analysis relied on large, clunky scanners – a tool better suited to a lab, not the field. Now, in the era of smartphones, such efforts have a whole new promise. Face2Gene, a program developed by Boston-based startup FDNA, has a mobile app that clinicians can use to snap photos of their patients and get a list of syndromes they might have.

Meanwhile, Muenke and his colleagues at NIH last month published an important advance: The ability to diagnose disease in a non-Caucasian face.

It’s a promising preliminary sign. But if facial recognition software is to be widely useful for diagnoses, software developers and geneticists will need to work together to overcome genetics’ systemic blind spots.

Diagnoses vs. probabilities

The algorithms in general work on the same principles: Measuring the size of facial features and their placement to detect patterns. They’re both trained on databases of photographs doctors take of their patients. The NIH works with partners around the world to collect their photos; FDNA accepts photos uploaded to Face2Gene.

But they differ in a key way: Whereas the NIH algorithm can predict if someone has a given genetic disorder, the Face2Gene algorithm spits out not diagnoses, but probabilities. The app describes photos as being a certain percent similar to photos of people with one of the 2,000 disorders for which Face2Gene has image data, based on the overall “look” of the face as well as the presence of certain features. However, the app won’t give clinicians a yes or no answer to the question of “does my patient have a genetic disorder?”
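A ranked-similarity output of this kind can be sketched generically. In the toy example below, each syndrome is represented by an average (“prototype”) feature vector, and a query face is scored by cosine similarity against each one. The feature values and syndrome names are invented for illustration and have nothing to do with FDNA’s actual model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_syndromes(query, prototypes):
    """Return syndromes ordered by similarity to the query face,
    as percentages rather than a yes/no diagnosis."""
    scores = {name: cosine(query, proto) for name, proto in prototypes.items()}
    return sorted(((round(100 * s, 1), name) for name, s in scores.items()),
                  reverse=True)

# Invented 3-number "feature vectors" standing in for real facial measurements.
prototypes = {
    "Syndrome A": [0.9, 0.2, 0.4],
    "Syndrome B": [0.1, 0.8, 0.6],
}
print(rank_syndromes([0.85, 0.25, 0.45], prototypes))
```

The design choice the article attributes to FDNA falls out naturally here: the function returns a ranked list of similarity percentages, a search result, and never converts that list into a diagnosis.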

That’s intentional. Face2Gene is meant to be more like a search engine for diseases – a means to an end.

“We are not a diagnostic tool, and we will never be a diagnostic tool,” said FDNA CEO Dekel Gelbman.

Drawing that bright line between Face2Gene and “a diagnostic tool” allows FDNA to stay compliant with FDA regulations governing mobile medical apps while avoiding some of the regulatory burden associated with smartphone-based diagnostic tools.

Diversity needed

The algorithm the NIH uses – developed in collaboration with Children’s National Hospital system in Washington, D.C. – seems to work pretty well so far: in 129 cases of Down Syndrome, it accurately detected the disorder 94 percent of the time. For DiGeorge Syndrome, the numbers were even higher: it had a 98 percent accuracy rate across all 156 cases.

Face2Gene declined to provide similar numbers for their technology. “Since Face2Gene is a search and reference informational tool, the terms sensitivity and specificity are difficult to apply to our output,” Gelbman cautioned.
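Accuracy, sensitivity and specificity, the terms contrasted in the two paragraphs above, are straightforward to compute once a tool commits to yes/no calls checked against confirmed diagnoses, which is exactly what a search-style tool avoids doing. A generic sketch, with a hypothetical control group (the article reports only the 94%-of-129 figure, so the split below is invented for illustration):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics from a confusion matrix of yes/no calls
    checked against confirmed diagnoses."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # of true cases, fraction caught
        "specificity": tn / (tn + fp),  # of non-cases, fraction cleared
    }

# Hypothetical example: 121 of 129 true Down Syndrome cases flagged
# (roughly the reported 94%), plus an invented set of 100 controls.
m = classification_metrics(tp=121, fp=5, tn=95, fn=8)
```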

But there’s one big stumbling block for both of them, a problem that has dogged medical genetics for decades: Data for non-white populations is sorely lacking.

“In every single textbook, the ones we had [when I trained] in Germany and the major textbooks here in the US, there are photos of individuals of Northern European descent,” Muenke said. “When I told this to my boss, he said there have to be atlases for children from diverse backgrounds. And there aren’t. There just aren’t.” (Today there is that resource, based on Muenke and the NIH’s work.)

So diagnosing diseases from a face alone presents an additional challenge in countries where the majority of the population isn’t of northern European descent, because some facial areas that vary with ethnic background can often overlap with areas that signify a genetic disorder. Eventually, the software will also have to be able to tackle people with mixed ethnic backgrounds, too. “We have thought about it but haven’t gone there yet,” Muenke said.

For example, children with Down Syndrome often have flat nasal bridges – as do typically developing African or African-American children. Across different races and ethnicities of children, only two identifiers proved reliable for diagnosing Down Syndrome – the angles between landmark points on the child’s nose and eye – according to a paper Muenke and Marius Linguraru published with colleagues earlier this year. All of the other “typical” features weren’t significantly more likely to show up when children were compared to ethnically matched controls.
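The “angles between landmark points” mentioned above are ordinary planar angles, computable from any three landmark coordinates. The sketch below is generic geometry, not the paper’s pipeline, and the coordinates are invented.

```python
import math

def angle_at(vertex, p1, p2):
    """Angle (in degrees) formed at `vertex` by rays toward landmarks
    p1 and p2. Points are (x, y) pixel coordinates of facial landmarks."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Invented coordinates: angle at the nasal bridge between two eye corners.
print(angle_at((50, 40), (30, 30), (70, 30)))
```

An angle of this kind is invariant to the photo’s scale and translation, which is one plausible reason such measurements can hold up across children of different sizes and across camera setups, whereas raw distances cannot.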

In fact, using a Caucasian face as a reference can sometimes be the least representative choice. “One of the findings that I’m very interested in [in] our recent study was that the population that we found to be most different from the others, in terms of facial patterns characteristic of DiGeorge Syndrome, was the Caucasian population,” Linguraru said.

To continue to fix this problem, both the NIH and Face2Gene need help from more researchers who can upload more patients’ faces – but that’s easier said than done. Confirming a suspected disorder with genetic tests is standard practice today, and there are no genetic labs based in Africa registered in the NIH’s Genetic Testing Registry. Asia and South America are also relatively underserved.

Those numbers also reflect the general patterns of distribution for medical geneticists. “Most practitioners are located in North America and Europe,” Gelbman said. Nigeria, for example, doesn’t have a single medical geneticist in the entire country.

It’s possible that might change, with time and effort. In addition to his work as a researcher, Muenke directs a program that brings healthcare professionals from developing countries to the US for a month-long crash course in medical genetics. (The program is funded by the NIH’s Fogarty International Center; President Trump eliminated funding for the center in his 2018 “skinny” budget announced in March.)

For now, both algorithms have shown that they can handle a diverse patient set. FDNA scientists published a paper in January showing that their algorithm could better identify Down Syndrome after being trained with a more-diverse set of faces, and Muenke and Linguraru have also published papers this year demonstrating their algorithm’s ability to identify genetic disorders correctly in children across a variety of ethnic backgrounds.

As both groups work on recruiting more researchers, they are also working to push their tech forward. FDNA is working on establishing partnerships with pharmaceutical companies to start their commercial outreach. In theory, these partnerships could contribute to precision medicine efforts or help companies develop new therapies for rare diseases.

Meanwhile, Linguraru has his eyes on eventual FDA approval for the algorithm the NIH has used. The ultimate goal would be a simple tool that any doctor could use anywhere to get fast results and better diagnose their patients.

Elderly Put At Risk By Needless Medication, Study Finds
http://www.happehtheory.com/2016/10/16/elderly-put-at-risk-by-needless-medication-study-finds/
Sun, 16 Oct 2016 08:32:46 +0000

One of the main reasons people give for disbelieving the claims of Happeh Theory, chief among them that masturbation will make a human being blind and crippled, is that scientists claim masturbation is harmless. For that reason this website highlights the fact that scientists are constantly being proven wrong about pronouncements they make, and scientific advice is constantly being shown to be harmful, and even to cause death among people who follow it. The hope is that when readers are confronted with all of the situations where scientists are proven wrong or their advice is proven harmful, they will discard their blind trust in scientists and become more open to the claims of Happeh Theory.

The study that this post is based on found that older people are frequently put at risk by being prescribed needless medications. The only reasons that come to mind for prescribing elderly people drugs that do not help them, and that actually put them at risk, are either incompetence on the part of doctors, or profit-making by the doctors themselves or the institutions they work for.

If the doctors are incompetent, then why would anyone believe them when they state that masturbation is harmless? And if they are prescribing drugs only to make money and not for health purposes, then maybe doctors say masturbation is harmless because they have been paid by somebody to make a claim that they know to be false. The study that this post is based on is reprinted below.

A third of elderly patients may be receiving unnecessary medication, putting them at needless risk of side-effects and costing the NHS millions, a study has shown.

A review of 1,800 over 75s at NHS Croydon found that the average patient had been prescribed six different drugs.

But after a reassessment hundreds of prescriptions were cancelled, with up to one third of patients taken off at least one drug.

Hundreds of prescriptions were stopped because they were no longer effective and dozens because the patients were experiencing side effects or drug reactions.

A further 121 patients were sent to their GP for further review, and 89 patients had their dose reduced.

The most common drugs which were stopped were the blood-thinning drugs warfarin and clopidogrel, aspirin, alendronic acid for osteoporosis, cetirizine for hay fever and allergies, Laxido for constipation, omeprazole for gastric reflux and Adcal-D3, a drug to boost calcium and vitamin D levels.

The research, carried out by pharmaceutical consultants Interface Clinical Services, predicted that the changes would save the NHS around £192,000 a year. However, there are more than 5 million over 75s in Britain, which suggests that the cost savings across the country, if everyone were similarly assessed, could amount to millions.
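The extrapolation above is a simple linear scaling. The arithmetic below makes the assumption explicit (that Croydon’s 1,800 reviewed patients are representative of all over 75s nationally); real savings would differ, since prescribing patterns and the cost of running the reviews vary.

```python
# Figures reported in the article:
croydon_saving = 192_000          # pounds per year, from the Croydon review
croydon_patients = 1_800
over_75s_in_britain = 5_000_000   # "more than 5 million", per the article

per_patient = croydon_saving / croydon_patients          # ~107 pounds/year
national_estimate = per_patient * over_75s_in_britain
```

Under this naive scaling the national figure runs into the hundreds of millions of pounds a year; that is still “millions” as the article says, but the true number is far less certain than the arithmetic makes it look.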

Charities said polypharmacy – in which patients are prescribed a number of drugs – was becoming an increasing problem.

Caroline Abrahams, Charity Director at Age UK, said: “We know that the more medications you take, the greater the risks, such as the risk of giddiness and of falling. This is because of what happens when different drugs interact, and in the worst cases older people can even end up in hospital.

“This will be an increasing problem as our population ages, with as many as three million older people expected to be regularly taking multiple medicines by 2018.

“It is therefore extremely important that it becomes routine for older people to have their medication reviewed regularly, and the more drugs they are taking the more important this is.”

Katherine Murphy, Chief Executive of The Patients Association, added: “The Patients Association has been aware for some time about the problem of patients being prescribed too many different medicines, often with one medicine being prescribed to reduce the symptoms of another.

“The lack of any regular review of a patient’s medication, sometimes over several years, is concerning, particularly for vulnerable older people who may feel unable or are unwilling to challenge this aspect of their care.

“We welcome any initiative to improve this situation in the interests of patient safety. The research highlights that medication is often not required, may have limited benefits and can be unnecessarily expensive.”

Four out of five people aged 75 or over now take at least one prescribed medication, and patients on multiple medications are more likely to suffer drug side effects and adverse reactions. Adverse reactions and side effects from drugs account for between five per cent and 17 per cent of all hospital admissions.

“There is a clear and steady increase in the number of patients admitted to hospital with drug-related side effects,” said a spokesman for Interface.

“By conducting this kind of clinical assessment in primary care practices, Interface is helping to reduce drug-related adverse events in over 75s and decrease the burden that they place on secondary care services.”

The research was presented at the Royal College of GPs annual conference.