predatory mental health treatments

My quest is to “untangle” the bizarre mess that “researchers” have created around ASD / Asperger’s symptoms and the “co-morbidity” of anxiety.

How difficult a question is this?

Is anxiety a “big problem” for individuals diagnosed with Asperger’s? If yes, then is it commonly “debilitating” in that it prevents the person from engaging in successful employment, satisfying relationships, and “freedom” to engage the environment by participating in activities that are important to their “happiness”?

And yet, what I encounter are articles, papers, and studies that focus on the argument over whether anxiety is part of the ASD / Asperger’s diagnosis or a co-morbid condition. Anxiety, for “experts,” has taken on the “power” of the Gordian knot! Honestly? This is the typical “point” at which an Asperger “loses it” and wants to simply declare that neurotypicals are idiots… but I’m on a mission to help myself and my fellow Asperger types survive in social reality. We’re not going to find logical, reality-based “answers” in psychology or even in neuroscience… we are on our own.

So let’s look at anxiety, another of those words whose meaning and utility have been destroyed by neurotypical addiction to “over-generalization” and fear of specificity!

Over the past few months, I have experienced an increase in “sudden onset” panic attacks: it’s not as if I can’t assign a probable cause. The facts of my existence (age, health, financial problems) are enough to fill up and overflow whatever limit of tolerance I can summon each day. Severe (and sometimes debilitating) anxiety has been integral to my existence since at least age 3, the time of my first “remembered” meltdown. I can honestly say that if it were not for “anxiety” manifesting as sudden meltdowns, panic attacks, “background radiation” and other physical reactions (who cares what they are labeled?), my life would have been far easier, with much more of my time and energy available to “invest” in activities of choice, rather than in surviving the unpredictable disruptions that I’ve had to work around. The fact that I’ve had an interesting, rich and “novel” existence is thanks to maximizing the stable intervals between anxiety, distress, and exhaustion – and avoiding alien neurotypical social expectations and toxic environments as much as possible.

Here is a simple formula that I have followed:

Life among NTs is HELL. I deserve to “reserve” as much time as possible for my intrinsically satisfying interests; for pursuit of knowledge, experiences and activities that enable me to become as “authentic” to “whoever and whatever I am” as possible.

This realization came long, long before diagnosis, and I had to accept the distinct possibility that there was no “authentic me” – and that if there was, it might be a scary discovery. But ever-present Asperger curiosity and dogged persistence would accept no other journey. It is important to realize that, Asperger or not, this type of “classic quest” has been going on in human lives for thousands of years, and for the most part in defiance of social disapproval (often regarded as a serious threat) by societies world-wide, which impose on individuals a carefully constructed catalogue of roles and biographies handed down from “on high”.

The point is that the choice to “go my own way” was “asking for it” – IT being endless shit (and the accompanying anxiety) dumped on human beings existing on all levels of the Social Pyramid, but especially directed toward any group or individual who is judged to be “antisocial” or inferior. I have encountered conflicts large and small, and was exposed to “human behavior” in ways I couldn’t have imagined.

What I have confronted in “normdom” is the strange orientation of “experts” who ignore the contribution of environmental sources to hyperarousal, a physiological reaction to conditions in the environment. (Note: fear, anxiety, and all the “emotion-words” are merely the conscious verbal expressions that infants and children ARE TAUGHT to utilize in social communication, and for social purposes.) These words are not the physiological experience.

A feedback “loop” exists between the environment and the human sensory system. The physiology of fear and anxiety is an ancient “alarm system” that promotes survival, but in the human behavior industry, anxiety has been “segregated” and classified as a pathology – an utterly bizarre, irrational, and dangerous idea. The result is that “normal” human reactions and behavior, provided by millions of years of evolutionary processes, and which PROTECT the individual, are now “forbidden” as “defects” in the organism itself. Social involvement and culpability are “denied” – responsibility for the abuse of humans and animals by social activity is erased!

Social indoctrination: the use of media, advertising, marketing, political BS and constant “messaging” to present “protective evolutionary alerts and reactions” (awareness of danger; physiological discomfort, stress and illness) as YOUR FAULT. You have a defective brain. It’s a lie.

Due to an entrenched system of social hierarchy (inequality), social humans continue to be determined to “wipe out” the human animal that evolved in nature, and replace it with a domesticated / manufactured / altered Homo sapiens that, just like domesticated animals, will survive and reproduce in the most extreme and abusive conditions.

This “domestic” hypersocial human is today represented as the pinnacle of evolution.

Human predators (the 1% who occupy “power positions” at the top of the pyramid) merely want to ensure that the status quo is maintained – that is, the continued exploitation of the “observation” that domesticated humans will adapt to any abuse and still serve the hierarchy. This “idea” also allows for the unconscionable torture and abuse of animals.

The “expert” assumption is that a normal, typical, socially desirable human, as defined by the “human behavior” priesthood, can endure any type and degree of torture, stress, and abuse, whether chronic or episodic, and come out of the experience UNCHANGED: undamaged and exploitable. Any variation from this behavioral prescription is proof of a person’s deviance, inferiority and weakness.

The most blatant example of this “attitude” is the epidemic of PTSD and suicide among soldiers returning from the HELL of combat. Not that many wars ago, militaries literally “executed” soldiers suffering from this “weakness, cowardice and treason” on the battlefield, or “exiled” them to asylums as subhuman and defective “mistakes”. Now we ship soldiers home who have suffered extreme trauma and “treat” them so badly that suicide has become the only relief for many. Having the afflicted remove themselves, rather than “murdering” them, is considered compassionate progress.

And my point is about relief: I concluded long ago that chronic and episodic “hyperarousal” must be treated immediately with whatever works; in my experience, that means medication. Despite limiting one’s “exposure” to toxic social environments, one cannot escape the damage done to human health and sanity.

Some relief can be had by employing activities and adjustments in thinking patterns that (usually by trial and error) can mitigate physical damage. But what we must remember is that anxiety, fear, distress and the “urge to flee” are healthy responses to horrible human environments. How many mass migrations of “refugees” are underway at any time, with thousands, even millions, of people seeking “new places” to live a life that is proper to a healthy human?

It’s that time of year: I’m a huge hockey fan, having grown up with the Chicago Blackhawks on black and white TV. No helmets. Scarred faces, missing teeth. Much less padding. But were there more fights than today? I honestly can’t say.

Today it’s (my friend’s) vast screen TV and obnoxious pregame “shows” (what else can be expected from Las Vegas, that great faux-gold turd in the desert?). But I wouldn’t want to deprive anyone of experiencing the “joy” of the Stanley Cup Final. My friend has never been a fan of hockey (or any sport), but after enforced viewing of some of the playoff games, she’s “hooked” already – unfortunately, by the Las Vegas Knights. My guess concerning my friend’s rapid “seduction” by hockey? Well, it is a fantastic sport, after all! And a chance for female Homo sapiens to observe and enjoy “men being men” without any collateral damage, like in war. They can punch each other all they want; no one else gets hurt.

I have also observed that the “mental qualities” manifested by players may be a guide to help nervous Asperger types approach our confrontations with “hostile” neurotypicals. No kidding!

Performance Anxiety and Pregame Jitters

Many athletes feel performance anxiety in the opening minutes of the game. You may feel butterflies in your stomach or your heart pounding. Some athletes like to feel pregame jitters before competition. These athletes think of pregame jitters as a sign of readiness and energy. Other athletes think of pregame jitters as a sign of nervousness.

I would say that “pregame jitters” are a fact of life for Asperger types: every social interaction, every day – along with a strong tendency to “rehearse” upcoming events – but aren’t daily practice and visualization vital to athletic excellence? Can we change our “attitude” toward this anticipatory physical phenomenon, and perhaps take a “neutral” view? I know, it seems a difficult task! LOL

Pregame jitters are a natural part of competing and a sign you are ready to embrace competition. Even the best athletes in the world get the jitters. Michael Leighton, goaltender for the Philadelphia Flyers, admitted to feeling nervous before his first NHL playoff game. “My legs were shaking a little bit, I was nervous,” Leighton said. “Once I made a few saves, you kind of forget about that and just get focused. It kind of goes away.”

This seems applicable to Asperger types; often, once I get to the “performance” part of social interaction, something automatic takes over and I jabber away –

The mistake many athletes make is interpreting pregame jitters as a sign that something is wrong. Pregame jitters can be harmful when they don’t go away in the opening minutes of the game. They can cause you to lose confidence and focus. When you’re focused on how nervous you feel, you lose focus on the present task.

Athletes need to embrace pregame jitters as a sign they are ready to play. Your mental game tip is to stay calm when you experience pregame jitters in the opening minutes. (Yes, but how???) Stay focused on your strategy and what’s important to execute. Pregame jitters are important to help you prepare for the game, and they will help you focus your best if you embrace them! Think of it this way: the best athletes get worried if they don’t experience pregame jitters!

Maybe our tendency to rehearse is an asset, if we use that energy to devise a strategy! Is rehearsal another “asset” that NT psychologists misinterpret as a defect?

Listed below are some mental game tips to help you perform your best under pressure and in the big game.

What seem like minor everyday social interactions for NTs can be extremely “big game” status for us!

1. Develop a consistent pregame routine. (Yes, psychologists judge preference for establishing routines in ASD / Asperger individuals a “developmental defect”. Screw them! We need to use our traits as assets…) A pregame routine can help transition you into the right mindset before competition. While you’re warming up your body, you also want to mentally prepare for the upcoming game. A pregame routine will help you focus your mind, prepare to feel confident, and to trust in your practice. During your pregame routine, remind yourself to trust in the practice you have done leading up to the game.

2. Focus on your game not your competitors. Many athletes tend to make comparisons to their competitors and thus psych themselves out. When you do this, you typically make negative comparisons, which can cause you to lose confidence in your game. Instead of gawking at your competitors, you want to focus on your pregame preparation. You should focus on your strengths and abilities, for example.

3. Focus on the process, not results. Focusing on results causes you to think too far ahead and sets too many expectations for competition. When you focus on the results, you lose focus on the current play, point or shot and you can’t perform your best. Remind yourself that focusing on results doesn’t help you execute. Then, refocus quickly on what’s important, such as the target or your strategy for the current play.

4. Have trust in yourself. Some athletes lose trust and tighten up in the big game. This can cause you to over-control your performance and not play freely. You want your performance to just happen, without thinking too much about “how to” execute your skills. For example, a batter needs to react to the ball instead of thinking about how to make a good swing. Simplify your thoughts to a single thought or image that helps you execute (e.g., feeling balanced). Avoid thinking too much about how-to or technique.

Overall, you want to treat the big game as any other game. You don’t want to place too much importance on one game, which can lead to added pressure, a lack of focus, and a loss of trust in your game. Focus on what you do best. Follow your usual pregame routine and mentally prepare for the big game just like you would any other game.

What does all this point to for Asperger types? We must free ourselves from the poisonous messages we have received all our lives from neurotypicals who “judge” our traits and behaviors as “defective and subhuman.” And USE our cognitive skills and superior sensory abilities to our advantage! This is very different from submitting to being “trained monkeys,” as social humans demand of us.

This debate is just one more “Catholics vs. Protestants”-type religious war over “who owns the hearts, minds and fates of children” – and their $$ insurance coverage. I wish for once that genuine scientific thinking – and compassion – had some influence on reproduction and the health of fetuses, infants, children, young adults and their families. (We adults are on our own in this NT-produced nightmare of irrational, supernatural thinking.) LOL

Sensory processing disorder is a condition in which the brain has trouble receiving and responding appropriately to information that comes in through the senses.

One of the most mind-boggling aspects of autism research is the insistence that ASD / Asperger types have “defective sensory systems” because they “react negatively” to “toxic” modern social environments: this is so backwards! “Autism experts” expect human fetuses, infants and children to develop “normally” in what are ANTI-LIFE artificial environments. This is simply ignorant “social blindness” to the fact that “toxic environments” are wreaking havoc on human physiology, which evolved over millions of years in Nature, and not in polluted, unhealthy and high-stress “concentration camps” of millions of people, denied clean air and water, nutritious food, privacy, autonomy and self-preservation. Attempts are made to “drug” and medically intervene in the catastrophic so-called “mental illness” epidemic and the failing health of millions; drastic medical intervention is resorted to in order to “artificially adapt” Homo sapiens to killer environments. Not even the cascade of species extinctions caused by modern human degradation of the environment can “penetrate” the ignorance of social humans.

The expression of genes in an organism can be influenced by the environment, including the external world in which the organism is located or develops, as well as the organism’s internal world, which includes such factors as its hormones and metabolism. One major internal environmental influence that affects gene expression is gender, as is the case with sex-influenced and sex-limited traits. Similarly, drugs, chemicals, temperature, and light are among the external environmental factors that can determine which genes are turned on and off, thereby influencing the way an organism develops and functions.

Sex-Influenced and Sex-Limited Traits

Sex-influenced traits are those that are expressed differently in the two sexes. Such traits are autosomal, which means that the genes responsible for their expression are not carried on the sex chromosomes. An example of a sex-influenced trait is male-pattern baldness. The baldness allele, which causes hair loss, is influenced by the hormones testosterone and dihydrotestosterone, but only when levels of the two hormones are high. In general, males have much higher levels of these hormones than females, so the baldness allele has a stronger effect in males than in females. However, high levels of stress can lead to expression of the gene in women. In stressful situations, women’s adrenal glands can produce testosterone and convert it into dihydrotestosterone, which can result in hair loss.
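The conditional logic of a sex-influenced trait described above can be sketched in a few lines of code. This is purely an illustration: the threshold and the numeric hormone levels are invented, and real expression is graded, not a simple on/off switch.

```python
# Toy model of a sex-influenced trait: the (autosomal) baldness allele
# only produces hair loss when dihydrotestosterone (DHT) is high.
# DHT_THRESHOLD and the example levels below are hypothetical values.

DHT_THRESHOLD = 0.7  # hypothetical normalized hormone level


def baldness_expressed(has_allele: bool, dht_level: float) -> bool:
    """Return True when the allele is present AND hormone levels permit expression."""
    return has_allele and dht_level >= DHT_THRESHOLD


# Typical male: same allele, high DHT -> expressed
print(baldness_expressed(True, 0.9))   # True
# Typical female: same allele, low DHT -> not expressed
print(baldness_expressed(True, 0.2))   # False
# Chronic stress can raise a woman's DHT -> expression becomes possible
print(baldness_expressed(True, 0.8))   # True
```

The point of the toy model is that the genotype is identical in every call; only the internal (hormonal) environment changes the outcome.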

Sex-limited traits are also autosomal. Unlike sex-influenced traits, whose expression differs according to sex, sex-limited traits are expressed in individuals of only one sex. An example of a sex-limited trait is lactation, or milk production. Although the genes for producing milk are carried by both males and females, only lactating females express these genes.

Drugs and Chemicals

The presence of drugs or chemicals in an organism’s environment can also influence gene expression in the organism. Cyclops fish are a dramatic example of the way in which an environmental chemical can affect development. In 1907, researcher C. R. Stockard created cyclopean fish embryos by placing fertilized Fundulus heteroclitus eggs in 100 mL of seawater mixed with approximately 6 g of magnesium chloride. Normally, F. heteroclitus embryos feature two eyes; however, in this experiment, half of the eggs placed in the magnesium chloride mixture gave rise to one-eyed embryos (Stockard, 1907).

A second example of how chemical environments affect gene expression is the case of supplemental oxygen administration causing blindness in premature infants (Silverman, 2004). In the 1940s, supplemental oxygen administration became a popular practice when doctors noticed that increasing oxygen levels converted the breathing pattern of premature infants to a “normal” rhythm. Unfortunately, there is a causal relationship between oxygen administration and retinopathy of prematurity (ROP), although this relationship was unknown at the time; thus, by 1953, ROP had blinded approximately 10,000 infants worldwide. Finally, in 1954, a randomized clinical trial identified supplemental oxygen as the factor causing blindness. Complicating the issue is the fact that too little oxygen results in a higher rate of brain damage and mortality in premature infants. Unfortunately, even today, the optimal amount of oxygenation necessary to treat premature infants while completely avoiding these complications is still not clear.

How much devastating and lifelong damage continues to be suffered by infants due to the “promotion” of premature birth as the new normal?

Yet another example of the way in which chemicals can alter gene expression involves thalidomide, a sedative, antiemetic, and nonbarbiturate drug that was first manufactured and marketed during the mid-1950s. While thalidomide has no discernable effect on gene expression and development in healthy adults, it has a profoundly detrimental effect on developing fetuses. When the drug was first created, however, its impact on fetuses was not known. Moreover, because of its apparent lack of toxicity in adult human volunteers, thalidomide was marketed as the safest available sedative of its time and rapidly became popular in Europe, Australia, Asia, and South America for countering the effects of morning sickness. (In the United States, the drug failed to receive Food and Drug Administration approval because its side effects included tingling hands and feet after long-term administration, which led to concerns that the drug might be associated with neuropathy.) Not until 1961 did Australian researcher William McBride and German researcher Widukind Lenz independently report that thalidomide was a teratogen, meaning that its use was associated with birth defects. (How many of today’s medical drugs and treatments will be / are being “discovered” to be dangerous, due to ignorance and a lack of diligent testing, when “rush to market” profitability is the motivation, not human benefit?) Another study associated thalidomide use with neuropathies. Sadly, the drug was withdrawn too late to prevent severe developmental deformities in approximately 8,000 to 12,000 infants, many of whom were born with stunted limb development. Interestingly, despite the fact that thalidomide is dangerous during embryonic development, the drug continues to be used in certain instances even today. For example, it has therapeutic potential in treating leprosy, and in recent years, it has also been used to treat cancers and enhance the effectiveness of cancer vaccines (Bartlett et al., 2004; Fraser, 1988).

Nature is packed full with evidence for the wide-ranging and specific effects of environment! Why do “researchers” cling to the socio-religious fantasy that humans, as “special creations” are somehow “exempt” from the consequences of animal reality?

In addition to drugs and chemicals, temperature and light are external environmental factors that may influence gene expression in certain organisms. For example, Himalayan rabbits carry the C gene, which is required for the development of pigments in the fur, skin, and eyes, and whose expression is regulated by temperature (Sturtevant, 1913). Specifically, the C gene is inactive above 35°C, and it is maximally active from 15°C to 25°C. This temperature regulation of gene expression produces rabbits with a distinctive coat coloring. In the warm, central parts of the rabbit’s body, the gene is inactive, and no pigments are produced, causing the fur color to be white (Figure 1). Meanwhile, in the rabbit’s extremities (i.e., the ears, tip of the nose, and feet), where the temperature is much lower than 35°C, the C gene actively produces pigment, making these parts of the animal black.
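The two temperature thresholds given above (inactive at 35°C and above, maximally active at 15°C to 25°C) are enough to sketch the regulation as a small function. Note that the smooth ramp between 25°C and 35°C is an assumption made for illustration; the text only specifies the endpoints.

```python
# Minimal sketch of temperature-regulated expression of the Himalayan
# rabbit's C (pigment) gene, using only the thresholds stated in the text.

def c_gene_activity(temp_c: float) -> float:
    """Return relative pigment-gene activity (0.0-1.0) at a given tissue temperature."""
    if temp_c >= 35.0:
        return 0.0                      # warm trunk: gene inactive -> white fur
    if temp_c <= 25.0:
        return 1.0                      # cool extremities: fully active -> black fur
    return (35.0 - temp_c) / 10.0       # assumed linear transition (not in the text)


print(c_gene_activity(37.0))  # 0.0 -> white body core
print(c_gene_activity(20.0))  # 1.0 -> black ears, nose tip, feet
```

The same genome thus yields two fur colors on one animal, purely as a function of local temperature.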

Light can also influence gene expression, as in the case of butterfly wing development and growth. For example, in 1917, biologist Thomas Hunt Morgan conducted studies in which he placed Vanessa urtica and Vanessa io caterpillars under red, green, or blue light, while other caterpillars were kept in the dark. When the caterpillars developed into butterflies, their wings showed dramatic differences. Exposure to red light resulted in intensely colored wings, while exposure to green light resulted in dusky wings. Blue light and darkness led to paler colored wings. In addition, the V. urtica butterflies reared under blue light and V. io butterflies reared in the dark were larger than the other butterflies.

As these examples illustrate, there are many specific instances of environmental influences on gene expression. However, it is important to keep in mind that there is a very complex interaction between our genes and our environment that defines our phenotype and who we are.

And yet “autistic” children are blamed for physical effects that originate in unhealthy, human-created and imposed toxic environments – which, not surprisingly, are also negatively impacting ALL HUMANS.

A new study analyzing over 21,000 participants found that differences in activation of brain regions in different psychological “disorders” may have been overestimated, and confirms that there is still no brain scan capable of diagnosing a mental health concern.

A new study, published in the journal Human Brain Mapping, questions previous findings that specific brain regions are implicated in particular mental health conditions. Instead, according to the researchers, biased study design and the difficulty of publishing negative findings may have led to inaccurate results. While the researchers did find some differences in brain activation between people with mental health conditions and people without mental health conditions, they were not able to discriminate between specific diagnoses. The current study suggests that there are few, if any, differences in brain regions activated by specific mental health conditions. That is, there is still no brain scan that can tell whether a person has depression, social anxiety, or schizophrenia, for example.

Researchers have theorized that the different symptom clusters that form mental health diagnoses are linked to specific regions of the brain. If confirmed, such a finding would suggest that mental health diagnoses have biological components that could be targeted medically. However, the finding of the current study undermines this theory. Instead, the results indicate that while there is a general tendency for large parts of the brain (such as the amygdala and the hypothalamus) to be activated in a number of mental health conditions (as well as when humans are under stress in a number of ways), there is little difference between the varying diagnoses—even for diagnoses as seemingly different as social anxiety, depression, and schizophrenia.

The researchers were led by Emma Sprooten (Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York City). They used statistical tests to combine the results from 547 studies, which enabled them to analyze the data from 21,692 participants. The studies compared the brain scans of healthy participants with participants who were diagnosed with major depressive disorder, bipolar disorder, schizophrenia, obsessive compulsive disorder (OCD), and anxiety disorders, including social anxiety disorder, generalized anxiety disorder, panic disorder, specific phobias, and post-traumatic stress disorder (PTSD).

The studies in question used functional magnetic resonance imaging (fMRI), a common type of brain scan which creates images based on blood oxygenation levels within the brain. Higher blood oxygenation levels are assumed to indicate areas involved in more activity. Thus, an fMRI result is theorized to indicate which areas of the brain are activated or deactivated for particular tasks or states of being.

Importantly, fMRI has endured its own questions of bias. A recent article, published in the Proceedings of the National Academy of Sciences, confirmed a previous finding that up to 70% of the results in fMRI studies may actually be “false positives”—that is, finding a result when there actually is none. Nikos K. Logothetis wrote, in a 2008 article in Nature, that the fMRI “is an excellent tool for formulating intelligent, data-based hypotheses, but only in certain special cases can it be really useful for unambiguously selecting one of them, or for explaining the detailed neural mechanisms underlying the studied cognitive capacities.” That is, fMRI results can inform the questions we ask, but they can rarely answer those questions. Unfortunately, the neuropsychiatric literature is riddled with fMRI studies that purport to do just that.
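A quick way to see how mass statistical testing manufactures “false positives”: the family-wise error rate grows rapidly with the number of comparisons. This sketch shows the textbook multiple-comparisons effect for independent tests; it is a simplified illustration, not the specific cluster-inference flaw analyzed in the PNAS article.

```python
# With a per-test threshold of p < 0.05 and NO true effect anywhere,
# the probability of at least one spurious "activation" rises quickly
# as the number of independent tests grows. fMRI analyses involve
# thousands of voxels, so uncorrected results are nearly guaranteed
# to contain false positives.

def familywise_error_rate(alpha: float, n_tests: int) -> float:
    """Probability of at least one false positive across n independent tests."""
    return 1.0 - (1.0 - alpha) ** n_tests


for n in (1, 10, 100, 1000):
    print(n, round(familywise_error_rate(0.05, n), 3))
```

With 100 uncorrected tests, a spurious finding is all but certain (probability above 0.99), which is why correction procedures such as Bonferroni exist at all.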

Another recent study attempted to showcase just how much fMRI results rely on subjective interpretation. The researcher, Joshua Carp of the University of Michigan, examined a single fMRI event and found that there were 34,560 different results that could be reached by following different analysis procedures. He argues that the choice of analysis procedure is a subjective one, and researchers may try numerous procedures in order to achieve a positive result. He suggests that in the future, researchers must clearly specify which procedure they will use in order to reduce this extraordinary bias.
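Carp’s figure of 34,560 results is easy to believe once you notice that independent analysis choices multiply. The step names and option counts below are hypothetical, chosen only so that their product matches the number quoted above; Carp’s actual decision points differ.

```python
# Analysis flexibility compounds multiplicatively: a modest number of
# per-step choices yields tens of thousands of distinct pipelines.
import math

pipeline_choices = {            # hypothetical steps and option counts
    "slice-timing correction": 2,
    "motion regression": 2,
    "head-motion scrubbing": 2,
    "temporal filtering": 2,
    "spatial normalization": 2,
    "smoothing kernel": 2,
    "autocorrelation model": 2,
    "hemodynamic basis set": 2,
    "temporal derivative": 3,
    "global signal handling": 3,
    "statistical threshold": 3,
    "cluster correction": 5,
}

n_pipelines = math.prod(pipeline_choices.values())
print(n_pipelines)  # 34560
```

A researcher free to try pipelines until one “works” is, in effect, running tens of thousands of silent experiments, which is exactly the subjectivity Carp warns about.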

Sprooten and her colleagues framed their results as addressing the common practice of “reverse inference,” which has been challenged by other researchers as well. In reverse inference, researchers pre-select which brain regions (“regions of interest,” or ROIs) they are going to study in order to maximize potential results – rather than examining the whole brain to determine which areas are activated. Put simply, if you study only a particular area, then you will never see whether there is activation in other brain regions during your test. You will only ever find activation in your pre-selected area. This result is often taken to indicate that particular disorders are associated with activation in particular regions – but this conclusion rests on the unexamined assumption that researchers would not have found other areas had they looked at the whole brain.
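The ROI pre-selection problem can be made concrete with a toy simulation. The region names and effect sizes here are invented for illustration: if several regions are comparably “active,” an analysis restricted to one pre-selected ROI can only ever report that ROI.

```python
# Toy demonstration of reverse inference: hypothetical group-difference
# effect sizes across four brain regions, all comparably elevated.
activation = {
    "amygdala": 0.30,
    "hypothalamus": 0.28,
    "insula": 0.31,
    "prefrontal cortex": 0.29,
}
THRESHOLD = 0.25  # hypothetical significance cutoff


def roi_analysis(region: str) -> list[str]:
    """Look only at the pre-selected region, as ROI studies do."""
    return [region] if activation[region] > THRESHOLD else []


def whole_brain_analysis() -> list[str]:
    """Examine every region at the same threshold."""
    return [r for r, a in activation.items() if a > THRESHOLD]


print(roi_analysis("amygdala"))   # ['amygdala'] -- looks disorder-specific
print(whole_brain_analysis())     # all four regions -- the specificity vanishes
```

The ROI result is not wrong, merely radically incomplete, which mirrors the paper’s finding that apparent diagnosis-specific activations disappeared once whole-brain studies were factored in.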

The strength of the current study was its ability to compare ROI studies (studies that focused on only specific regions of the brain) with the results from whole-brain studies. The ROI studies tended to find differences in which brain regions were activated by different mental health conditions. However, once the whole-brain studies were factored in, these findings disappeared. When all studies were included, there were no differences between the diagnoses.

Notably, the researchers only included studies that found significant results—that is, those that purported to find differences between those with mental health diagnoses and those without. Their results would likely be even more striking if they factored in the studies with negative results—studies that did not find differences.

Sprooten writes:

“The pre-selection of ROIs, possibly in combination with the difficulty of publishing negative results, seems to bias the literature and may indirectly lead to oversimplification and over-localization of neurobiological models of behavior and symptoms.”

Choosing a brain region to examine, rather than examining the whole brain, appears to lead to biased, oversimplified results. Likewise, the conclusion that Logothetis reaches in his Nature article is that “the limitations of fMRI are not related to physics or poor engineering, and are unlikely to be resolved by increasing the sophistication and power of the scanners; they are instead due to the circuitry and functional organization of the brain, as well as to inappropriate experimental protocols that ignore this organization […] The magnitude of the fMRI signal cannot be quantified to reflect accurately differences between brain regions, or between tasks within the same region.”

The study conducted by Sprooten and her colleagues suggests that many fMRI studies misrepresent the abilities of brain scans. As Logothetis argues,

using fMRI results to confirm pre-existing theories of brain-region activation in mental health diagnoses is in direct contradiction of the abilities of fMRI technology. (It’s FRAUD!)

In short, brain scan research is of limited use in explaining the complex psychological states of human beings. If a neurological answer seems clear and easy, it is probably being misrepresented and oversimplified.

A Brief History of Psychiatry

Psychiatry got its name as a medical specialty in the early 1800s. For the first century of its existence, the field concerned itself with severely disordered individuals confined to asylums or hospitals. These patients were generally psychotic, severely depressed or manic, or suffered conditions we would now recognize as medical: dementia, brain tumors, seizures, hypothyroidism, etc. As was true of much of medicine at the time, treatment was rudimentary, often harsh, and generally ineffective. Psychiatrists did not treat outpatients, i.e., anyone who functioned even minimally in everyday society. Instead, neurologists treated “nervous” conditions, so named for their presumed origin in disordered nerves.

Around the turn of the 20th century, the neurologist Sigmund Freud published theories on the unconscious roots of some of these less severe disorders, which he termed psycho-neuroses. These disorders impaired relationships and work, or produced odd symptoms such as paralysis or mutism that could not be explained medically. Freud developed psychoanalysis to treat these “neurotic” patients. However, psychiatry, not neurology, soon became the specialty known for providing this treatment. Psychoanalysis thus became the first treatment for psychiatric outpatients. It also created a split in the field, which continues to this day, between biological psychiatry and psychotherapy.

Psychoanalysis was the dominant paradigm in outpatient psychiatry for the first half of the 20th century. In retrospect it overreached, as dominant paradigms often do, and was employed even for conditions where it appeared to do little good. Empirical evidence of its efficacy was scarce, both because psychoanalysts largely shunned experiments, and because analytic interventions and outcomes are inherently difficult to study this way. Nonetheless, many case reports alleged the benefits of psychoanalysis, and subsequent empirical research has tended to support this. (Really?)

By the late 1950s and early 1960s, new medications began to change the face of psychiatry. Thorazine and other first-generation antipsychotics profoundly improved institutionalized psychotic patients (really?), as did newly developed antidepressants for the severely depressed. (The introduction of lithium for mania is more complicated; it was only available in the U.S. starting in 1970.) State mental hospitals rapidly emptied as medicated patients returned to the community (the “deinstitutionalization movement”). (And now reside in U.S. jails and prisons nationwide.) Although a well-funded community mental health system never materialized as promised, psychiatric patients with varying levels of symptoms and dysfunction were now treated as outpatients, often with both medication and psychodynamic psychotherapy, i.e., less intensive psychotherapy based on psychoanalytic principles. (Grab that prescription pad and send them on their way.)

In 1980, the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, was radically revised. Unlike the prior two editions, which included psychoanalytic language, DSM-III was symptom-based and “atheoretical,” i.e., it described mental disorders without reference to a theory of etiology (cause). This was intended to provide a common language so that biological and psychoanalytic psychiatrists could talk to each other, and to improve the statistical reliability of psychiatric diagnosis. (The notion that “symptoms” picked from column A and column B, like a Chinese menu, satisfy scientific criteria for actual diagnosable “diseases, disorders, and behavioral annoyances” is laughable. The tragedy that has resulted, an epidemic of bogus multi-disorder diagnoses in skyrocketing numbers of American children and the subsequent “carpet bombing” with psychotropic drugs, is criminal.)

Patients were thereafter diagnosed by “meeting criteria” for one or more defined disorders. One result of this shift was that psychoanalysis and psychodynamic therapies were increasingly seen as nonspecific and unscientific, whereas pharmaceutical research took off in search of drugs that could improve discrete symptoms to the point that patients would no longer meet criteria for a DSM-III disorder.

Wow! All of this effort to appease a rift in the psych-psych industry and to generate obscene profit from psycho-drugs! And not a word about the “objective scientific reality” of conditions that afflict actual living humans. Oh, I forgot. This is a “socially-constructed” industry that cares little about human beings outside of $$$$$$$$$$$.

A new class of antidepressants, the SSRIs (“selective serotonin reuptake inhibitors”), was better tolerated and medically safer than prior antidepressants. The first of these, Prozac, was released in 1987. Shortly thereafter, new antipsychotics were released: “atypical neuroleptics” such as Risperdal and Zyprexa. Heavily promoted and with apparent advantages over their predecessors, these medications were widely prescribed by psychiatrists, and later by primary care physicians and other generalists. Psychiatry was increasingly seen as a mainstream medical specialty (psychiatrists became drug pushers for Big Pharma), to the relief of APA leadership, and public research money shifted strongly toward neuroscience and pharmaceutical research. The National Institute of Mental Health (NIMH) declared the 1990s the Decade of the Brain “to enhance public awareness of the benefits to be derived from brain research.” DSM-IV was published in 1994, further elaborating criterion-based psychiatric diagnosis. Biological psychiatry appeared to have triumphed.

Meanwhile, clinical psychologists championed the use of cognitive and cognitive-behavioral psychotherapies. Coming from an experimentalist tradition (the “rats in mazes” stereotype of academic psychology), clinical psychologists empirically validated the use of cognitive-behavioral therapy (CBT) for depression, anxiety, and other named disorders. Standardized therapy could be conducted by following a treatment manual; targeted symptom improvement documented success or failure. (All of this blah, blah, blah is based on non-scientific theories, surveys, subjective judgments and opinions: standardization is a “method of social contrivance” that ensures that every man, woman and child is diagnosed with “something”!) This empiricism (what a joke!) meshed well with the “evidence-based medicine” movement starting in the 1990s, to the further detriment of analytic and dynamic therapies. Whether treated by a psychiatrist with a prescription pad or a psychologist with a CBT manual (or both), emotional complaints were first categorized and diagnosed, and then treated by sharply focusing on the specific defining symptoms of the diagnosis. (Really? Symptom-based, subjective column A / column B “guessing” is the modern social-typical version of traditional folklore: toss the stones, dice or sticks; read the interpretation manual; give the “patient” some concoction with impressive side effects that “ensure belief” in the efficacy of the “poison”. Ignore the damage; declare “magic mission accomplished.”)

Notwithstanding the Decade of the Brain and lavish public and private investment, pharmaceutical innovation dried up in the 2000s. No new classes of medication or blockbuster psychiatric drugs were discovered. Moreover, previously unrecognized or under-appreciated side effects of widely used medications hit the headlines. SSRIs were implicated in increased suicidal behavior, and some patients reported severe “discontinuation syndromes” (withdrawal) when stopping treatment. Atypical neuroleptics were associated with a “metabolic syndrome” of weight gain, increased diabetes risk, and other medical complications. Adding insult to injury, the millions (billions and billions) spent on basic brain research led to no advancement in our understanding of psychiatric etiology, nor to novel biological treatments. And to top it off, pharmaceutical companies were fined repeatedly, and for huge sums, for promoting powerful, expensive psychiatric medications for unapproved uses. (It’s Big Pharma’s fault, not “ours.”)

The release of DSM-5 in 2013 garnered much controversy. Dr. Allen Frances, chair of the APA task force that oversaw the prior edition, criticized the new effort for its medical/biological bias (a joke!) and for expanding the scope of psychiatric disorders in ways that shrink the range of normality. Thousands of mental health clinicians and researchers signed petitions opposing the new edition for similar reasons. The NIMH declared it would no longer use DSM diagnoses in its research, because DSM definitions were products of expert (subjective, ideological, religious, ego-driven, etc.) consensus (warfare), not experimental data. Like psychoanalysis before it, the new dominant paradigm, psychiatry as a “neurobiological” specialty, had also overreached.

Psychiatry’s reputation suffered for it. Once the doctors for society’s hopeless and forgotten, later the subtle explorers of individual psyches, office-based psychiatrists are now too often viewed as mere technicians, attacking emotional symptoms with one prescription after another. Getting to know the person behind the symptoms is left to non-psychiatric therapists, obscuring the often close connection between medication response and psychology. (Blah, blah, blah) (Note the emphasis on “emotional symptoms” in all this, and not on medical mental illness, or personal cognitive deficits or irrational belief systems, or dysfunctional social environments, or the intense social stress imposed by American culture)

Healing the rift between biological psychiatry and psychotherapy was foreshadowed in the 1970s by George L. Engel’s biopsychosocial medical model and by Eric R. Kandel’s laboratory work on the cellular basis of behavior. (Kandel’s classic 2001 paper is well worth reading.) Even at the height of the medicalization of psychiatry in the 1980s and 90s it was recognized that unconscious dynamics affect the doctor-patient relationship, and that interpersonal factors strongly influence whether patients feel helped by treatment. (Is the psych “con” workable with certain vulnerable personality types? Yes!) It is time again to acknowledge that many outpatients, probably most, seek treatment not for discrete symptoms but for diffuse dissatisfaction, stormy relationships, unwitting self-sabotage, dissociative reactions, and other misery that cannot readily be reduced to DSM diagnostic criteria. The convenient fiction that people’s feelings can be distilled into a “problem list” is not so convenient after all.

The future of psychiatry can be neither “brainless” nor “mindless.” History points to many conditions once thought to be “mental” (general paresis, cretinism, senility, seizures, etc.) that are now known to be medical. Brain research is essential, as more such examples are sure to come. It is equally clear that we are nowhere near analyzing and treating human psychology at the neural level. (You can’t “treat” human psychology – psychology is a western belief system that designates all human thought and behavior as pathologic. We need to free human beings from this dangerous, life-stunting oppression) That may be possible someday, but for now any such claims are absurdly premature. The distinction between medical and psychological will likely become less sharp in the years ahead, as certain genetic or other biological differences will be linked to psychological vulnerabilities. Nonetheless, the uneasy tension between biological and psychological psychiatry will not end soon; we are better off embracing it instead of choosing sides. A robust psychiatry of the future will surely claim a wide purview, from the cellular basis of behavior, to individual psychology, to family dynamics, and finally to community and social phenomena that affect us all.

The profits will continue to roll-in as the psych-psych-pharma industry increasingly seeks and achieves domination over human behavior.


This post is about one type of “brain gadget”: a mini (minimal number of electrodes) wearable EEG with enormous appeal for narcissists who are also lazy!

If you are allergic to physics, go to discussion at bottom of post!

Muse Brain-sensing Headband

Meditation and yoga are terrific tools for reducing your stress levels and improving your concentration, but they can be tricky for a beginner to master. That’s where the new Muse headband comes in: It’s a headphone-sized piece of tech that fits around your head, measuring your brain’s electrical impulses to guide you toward nirvana.

How exactly does Muse do it? When you connect headphones to a smartphone running the Muse Calm app, you’ll be able to hear just how focused your brain is. As your mind slowly ceases to wander, you’ll hear increasingly soothing sounds like ocean waves or a gentle breeze. (Remember all those New Age meditation tapes from the 70s?) Each Muse session can be completed in just 3 minutes, allowing you to fit the tech into even the busiest of lifestyles.
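The “soothing sounds” mechanic described above is a simple neurofeedback loop: a computed “calm” score drives the audio. A minimal sketch of that idea follows; the scoring thresholds and sound names here are invented for illustration (the real app’s algorithm is proprietary and not described in the text).

```python
# Hypothetical sketch of a neurofeedback audio loop like the one the
# Muse Calm app describes. The thresholds and soundscape names are
# illustrative assumptions, not the product's actual values.

def feedback_sound(calm_score: float) -> str:
    """Map a calm score in [0, 1] to an ambient soundscape."""
    if calm_score > 0.8:
        return "birds chirping"   # reward for sustained calm
    elif calm_score > 0.5:
        return "gentle breeze"    # settling down
    else:
        return "stormy winds"     # restless mind, harsher audio

# As the user's score rises over a 3-minute session, the audio softens:
for score in (0.2, 0.6, 0.9):
    print(score, "->", feedback_sound(score))
```

The point of the sketch is only that the gadget’s “magic” reduces to a lookup from one noisy number to a sound file.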

PRICE? JUST $299. Not for poor folk! Guess we’ll have to reach Nirvana the hard way!

Some physics involved…LOL

Just in case you may be entranced by the idea of all your neurons firing “in sync” as a state of Nirvana, think again…

Data, data, data. Wowie, zowie tech. But what does it mean?

More physics (unavoidable) LOL

Again, fascinating tech, but what does it mean for the person who utilizes the gadget?

______________________________

Let’s try a description from a pop-tech website:

Discovered in 1924 by the German psychiatrist Hans Berger, electroencephalography (EEG) works by measuring, in real time, the differences in electrical field produced by neural activity. In traditional EEG testing, rows of electrodes are placed on a person’s scalp, with wires that hook them up to an amplifier that strengthens the waves being picked up, and a computer that records all of the data.

The data is presented on a graph in real time as the electrodes pick up the electrical field on the scalp. Scientists decode this data by analyzing the types of waves presented. (Actually, a computer program does this.) There are five different wave patterns: Delta, Theta, Alpha, Beta, and Gamma (least to greatest in frequency).
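The five bands named above are just labeled frequency ranges. A minimal sketch of the lookup, using common textbook boundaries (delta below 4 Hz, theta 4–8, alpha 8–13, beta 13–30, gamma above 30; exact cutoffs vary by source and are an assumption here, not taken from the quoted article):

```python
# Rough sketch: classify a dominant frequency into the conventional
# EEG bands. Band boundaries are common textbook values (they vary
# slightly between sources) and are assumptions for illustration.

def eeg_band(freq_hz: float) -> str:
    """Return the conventional EEG band name for a frequency in Hz."""
    if freq_hz < 4:
        return "delta"   # associated with deep sleep in sleep research
    elif freq_hz < 8:
        return "theta"
    elif freq_hz < 13:
        return "alpha"   # relaxed wakefulness
    elif freq_hz < 30:
        return "beta"
    else:
        return "gamma"

print(eeg_band(2))   # delta
print(eeg_band(10))  # alpha
print(eeg_band(40))  # gamma
```

Note that the classification says nothing by itself about what the subject is experiencing; it only names a frequency range, which is precisely the gap the “interpretation” discussion below is about.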

Just as a traditional EEG cap places electrodes all across the skull, headbands like InteraXon’s Muse Brain Sensing Headband work by placing sensors along the forehead and behind the ears. Once the headset is paired with its application, the electrical impulses that are read by the sensors are immediately visualized in the app.

NOTE: A traditional EEG uses multiple sensors distributed over the skull surface; mini EEG headsets generally place only 2 active electrodes, over the frontal lobes. What about the rest of the brain? Visual processing, etc.?

These neural patterns picked up by the electrodes are then used by researchers to analyze cognitive behavior. (A can of worms is opened here.) For instance, in sleep research, researchers will look for delta waves to see how deeply a patient is able to sleep. Likewise, they will look for higher-frequency waves such as gamma or beta waves to check if the patient is still in REM sleep.

With its noninvasive method of use, this technology allows scientists and physicians to record when and where a particular activity has taken place in a subject’s brain. From these findings, they are then able to interpret how the subject was feeling during a particular conversation: were they bored and unresponsive? Engaged and thinking critically? Were they focused on the conversation or task without any interruptions?

____________________________________

On the subject of interpretation:

Are these “interpretations” valid as true representations of what is going on in the brain or how it works globally? That is, psychological concepts / word concepts such as boredom, or the process of “thinking critically,” are subjective opinions on the part of the humans doing the interpretation. (Just ask a variety of people what “critical thinking” entails! LOL)

How do researchers arrive at these “interpretations” (meanings), when “brain waves” are ranges of electrical frequencies (physical properties), but the “assumed meanings” (interpretations) of the brain waves are “embedded” (a locked-and-loaded hypothesis) in the activities the subject is tasked with, such as viewing images of faces contrived to “express” various emotions that have been pre-determined by the researchers?

These are entirely different categories (measurable, quantifiable electrical activity vs. subjective experience, opinion, and abstract process), “connected by” sweeping conclusions that are drawn “as if by magic”.

Structuring “experiments” in order to find confirmation of what you are seeking in your “locked-and-loaded hypothesis” is not valid. This is not to say that the actual physical data collected is not valid: whether someone is asleep is at least observable (by the common definition of sleep), but one will not find “bored” or other subjective states stamped on a part of the brain or mysteriously “hidden” in electrical activity.

This projection of “human concepts about brain functions” (which are invented by certain human brains) is evidence of magical thinking. The assumption that “the brain chopped into parts” equals the actual organization of the brain is baseless. The leap from “human language-generated concepts” (such as emotion words or other mental states like boredom, which are identified by word descriptions learned by individuals during socio-cultural indoctrination) to direct correlation with a “pan-human” brain design (that problem with “normal” again) reflects the imposition of anthropomorphism on the human brain itself!

Think about it!

_________________________

back to:

With the frontal cortex being the primary location for problem solving, judgment, and impulse control, it makes sense why the Muse Brain Sensing Headband has sensors placed along the forehead. (It’s also much cheaper and takes far less computing power.)

While it may not be (is not) able to get an accurate read on the brain as a whole, it is able to track the activity of the frontal lobe, where our ability to control focus is located. Thus, this headband can strongly aid in training the frontal cortex to react more calmly to impulse and think through actions rationally with a more focused mindset. (Again, we have some “magical transformation” taking place, which is in reality a subjective illusion due to the highly suggestible social human act of obedience to “authority.” That is, the user, having spent $299.00 for this techy-looking, futuristic “headband,” will dutifully believe that wearing the object 3 minutes a day, randomly, for maybe a week or two, will “work magic” on his or her brain, making the brain more “focused and intelligent,” and superior to their social competitors…)

Consumer EEG systems did show a significantly more convenient and faster setup, which is optimal for their intended use in entertainment and self-help applications. However, their data quality was overall negatively affected by artifact susceptibility associated with the dry electrodes. As expected, data quality was particularly diminished during EO (eyes open). The lack of impedance-testing capability, and placement over the frontal region, which is particularly prone to eye blinks and muscle movement with eye opening, also likely contributed to this artifact. Additionally, the assessment performed by consumer EEG systems is by its nature limited, confined to the only anatomical brain region covered by the few channels, precluding multi-network evaluations.

NOTE: This so-called therapy is not approved by the FDA to “treat” Autism, and yet it is being promoted for treating ASD / Asperger types.

Keyword: ka-ching

From the website: About Us

The Center for Integrative & Innovative Therapies (The CIIT Center) is all about the intimate relationships among the patient, doctors, and therapists, all working together under one roof! What sets The CIIT Center apart from other facilities is that our doctors and therapists look beyond the obvious symptoms that our patients are experiencing and they use innovative diagnostic tests and integrative therapies to treat the core, underlying health conditions and not just the symptoms. That’s when your personalized treatment program begins….

The CIIT Center believes in Integrative Medicine as the core to our treatment protocol and we treat the patient as a whole person, including all aspects of their lifestyle. We treat each patient with an integrative approach that includes a customized treatment plan of various therapies. Our healthcare professionals work in unison to develop an individualized health plan all in one facility.

We’ll find mysterious imaginary things wrong with you; trust us. We have our own “doctors” who will make sure you are diagnosed with “something” that we can turn into $$$$$$$$$$$$$$$ .

What is TMS?

Transcranial Magnetic Stimulation (TMS) is a drug-free, painless, non-invasive therapy that uses magnetic pulses to stimulate your brain. It is FDA approved for patients who are suffering from Depression and have had poor results from antidepressants. If you don’t have depression and/or are not taking antidepressants, this so-called therapy is not approved by the FDA for you. Of course, these nice people will “find someone” (their doctor) who will diagnose your non-existent depression.

How does TMS work?

TMS works by sending magnetic pulses into your brain. It is non-invasive and doesn’t involve any drugs or sedatives. During Depression, there is an area of your brain (Dorsal lateral prefrontal cortex, a.k.a. DLPFC) that is electrically under-active. By stimulating the nerves of the DLPFC, one may experience improvements in both mood and Depression symptoms. These improvements have the potential to lift one out of Depression and decrease one’s reliance on antidepressants or other mood-related medications. How magical! How non-committal. How simple! Of course, if it doesn’t improve your complaint, we still get paid!

Note how quickly this topic pops up! Is TMS covered by insurance?

You are diagnosed with Major Depressive Disorder (no problem; our staff doctors can do that!) AND

You’ve tried at least 2 different antidepressants along with psychotherapy and haven’t had success. (Since the failure rate of these two “treatments” is very high, and improvement unverifiable, almost anyone who has “tried” these will qualify.)

Medicare will cover TMS Therapy if you meet the above criteria.

Other insurance companies can be very specific when covering TMS Therapy. If our doctors think you are a potential candidate for TMS, we will contact your insurance company (apply pressure, and negotiate a deal?) to verify that they will cover your TMS treatment. Coverage varies per insurance company and it can take some time (days to weeks) to receive approval. We suggest making an appointment as soon as possible so that the process can get a head start. Call us at xxx-xxx-xxxx to make an appointment or learn more about TMS.

We also offer TMS Therapy for other conditions such as Anxiety, Traumatic Brain Injury (TBI), Concussions, Obsessive Compulsive Disorder (OCD), and Post Traumatic Stress Disorder (PTSD). However, treatment for conditions other than Depression is considered off-label and insurance will normally not cover the cost. As a result, payment will likely be out-of-pocket. However, we do offer payment plans with discounted rates. Please call us at xxx-xxx-xxxx to discuss or to make an appointment with a doctor.

How long does TMS Therapy last?

Your first session of TMS will last about 60 minutes: 20-30 minutes of preparation and 20-30 minutes of actual treatment. Subsequent sessions will be streamlined and should normally last 20 minutes, though the length of your session may vary depending on the doctor’s orders and the conditions we’re treating.

In order for the treatment to be effective, we ask patients to be available for 5 days a week (3 days minimum) for at least 6 weeks. When streamlined, each session usually lasts only 20-25 minutes per day. A full TMS course typically consists of 36 treatments, with the final 6 treatments spaced out. COST?
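The clinic’s numbers are worth adding up. A back-of-the-envelope sketch, using only the figures stated above (36 treatments, roughly 20-minute streamlined sessions, 5 days a week, final 6 treatments spaced out); the per-session cost is deliberately left out, since the clinic never states one:

```python
# Arithmetic on the TMS course as the clinic describes it.
# All inputs come from the quoted text; no cost figure is assumed.

sessions_total = 36
minutes_per_session = 20          # "streamlined" session length
days_per_week = 5

core_sessions = sessions_total - 6          # final 6 are spaced out
core_weeks = core_sessions / days_per_week  # matches the "6 weeks" claim

chair_time_hours = sessions_total * minutes_per_session / 60

print(f"core phase: {core_weeks:.0f} weeks")          # 6 weeks
print(f"total chair time: {chair_time_hours:.0f} h")  # 12 h
```

Thirty-six separate billable visits for about twelve hours in the chair: keyword, ka-ching.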

What can I expect during TMS?

Your first session of TMS will involve a technician taking measurements for your personal TMS cap and asking questions pertaining to your health. After the measurements are done, the technician will take a motor threshold (MT). The MT is used to determine the intensity of your treatment, based on an involuntary motor reflex. After preparations are done, you will sit in an adjustable chair and be able to watch TV during treatment. The coil will then be placed against your head and treatment will begin. The machine will provide a “heads-up” beep before each pulse. Some patients report that the pulses feel like someone “tapping” on your head rapidly while others report involuntary facial twitches. The treatment will continue to run for its prescribed time, after which the technician will remove the coil and your cap and you will be free to continue your day.

Well! Sit in a comfy chair; watch TV; wear a little cap; how fun! Think of the endless blah, blah, blah social mileage you’ll get out of this adventure! It’s just like going to the salon-spa to get your hair styled. In fact, just think of it as a “styling treatment” for your brain!

Will these supposedly high-tech treatments someday join the catalogue of electro-magnetic gadget quackery?