There are a few points I want to make in response to that. Some I've seen elsewhere. Some, less so.

What would you do if a kid was vocally stimming, with their natural voice, and you thought that was impeding their communication? Still not taking away their voice, right? Even if you think they're doing something noncommunicative with their voice, you're still taking away their voice in that example. Never means never. (This is mentioned in the PrAACtical AAC post, but it was also my immediate gut reaction.)

Or what would you do if you heard me stimming with my AAC device? Cause yeah, I'm an adult and you know I can communicate and all, but I do that sometimes. Would you consider taking my device? I'm kind of assuming it's a no there because the idea that you might try is a bit too scary for me to look at right now, but why wouldn't you do that to me, if you would to them? (This is somewhat an explanation to my immediate gut reaction.)

Keep in mind that communicative echolalia is a thing. In my experience ... yeah, sometimes repeating words or sounds because it feels good is a thing but there's often a meaning. (pickles pickles pickles pickles pickles resulted in my getting pickles, in college. It was also stimmy, as a side bonus.) For those looking for citations on the communicative functions of echolalia, Barry Prizant did some work on that (Prizant & Duchan, 1981; Prizant & Rydell, 1984). I don't trust him on the whole, remember my reactions to Uniquely Human, but communicative functions of echolalia is a useful thing he did.

Echolalia, repeating words and phrases, is also how a lot of autistic people learn language in the first place. The thing that is how we learn language is not actually a barrier to communication, and if this is what's going on, your assumption that it's a barrier to communication is just wrong. Do not pass Go. Do not collect $200.

Also, is the babbling stage a thing with AAC use? Cause it usually is with oral speech and it's not successful communication yet but it has to happen in order to get to successful communication later. Exploring language and using it in unexpected ways is part of learning language. (This shows up in the PrAACtical AAC post.)

Stimming is great. I am usually stimming in some way. It's not usually vocal because that's just not what tends to work for me, but I am usually stimming. Hence, fidget spinners and blanket pieces. The fact that a person is, in fact, stimming does not mean you should stop them from doing whatever it is they're doing to stim. Suggesting alternative ways of stimming can be OK under some circumstances, but seriously, "they're stimming" doesn't mean "they should stop." Similarly, "it's echolalia" doesn't mean "they should stop."

Thursday, November 30, 2017

One of the big things with augmentative and alternative communication devices is that you're not supposed to take the device away from the person who uses it. The idea that you don't do that came up in the AAC class I'm taking this semester. The reason it came up is a bit different from the visceral "that's how I talk, wtf" reaction I have as a part-time AAC user, but it came up.

The video was "AAC in the Classroom for Students with Significant Disabilities: A Progression Strategy From BIGmack to SoundingBoard and Beyond!" It can be found on AbleNet under AbleNet University webinars; registration required but free. This quote led me to respond.

“The only time they get a voice is when you give it to them. You need to leave the device with them so they start learning self-control.”

I suppose a student could have a self-control issue? Here's the thing: you have no way of knowing whether that's the issue if taking away the device has been a thing, because a person's natural self-regulation doesn't apply so well under scarcity, even if they already have the ability to regulate themselves. It's not just about regulating myself -- it's also about not knowing if the thing will remain available. If I think someone else might finish the chocolate cake before I get any, I'm going to go for it even when I'm not quite as hungry (and haven't had quite as much of the healthier options) as I would be if I knew it'd still be there when I wait. The same principle applies to talking: say everything you can, while you know you can.

Scarcity over time absolutely can mess up any self-regulation that's been learned, too. Even if teaching self-control is a concern here, it's not always so much, "leave the device with them so they learn self-control." Sometimes it's, "leave the device with them so you don't destroy whatever self-control they have."

That's all beside my main issue: I've never heard anyone use the need for a speaking person to learn self-control as the reason they don't tape this person's mouth shut. Most people seem to get that taping someone's mouth shut is not OK. (Most, not all. In the context of really nasty abuse, it happens, and be warned if you decide to look at the details.) Most people don't need a self-control argument in order to understand that taping someone's mouth shut is unacceptable.

An argument about the need to teach self-control shouldn't be needed here, either. If we have to consider teaching self-regulation (a useful skill, to be sure!) as an argument for why we shouldn't be taking away a person's communication access, things have already gone badly wrong.

Monday, November 13, 2017

Under most circumstances where people would expect me to report pain, I display a really high pain tolerance, or threshold, or probably both. I've mentioned a couple times that I went hiking on a freshly broken foot, and that I was fine. When I got attacked by an ~800lb Old Spot pig, I had a cantaloupe-sized lump on my thigh, and I also went on rides at the Fryeburg Fair that afternoon. (I got mauled on a Tuesday. I rode my bike to school that Friday. There was no pain medication involved.)

Then there are other kinds of pain where I ... don't have that kind of tolerance.

The key is how my sensory processing issues are involved in the pain. (I'm not certain that it's possible for my sensory processing to be uninvolved in any sensory experience I have, ever, and it's probably related to the high pain threshold at the least.)

If my sensory processing issues (and I'm OK calling these ones issues, they hurt) are the cause, my pain tolerance is super low. Camera flash? Comparable to getting hit in the face, because I kind of am getting hit in the face. Ow.

If it's an injury pain, I've got a high pain tolerance. I'll probably notice that I did something, because I was presumably aware when the injury happened. (It took a while to figure out that my leg had an issue after I passed out in the pool because I was not conscious when I hit my leg on the edge of the pool.)

If it's an illness pain, there's a significant chance that I won't notice. It's not unusual for me to figure out that I'm sick based on a more behavioral cue. Why am I shaking? Why am I shivering? Why can't I talk? I just threw up so I guess I'm sick.

The gist of my thoughts is: dear medical professionals, I'm not a sick or injured NT. Overwhelmingly, I've had medical professionals assume that if I'm not presenting the way they've been taught people with a given issue present, that means I'm not having that problem. This has led to them missing broken bones. So, the first thing I want to talk about is pain.

No, I can't rate my pain on a scale of 1-10 for you in any useful way. The only pain scale I've ever found that made any kind of sense to me put hiking on a freshly broken foot at a 2 or a 3. I am reasonably certain that hiking on a freshly broken foot would not be a 2 or a 3 for an abled neurotypical human, but it was for me.

Yes, I understand that you need a pain scale number for insurance. Figure out what the problem is, or take a reasonable guess based on everything except my reported pain level, then take a guess at what the number should be. I really, really can't. It's a waste of both our time to try to get a number from me.

If you think or were taught that a person with a given injury or condition "can't" do something, think about why. Is it a structural issue, where a joint literally won't take the weight, or is it supposed to hurt too much, or is it supposed to take too much energy? If it's pain, there's a very good chance I can do it anyways. See again: hiking on a freshly broken foot. Yes, I'm still annoyed at the doctor who concluded the image that looked like a month-old broken foot (because it was a month-old broken foot!) couldn't be a month-old broken foot because I'd been walking on it.

Somewhat related is hypermobility and hypermobility related injuries. It's related because hypermobility often comes with chronic pain. My pain sense isn't reliable enough to tell you if I have hypermobility related chronic pain or not. (Oops.) It also comes with an increased likelihood of dislocating and subluxing things, and often a ridiculous range of motion. (Said range of motion could get reduced by injuries. It happens.) Related to my hypermobility:

"Normal" range of motion and my normal range of motion are two different things. Don't assume I'm fine because my range of motion is strictly greater than this "normal" range of motion, and realize that things are very wrong if that's all I've got.

Things you think a person needs to be really loose to do? I can probably still do them while very tense. A friend of mine who weighs over 250 pounds recently wound up standing on my back in an attempt to break up the tension there. It just barely worked. I still had my normal flexibility like that. Don't assume my muscles aren't all knotted up just because I'm still flexible.

If I dislocate something, I can probably put it back into the joint myself, and I probably didn't tear anything. That means I'm going to recover from it way faster than someone without hypermobility. However, it may still hurt, and it was still a dislocation. It especially will still hurt if my joint sat wrong for a while. (Also, my hips will hurt every time, right away. I recover really fast, but yes I need a moment.)

And of course, communication. I'm Autistic. I don't communicate like a neurotypical person would.

I am going to be precise. If your recap is even slightly off, I will correct you. Considering that y'all have managed to forget things like, "Alyssa isn't being sedated" despite being reminded many times, and y'all have also managed to misunderstand "I don't trust my pain sense to tell me if it's broken or not" as "Not broken because the patient can walk," I'm going to keep doing this. Cope. Or even thank me because I'm keeping you from making medical errors!

Written communication is better for me than spoken communication. I can be much more precise in writing. If you give me the questions that are going to be asked ahead of time (or believe that I have some idea - I do!) I can type up answers and get them to you. That way, you don't need to worry that you've tripped a script in the initial description of whatever my issue is.

I have a lot of scripts. "Fine thanks and you?" is a script. Unless this is my annual physical, it is also inaccurate.

Phones are bad. Let me book online, or via email, or in person before I leave the prior appointment. Do not demand I book over the phone. I can't.

Wednesday, November 1, 2017

I wear many metaphorical hats. I'm a teacher. I'm a published poet. I'm a disability studies scholar, affiliated with a university but not for disability studies. I'm a graduate student in neuroscience. I'm an Autistic advocate, and not only a self-advocate (advocating for myself is often harder than the general stuff.) I'm a blogger.

Always, I am all of these things (and a bunch of other things). Sometimes, I get the opportunity to combine them. I've been blogging for GradHacker, part of Inside Higher Ed, since the start of the calendar year. That's for writing that's relevant to graduate students, or about graduate school.

Even though I know disabled graduate students exist, and disabled professors exist, and anyone teaching will eventually have disabled students, I've worried before every disability-related pitch I've made to them. Is it a topic that anyone outside disability communities would care about? Do they have enough background to understand the issue even if they care? Will the editors go for it, even if the audience would find the post useful?

It's far easier to talk about something like my discussions with my union, where my example "just happens" to be about disability. I know graduate assistants unions and contracts are widely relevant. I know "read your contract!" is good advice for any graduate assistant. One of the reasons I give is about knowing where I go for my accommodations, but it's not the only one I give. There were all of three disability posts on GradHacker before I started blogging for them, so far as I can tell. Breaking that pattern was a bit nerve-wracking. (Three of the posts I have up for them are explicitly about disability, and all but one at least references it. Seems like a lot, but I said I could bring a disability perspective when I applied and they took me so they kind of asked for it? That's what I tell myself, anyways.)

The disability series I'm writing for GradHacker now didn't start out as a series at all. It started with a post I'd had the idea for, and then suddenly couldn't not write. That's how a lot of my writing happens, actually. The disability stories I'd heard over the course of my time at university, either from professors or from other students, scare me. A way of explaining the pattern came to me, and I had a post. I was about to post it here, and then I realized that the GradHacker audience was the one that really needed to see it. They're reasonably likely to be teaching college later, and they might be doing so now! That became one of the most commented-on pieces on GradHacker, because I "spoke" up. (Maybe the most commented on. Definitely the most commented on since I started blogging for GradHacker, almost by a factor of three.)

Now it's going to be at least four posts: one about disability stories, one about using AAC as a student, an upcoming one about the accommodation talk as a student (written, but not scheduled to post until late this month), and one an editor suggested to me about disclosure as a disabled teacher.

I'm talking to academia, or the future of academia, about things that directly affect me as an Autistic graduate student. Some people might even be listening. I hope so.

Thursday, October 19, 2017

This semester, I'm taking a class about Augmentative and Alternative Communication (AAC). There are videos. I do something like liveblogging while watching them, just into Open Office. Now the results are here.

Thursday, October 5, 2017

["Somewhere on the spectrum" here is "somewhere on the autism spectrum," or variants on the theme of claiming everyone to be a little bit autistic.]

Autism diagnosis can be pretty arbitrary. There isn't a blood test. There are genes that are associated with an increased probability of being autistic, but that's not the same thing as a gene "for autism" or a genetic test. We don't really ask about the internal experience of being autistic, either. Instead, we basically have a behavioral diagnosis: if you do X, Q, and W, but not A or B, then we're going to conclude that autism is the proper label. C makes us wonder if you might really have some other thing, but we won't rule out autism if you don't meet the rest of the criteria for that other thing. (Or, we shouldn't.)

Since autistic behavior is a subset of normal human behavior, this gets messy. Autistic people might tend to stim in characteristic ways, but everybody stims, and sometimes we're just getting in extra trouble for a way of stimming that is actually pretty common. Think about fiddling with a pen or pencil as an example of us getting in extra trouble for something most people will sometimes do.

That means edge problems. Where, exactly, are we putting the line between two neurotypes? The location of the line changes when we change the diagnostic criteria - that's always a big topic of discussion around DSM changes. Telling people who seem to be near an edge that they are definitely on one side or the other of that edge, based purely on external behavior, will lead to mistakes. Some of these mistakes will be harmful.

Any categorization scheme dealing with people has to deal with the reality that no two people are exactly alike. Not every single person is easy to classify. Our nervous systems didn't read the textbooks while wiring themselves! There are people who fit equally well (or equally poorly) in several categories. The problem there is with the textbooks, and the inevitable incompleteness of categorization systems. MASSIVE harm is done when people treat the problem of not fitting the classification system as being with us instead.

Oh, and let's not pretend that everyone diagnosing autism (or any other neurotype) actually understands the neurotypes they're diagnosing. Plus there's problems from people taking advantage of their positions of power, or otherwise acting in bad faith. Sometimes, things are intentionally done wrong.

Now, all of these issues are real. I've seen people use some combination of these issues to argue that everyone is somewhere on the spectrum, and that's where the problem is. "Some people are hard to classify" doesn't mean "everyone is hard to classify" or "everyone is somewhere on the spectrum for neurotype Y." On a similar note, "The person who diagnosed me incorrectly with X didn't understand X or my actual neurotype of Y" is different from "X doesn't really exist" or "Everyone is really Y." In each of those cases, the first statement is true. The second and third statements are not, and actually look a lot like diagnostic arbitrariness themselves. (They can certainly hurt people in similar ways to diagnostic arbitrariness around the edges of definitions.)

Tuesday, September 26, 2017

Yet another article about screen time is going around. I swear, those things are everywhere. This time it's Temple Grandin (who gets touted as being an autism expert in general when she's actually an expert in livestock, like cows*) talking about limiting screen time for autistic kids. She's actually more nuanced about it than most - the headline says screen time, and she says it once too, but she does specify what "kind" of screen time she means. Most people don't.

So, here's a bunch of things that get lumped under screen time:

1) I have an ereader. I am reading a book (or a paper related to my graduate studies). On a screen!

2) I'm watching a movie. On a screen!

3) I'm playing Pokemon Go, which involves a lot of walking around, but also it's a cell phone game. On a screen!

4) I'm playing a computer game. On a screen!

5) I currently can't talk, so I'm using FlipWriter on my iPad to communicate with my classmates. On a screen!

6) I'm teaching math. It's an online class, which is great because my ability or inability to speak at the time is irrelevant. My "accommodation" of getting to write or type instead of talking, when needed, is already built in to the system. Still. Where am I doing this? On a screen!

7) I'm using the Internet to talk to a friend who lives across the country. On a screen!

Which of these am I supposed to be limiting? Why are we using one category for all of them, if the answer isn't all of them?

Or, which of these will you admit to having a problem with, versus which ones would you actually like to get rid of? Because I think that's part of the why. If you build a category full of things that you don't like, including some things that it's considered OK to take issue with (video games!), you can get away with talking about the whole category as a problem. Build up the apparent size of the "problem" by including numbers from the parts you need to at least pretend are OK (maybe AAC? maybe online classes?), talk about supposed bad effects from one item in the set (video games?) as if they came from the entire set, and then there's clearly a big problem. Ban or limit the whole category.

2) Abled people call the thing distracting, because our existence in public is apparently distracting.

3) The thing is either banned entirely or permitted only for people with the paperwork to prove they need it for disability reasons.

4) Disabled people who need the thing either don't have access to the thing or must out themselves as disabled in order to gain access. If outing oneself is required, the thing is heavily stigmatized.

Instead of being banned because it's distracting to others, it's apparently distracting to us? In any case, the thing is banned or limited "for our own good."

Then what happens?

Whoops, no ereader for you unless you can prove you need it for a disability reason and are willing to out yourself. Spend the money and the space on those paper books! Who cares that they're harder to hold up, or that the electronic version is searchable?

Whoops, no more movies! (You know, storytelling? Acting? It's on a screen, though, so we can't have that.)

Whoops, no more games on a screen! Never mind that some of them involve walking and most of them involve problem solving and that fun matters on its own.

Whoops, no AAC for you unless you have formal documentation of the fact that you need it and are willing to out yourself. Better go back to being silent in class, or maybe not going to school at all! It's distracting to have you here, after all. Or you could try this low-tech system? (Which, to be fair, is most of what I use. Doesn't mean it's OK to make me stick to the low-tech options in the situations where my high-tech, screen dependent options are better.)

Whoops, no more online classes. (Temple actually made this one an explicit exception, so, again, tiny bit of kudos for the nuance, but don't say screen time unless you actually mean screen time because words have meanings.)

Whoops, no more friends who live far away! Pay attention to the jerk in front of you who thinks screens are the devil.

You only had to admit to taking issue with the video games, but now all this is gone, because you could point to something that many people will take issue with and generalize it beyond any semblance of accuracy.

*I'm sure she's an expert in what works for her. She basically got pushed to "pass" for neurotypical, which is still what mainstream experts tend to think of as being the "optimal outcome" for autism but is often a recipe for burnout. Now she recommends stuff that makes it sound like she agrees that's the best thing. She also led to the popularization of the idea that "autistic people think in pictures." As an autistic aphantasiac (no mind's eye), I'm well aware that's not consistently the case. So, no, I'm not a fan of Grandin.

Thursday, August 10, 2017

So, a few weeks ago I met with two folks from a company that's making a computer game or a video game related to autism and social skills. I agreed to meet with them for a couple reasons:

The one I'd met before, I met at a hack-a-thon-like event (un-hack-a-thon?) that was autism focused and had many autistic participants, mostly teenagers, and which used Nick Walker's description of autism as a starting point. Starting from a neurodiversity paradigm description of autism is nice, and not something I see much of for technology and autism stuff.

The one I'd met also liked the "Autistic Party Giraffe" shirt I was wearing. I find that people's opinions on that shirt are somewhat useful information: folks who comment on liking it are generally able to handle the idea that Autistic identity is a thing without too much worldview conflict.

They clearly didn't quite know what "supporting autistic people in finding social methods that work for us" would really mean, but the couple ideas I'd thrown out at Chatter went over well. Things like, if we can get more done by not trying to pass for neurotypical, why the heck is passing for neurotypical considered an optimal outcome? (See Dani's "On Functioning and 'Functioning'," yet again.)

So, I did the thing. It was exhausting. We met at a coffee place between my campus and the train station on a Friday morning, and we talked for about two hours. They said at the time that what I was saying made sense, and that it changed their perspectives, and now they needed to figure out how to navigate the tangled mess of doing something actually helpful with their game while also getting the needed funding to make the darn game.

One incident that sticks out for me was the demo video of the game. They brought a laptop, and there was a minute or two of gameplay video that I watched. When it first started, there was a big face and eyes right at me. I flinched. Unexpected face in my face! Then there were points where a player was supposed to recognize the emotion that this being was expressing. The emotions were clearly overacted, both in terms of facial expressions and tone of voice. This was supposed to be some sort of "easy" mode, I guess? Whatever. I could tell it was overacted. That didn't mean I could always tell what emotion was being overacted. (Yeah, I got some "wrong.")

Judging by their reactions to my reactions (how meta theory of mind can we go here?), it seems I served as an object lesson there:

Identifying that an emotion is being expressed is not the same as identifying what that emotion is.

Managing OK in real-life social situations is apparently not the same as recognizing overacted emotions in artificial settings.

Some autistic people will absolutely flinch from unexpected eye contact. Ow.

Monday, July 31, 2017

Heads up that this is about the current US government, including the POTUS. Meaning: Everything is a mess.

Every time that several bad things are happening at once, call them R through Z, I see comments like this:

Don't worry about X, it's just a distraction (from Y)!

Z isn't a real threat, it's just a distraction (from R).

They want you focused on S instead of all the other stuff, don't fall for it!

Here's the problem: all of R through Z are legitimately bad. Every single one of them. They might not affect you personally, but they are all bad. Some are foreign policy disasters. Some are complete failures of how our government is "supposed" to work, and not in ways that would help marginalized folks. (A massive change in how policing is handled could be great. Encouraging brutality in arrests is not the massive change that could be great. It's taking the status quo and making it even worse.) Some are fairly blatant attacks on one group or another. (Taking Medicaid apart will get disabled people killed or institutionalized. See also: why ADAPT has been protesting at pretty much all things healthcare.)

These aren't distractions. To borrow a term from the Internet we rely on so heavily, it's a distributed denial of service (DDoS) attack. The idea behind a DDoS attack is that a person or group sends so many requests to a server at once that the server crashes and loses most or all of the requests, making whatever site it's supposed to host unusable. Think of all the bad things happening as requests - you want to do things about them, hopefully. Think of yourself as the server - you have a limited capacity to handle requests, or a limited capacity for issues to take action on. If you try to take action on all of them, you'll get overloaded and quite possibly handle none.

That's precisely the idea behind a DDoS attack. Overwhelm the server (you, in this metaphor) and it can't do anything. For actual servers, there are a variety of ways to handle it but no perfect solutions, because a server that can't respond to requests for information isn't much of a server at all. For us, any one person clearly can't pay attention to every single issue. This isn't a call for you to focus on more things at a time. (That sounds like a bit of a contradiction, since to focus you generally need to narrow things down.)
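The overload dynamic in the server metaphor can be sketched in a few lines of Python. This is a toy model, not real networking code: the capacity numbers and the assumption that each unit of overload costs a unit of throughput are made up purely for illustration.

```python
def issues_handled(incoming, capacity):
    """Toy model of a person (or server) with a fixed capacity for issues.

    At or under capacity, everything gets handled. Past capacity,
    thrashing sets in: effective throughput falls as overload grows,
    a crude stand-in for an overwhelmed server -- or an overwhelmed person.
    """
    if incoming <= capacity:
        return incoming
    overload = incoming - capacity
    # Assumed cost model: each unit of overload eats a unit of
    # throughput, floored at zero.
    return max(0, capacity - overload)

# A person who can realistically track three issues:
print(issues_handled(2, capacity=3))  # under capacity: both handled
print(issues_handled(3, capacity=3))  # at capacity: still fine
print(issues_handled(5, capacity=3))  # overloaded: thrashing
print(issues_handled(9, capacity=3))  # flooded: nothing gets done
```

The shape of the curve is the point: trying to take on a few more issues than you can hold doesn't just drop the extras, it degrades everything, which is exactly why "pick a few and trust others to cover the rest" beats "worry about all of it."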

So: you can't focus on every single issue at once. You still need to focus on a few issues, or even just one. That's fine. The difference between understanding all the bad things happening that aren't your personal focus as distractions and understanding them as part of a DDoS attack is what happens when you encounter another person who is focused on a different set of a few issues. If those issues are distractions, their focus is a problem. If those issues are part of a DDoS attack, their focus is great. You want to know that other people are covering these other issues! Splitting up the issues between different groups of people so that everything gets covered even though you don't cover everything is the best way we have of responding if all the issues are real.

And what about things like foreign connections and the whole Russia mess that we know Trump doesn't like to have talked about? Noticing what news tends to come with increases in the DDoS onslaught is still useful. That's the news that they really want to make sure gets lost because we're too overwhelmed to deal with it.

Saturday, July 29, 2017

Yeah, yeah, I know, I've talked about this before. Assuming I caught all my prior posts, this is the sixteenth time I've talked about language choices for autism, though this one isn't quite the same as the others. It’s coming as the result of a good discussion that helped me clarify thoughts I'd been having rather than the result of someone insisting my language choice is wrong because they were taught so.

And history is the key to my current thoughts. Every way I can think of to identify myself as Autistic or as Queer has history. Usually as a slur, in the case of Queer identity - Queer itself is an example of this. “Autistic” as noun? It’s part of the dehumanizing nonsense that got person-first language started in the first place.

Person-first language, or “person with autism”? Yeah, it started in a good place, where people with disabilities, mostly intellectual or developmental disabilities, decided that they wanted that language to emphasize their personhood. Professionals were (frankly often still are) forgetting that we’re people. Said professionals picked up the language. They didn’t pick up the intent: remember that we’re people. At least in the case of autism, and probably for other disabilities, they picked up a completely different idea: that the autism or other disability is somehow separable from the person, and there’s a “normal” person underneath. That’s a history I want nothing to do with – don’t call me a person with autism. Also, if you need a language construction to remember that I’m human I don’t want you anywhere near me. I don’t. I’m not sorry.

“Differently abled”? Technically true, I guess. It’s another one where there may have been good intentions originally – recognizing that we have abilities that typical people may not have access to, and that this can be a direct result of our disabilities. (Or our different abilities?) It gets used as a way to ignore the realities of disability, of access barriers, and sometimes of the reality that there are things we just can’t do.

“On the spectrum”? It’s been touted as a compromise solution to this language debate. Mostly by professionals who think “person on the spectrum” is less euphemistic than “person with autism” and by people “on the spectrum” who are willing to be tokenized, as far as I can tell. It’s not only unclear (there are many spectrums), but also still a person-first construction. That’s not a compromise! But folks insist it is one.

“Aspergers” or any variation thereof? 1) False. Literally does not apply. 2) When it was a diagnosis in the DSM, it was frequently applied to mean “high functioning” or to avoid scaring people with the “autism” label. It ties in with aspie supremacy, and that can kill. No way. That’s not just a history I don’t like. That’s a present I find morally reprehensible.

Now, I need to find a way to talk about who I am, what my experiences are as an Autistic person. I need to use language that will be understood. Making up new words is a valid option. It’s where new language comes from. I use plenty of words that were created in my community. But take a look at the history behind some of the words I said I have issues with. Some of them started in my community, or communities like mine. Then they got picked up by folks who want to pretend that the difference isn't quite real, isn't important, or can somehow be separated from the person (maybe it needs to be, in order for the person to count as a Real Person). Even language that could be good has this happen. Then there are the reclaimed slurs. (A lot of the language around Queerness is of the reclaimed-slur type.) Just about all the language has problems of this sort. At this point, reasonable people can reach different preferences based on which bad history, which bad associations, which ones they are willing to tolerate or reclaim for the sake of being understood.

Now, I am of the "queer as in fuck you" school of thought for most of my divergences[ii]. Disability is a word that scares people. The “good intentions” behind folks saying they don't see me as autistic, or as disabled, are an indicator of how much disability is seen as a Bad Thing. Making people face the scary concept is actually an argument for using capitalized, identity-first Disabled and Autistic in my case. Folks can sit with that particular discomfort, and if they tell me they don't see me that way or that I shouldn't call myself that, they're getting asked 1) why they think their idea of me trumps my own, and 2) why they think they know better than I do what I should be called. If my identity is so uncomfortable for them that this is taken as an attack, we’ve got a big problem.

[i] That would totally be enough for me to hate being called “on the spectrum,” though.

“Critical neuroscientists frame the question of a science gap between neuro- and social scientists, experts and the public, just as couple's guides conceive of the gender gap in terms of unawareness, misunderstanding, or ignorance, promoting the idea that all matters can be settled through enhanced communication and better knowledge of each other's distinctive language, culture, needs or concerns.”

This needs more attention paid to it. Here is a big issue: there is a power imbalance. Patriarchy is a word for the imbalance in the couple's guide, and it would relate to the sciences one too, since hard sciences tend to be thought of as men's fields while social sciences are thought of more as women's fields. (The accuracy of this thinking is another issue, but STEM in general runs man-heavy.)

That contributes to the rhetorical positioning of the fields, where neuroscientific “facts” can't be questioned by social sciences, even if questioning the facts isn't exactly what's going on. Sometimes it's questioning the causes and interpretation of the reported result rather than questioning whether or not the result was correct, or reproducible. Though the fMRI study of a dead fish is relevant, and so is the fMRI of the same person daily for about a year – fMRI is not infallible, any more than any scientific procedure is, and pretending it is will get us into trouble.

The author then asks about “lay expertise” from patients, relatives, and activists. Since I'm studying neuroscience but came from the Neurodiversity Movement before I got into neuroscience, I wonder where that puts me. As a neuroscience student, I'm one of the science people. As an Autistic person, I'm somewhat a patient. (Not much of one; I haven't been in therapy related to autistic traits for a while, but when I write as an Autistic person, I go in that category.) And there is definitely a power difference between the roles. There has to be, for Theory of Mind to have been interpreted to mean autistic people can't understand our own experiences. Not everyone making use of the word thinks that, but it's an interpretation I've seen way too much of.

The author then points to this framework as “preventative politics,” where it keeps the peace by avoiding/assuaging conflict in the name of interdisciplinarity. She argues this could prevent the good science that would come from controversy. I'd agree, but also say that it can involve silencing ideas that aren't status quo as part of the peacekeeping.

Another issue with the focus on communication is that it only works if everyone is acting in good faith. It's the same problem as with Nonviolent Communication and similar: if everyone is acting in good faith, it works fine. If anyone involved is actually seeking to maintain control or to do harm, consciously or not, it's not going to work. If one person's goals actively exclude the other person's goals, better communication can lead to figuring this out, but not to solving the problem. Seeking to expand the domain of one's own field without worrying too much about the domain of anyone else's field could lead to a similar failure in interdisciplinary communication ideas.

Tuesday, May 30, 2017

Fidget spinners are a fad. Thinkpieces about fidget spinners, therefore, are also a fad. That's how it works, right? On one side, there's people who are arguing that these are toys (true), that they are a fad (true), that they can distract some people (true), that there is not research showing improved focus from their use (true), and that they are not an accessibility issue (false). On another side, there's people arguing that they are a focus tool for some autistic people and/or people with AD(H)D (true), that the lack of evidence is due to a lack of research and not a statement of inefficacy to use against individuals who find them useful (true), that this can be an accessibility issue (true), and that their fad nature among neurotypical students is bad (false) because it is getting the toys banned (mixed truth value). I've also seen more nuanced views, generally from disabled people, but those seem to be the two main camps.

I want to point out a pattern in how accessibility discussions go, especially in educational contexts.

1. A disabled person needs something for access reasons.

2. Abled people call the thing distracting, because our existence in public is apparently distracting.

3. The thing is either banned entirely or permitted only for people with the paperwork to prove they need it for disability reasons.

4. Disabled people who need the thing either don't have access to the thing or must out themselves as disabled in order to gain access. If outing oneself is required, the thing is heavily stigmatized.

5. Disabled people who have an actual access conflict with the thing are erased entirely, which makes conversations about possible solutions to the access conflict impossible. One set of needs or the other will "win." Any disabled people who need to avoid the thing are lumped in with the people who want to ban the thing for ableist reasons and therefore vilified. Which set of needs "wins" here varies, but it usually has some relationship to hierarchy-of-disability stuff, and having one set "win" while the other "loses" is a bad solution regardless.

That's not just a fidget spinner thing, but it does apply here. With fidget spinners, autistic people and folks with ADHD (I'd love to know of a reasonably recognized way of talking about this neurotype without the second D/in a neurodiversity paradigm way, btw) end up in both the "need the thing" and the "need to avoid the thing" groups. I assume some other neurotypes are similarly split as well - I just don't have the familiarity to assert so. With visual alerts on fire alarms, D/deaf people need the thing. Since the visual is a strobe, a lot of neurodivergent people, especially people with photosensitive epilepsy, need to avoid the thing. With service animals, the folks who use them need the thing. People with allergies need to avoid the thing, and not everyone with an allergy can safely share a space with a service animal, even if they are treating their allergies. Conflicting access needs exist, and this pattern prevents us from finding ways to deal with the conflicts. Instead, one access need gets lumped in with abled people who don't like the thing because it's associated with disability and therefore presumed not to be a real need.

Now for fidgets: some people need something to do with their hands while listening if they're going to retain anything. I am in this group, by the way. In high school, I knit, I sewed, and I made chainmail - armor, not spam. I've also tried drawing, which takes care of the "need to do something in order to sit" issue but takes enough attention that I'm no longer following the conversation, so that doesn't work for me in class. Writing hurts quickly enough that while taking notes has sometimes been possible at university, there was no way it was going to be the answer for the duration of a school day in middle or high school. (I, specifically, should not have a laptop in class. If I'm going to need notes, it's the least bad option, but least bad does not mean good.) So I did assorted arts and crafts that were fairly repetitive and totally unrelated to class. The biology teacher who told us on day one that he had ADHD was both the teacher most understanding of my need to fidget somehow and the teacher most at risk of being distracted by my making armor in class.

That last paragraph is the "no, really, I need to fidget." It's also the "there are several fidget options that work for me." Most, but not all, of the standard fidget toys will meet my needs, as I discovered because they are also a fad and I got some awesome fidget toys. This is important, when access conflicts come into play - if there are several options that meet the access need of the first disabled person, it's easier to find one option that everyone is OK with. When there are several options that work, requesting "not option A in situation W" is not an access issue, because options B through H are still fine. If we're going to come up with reasons that each of B through H are also not fine, individually, then we're going to have a problem.

The fidget toy fad is making options D through H cheaper and cooler. When fidgets are marketed as assistive technology, they are super expensive. Considering that disabled people tend not to have a lot of money, that's an access issue, so the fad is making a set of possible solutions more accessible. That's cool. It's also leading to a sufficient presence for teachers to make explicit policies about the toys (as opposed to banning them person by person), and for a flat ban to seem like a good idea to teachers who are seeing kids appear distracted by them. (My bet is that the neurotypical students who appear distracted actually are. I expect the autistic and ADHD students who appear distracted are a mix: some actually distracted, because they are just as distractible as any other student, and some only appearing distracted because of ableist ideas about what paying attention looks like. Remember, I'd fail special needs kindergarten as a twenty-four-year-old PhD student.) The explicit banning for everyone is ... not so good. Mostly because the other options are usually also disallowed or heavily stigmatized, and then we may well be left with no good options.

And let's not pretend handing everyone a fidget spinner, or any other fidget, is going to magically "solve ADHD" or whatever. I think some of the camp that's firmly against the toys is reaching that position for similar reasons to haters of weighted vests - we hand it over and the person is still autistic, or still ADHD. A tool that a person uses to cope in a less than accessible environment doesn't make them stop being disabled by the environment. Plus a fidget spinner isn't going to help everyone. Some people really will be distracted if they have something to play with, and some of those people really will be neurodivergent. Conflicting access needs, again, are a thing. If one person needs a fidget, and another needs not to be next to someone with an obvious fidget, those two people probably shouldn't sit next to each other. Giving people fidgets that they can use while the toy remains in their pocket is also a possibility in some cases. We can have conversations about access conflicts, if we admit that both sets of needs exist. (We also need to admit that some subset of the people making arguments about distraction are doing the bad faith argument where everything disabled people need is a distraction because, essentially, our presence in public is a distraction.)

Saturday, May 20, 2017

That's one of those sentences I read every so often, which is technically true, but which doesn't actually lead to the conclusions I see it used to support. Taste buds really do change with age! This is a thing that happens, and it's part of why there are certain foods kids tend not to like but which adults are more able to tolerate. (I think most alcoholic drinks go in this category, where kids tend not to like the taste anyways?)

As true as it is that tastes change, there are some things my brain has decided I need to explain now about why this doesn't make it a good idea to get into a power play with someone over what they eat and how "picky" they are.

1. You probably don't know what the result of "pushing the issue" is going to be. I don't just mean long term results. I mean short term, in the minutes to hours right after forcing the (in)edible object down. Obviously, you don't expect it to be a big deal, or else you wouldn't be trying to force a "picky" eater to eat something they can't eat. How wrong are you ready to be? TMI alert, last time I made myself drink something that was an issue, it came back up. (If it hadn't been something I was medically supposed to have, I wouldn't have tried. It still didn't work, because it didn't stay down.)

2. The fact that someone's tastes may change and they may be able to eat a food later doesn't mean they can tolerate it now. The change hasn't happened yet. So even if you're correct about the nature of the upcoming change, you're still trying to make someone eat something they don't currently tolerate. See point 1.

2.1. Also, even if you were going to be correct, you can cause that not to happen by creating an association between being forced to eat the food and whatever sensory issue it's hitting. That can create a new issue with the food in question, besides taste...

3. The issue may not be the taste. I can't drink anything carbonated. You might think that's a rather broad category for a taste issue. You'd be correct. It's not a taste issue. It's best described as a texture issue, and you've said nothing about texture sensitivities changing. In fact, most of the foods I can't deal with are texture issues, not taste ones.

4. The changes in taste may not be the ones you expected or hoped for. Some foods that were issues before can become non-issues, but it can go the other way too. As a very small human, I could eat mushrooms. As an adult human, I can not eat mushrooms. (It's also the texture, not the taste.) Chocolate pudding was a "safe" food for me as a kid. It's about 50-50 on my being able to eat it now. (Texture again. Also, partially related to times when I didn't get the choice about yogurt, which has never been an OK texture and which is close enough to pudding that making yogurt even worse made pudding a problem. See point 2.1.) I ... actually can't think of any foods I can have now that I couldn't deal with as a kid.

Tastes do change as we get older. That doesn't mean they'll change the way you want them to, or that a possible change that hasn't happened yet justifies acting as if it's already happened.

Thursday, May 18, 2017

This is another one I read for neuroethics. I was considering using this article for my presentation on a neuroethics-related topic, but that didn't happen because someone else split off my too-large group and it wasn't too big anymore. We actually wound up talking about a medication used to treat addiction ... that can itself be addictive. Fun times. So, here's some of my thoughts from reading Critical studies of the sexed brain.

“They suggest that we work and talk across disciplines as if neuroscientists were from Mars and social scientists were from Venus, assigning the latter to the traditional feminine role of assuaging conflict” (247). Sigh. I am not surprised that some scientists think of social sciences that way.

Brain plasticity + identity formation in intersex people, brains vs. genitals. That's going to be interesting. By which I mean, I have concerns. I have friends who are intersex. I know people who do intersex activism. And I know intersex people who concluded that intersex and/or nonbinary is their gender identity, rather than picking one of the two binary genders. I hope the author isn't assuming a gender identity must be one of man/woman. Heck, mine isn't, and as far as I know, I'm not intersex.

Oi at calling autism a disease. It is a neurodevelopmental disability [or a neurotype, that's a good word, and also let's remember what I'm saying when I say disability - the social model of disability is a thing]. Also, I know the author found neurodiversity stuff, because the article comes up when I search the journal for neurodiversity, so what the heck? I don't expect to hear it called a neurotype in anything done by neurotypical(-passing) academics, but really? Disease?

Ok, gender in the brain as a result of plasticity, that's going to be interesting – “reflect gendered behavior as learned and incorporated in a social context” is a thing, but please, please don't let this turn into “male socialization” for trans women or “female socialization” for trans men, or either of the above for nonbinary folks. The socialization of “consistently mistaken for X while actually Y” is not the same as the socialization of “X.” Ok, individual differences are a thing. That's good. “Plasticity arguments are extremely interesting as they wage war against both biological and social determinism, reductionism, essentialism, and other -isms.” Phew, that's not the socialization argument I was worried about, I don't think.

Does she mean “cishet” by “normal people”? (Cishet = cisgender, heterosexual.) I appreciate the quotation marks around “normal people,” but there probably is another word for what she means, and using it would be nice.

Now we have one of my rage buttons. All caps time!

OH MY GOD STOP CALLING NEURODIVERSITY AN ASPERGERS THING. THE ANI PEOPLE WERE CLASSIC EVEN IF THEY TALK NOW, AND ALSO DIAGNOSED BEFORE ASPERGERS WAS IN THE DSM. MEL BAGGS IS NONSPEAKING. AMY SEQUENZIA IS NONSPEAKING. I'M CLASSIC EVEN THOUGH I USUALLY TALK. STOP. STOP. SERIOUSLY, THE ROOTS ARE OLD ENOUGH THAT ASPERGERS WASN'T A DIAGNOSIS YET WHEN A LOT OF OUR FOLKS WERE DIAGNOSED, WHICH MEANS THEY WEREN'T DIAGNOSED ASPERGERS. THEY ARE NOT ASPERGERS, WHICH IS ALSO NOT A DIAGNOSIS ANYMORE. (Maybe it was when this was written?)

Intersex activist history! I knew about unwanted surgery, gender role training, and folks wanting their own intersex bodies back. I also know someone who was put on unwanted hormones. What are the results of Diamond getting so lauded while speaking in terms of brain sex, though? It's still the language coming from the people who try to enforce the man/woman dichotomy. What are the results of using the "sexed brain" discourse while not necessarily fitting in the binary?

1. Walker, N. (2014, September 27). Neurodiversity: Some basic terms and definitions. Neurocosmopolitanism: Nick Walker's notes on neurodiversity, autism, and cognitive liberty [blog post]. Retrieved from http://neurocosmopolitanism.com/neurodiversity-some-basic-terms-definitions/ This is a good explanation of the neurodiversity-related vocabulary I tend to use when thinking about neuro stuff.