Twitter is stepping in to deal with the “targeted abuse and harassment” facing many of the survivors of the Parkland shooting.

Since the teens have emerged as powerful voices on social media following last week’s shooting, they’ve had to face what many other high profile Twitter users before them have dealt with: abuse and harassment.

This time, the social media platform is wasting no time in addressing the issue, which "goes against everything we stand for at Twitter." The company says it's "actively working on" responding to reports of abuse and harassment.

It’s also using its anti-spam tools “to weed out malicious automation” targeting Parkland survivors and “the topics they are raising.” (Earlier in the day, Twitter announced new rules meant to crack down on bots.)

We are also using our anti-spam and anti-abuse tools to weed out malicious automation around these individuals and the topics they are raising. We have also verified a number of survivors’ Twitter accounts.

Twitter also confirmed that it had verified "a number of" accounts of Parkland survivors. The company announced plans for a new verification system earlier this year, after pausing the program following widespread criticism.

Twitter’s updates come as the Parkland survivors have morphed into very vocal and public faces leading a new debate about gun control in the country. That role has quickly made them targets for harassment and conspiracy theories, which have also cropped up on Facebook and YouTube.

That's when the scammers come in. Someone with a Twitter account designed to mimic a famous person's posts a reply promising free bitcoin, ether, or some other cryptocurrency. All you supposedly need to do to be on the receiving end of this potential bonanza is send a little ether to a provided address.

See where this is going?

“To celebrate this, I’m giving awaу 5,000 ЕTH to my followers,” reads one such reply from @elonlmusk (notice the extra “L”). “To pаrtiсipаte, just sеnd 0.5-1 ЕTH to the address bеlow and gеt 5-10 ЕTH back to the address you used for the transaсtion.”

Clicking through the link takes you to a page that makes it look like ether is indeed being sent out. Spoiler: It’s not. That page is fake.

Don’t be fooled.

Image: some scammer

To make things even more confusing, the person behind the grift looks to have set up fake Twitter accounts to attest to the legitimacy of the con. The account @GaryPet70008539, for example, was created in January of this year and has only tweeted once.

Thank you sо much Elon! Just sent 0.4 EТН and got 4 EТН within 6 minutes! 👏🏻You’re a great person! Keep it up!

The account @MattMar46412834 was also set up in January, and has also only tweeted once.

If you take the time to copy and paste the receiving ETH address into a legit service, like Etherscan, you are greeted with a very different picture. Specifically, one that shows no outgoing transactions and a whole lot of incoming.

Never gonna see that again.

Image: etherscan

At the time of this writing, the account holds almost 20 ETH, worth around $16,424. In other words, people are falling for this. Please don't be one of those people.
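The Etherscan check described above can also be done programmatically. Here's a minimal sketch that classifies transactions the way Etherscan's account transaction-list API reports them; the address and transactions below are made-up illustrations, not the actual scammer's data, and a real lookup would require hitting Etherscan's API (which needs a free API key).

```python
# Sketch: spot the scam pattern in an address's transaction history —
# lots of incoming transfers, zero outgoing. The address and sample
# response below are hypothetical placeholders shaped like Etherscan's
# account/txlist JSON, not real data.
SCAM_ADDRESS = "0x0000000000000000000000000000000000000000"  # hypothetical

sample_response = {
    "status": "1",
    "result": [
        {"from": "0xaaa...", "to": SCAM_ADDRESS, "value": "400000000000000000"},   # 0.4 ETH in
        {"from": "0xbbb...", "to": SCAM_ADDRESS, "value": "1000000000000000000"},  # 1.0 ETH in
    ],
}

def classify(address, txs):
    """Count incoming vs. outgoing transactions for an address."""
    incoming = [t for t in txs if t["to"].lower() == address.lower()]
    outgoing = [t for t in txs if t["from"].lower() == address.lower()]
    return len(incoming), len(outgoing)

WEI_PER_ETH = 10**18  # Etherscan reports values in wei
incoming, outgoing = classify(SCAM_ADDRESS, sample_response["result"])
total_in = sum(int(t["value"]) for t in sample_response["result"]
               if t["to"].lower() == SCAM_ADDRESS.lower()) / WEI_PER_ETH

print(f"incoming: {incoming}, outgoing: {outgoing}, received: {total_in} ETH")
```

A giveaway address that only ever receives is the tell: money goes in, nothing comes back out.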

After all, while some fanboys like to call Elon Musk the real-life Tony Stark, Iron Man wasn’t exactly known for randomly giving away cryptocurrency on the internet.

Twitter, which is constantly criticized for not doing enough to prevent harassment, has updated its guidelines with more information on how it handles tweets or accounts that encourage other people to hurt themselves or commit suicide.

While we continue to provide resources to people who are experiencing thoughts of self-harm, it is against our rules to encourage others to harm themselves. Starting today, you can report a profile, Tweet, or Direct Message for this type of content.

In a new section on its Help Center titled “Glorifying self-harm and suicide,” Twitter outlined its approach to tweets or accounts that promote or encourage self-harm and suicide. The company says its policy against encouraging other people to hurt themselves is meant to work in tandem with its self-harm prevention measures as part of a “two-pronged approach” that involves “supporting people who are undergoing experiences with self-harm or suicidal thoughts, but prohibiting the promotion or encouragement of self-harming behaviors.” Twitter already has a form that lets users report threats of self-harm or suicide and a team that assesses tweets and reaches out to users they believe are at risk.

Twitter says first-time offenders may be temporarily locked out of their accounts and have their tweets encouraging self-harm or suicide removed. Repeat offenders may have their accounts suspended.

Last fall, Twitter published a new version of its policies toward abuse, spam, self-harm and other issues, following a promise by chief executive officer Jack Dorsey that it would be more aggressive about preventing harassment. Publishing stricter guidelines and putting them into practice, however, are two different things. Many of Twitter’s critics still believe the platform doesn’t do enough to enforce its anti-harassment measures and must provide more information about exactly what kind of content results in a suspension. For example, telling someone to “kill yourself” arguably violates its guidelines, but a quick search of #killyourself returns many recent results, including tweets aimed at specific people.

So, you live and breathe online and couldn’t be happier about it. But maybe, just maybe, your daily digital interactions across the social web aren’t quite as authentic as you thought.

No, this time around it’s not the algorithm’s fault, but rather the result of a different kind of bad actor mucking up the works: bots. The automated scourge has invaded practically every platform you love, and isn’t going anywhere any time soon. But you can fight back.

Despite what basically any quick scan of Twitter or Facebook might suggest, however, the surest way to beat the bots isn’t to argue with them. Rather, it’s to see them for what they are — manufactured fictions designed to manipulate both you and the larger conversation in order to further unknown (and sometimes known) agendas.

That means you’re going to need to be able to spot them in the wild.

Bots, bots, everywhere

These days bots are an inescapable part of online life. Just last year researchers estimated that Twitter alone was home to around 30 million of them. There are automated spam accounts on Instagram, Facebook, and pretty much everywhere else.

Some appear designed to intentionally rile us up or to support specific political candidates, while others have purposes less clear. While the goals of their creators may vary, there are telltale signs that many bots share. If you can identify these, you can better armor yourself against their onslaught.

Fair warning: Doing so isn’t always easy.

Spot the bot

Some automated accounts will be easy to identify as such. For example, @EmojiMeadow straight up tells you that it is, in fact, a bot. “I am a bot” is literally the first sentence of its Twitter bio. We’re not talking about that kind of bot, however.

The automated accounts that you need help uncovering are the ones that are actively trying to trick you. Accounts like the now-suspended @jenna_abrams, which many — including certain media outlets — thought to be the account of a real person named Jenna Abrams. Spoiler: It wasn’t.

Thankfully, there are a few easy steps you can take to help you determine the authenticity of an account. Notably, none of these are foolproof, but a critical and discerning eye is something we’re all going to need to develop and hone if we are to survive as a functioning society.

So why not start here.

Bringing the noise.

Image: Westend61/getty

First, check the account's bio. Does it read like it belongs to a real person? That's a start. Does it have a profile picture of a person instead of an abstract silhouette? Yes? Cool, now reverse image search that pic. The result should be telling. If the picture appears all across the web, it's probably recycled from somewhere else, which suggests there may be some bullshit afoot.
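That reverse image search is a one-liner if you want to script it. Here's a sketch that builds a Google search-by-image URL for a profile picture; the image URL below is a made-up placeholder, and note that Google has been migrating this flow toward Lens, so the endpoint's behavior may change.

```python
# Sketch: construct a reverse image search URL for a suspicious
# profile picture. The profile-image URL is a hypothetical example.
from urllib.parse import quote

def reverse_image_search_url(image_url):
    # Google's search-by-image endpoint accepts an image URL directly.
    return "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")

url = reverse_image_search_url("https://example.com/profile.jpg")
print(url)
```

Open the resulting URL in a browser; if the same face shows up on a dozen unrelated accounts, you have your answer.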

Next, check the account's history. If we're talking Twitter, for example, there are a few behaviors that scream automated account. RoBhat Labs, the team behind the bot-identifying tool botcheck.me, called out a few key ones.

“Behavior such as tweeting every few minutes in a full day,” the group explains, “endorsing polarizing political propaganda (including fake news), obtaining a large follower [count] in a relatively small time span, and constant retweeting/promoting other high-confidence bot accounts are all traits that lead to high-confidence bot accounts.”
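The first heuristic on that list, tweeting every few minutes around the clock, is easy to check yourself if you can pull an account's tweet timestamps. Here's a minimal sketch; the timestamps and the 144-tweets-per-day threshold (one every ten minutes, all day) are illustrative assumptions, not botcheck.me's actual model.

```python
# Sketch of the tweet-frequency red flag RoBhat Labs describes.
# Threshold and sample data are assumptions for illustration.
from datetime import datetime, timedelta

def tweets_per_day(timestamps):
    """Average tweets per day over the span the timestamps cover."""
    if len(timestamps) < 2:
        return 0.0
    span = max(timestamps) - min(timestamps)
    days = max(span.total_seconds() / 86400, 1 / 86400)  # avoid divide-by-zero
    return len(timestamps) / days

# A human-paced account: roughly one tweet a day.
human = [datetime(2018, 2, 1) + timedelta(days=i) for i in range(5)]
# A suspicious account: one tweet every 10 minutes for a full day.
bot = [datetime(2018, 2, 1) + timedelta(minutes=10 * i) for i in range(145)]

SUSPICIOUS_RATE = 144  # ~one tweet every 10 minutes, 24 hours straight

print(tweets_per_day(human) < SUSPICIOUS_RATE)   # looks human
print(tweets_per_day(bot) >= SUSPICIOUS_RATE)    # red flag
```

No single heuristic is conclusive on its own; the group combines this signal with the others quoted above before calling an account a likely bot.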

Don’t have the time to dig through the history of every last garbage account on Twitter? Try dropping the handle in the aforementioned bot-checking tool. It will give you back a report that says whether or not an account is probably (but not definitively) automated.

What next?

So, you’ve found what you believe to be a bot. Good job! Seeing past the lie is an important first step. But what to do next?

Not everyone has the time or inclination to thoroughly investigate every spammy account online, so getting the word out is important. Now, this does not mean you should start actively posting on that account’s wall or whatever (seriously don’t do this). Instead, try using the reporting mechanisms the platform provides to flag it.

Twitter’s definition of spam includes “many forms of automated account interactions and behaviors as well as attempts to mislead or deceive people,” so deceptive bots fall right in that category. Report the account by going to the profile page, clicking the “overflow icon,” selecting “report,” and then choosing “they are posting spam.”

Both Facebook and Instagram also have defined ways to report platform abuse, so feel free to avail yourself of those. Keep in mind that the social media giant in question will in all likelihood not do anything about your report, but hey, you never know.

And anyway, you don’t need to knock a bot offline to beat it. Realizing it’s an automated account out to deceive you takes away its power to do so. Feel free to mute or block the account after you’ve reported it and return to going about your daily online business.

It looks like trolls are exploiting the latest iPhone bug to make life very difficult for Twitter users.

Earlier in the week, yet another iPhone-crashing iOS bug surfaced. For some reason, a single character from Telugu, a language spoken in India, will cause whatever app it's viewed in to crash repeatedly.

Apple has said it’s aware of the issue and plans to fix it in an upcoming update, but the issue has proved to be particularly problematic on Twitter. As word of yet another crash-inducing bug has begun to spread, it appears that some Twitter users are using it to their advantage.

Since the bug surfaced late last week, some have been inserting the offending character into their Twitter names and encouraging others to spread it in tweets.

It's much easier for the bug to spread on Twitter than in iMessage and other apps, because your phone will be affected if anyone in your feed uses the character, whether or not you intentionally view the tweet.

Needless to say, it’s been wreaking havoc on a number of users’ phones, leaving them unable to use the Twitter app without frequent crashes.

Again, Apple says it has a fix on the way, though it's not clear when it will be available; the bug has been patched in the latest iOS betas, so hopefully it will be soon. Until then, Twitter users have discovered a workaround: if you can access your Twitter account outside of the app, such as within Safari, you can log in and block any users tweeting the character.

It's not ideal, but it should serve as a temporary fix until Apple or Twitter issues a formal one.