Now that its
bottom line is being affected, YouTube says it will begin to take additional steps to protect its
advertisers and creators from inappropriate content on its network. In a
blog post authored by YouTube CEO Susan Wojcicki on Monday, the company said it will
increase its staff to over 10,000 in 2018 to help better moderate video
content. The news follows a series of scandals on the video-sharing site
related to its lack of policing around
content aimed at children,
obscene comments on videos of children,
horrifying search suggestions, and more.

The platform has already started to remove channels that violate its
policies.

TechCrunch
continued:

One example, the channel ToyFreaks, was
recently terminated after concerns were raised about its videos, in which a father's young
daughters were filmed in odd, upsetting and, at times, inappropriate situations.

YouTube said at the time that the channel's removal was part of a tightening of its child
endangerment policies. Last month it also
implemented new policies to flag videos containing inappropriate content aimed at children.

It has since pulled down thousands of videos of children as a result, and
removed the advertising from nearly 2 million videos and over 50,000 channels.

The move is a change from the long-stated assertion by most social media
platforms that they are not publishers with editorial control but rather
just a medium for consumers to use when creating their own content. Now
these companies are grappling with the ramifications of their openness on
user growth and advertising revenue.

We’re also taking actions to protect advertisers and creators from
inappropriate content. We want advertisers to have peace of mind that their
ads are running alongside content that reflects their brand’s values.
Equally, we want to give creators confidence that their revenue won’t be
hurt by the actions of bad actors.

We believe this requires a new approach to advertising on YouTube,
carefully considering which channels and videos are eligible for
advertising. We are planning to apply stricter criteria, conduct more
manual curation, while also significantly ramping up our team of ad
reviewers to ensure ads are only running where they should. This will also
help vetted creators see more stability around their revenue. It’s
important we get this right for both advertisers and creators, and over the
next few weeks, we’ll be speaking with both to hone this approach.

By beefing up staffing, YouTube hopes to alleviate the problems with its
advertising model, under which companies can find their ads running alongside
controversial videos.

BuzzFeed News reported that the problems persist on the platform despite
its attempts to resolve the issue with machine learning and better
filtering.

It wrote:

It's unclear when the advertising changes will go into effect. For now,
controversial videos still appear to be running alongside advertisements.
In a review of videos masquerading as family friendly content, BuzzFeed
News found advertisements running on a number of popular "flu shot" videos,
a genre that typically features infants and young children screaming and
crying.

This is a problem for brand managers looking to avoid controversy in a
marketplace that is strongly driven by consumer values.

BuzzFeed reached out to companies it found advertising on these videos, and
the businesses weren’t thrilled to be mentioned in connection with the
content.

"A Lyft ad should not have been served on this video," a Lyft spokesperson
told BuzzFeed News. "We have worked with platforms to create safeguards to
prevent our ads from appearing on such content. We are working with YouTube
to determine what happened in this case."

Adidas offered BuzzFeed News a statement dated Nov. 23 and added, "we
recognize that this situation is clearly unacceptable and have taken
immediate action, working closely with Google on all necessary steps to
avoid any reoccurrences of this situation." Less than one hour after its
initial response, the flu shot videos appeared to have been deleted from YouTube
entirely.

Instagram is also trying to curb user behavior that crosses the line of
decency for some users.

The platform has responded to the growing use of its photo-sharing service
to publish photos of tourists posing inappropriately with wild animals.

Protecting wildlife and sensitive natural areas is hard enough as it is,
and it's not helping that every
brain-dead tourist wants to post a
selfie with a koala bear or dolphin. Starting today, Instagram is making it harder
to find such content. If you search hashtags associated with images that
could harm wildlife or the environment, it will post a warning before
letting you proceed.

"I think it's important for the community right now to be more aware,"
Instagram's Emily Cain told National Geographic. "We're trying to
do our part to educate them."

Again, the company is responding to an investigative report that revealed
misuse of its product.

Engadget
continued:

The decision followed an investigation by National Geographic and
World Animal Protection into wildlife tourism. The investigators discovered
that animals were being captured illegally from rain forests and kept in
cages, then trotted out for selfies with tourists ignorant of their plight.

Although the new policy won’t stop bad actors, wildlife activists hope it
will help other people to become more informed about the unintended
consequences of their actions.

Engadget
concluded:

World Animal Protection's Cassandra Koenen points out that the animals
people most want to pet or hold, like koalas and
sloths, really don't like being handled. And the problem is made worse because
tourists are terrible at determining which attractions treat animals
poorly.

Though Instagram's gesture doesn't seem like it'll be much of a deterrent,
Koenen believes that it will stop folks that don't mean harm and just don't
know better. "If someone's behavior is interrupted, hopefully they'll
think, maybe there's something more here, or maybe I shouldn't just
automatically like something or forward something or repost something if
Instagram is saying to me there's a problem with this photo," she said.

These changes for brand managers and social media publishers might seem
subtle, but they mark the beginning of what could be a revolution in the
way these companies police content on their sites.

Regardless of intent, companies and platforms alike are being held
accountable by consumers for the content they share. The early days, when
achieving viral success by any means necessary was acceptable, are over,
and organizations would be wise to pivot to a model in which content
highlights key brand values and connects with an audience of concerned and
informed customers.

What do you think of these changes,
PR Daily
readers? How will they inform your content strategy?