Twitter has been working with universities, researchers and journalists on how best to measure conversational health, and is also looking at how to build and earn trust.

“The way to do that was to be transparent around what those indicators are and where Twitter is within those,” said Dorsey.

“To be accountable to try to drive [those indicators] towards more health and to be clear and consistent around the actions we are taking or not taking.

“So we’ve started down that path and we’re making some progress, but the most important thing is we now have this framework so we can start measuring our progress.

“If you can’t measure where you are, you can’t improve anything at all.

“The reality is while you see trolling behavior, and harassment, and abuse in your everyday – you know, people on the street yelling at you – our system was being used to unfairly amplify that, and to silence voices… because they were being targeted.

“People were gaming our system and utilizing our model in a way that prevents others from feeling confident enough to speak.

“We have had brave counter-examples of that which just focus on the positive…but, unfortunately, those are rare.

“We are looking at our system cohesively to understand where our system is being used unfairly, and correct it so we can remove that negative conduct, or at least make people aware of what they are doing so that they can participate more fully in healthy conversation.”

“We’re biasing a whole lot of the service towards topics of interest, and that’s why people come to Twitter in the first place,” Dorsey says.

“You don’t come to Twitter because it’s a social network, you don’t come to Twitter because the people in your address book are there, you come because you’re interested in something, you’re interested in what’s happening in the world.”

He continued: “We’re moving away from just a pure social network understanding of what we’re doing and more towards an interest-based network.

“You’re starting to see this in the service itself where we’re making it a lot easier to follow events and topics, engagement around hashtags, this is what we believe people want to do… which is finding people around those interests and having a conversation.”

“We’re removing the ability to game the system. We’re removing the ability to unfairly amplify some of that conduct.

“Our terms of service is around less content, more conduct, and if we see abusive conduct against particular accounts, particular movements, that is against our terms of service, we have a whole suite of tools to [prevent] that situation, from locking accounts until those tweets are deleted, to multiple strikes, which result in a permanent suspension, which is a last resort.”

In terms of tackling the bots, Dorsey said Twitter is taking a heavy hand to that technology too.

“We challenge eight million accounts every single week to declare that they are human or not. So eight million accounts a week will get a Captcha, which is when you go to a website and you type in those letters you see on the screen to verify you’re a human, to help deal with [bots].

“The manipulation of using bots and human manipulation is against our terms of service.

“We’re going to make a bunch of mistakes along the way, because like security, like privacy, this is not an end point that you get to and you check the box.