Periscope has a new tool to combat spam and abuse in broadcasts. Periscope allows anyone to broadcast live to a global audience and enables viewers to interact in real time. Designed to be transparent, live, and community-led, the new reporting tool lets viewers report comments they find inappropriate as they appear on screen.

Here’s how it works:
1. During a broadcast, viewers can report comments as spam or abuse. The viewer who reports the comment will no longer see messages from that commenter for the remainder of the broadcast. The system may also identify commonly reported phrases.
2. When a comment is reported, a few viewers are randomly selected to vote on whether they think the comment is spam, abuse, or looks okay.
3. The result of the vote is shown to voters. If the majority votes that the comment is spam or abuse, the commenter is notified that their ability to chat in the broadcast has been temporarily disabled. Repeat offenses result in chat being disabled for that commenter for the remainder of the broadcast (see the sketch after this list).
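
To make the flow concrete, here is a minimal sketch of the report-vote-penalty loop in Python. Everything in it is an assumption for illustration: the names (ModerationSession, report, ask_vote), the jury size, and the penalty messages are invented, and Periscope has not published its actual thresholds or implementation.

```python
import random
from collections import defaultdict

# Illustrative constants; Periscope's real values are not public.
VOTER_SAMPLE_SIZE = 5  # viewers asked to vote on a reported comment
FIRST_OFFENSE_NOTE = "Your ability to chat has been temporarily disabled."
REPEAT_OFFENSE_NOTE = "Chat is disabled for the remainder of this broadcast."

class ModerationSession:
    """Hypothetical per-broadcast moderation state (not Periscope's code)."""

    def __init__(self, viewers):
        self.viewers = set(viewers)
        self.offenses = defaultdict(int)  # commenter -> confirmed offenses
        self.muted_pairs = set()          # (reporter, commenter) pairs hidden
        self.banned = set()               # muted for the whole broadcast

    def report(self, reporter, commenter, comment):
        # Step 1: the reporter immediately stops seeing this commenter.
        self.muted_pairs.add((reporter, commenter))

        # Step 2: a few other viewers are randomly selected to vote.
        pool = sorted(self.viewers - {reporter, commenter})
        jurors = random.sample(pool, min(VOTER_SAMPLE_SIZE, len(pool)))
        votes = [self.ask_vote(juror, comment) for juror in jurors]

        # Step 3: a majority of "spam"/"abuse" votes triggers a penalty,
        # and the result is shown to the voters.
        guilty = sum(v in ("spam", "abuse") for v in votes) > len(votes) / 2
        for juror in jurors:
            self.show_result(juror, guilty)
        if guilty:
            self.offenses[commenter] += 1
            if self.offenses[commenter] == 1:
                self.notify(commenter, FIRST_OFFENSE_NOTE)
            else:
                self.banned.add(commenter)  # repeat offense
                self.notify(commenter, REPEAT_OFFENSE_NOTE)

    def can_see(self, viewer, commenter):
        """Whether a viewer's chat still shows a given commenter."""
        return (commenter not in self.banned
                and (viewer, commenter) not in self.muted_pairs)

    # The methods below are stubs; a real system would prompt clients.
    def ask_vote(self, juror, comment):
        return "looks okay"

    def show_result(self, juror, guilty):
        pass

    def notify(self, commenter, message):
        print(f"to {commenter}: {message}")

# Example: one viewer reports another; the reporter stops seeing them
# even though the (stubbed) jury votes "looks okay".
session = ModerationSession(["ana", "bo", "cy", "di", "ed", "fay"])
session.report(reporter="ana", commenter="bo", comment="buy followers now!!!")
print(session.can_see("ana", "bo"))  # False
print(session.can_see("cy", "bo"))   # True
```

One design point worth noting: the jury is drawn from viewers other than the reporter and the commenter, so no single viewer can unilaterally silence someone, which fits the transparent, community-led framing above.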

“We want our community to feel comfortable when broadcasting,” said Kayvon Beykpour, Periscope CEO and co-founder. “One of the unique things about Periscope is that you’re often interacting with people you don’t know; that immediate intimacy is what makes it such a captivating experience. But that intimacy can also be a vulnerability if strangers post abusive comments. Broadcasters have always been able to moderate commenters in their broadcast, but we’ve now created a transparent tool that allows the collective actions of viewers to help moderate bad actors as well.”

For more information on how the comment moderation tool works, visit the Periscope blog.