Rave Safety

We use a multi-layered approach combining advanced AI, automated systems, and community reporting to keep Rave safe.

Our Safety Approach

Proactive Monitoring

Our systems work 24/7 to detect CSAM and illegal content, blocking it in real time before it can be uploaded or shared.

User Reports

Every user-reported piece of content or profile is thoroughly reviewed, and appropriate action is taken against any policy violations.

Immediate Action

Our automated systems remove CSAM and illegal content in real time, blocking 99.9% of CSAM before it can be uploaded.

Legal Cooperation

We work with law enforcement agencies and child safety organizations around the world to combat illegal content.

Advanced AI Protection

Zero-Tolerance CSAM Policy

Rave has a zero-tolerance policy for Child Sexual Abuse Material (CSAM). Our AI blocks illegal media during upload to protect users from harmful content. Detected content is reported to authorities, and accounts involved are permanently banned.

Hash Matching

We use industry-standard PhotoDNA technology to block known CSAM images.
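PhotoDNA itself is proprietary, but the general hash-matching idea can be sketched: compute a fingerprint of each upload and check it against a database of fingerprints of known illegal images. The sketch below is a simplification with hypothetical names; it uses a cryptographic SHA-256 digest, whereas PhotoDNA uses a robust perceptual hash so that resized or re-encoded copies of an image still match.

```python
import hashlib

# Hypothetical fingerprint database of known illegal images.
# Real deployments use perceptual hashes (e.g. PhotoDNA), which
# survive resizing and re-encoding; SHA-256 is a stand-in here.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a hex fingerprint of the raw upload bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_blocked(image_bytes: bytes) -> bool:
    """Block the upload if its fingerprint matches a known hash."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

In a real pipeline this check runs during upload, before the file is ever stored or distributed, so a match can be rejected and reported immediately.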

Proprietary AI

Our machine learning model provides industry-leading performance in the automated detection and blocking of CSAM content.

Pattern Detection

Our systems identify patterns of illegal activity to remove bad actors from the platform.

Additional Protection Systems

Spam Detection

Our algorithms identify and remove spam, bot accounts, and mass messaging campaigns to prevent unwanted content and protect users from malicious activity.

External Link Blocking

We block links to external messaging apps (Telegram, Signal, Zangi) to prevent coordination of illegal activities.

Gore Detection

Our AI automatically detects and blocks violent, disturbing, or exploitative content to maintain a safe viewing experience for everyone on Rave.

Transparency Report

Enforcement data from our automated and manual moderation systems.

1,000+ Daily User Bans

We ban more than 1,000 accounts every day for trying to upload or share illegal content.

30,000+ Permanent Account Deletions

Over 30,000 accounts have been permanently deleted for severe violations of our policies.

430,000+ Images Blocked in Rooms

Our systems have analyzed over 171 million images and blocked more than 430,000 for containing harmful content such as child sexual abuse material (CSAM) or graphic violence.

334,000+ Images & Videos Blocked in DMs

We have scanned 10 million images and videos in Direct Messages, and blocked more than 334,000 for containing harmful content such as CSAM or graphic violence.

30,000+ Gallery Uploads Denied

We have blocked more than 30,000 uploads to the User Profile Gallery for containing content such as CSAM.

1.5M Images Scanned Every Day

We scan 1.5 million images every day to keep the platform safe for everyone.

8,000+ External Links Blocked Every Day

We block over 8,000 suspicious external links every day (e.g., Telegram, Signal, Zangi) to prevent the sharing or sale of illegal content.

Our systems continuously evaluate content and behavior patterns to maintain platform safety.

Report Content

In-App Reporting

Report content directly from any room. Reported content is reviewed by our moderation team and appropriate action is taken.

Available on all platforms

Email Report

You can also report content via email. Please include usernames, timestamps, and a description of the violation for a faster resolution.

[email protected]