Description
Stream Guard is an automated content moderation platform that uses AWS Rekognition for visual analysis and AWS Transcribe for speech-to-text to detect inappropriate content in live streams and uploaded videos. It integrates with multiple streaming platforms, provides customizable content detection, and offers real-time alerts and detailed reports for effective content management.
Practical Use Case and User Story
As a content moderator, I need Stream Guard to monitor live streams on platforms like YouTube, Twitch, or Facebook Live and flag violations of community guidelines in real time. It should also help keep corporate and educational events professional and automatically scan user-generated content for violations before it is published. This will help maintain a safe and appropriate environment across streaming and content-sharing platforms.
Tech Stack Involved
Selenium Python: Automates connecting to streams on multiple platforms and capturing frames for analysis (see the capture sketch after this list).
AWS Transcribe: Converts speech from video into text so it can be analyzed for offensive language and other inappropriate verbal content (see the transcription sketch after this list).
AWS Rekognition: Detects inappropriate visuals such as nudity, smoking, fire, and other policy-violating imagery (frame checks appear in the capture sketch below).
Custom Labeling: Allows tailored content moderation rules to suit specific industry needs (see the custom-label sketch after this list).
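
Below is a minimal sketch of how the Selenium capture step could feed frames from a live player into Rekognition's image moderation API. The stream URL, browser choice, and ten-second sampling interval are illustrative assumptions rather than the project's actual configuration, and DRM-protected players may return blank screenshots.

```python
import time

import boto3
from selenium import webdriver
from selenium.webdriver.common.by import By

rekognition = boto3.client("rekognition", region_name="us-east-1")

def moderate_live_stream(stream_url, interval_seconds=10, min_confidence=75.0):
    """Capture frames from a stream's <video> element and flag unsafe content."""
    driver = webdriver.Chrome()
    try:
        driver.get(stream_url)
        time.sleep(5)  # give the player time to load and start playback
        video = driver.find_element(By.TAG_NAME, "video")
        while True:
            frame_png = video.screenshot_as_png  # screenshot of the player element only
            response = rekognition.detect_moderation_labels(
                Image={"Bytes": frame_png},
                MinConfidence=min_confidence,
            )
            for label in response["ModerationLabels"]:
                print(f"ALERT: {label['Name']} ({label['Confidence']:.1f}%)")
            time.sleep(interval_seconds)
    finally:
        driver.quit()

moderate_live_stream("https://www.twitch.tv/some_channel")  # placeholder channel
```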
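For uploaded videos, the speech check might look roughly like the sketch below: start an AWS Transcribe job against an S3 object, wait for it to finish, and scan the transcript for banned terms. The bucket path, job name, word list, and simple polling loop are placeholders; a production pipeline would more likely react to job-completion events.

```python
import json
import time
import urllib.request

import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

BANNED_TERMS = {"exampleslur", "examplethreat"}  # placeholder word list

def transcribe_and_scan(job_name, media_s3_uri):
    """Transcribe an uploaded video and return any banned terms found in the speech."""
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": media_s3_uri},
        MediaFormat="mp4",
        LanguageCode="en-US",
    )
    # Poll until the job finishes
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(15)
    if status == "FAILED":
        raise RuntimeError(f"Transcription job {job_name} failed")
    # The transcript URI is a presigned URL to a JSON document
    transcript_uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(transcript_uri) as resp:
        transcript = json.load(resp)["results"]["transcripts"][0]["transcript"]
    return [term for term in BANNED_TERMS if term in transcript.lower()]

print(transcribe_and_scan("streamguard-demo", "s3://streamguard-uploads/video.mp4"))
```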
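Assuming the custom labeling is built on Rekognition Custom Labels, an industry-specific check could be a thin wrapper around detect_custom_labels, as sketched below. The model ARN is a placeholder for a trained project version that has been started.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Placeholder ARN of a trained, running Custom Labels model
# (e.g. competitor logos or industry-specific prohibited items)
MODEL_ARN = (
    "arn:aws:rekognition:us-east-1:123456789012:"
    "project/streamguard/version/streamguard.2024/1700000000000"
)

def detect_custom_violations(frame_bytes, min_confidence=80.0):
    """Run a captured frame against the custom model and return matched label names."""
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={"Bytes": frame_bytes},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["CustomLabels"]]
```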