---
title: Video Virality Scoring
emoji: 🎥
colorFrom: blue
colorTo: purple
sdk: streamlit
sdk_version: 1.38.0
app_file: ui/streamlit_app.py
pinned: false
---
# Video Virality Scoring
This project evaluates how viral a video is likely to become by analyzing scenes, hooks, pacing, content quality, and trend alignment. It uses Computer Vision for scene detection and Large Language Models (LLMs) for scoring and content recommendations.
## Overview
The system processes a user-uploaded video through four core stages:
- Scene Detection using Computer Vision
- Hook and Frame-Level Analysis
- LLM-Based Virality Scoring
- Trend Alignment and Performance Prediction
The output includes a final Virality Score and actionable suggestions.
## Features
### Scene Detection
- Extracts keyframes and identifies scene boundaries.
- Measures pacing, cuts, transitions, and visual variation.
- Determines whether the video maintains viewer attention.
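A minimal sketch of how hard cuts can be found with simple OpenCV frame differencing. The threshold, frame step, and downscale size are illustrative assumptions and do not necessarily match what `files/pipeline/scene_detect.py` does.

```python
import cv2
import numpy as np

def detect_scene_cuts(video_path: str, diff_threshold: float = 30.0, step: int = 5) -> list[float]:
    """Return approximate scene-cut timestamps (seconds) via grayscale frame differencing."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    cuts, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            # Downscale and convert to grayscale so the comparison is cheap and stable.
            gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
            if prev is not None and float(np.mean(cv2.absdiff(gray, prev))) > diff_threshold:
                cuts.append(idx / fps)  # large jump in pixel content -> likely a hard cut
            prev = gray
        idx += 1
    cap.release()
    return cuts
```

The list of cut timestamps also gives pacing metrics for free (cuts per minute, average scene length), which feed the later scoring stages.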
### LLM-Based Scoring
- Rates narrative clarity, emotional impact, pacing, and retention.
- Generates a 0–100 Virality Score.
- Provides strengths, weaknesses, and recommendations.
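A sketch of what the scoring call might look like, assuming the OpenAI Python SDK. The model name, prompt wording, and JSON fields are assumptions for illustration and are not taken from the project's `scoring.py`.

```python
import json
from openai import OpenAI  # any chat-capable LLM client would work similarly

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def score_video(scene_summary: str, hook_summary: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM for a 0-100 virality score plus strengths, weaknesses, and recommendations."""
    prompt = (
        "You rate short-form video potential. Given the descriptors below, return JSON with "
        "keys: virality_score (0-100), strengths, weaknesses, recommendations.\n\n"
        f"Scenes: {scene_summary}\nHook: {hook_summary}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request machine-readable output
    )
    return json.loads(resp.choices[0].message.content)
```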
### Hook Analysis
- Focuses on the opening moments of the video.
- Evaluates whether the opening is strong enough to stop viewers from scrolling.
- Highlights issues in timing, structure, or visual appeal.
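One rough way to quantify hook intensity is to measure frame-to-frame motion over the opening seconds; the 3-second window and the metric itself are assumptions for illustration, and the real `frame_analysis.py` may use a different signal entirely.

```python
import cv2
import numpy as np

def hook_motion_energy(video_path: str, window_s: float = 3.0) -> float:
    """Average frame-to-frame change over the opening window (a rough hook-intensity proxy)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    prev, diffs = None, []
    for _ in range(int(window_s * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diffs.append(float(np.mean(cv2.absdiff(gray, prev))))  # motion between samples
        prev = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0
```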
### Trend Prediction
- Evaluates video styles against current social-media trends.
- Checks pacing, topic relevance, and engagement patterns.
- Predicts potential performance on TikTok, Reels, and YouTube Shorts.
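As a toy example of pacing-based trend alignment, cuts per minute can be compared against per-platform ranges. The ranges below are purely illustrative assumptions, not measured platform data; in practice this step may instead rely on the LLM.

```python
def platform_fit(cut_timestamps: list[float], duration_s: float) -> dict[str, str]:
    """Compare cut rate against illustrative (assumed) short-form pacing ranges."""
    cuts_per_min = len(cut_timestamps) / max(duration_s / 60.0, 1e-6)
    # Placeholder ranges for illustration only.
    targets = {"TikTok": (15, 60), "Reels": (12, 50), "YouTube Shorts": (10, 45)}
    verdicts = {}
    for platform, (low, high) in targets.items():
        if cuts_per_min < low:
            verdicts[platform] = "pacing may feel slow"
        elif cuts_per_min > high:
            verdicts[platform] = "pacing may feel frantic"
        else:
            verdicts[platform] = "pacing within the assumed typical range"
    return verdicts
```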
### Recommendations
- Identifies scenes to cut, extend, or enhance.
- Suggests improvements to strengthen retention and shareability.
- Provides guidance for boosting engagement and performance.
## Tech Stack
- Python
- Streamlit for UI
- OpenCV for frame and video processing
- Scene detection algorithms
- LLMs for scoring and content analysis
- Docker deployment on Hugging Face Spaces
## Project Structure

```
video-virality-scoring/
│
├── ui/
│   ├── __init__.py
│   └── streamlit_app.py
│
├── files/
│   └── pipeline/
│       ├── __init__.py
│       ├── audio_analysis.py
│       ├── frame_analysis.py
│       ├── frame_extract.py
│       ├── scene_detect.py
│       └── scoring.py
│
├── .github/workflows/
├── __pycache__/
│
├── .env
├── .huggingface.yml
├── .python-version
├── Dockerfile
├── README.md
├── START HERE.txt
├── __init__.py
├── config.py
├── demo.mp4
├── demo.txt
├── entrypoint.sh
├── main.py
├── packages.txt
└── pyproject.toml
```
## How It Works
1. User uploads a video.
2. Frames are extracted, and scenes are detected.
3. Audio, hooks, and key segments are analyzed.
4. Extracted descriptors and metadata are passed to the LLM.
5. Final output includes:
   - Virality Score
   - Hook Score
   - Trend Alignment
   - Improvement Suggestions
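Putting these stages together, here is a minimal end-to-end sketch that reuses the helper functions sketched in the sections above (`detect_scene_cuts`, `hook_motion_energy`, `platform_fit`, `score_video`); the real `main.py` and pipeline modules may be organized quite differently.

```python
import cv2

def run_pipeline(video_path: str) -> dict:
    """Illustrative end-to-end flow using the helpers sketched earlier in this README."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    duration_s = (cap.get(cv2.CAP_PROP_FRAME_COUNT) or 0.0) / fps
    cap.release()

    cuts = detect_scene_cuts(video_path)              # scene boundaries and pacing
    hook_energy = hook_motion_energy(video_path)      # opening-seconds intensity
    trend = platform_fit(cuts, duration_s)            # pacing vs. assumed platform norms
    llm_report = score_video(                         # LLM scoring and suggestions
        scene_summary=f"{len(cuts)} cuts over {duration_s:.1f}s",
        hook_summary=f"opening motion energy {hook_energy:.1f}",
    )
    return {"llm_report": llm_report, "trend_alignment": trend}
```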
## Supported Formats
- MP4
- MOV
- WEBM
- Other social-media-friendly formats
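For reference, a minimal Streamlit upload widget can restrict inputs to these formats. This is a sketch only; the actual `ui/streamlit_app.py` may differ.

```python
import tempfile
import streamlit as st

st.title("Video Virality Scoring")
uploaded = st.file_uploader("Upload a video", type=["mp4", "mov", "webm"])
if uploaded is not None:
    # Persist the upload to disk so OpenCV can read it by file path.
    suffix = "." + uploaded.name.rsplit(".", 1)[-1]
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        tmp.write(uploaded.read())
    st.video(tmp.name)
    st.write("Video received; the scoring pipeline would run from here.")
```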
## Roadmap
- Platform-specific scoring
- Thumbnail quality evaluation
- Timestamp-based auto-cut suggestions
- Audio-only hook quality analysis
## Contributing
Contributions are welcome. Open an issue or submit a pull request to suggest improvements or add new features.
For more tools and projects, visit: https://techtics.ai