# Phase 4: Web Deployment Guide

## Overview

Phase 4 deploys CompI to Hugging Face Spaces with automatic CI/CD from GitHub. This enables public access to your multimodal AI art generation platform.

## 4.A: Repository Preparation ✅

The following files have been added to your repo:

- `packages.txt` - System dependencies for audio processing and OpenGL
- `.gitattributes` - Git LFS configuration for model files
- `requirements.txt` - Already present with Python dependencies

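For orientation, `packages.txt` lists one Debian package per line. A minimal sketch covering audio decoding and OpenGL support (the actual file in your repo is authoritative and may list different packages):

```text
ffmpeg
libsndfile1
libgl1
libglib2.0-0
```

Similarly, `.gitattributes` typically routes large model formats through Git LFS, for example:

```text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
```
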
## 4.B: Create Hugging Face Space

### Step 1: Create New Space

1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Choose:
   - **SDK**: Streamlit
   - **Space name**: `compi-final-dashboard` (or your preference)
   - **Visibility**: Public
   - **Hardware**: CPU basic (free tier)

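If you prefer to script this step, the same Space can be created with the `huggingface_hub` Python library. A minimal sketch, assuming `huggingface_hub` is installed, a write token is available in the `HF_TOKEN` environment variable, and the placeholder repo id is replaced with your own username and Space name:

```python
# Sketch: create the Streamlit Space programmatically instead of via the web UI.
import os

from huggingface_hub import create_repo

create_repo(
    repo_id="your-username/compi-final-dashboard",  # placeholder: use your HF username
    repo_type="space",
    space_sdk="streamlit",   # matches the SDK chosen above
    private=False,           # public visibility
    token=os.environ["HF_TOKEN"],
)
```
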
### Step 2: Configure Space README

Replace the default README.md in your Space with:

```markdown
---
title: CompI - Final Dashboard
emoji: 🎨
sdk: streamlit
app_file: src/ui/compi_phase3_final_dashboard.py
pinned: false
---

# CompI - Multimodal AI Art Generation Platform

The ultimate creative platform combining text, audio, data, emotion, and real-time inputs for AI art generation.

## Features

- 🧩 **Multimodal Inputs** - Text, Audio, Data, Emotion, Real-time feeds
- 🖼️ **Advanced References** - Multi-image upload with role assignment
- ⚙️ **Model Management** - SD 1.5/SDXL switching, LoRA integration
- 🖼️ **Professional Gallery** - Filtering, rating, annotation system
- 💾 **Preset Management** - Save/load complete configurations
- 📦 **Export System** - Complete bundles with metadata

## Usage

1. Configure your inputs in the "Inputs" tab
2. Upload reference images in "Advanced References"
3. Choose your model and performance settings
4. Generate with intelligent fusion of all inputs
5. Review results in the gallery and export bundles

Built with Streamlit, PyTorch, and Diffusers.
```

### Step 3: Add Secrets (Optional)

In your Space Settings → Repository secrets, add:

- `OPENWEATHER_KEY` - Your OpenWeatherMap API key for real-time weather data

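Spaces expose repository secrets to the running app as environment variables, so the dashboard can pick the key up at startup. A minimal sketch, assuming the app degrades gracefully when no key is set (the actual handling in CompI may differ):

```python
# Sketch: read the OpenWeatherMap key inside the Streamlit app.
import os

import streamlit as st

OPENWEATHER_KEY = os.getenv("OPENWEATHER_KEY")

if not OPENWEATHER_KEY:
    # Keep real-time weather inputs disabled when no key is configured.
    st.info("Set OPENWEATHER_KEY in the Space secrets to enable live weather data.")
```
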
**Important**: Do NOT link the Space to GitHub yet. We'll deploy via CI/CD.

## 4.C: GitHub Actions Setup

### Step 1: Add GitHub Secrets

In your GitHub repo, go to Settings → Secrets and variables → Actions:

1. **New repository secret**: `HF_TOKEN`
   - Value: Your Hugging Face **Write** token from [HF Settings → Access Tokens](https://huggingface.co/settings/tokens)
2. **New repository secret**: `HF_SPACE_ID`
   - Value: `your-username/your-space-name` (e.g., `AXRZCE/compi-final-dashboard`)

### Step 2: GitHub Actions Workflow

The workflow file `.github/workflows/deploy-to-hf-spaces.yml` will be created next.

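For orientation, a workflow of this kind typically checks out the repo (including LFS files) and force-pushes it to the Space over HTTPS using the two secrets above. The sketch below is illustrative only; the generated file may differ in detail:

```yaml
# Illustrative sketch of .github/workflows/deploy-to-hf-spaces.yml
name: Deploy to HF Spaces

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so the push to the Space is not shallow
          lfs: true        # include LFS-tracked model files
      - name: Push to Hugging Face Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          HF_SPACE_ID: ${{ secrets.HF_SPACE_ID }}
        run: |
          # HF_SPACE_ID is "username/space-name"; the token authenticates the push.
          HF_USER="${HF_SPACE_ID%%/*}"
          git push --force "https://${HF_USER}:${HF_TOKEN}@huggingface.co/spaces/${HF_SPACE_ID}" HEAD:main
```
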
## 4.D: Runtime Optimization

The default settings are tuned for the free CPU tier:

- **Model**: SD 1.5 (faster than SDXL)
- **Resolution**: 512×512 (good quality/speed balance)
- **Steps**: 20-24 (sufficient for good results)
- **Batch size**: 1 (memory efficient)
- **ControlNet**: Off by default (users can enable it)

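As a rough illustration of how such defaults can be expressed in code (the model IDs, key names, and GPU fallback below are assumptions, not CompI's actual configuration):

```python
# Hypothetical sketch of CPU-friendly generation defaults.
import torch

CPU_TIER_DEFAULTS = {
    "model_id": "runwayml/stable-diffusion-v1-5",  # SD 1.5: lighter than SDXL
    "width": 512,
    "height": 512,
    "num_inference_steps": 22,   # 20-24 is usually enough
    "batch_size": 1,
    "use_controlnet": False,     # users can opt in from the UI
}

def pick_defaults() -> dict:
    """Return generation defaults, relaxing them only when a GPU is available."""
    defaults = dict(CPU_TIER_DEFAULTS)
    if torch.cuda.is_available():
        # On GPU hardware the heavier SDXL pipeline becomes practical.
        defaults["model_id"] = "stabilityai/stable-diffusion-xl-base-1.0"
        defaults["num_inference_steps"] = 30
    return defaults
```
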
## 4.E: Deployment Workflow

1. **Development**: Work on feature branches
2. **Testing**: Test locally with `streamlit run src/ui/compi_phase3_final_dashboard.py`
3. **Deploy**: Merge to `main` → GitHub Actions automatically deploys to HF Space
4. **Rollback**: Revert commit on `main` if issues occur

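In terms of commands, one pass through this loop looks roughly as follows (the branch name and `<bad-commit-sha>` are placeholders):

```bash
# Illustrative loop; branch name and <bad-commit-sha> are placeholders.
git checkout -b feature/my-change                          # 1. develop on a branch
streamlit run src/ui/compi_phase3_final_dashboard.py       # 2. test locally
git checkout main
git merge feature/my-change
git push origin main                                       # 3. triggers the deploy workflow
git revert <bad-commit-sha> && git push origin main        # 4. roll back if needed
```
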
## Next Steps

1. Complete the HF Space setup above
2. Add GitHub secrets
3. The GitHub Actions workflow will be created automatically
4. Test deployment by pushing to `main`

Your deployed app will be available at: `https://your-username-your-space.hf.space`