CyberGuard - Cyberbullying Detection Model
Model Description
This model is based on DistilBERT, fine-tuned on sentiment data and adapted to detect cyberbullying and threatening language in text messages. It is designed for the CyberGuard mobile application to help protect children from online harassment.
Intended Use
- Real-time message analysis in social media apps
- Cyberbullying detection
- Content moderation
- Safety monitoring for children
How to Use
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Jishnuuuu/cyberguard-v1")

result = classifier("You are so stupid")
print(result)
# Output: [{'label': 'NEGATIVE', 'score': 0.9998}]
```
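For screening incoming messages, the pipeline output can be reduced to a boolean flag. The sketch below is illustrative only: the `flag_message` helper and the 0.9 confidence threshold are assumptions, not part of the released model, and the threshold should be tuned on representative data.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Jishnuuuu/cyberguard-v1")

# Hypothetical helper: treat a high-confidence NEGATIVE prediction as a flag.
def flag_message(text: str, threshold: float = 0.9) -> bool:
    prediction = classifier(text)[0]
    return prediction["label"] == "NEGATIVE" and prediction["score"] >= threshold

messages = ["You are so stupid", "See you at practice tomorrow!"]
for msg in messages:
    print(msg, "->", "flagged" if flag_message(msg) else "ok")
```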
Model Performance
- Base Model: DistilBERT (66M parameters)
- Accuracy: ~90% on SST-2 sentiment classification
- Speed: ~50ms per inference (a timing sketch is shown below)
- Size: 255MB
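The latency figure above depends on hardware, batch size, and runtime. Below is a minimal sketch, assuming single-message CPU inference through the `transformers` pipeline, for measuring average per-message latency locally; the warm-up call and the 100-iteration count are arbitrary choices.

```python
import time

from transformers import pipeline

classifier = pipeline("text-classification", model="Jishnuuuu/cyberguard-v1")

# Warm up once so model loading and tokenizer setup are not included in the timing.
classifier("warm up")

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    classifier("You are so stupid")
elapsed = time.perf_counter() - start

print(f"Average latency: {elapsed / n_runs * 1000:.1f} ms per inference")
```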
Training Data
Based on DistilBERT fine-tuned on SST-2 (Stanford Sentiment Treebank):
- Binary sentiment classification (Positive/Negative)
- Adapted for cyberbullying detection (see the fine-tuning sketch below)
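One way to adapt a sentiment model of this kind to cyberbullying detection is to continue fine-tuning it on labeled examples. The sketch below is a minimal illustration under stated assumptions: the `cyberbullying_train.csv` file (with `text` and `label` columns, 0 = safe, 1 = bullying) is hypothetical, and the hyperparameters are placeholders rather than the settings used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical labeled dataset: a CSV with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "cyberbullying_train.csv"})

model_name = "Jishnuuuu/cyberguard-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# Placeholder hyperparameters; tune for your data.
training_args = TrainingArguments(
    output_dir="cyberguard-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=training_args, train_dataset=tokenized["train"])
trainer.train()
```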
Limitations
- English language only
- May not catch context-dependent sarcasm
- Best used as part of a comprehensive safety system
Ethical Considerations
This model is designed to protect children's safety while respecting privacy. Message content is analyzed locally and only threat indicators are shared with parents.
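As a minimal sketch of that data flow, assuming on-device inference with the `transformers` pipeline: the `build_threat_indicator` helper and the indicator payload format are hypothetical, and are only meant to show that the flag and score, not the message text, would be shared.

```python
from transformers import pipeline

# Inference runs entirely on the device; the message text stays in this process.
classifier = pipeline("text-classification", model="Jishnuuuu/cyberguard-v1")

def build_threat_indicator(message: str) -> dict:
    """Return only aggregate indicators; the raw message is never included."""
    prediction = classifier(message)[0]
    return {
        "flagged": prediction["label"] == "NEGATIVE",
        "score": round(prediction["score"], 3),
    }

# Hypothetical parent notification: only the indicator dict would be transmitted.
indicator = build_threat_indicator("You are so stupid")
print(indicator)  # e.g. {'flagged': True, 'score': 0.9998}
```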
License
MIT License - Free for commercial and non-commercial use
Citation
If you use this model, please cite:
```bibtex
@misc{cyberguard2025,
  title        = {CyberGuard: AI-Powered Cyberbullying Detection},
  author       = {CyberGuard Team},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Jishnuuuu/cyberguard-v1}}
}
```
Contact
For questions or issues, please contact through the Hugging Face model page.