---
title: README
emoji: π
colorFrom: green
colorTo: indigo
sdk: static
pinned: false
license: apache-2.0
---
The General Medical AI (GMAI) team at Shanghai AI Lab is dedicated to building general-purpose AI for healthcare. We aim to make healthcare AI more efficient and accessible through cutting-edge research and open-source contributions.
Our research spans a wide spectrum of medical AI:
- General medical image segmentation
- General-purpose multimodal large models for medicine
- 2D/3D medical image generation
- Medical foundation models
- Surgical video foundation & multimodal models
- Surgical video generation
## Large-Scale Medical Data
We have curated massive-scale medical data resources to fuel the vision of General Medical AI.
Key Statistics:
- 100M+ Medical images
- Hundreds of millions of segmentation masks
- 20M+ Medical text dialogue records
- 10M+ Medical image-text pairs
- 20M+ Multimodal Q&A entries
## Selected Achievements
### Multimodal Large Models (LVLMs)
- SlideChat: A Large Vision-Language Assistant for Whole-Slide Pathology Image Understanding.
- UniMedVL: Unifying Medical Multimodal Understanding and Generation through Observation-Knowledge-Analysis.
- GMAI-VL: A Large Vision-Language Model and Comprehensive Multimodal Dataset Towards General Medical AI.
- OmniMedVQA: A Large-Scale Comprehensive Evaluation Benchmark for Medical LVLM.
- GMAI-MMBench: A Comprehensive Multimodal Benchmark for General Medical AI.
### Foundation Models & Segmentation
- SAM-Med3D: A Vision Foundation Model for General-Purpose Segmentation on Volumetric Medical Images.
- SAM-Med2D: Comprehensive Segment Anything Model for 2D Medical Imaging.
- STU-Net: Scalable and Transferable Medical Image Segmentation (1.4B parameters).
- IMIS-Bench: Interactive Medical Image Segmentation Benchmark and Baseline.
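Many of these releases are distributed as open checkpoints. As a minimal sketch, assuming a given model's weights are mirrored on the Hugging Face Hub, they can be fetched with the `huggingface_hub` client; the repo id below is hypothetical and shown only for illustration.

```python
# Minimal sketch: download a released checkpoint from the Hugging Face Hub.
# Assumes the weights are hosted on the Hub; the repo id is hypothetical.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="uni-medical/SAM-Med2D")  # hypothetical repo id
print(f"Checkpoint files downloaded to: {local_dir}")
```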
## Connect with Us
We welcome collaboration across academia, healthcare, and industry.
- GitHub Organization: github.com/uni-medical
- Zhihu Blog: GMAI Team
- Contact: hejunjun@pjlab.org.cn