Instructions for using kernels-community/flash-attn2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Kernels

    How to use kernels-community/flash-attn2 with Kernels:

    ```python
    # !pip install kernels
    from kernels import get_kernel

    kernel = get_kernel("kernels-community/flash-attn2")
    ```

- Notebooks
  - Google Colab
  - Kaggle
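The kernel loaded above is a CUDA build of FlashAttention-2; its exact call signature is documented on the kernel's model page. As a numerical reference only (not the kernel's API), the computation it accelerates is standard scaled dot-product attention, softmax(QKᵀ/√d)V, sketched here in plain NumPy:

```python
# Plain-NumPy sketch of scaled dot-product attention for a single head.
# This is a numerical reference for the math flash-attn2 accelerates,
# NOT the kernel's API or implementation.
import numpy as np

def sdpa(q, k, v):
    # q, k, v: arrays of shape (seq_len, head_dim)
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (seq, seq) attention logits
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                            # (seq, head_dim) output

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
out = sdpa(q, k, v)
print(out.shape)  # (4, 8)
```

FlashAttention-2 produces the same result but tiles the computation and uses an online softmax so the full (seq, seq) score matrix is never materialized in GPU memory.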