---
license: bsd-2-clause
tags:
- human-motion-generation
- human-motion-prediction
- probabilistic-human-motion-generation
pinned: true
language:
- en
datasets:
- wjwow/FreeMan
---
# SkeletonDiffusion Model Card
This model card describes SkeletonDiffusion, the model from _Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction_; the codebase is available [here](https://github.com/Ceveloper/SkeletonDiffusion/tree/main).

SkeletonDiffusion is a probabilistic human motion prediction model: given 0.5s of observed human motion, it generates 2s of future motion with an inference time of 0.4s.
SkeletonDiffusion generates motions that are both realistic and diverse. It is a latent diffusion model with a custom graph attention architecture, trained with nonisotropic Gaussian diffusion.

We provide a model for each dataset mentioned in the paper (AMASS, FreeMan, Human3.6M), and an additional model trained on AMASS with hand joints (AMASS-MANO).

<img src="./media/trailer.gif" alt="trailer" width="512">

## Online demo
The model trained on AMASS is accessible in a demo workflow that predicts future motions from videos.
The demo extracts 3D human poses from video via Neural Localizer Fields ([NLF](https://istvansarandi.com/nlf/)) by Sarandi et al., and SkeletonDiffusion generates future motions conditioned on the extracted poses:
- [LTX-Studio image-to-video (13B-mix)](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b)

Although SkeletonDiffusion was not trained on real-world, noisy data, it handles most such inputs reasonably well.
45
+
46
+
47
+ ## Usage
48
+
49
+ ### Direct use
50
+ You can use the model for purposes under the license:
51
+
52
+ ### Train and Inference
53
+
54
+ Please refer to our [GitHub](https://github.com/Ceveloper/SkeletonDiffusion/tree/main) codebase for both usecases.

## Limitations
- This model is not intended or able to provide factual information.
- As a statistical model, this checkpoint might amplify existing societal biases.
- The model may fail to generate videos that match the prompts perfectly.
- Prompt following is heavily influenced by the prompting style.

| | | | |
|:---:|:---:|:---:|:---:|
| ![example1](./media/ltx-video_example_00001.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with long brown hair and light skin smiles at another woman...</summary>A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.</details> | ![example2](./media/ltx-video_example_00002.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman walks away from a white Jeep parked on a city street at night...</summary>A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.</details> | ![example3](./media/ltx-video_example_00003.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with blonde hair styled up, wearing a black dress...</summary>A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie or TV show.</details> | ![example4](./media/ltx-video_example_00004.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The camera pans over a snow-covered mountain range...</summary>The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys. The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.</details> |
| ![example5](./media/ltx-video_example_00005.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with light skin, wearing a blue jacket and a black hat...</summary>A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.</details> | ![example6](./media/ltx-video_example_00006.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man in a dimly lit room talks on a vintage telephone...</summary>A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.</details> | ![example7](./media/ltx-video_example_00007.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A prison guard unlocks and opens a cell door...</summary>A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.</details> | ![example8](./media/ltx-video_example_00008.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with blood on her face and a white tank top...</summary>A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.</details> |
| ![example9](./media/ltx-video_example_00009.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man with graying hair, a beard, and a gray shirt...</summary>A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage.</details> | ![example10](./media/ltx-video_example_00010.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A clear, turquoise river flows through a rocky canyon...</summary>A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom. The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.</details> | ![example11](./media/ltx-video_example_00011.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man in a suit enters a room and speaks to two women...</summary>A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.</details> | ![example12](./media/ltx-video_example_00012.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The waves crash against the jagged rocks of the shoreline...</summary>The waves crash against the jagged rocks of the shoreline, sending spray high into the air. The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. The sky is a light gray, with a few white clouds dotting the horizon.</details> |
| ![example13](./media/ltx-video_example_00013.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The camera pans across a cityscape of tall buildings...</summary>The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.</details> | ![example14](./media/ltx-video_example_00014.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man walks towards a window, looks out, and then turns around...</summary>A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.</details> | ![example15](./media/ltx-video_example_00015.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Two police officers in dark blue uniforms and matching hats...</summary>Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.</details> | ![example16](./media/ltx-video_example_00016.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with short brown hair, wearing a maroon sleeveless top...</summary>A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. The scene is captured in real-life footage.</details> |

## Models & Workflows

| Name | Notes | inference.py config | ComfyUI workflow (Recommended) |
|---|---|---|---|
| ltxv-13b-0.9.7-dev | Highest quality, requires more VRAM | [ltxv-13b-0.9.7-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.7-dev.yaml) | [ltxv-13b-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base.json) |
| [ltxv-13b-0.9.7-mix](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b) | Mixes ltxv-13b-dev and ltxv-13b-distilled in the same multi-scale rendering workflow for balanced speed and quality | N/A | [ltxv-13b-i2v-mixed-multiscale.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-mixed-multiscale.json) |
| [ltxv-13b-0.9.7-distilled](https://app.ltx.studio/motion-workspace?videoModel=ltxv) | Faster, less VRAM usage, slight quality reduction compared to 13b. Ideal for rapid iterations | [ltxv-13b-0.9.7-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.7-distilled.yaml) | [ltxv-13b-dist-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base.json) |
| [ltxv-13b-0.9.7-distilled-lora128](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.safetensors) | LoRA to make ltxv-13b-dev behave like the distilled model | N/A | N/A |
| ltxv-13b-0.9.7-fp8 | Quantized version of ltxv-13b | Coming soon | [ltxv-13b-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base-fp8.json) |
| ltxv-13b-0.9.7-distilled-fp8 | Quantized version of ltxv-13b-distilled | Coming soon | [ltxv-13b-dist-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base-fp8.json) |
| ltxv-2b-0.9.6 | Good quality, lower VRAM requirement than ltxv-13b | [ltxv-2b-0.9.6-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-dev.yaml) | [ltxvideo-i2v.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v.json) |
| ltxv-2b-0.9.6-distilled | 15× faster, real-time capable, fewer steps needed, no STG/CFG required | [ltxv-2b-0.9.6-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-distilled.yaml) | [ltxvideo-i2v-distilled.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v-distilled.json) |

## Model Details
- **Developed by:** Lightricks
- **Model type:** Diffusion-based text-to-video and image-to-video generation model
- **Language(s):** English

## Usage

### Direct use
You can use the model for purposes under the license:
- 2B version 0.9: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt)
- 2B version 0.9.1: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.1.license.txt)
- 2B version 0.9.5: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.license.txt)
- 2B version 0.9.6-dev: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.license.txt)
- 2B version 0.9.6-distilled: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.license.txt)
- 13B version 0.9.7-dev: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.license.txt)
- 13B version 0.9.7-dev-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev-fp8.license.txt)
- 13B version 0.9.7-distilled: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled.license.txt)
- 13B version 0.9.7-distilled-fp8: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-fp8.license.txt)
- 13B version 0.9.7-distilled-lora128: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.license.txt)
- Temporal upscaler version 0.9.7: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-temporal-upscaler-0.9.7.license.txt)
- Spatial upscaler version 0.9.7: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-spatial-upscaler-0.9.7.license.txt)

### General tips:
* The model works on resolutions that are divisible by 32 and on frame counts of the form 8n + 1 (e.g. 257). If the resolution or number of frames does not satisfy these constraints, the input will be padded with -1 and then cropped to the desired resolution and number of frames.
* The model works best at resolutions under 720 x 1280 and with fewer than 257 frames.
* Prompts should be in English. The more elaborate, the better. A good prompt looks like `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest. The sky is cloudy, and the light is dim.`

### Online demo
The model is accessible right away via the following links:
- [LTX-Studio image-to-video (13B-mix)](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b)
- [LTX-Studio image-to-video (13B distilled)](https://app.ltx.studio/motion-workspace?videoModel=ltxv)
- [Fal.ai text-to-video](https://fal.ai/models/fal-ai/ltx-video)
- [Fal.ai image-to-video](https://fal.ai/models/fal-ai/ltx-video/image-to-video)
- [Replicate text-to-video and image-to-video](https://replicate.com/lightricks/ltx-video)

### ComfyUI
To use our model with ComfyUI, please follow the instructions in the dedicated [ComfyUI repo](https://github.com/Lightricks/ComfyUI-LTXVideo/).

### Run locally

#### Installation

The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.

```bash
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
# create env
python -m venv env
source env/bin/activate
python -m pip install -e .\[inference-script\]
```

#### Inference

To use our model, please follow the inference code in [inference.py](https://github.com/Lightricks/LTX-Video/blob/main/inference.py):

##### For text-to-video generation:

```bash
python inference.py --prompt "PROMPT" --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
```

##### For image-to-video generation:

```bash
python inference.py --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
```

### Diffusers 🧨

LTX Video is compatible with the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index). It supports both text-to-video and image-to-video generation.

Make sure you install `diffusers` before trying out the examples below.

```bash
pip install -U git+https://github.com/huggingface/diffusers
```

Now, you can run the examples below (note that the upsampling stage is optional but recommended):

### For text-to-video:

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-dev", torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipe.vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe_upsample.to("cuda")
pipe.vae.enable_tiling()

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 512, 704
downscale_factor = 2 / 3
num_frames = 121

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=None,
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]

export_to_video(video, "output.mp4", fps=24)
```
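For reference, the resolution bookkeeping in the script above works out as follows. This is a standalone arithmetic sketch; the VAE spatial compression ratio of 32 is an assumption standing in for `pipe.vae_spatial_compression_ratio`:

```python
# Sketch of the multi-scale resolution arithmetic (assumed VAE ratio of 32,
# standing in for pipe.vae_spatial_compression_ratio).
vae_ratio = 32
expected_height, expected_width = 512, 704
downscale_factor = 2 / 3

# Stage 1 renders at ~2/3 of the target size, snapped down to the VAE grid.
down_h = int(expected_height * downscale_factor) // vae_ratio * vae_ratio
down_w = int(expected_width * downscale_factor) // vae_ratio * vae_ratio

# Stage 2 upsamples the latents by exactly 2x.
up_h, up_w = down_h * 2, down_w * 2

print((down_h, down_w), (up_h, up_w))  # → (320, 448) (640, 896)
```

Because the 2x upsample overshoots the 512 x 704 target, the final PIL resize in Part 4 brings the frames back down to the expected resolution.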

### For image-to-video:

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_image, load_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-dev", torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipe.vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe_upsample.to("cuda")
pipe.vae.enable_tiling()

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png")
video = load_video(export_to_video([image]))  # compress the image using video compression as the model was trained on videos
condition1 = LTXVideoCondition(video=video, frame_index=0)

prompt = "A cute little penguin takes out a book and starts reading it"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 480, 832
downscale_factor = 2 / 3
num_frames = 96

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]

export_to_video(video, "output.mp4", fps=24)
```

### For video-to-video:

```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_video

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-dev", torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipe.vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe_upsample.to("cuda")
pipe.vae.enable_tiling()

def round_to_nearest_resolution_acceptable_by_vae(height, width):
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width

video = load_video(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
)[:21]  # Use only the first 21 frames as conditioning
condition1 = LTXVideoCondition(video=video, frame_index=0)

prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 768, 1152
downscale_factor = 2 / 3
num_frames = 161

# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=downscaled_width,
    height=downscaled_height,
    num_frames=num_frames,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(0),
    output_type="latent",
).frames

# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
    latents=latents,
    output_type="latent"
).frames

# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
    conditions=[condition1],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=upscaled_width,
    height=upscaled_height,
    num_frames=num_frames,
    denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
    num_inference_steps=10,
    latents=upscaled_latents,
    decode_timestep=0.05,
    image_cond_noise_scale=0.025,
    generator=torch.Generator().manual_seed(0),
    output_type="pil",
).frames[0]

# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]
export_to_video(video, "output.mp4", fps=24)
```
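The `denoise_strength=0.4 # Effectively, 4 inference steps out of 10` comment reflects the usual strength-to-steps convention in diffusers image/video-to-video workflows: roughly the last `strength * num_inference_steps` steps are actually run. A quick sanity check of that arithmetic (the exact rounding inside the pipeline is an assumption; this only mirrors the comment in the snippet):

```python
def effective_steps(num_inference_steps, denoise_strength):
    """Approximate number of denoising steps actually run when partially re-noising latents."""
    return min(num_inference_steps, round(num_inference_steps * denoise_strength))

print(effective_steps(10, 0.4))  # 4 inference steps out of 10, as in the snippet
```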

To learn more, check out the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).

Diffusers also supports directly loading from the original LTX checkpoints using the `from_single_file()` method. Check out [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video#loading-single-files) to learn more.