Image-to-3D · Diffusers · Safetensors · SpatialGenDiffusionPipeline

bertjiazheng committed b78ed69 · 0 parent(s)

Initial commit
.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/vis_img2scene.png filter=lfs diff=lfs merge=lfs -text
+ assets/vis_text2scene.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ license: creativeml-openrail-m
+ datasets:
+ - manycore-research/SpatialGen-Testset
+ base_model:
+ - stabilityai/stable-diffusion-2-1
+ pipeline_tag: image-to-image
+ ---
+ # SpatialGen
+
+ <!-- markdownlint-disable first-line-h1 -->
+ <!-- markdownlint-disable html -->
+ <!-- markdownlint-disable no-duplicate-header -->
+
+ <div align="center">
+ <img src="assets/logo.png" width="60%" alt="SpatialGen" />
+ </div>
+ <hr style="margin-top: 0; margin-bottom: 8px;">
+ <div align="center" style="margin-top: 0; padding-top: 0; line-height: 1;">
+ <a href="https://github.com/manycore-research/SpatialGen" target="_blank" style="margin: 2px;"><img alt="GitHub"
+ src="https://img.shields.io/badge/GitHub-SpatialGen-24292e?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
+ <a href="https://huggingface.co/manycore-research/SpatialGen-1.0" target="_blank" style="margin: 2px;"><img alt="Hugging Face"
+ src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SpatialGen-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
+ </div>
+
+ <div align="center">
+
+ | Image-to-Scene Results | Text-to-Scene Results |
+ | :--------------------------------------: | :----------------------------------------: |
+ | ![Img2Scene](./assets/vis_img2scene.png) | ![Text2Scene](./assets/vis_text2scene.png) |
+
+ <p>Given a semantic layout, SpatialGen generates multi-view, multi-modal scene information with a multi-view, multi-modal diffusion model.</p>
+ </div>
+
+ ## ✨ News
+
+ - [Aug, 2025] Initial release of SpatialGen-1.0!
+
+ ## SpatialGen Models
+
+ <div align="center">
+
+ | **Model** | **Download** |
+ | :-------------: | -------------------------------------------------------------------------- |
+ | SpatialGen-1.0 | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialGen-1.0) |
+
+ </div>
+
+ ## Usage
+
+ ### 🔧 Installation
+
+ Tested with the following environment:
+
+ * Python 3.10
+ * PyTorch 2.3.1
+ * CUDA 12.1
+
+ ```bash
+ # clone the repository
+ git clone https://github.com/manycore-research/SpatialGen.git
+ cd SpatialGen
+
+ python -m venv .venv
+ source .venv/bin/activate
+
+ pip install -r requirements.txt
+ # Optional: fix the flux inference bug (https://github.com/vllm-project/vllm/issues/4392)
+ pip install nvidia-cublas-cu12==12.4.5.8
+ ```
+
+ ### 📊 Dataset
+
+ We provide [SpatialGen-Testset](https://huggingface.co/datasets/manycore-research/SpatialGen-Testset), which contains 48 rooms labeled with 3D layouts and 4.8K rendered images (48 rooms x 100 views, covering RGB, normal, depth, and semantic maps) for MVD inference.
+
+ ### Inference
+
+ ```bash
+ # Single image-to-3D scene
+ bash scripts/infer_spatialgen_i2s.sh
+
+ # Text-to-image-to-3D scene
+ bash scripts/infer_spatialgen_t2s.sh
+ ```
+
+ ## License
+
+ [SpatialGen-1.0](https://huggingface.co/manycore-research/SpatialGen-1.0) is derived from [Stable-Diffusion-v2.1](https://github.com/Stability-AI/stablediffusion), which is licensed under the [CreativeML Open RAIL++-M License](https://github.com/Stability-AI/stablediffusion/blob/main/LICENSE-MODEL).
+
+ ## Acknowledgements
+
+ We would like to thank the following projects that made this work possible:
+
+ [DiffSplat](https://github.com/chenguolin/DiffSplat) | [SD 2.1](https://github.com/Stability-AI/stablediffusion) | [TAESD](https://github.com/madebyollin/taesd) | [SpatialLM](https://github.com/manycore-research/SpatialLM)
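The dataset figures quoted in the card can be cross-checked with a few lines of arithmetic (an editorial sketch; the counts come straight from the README):

```python
# Cross-check the SpatialGen-Testset numbers from the README:
# 48 rooms, 100 rendered views per room.
rooms = 48
views_per_room = 100

total_views = rooms * views_per_room
print(total_views)  # 4800, i.e. the "4.8K rendered images" in the card
```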
assets/logo.png ADDED
assets/vis_img2scene.png ADDED

Git LFS Details

  • SHA256: 906178bdd4eef04ebe5ba116d5089fe022fcfd7c33fedb49c549987be5106da6
  • Pointer size: 132 Bytes
  • Size of remote file: 6.77 MB
assets/vis_text2scene.png ADDED

Git LFS Details

  • SHA256: 07943d765cb860725c6b2e5ae33df7b6986f816f0efb3f6ed1ad52c6e75c17a3
  • Pointer size: 132 Bytes
  • Size of remote file: 6.47 MB
depth_vae/config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "_class_name": "AutoencoderTiny",
+ "_diffusers_version": "0.32.0",
+ "_name_or_path": "/alluxio/training/experiments/zhenqing/diffsplat/tinyvae-ckpt-wconf-016000",
+ "act_fn": "relu",
+ "block_out_channels": [
+ 64,
+ 64,
+ 64,
+ 64
+ ],
+ "decoder_block_out_channels": [
+ 64,
+ 64,
+ 64,
+ 64
+ ],
+ "encoder_block_out_channels": [
+ 64,
+ 64,
+ 64,
+ 64
+ ],
+ "force_upcast": false,
+ "in_channels": 3,
+ "latent_channels": 4,
+ "latent_magnitude": 3,
+ "latent_shift": 0.5,
+ "num_decoder_blocks": [
+ 3,
+ 3,
+ 3,
+ 1
+ ],
+ "num_encoder_blocks": [
+ 1,
+ 3,
+ 3,
+ 3
+ ],
+ "out_channels": 4,
+ "scaling_factor": 1.0,
+ "shift_factor": 0.0,
+ "upsample_fn": "nearest",
+ "upsampling_scaling_factor": 2
+ }
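A minimal sketch of what `latent_magnitude: 3` and `latent_shift: 0.5` do in an `AutoencoderTiny`. This follows the upstream TAESD convention that diffusers mirrors in `scale_latents`/`unscale_latents` (an assumption about intent, not code from this repo): raw latents in roughly [-3, 3] are squashed into [0, 1] for storage and unsquashed on the way back.

```python
# Assumed TAESD-style latent (un)scaling, per the config above:
# latent_magnitude = 3, latent_shift = 0.5.
LATENT_MAGNITUDE = 3.0
LATENT_SHIFT = 0.5

def scale_latents(x: float) -> float:
    """Map a raw latent in roughly [-3, 3] into [0, 1], clamping outliers."""
    scaled = x / (2.0 * LATENT_MAGNITUDE) + LATENT_SHIFT
    return min(max(scaled, 0.0), 1.0)

def unscale_latents(x: float) -> float:
    """Invert scale_latents for values that were not clamped."""
    return (x - LATENT_SHIFT) * 2.0 * LATENT_MAGNITUDE

for raw in (-3.0, 0.0, 1.5, 3.0):
    print(raw, unscale_latents(scale_latents(raw)))  # round-trips within [-3, 3]
```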
depth_vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd0367355d7f7708938fca214a47cc6c66beedeedcd651c4b4632d3ad76e9351
+ size 4904288
model_index.json ADDED
@@ -0,0 +1,41 @@
+ {
+ "_class_name": "SpatialGenDiffusionPipeline",
+ "_diffusers_version": "0.32.0",
+ "depth_vae": [
+ "diffusers",
+ "AutoencoderTiny"
+ ],
+ "feature_extractor": [
+ null,
+ null
+ ],
+ "ray_encoder": [
+ "src.models.pose_adapter",
+ "RayMapEncoder"
+ ],
+ "requires_safety_checker": true,
+ "safety_checker": [
+ null,
+ null
+ ],
+ "scheduler": [
+ "diffusers",
+ "DDPMScheduler"
+ ],
+ "text_encoder": [
+ "transformers",
+ "CLIPTextModel"
+ ],
+ "tokenizer": [
+ "transformers",
+ "CLIPTokenizer"
+ ],
+ "unet": [
+ "diffusers_spatialgen.models.unets.unet_mvmm2d_condition",
+ "UNetMVMM2DConditionModel"
+ ],
+ "vae": [
+ "diffusers",
+ "AutoencoderKL"
+ ]
+ }
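`model_index.json` maps each pipeline component to a `(library_or_module, class_name)` pair that diffusers uses to import and instantiate the parts of `SpatialGenDiffusionPipeline`. A small sketch that parses the mapping above and flags which components come from custom modules rather than stock `diffusers`/`transformers` (those need the code shipped with the SpatialGen repo):

```python
import json

# The component map, copied from model_index.json above
# (private "_"-prefixed keys and null entries omitted here for brevity).
model_index = json.loads("""
{
  "_class_name": "SpatialGenDiffusionPipeline",
  "depth_vae": ["diffusers", "AutoencoderTiny"],
  "ray_encoder": ["src.models.pose_adapter", "RayMapEncoder"],
  "scheduler": ["diffusers", "DDPMScheduler"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "unet": ["diffusers_spatialgen.models.unets.unet_mvmm2d_condition", "UNetMVMM2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
""")

components = {
    name: spec
    for name, spec in model_index.items()
    if not name.startswith("_") and isinstance(spec, list)
}
# Components whose loader module is not a stock library.
custom = {name for name, (module, _cls) in components.items()
          if module not in ("diffusers", "transformers")}
print(sorted(custom))  # ['ray_encoder', 'unet']
```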
ray_encoder/config.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "architectures": [
+ "RayMapEncoder"
+ ],
+ "image_size": 512,
+ "in_channel": 6,
+ "inter_dims": 384,
+ "model_type": "ray_map_encoder",
+ "out_channel": 16,
+ "patch_size": 8,
+ "torch_dtype": "float16",
+ "transformer_dim_head": 64,
+ "transformer_heads": 6,
+ "transformer_layers": 1,
+ "transformer_mlp_dim": 384,
+ "transformers_version": "4.51.3"
+ }
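An editorial arithmetic check on the `RayMapEncoder` config above. It assumes the 6 input channels are a per-pixel ray parameterization (e.g. Plücker coordinates, consistent with `input_concat_plucker` in the UNet config); the other numbers follow directly from the config:

```python
# Values copied from ray_encoder/config.json above.
image_size, patch_size = 512, 8
in_channel, out_channel = 6, 16
inter_dims, heads, dim_head = 384, 6, 64

tokens_per_axis = image_size // patch_size        # 64 patches per axis
num_tokens = tokens_per_axis ** 2                 # 4096 patch tokens total
patch_dim = in_channel * patch_size * patch_size  # 384 raw values per patch

# The attention width (heads x dim_head) matches inter_dims, and the raw
# patch dimension happens to equal it as well.
print(num_tokens, patch_dim, heads * dim_head)
```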
ray_encoder/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7103d562a6d769d379fb3eb70c72d233cea8054808fbf9c95ce7ae6c576fd33
+ size 2087952
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,21 @@
+ {
+ "_class_name": "DDPMScheduler",
+ "_diffusers_version": "0.32.0",
+ "beta_end": 0.012,
+ "beta_schedule": "scaled_linear",
+ "beta_start": 0.00085,
+ "clip_sample": false,
+ "clip_sample_range": 1.0,
+ "dynamic_thresholding_ratio": 0.995,
+ "num_train_timesteps": 1000,
+ "prediction_type": "v_prediction",
+ "rescale_betas_zero_snr": false,
+ "sample_max_value": 1.0,
+ "set_alpha_to_one": false,
+ "skip_prk_steps": true,
+ "steps_offset": 1,
+ "thresholding": false,
+ "timestep_spacing": "leading",
+ "trained_betas": null,
+ "variance_type": "fixed_small"
+ }
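A minimal sketch of the `scaled_linear` beta schedule declared above (`beta_start=0.00085`, `beta_end=0.012`, 1000 steps): betas are linear in sqrt space, matching the DDPMScheduler convention in diffusers. Note the config also sets `prediction_type: "v_prediction"`, so the UNet predicts v rather than the noise directly.

```python
import math

def scaled_linear_betas(beta_start: float, beta_end: float, n: int) -> list[float]:
    """Betas interpolated linearly between sqrt(beta_start) and sqrt(beta_end),
    then squared -- the "scaled_linear" schedule."""
    start, end = math.sqrt(beta_start), math.sqrt(beta_end)
    return [(start + (end - start) * i / (n - 1)) ** 2 for i in range(n)]

betas = scaled_linear_betas(0.00085, 0.012, 1000)

# Cumulative product of alphas (alpha_bar), used to weight noise at step t.
alphas_cumprod = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alphas_cumprod.append(prod)

print(betas[0], betas[-1], alphas_cumprod[-1])
```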
scm_vae/config.json ADDED
@@ -0,0 +1,38 @@
+ {
+ "_class_name": "AutoencoderKL",
+ "_diffusers_version": "0.32.0",
+ "_name_or_path": "stable-diffusion-v1-5/stable-diffusion-v1-5",
+ "act_fn": "silu",
+ "block_out_channels": [
+ 128,
+ 256,
+ 512,
+ 512
+ ],
+ "down_block_types": [
+ "DownEncoderBlock2D",
+ "DownEncoderBlock2D",
+ "DownEncoderBlock2D",
+ "DownEncoderBlock2D"
+ ],
+ "force_upcast": true,
+ "in_channels": 3,
+ "latent_channels": 4,
+ "latents_mean": null,
+ "latents_std": null,
+ "layers_per_block": 2,
+ "mid_block_add_attention": true,
+ "norm_num_groups": 32,
+ "out_channels": 4,
+ "sample_size": 512,
+ "scaling_factor": 0.18215,
+ "shift_factor": null,
+ "up_block_types": [
+ "UpDecoderBlock2D",
+ "UpDecoderBlock2D",
+ "UpDecoderBlock2D",
+ "UpDecoderBlock2D"
+ ],
+ "use_post_quant_conv": true,
+ "use_quant_conv": true
+ }
scm_vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29d8856dbd51d87b31f6e63294bad931061a4b8af0ad5f8fdd7cd6b5750f24c6
+ size 334647888
text_encoder/config.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "architectures": [
+ "CLIPTextModel"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 0,
+ "dropout": 0.0,
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_size": 1024,
+ "initializer_factor": 1.0,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 77,
+ "model_type": "clip_text_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 23,
+ "pad_token_id": 1,
+ "projection_dim": 512,
+ "torch_dtype": "float16",
+ "transformers_version": "4.51.3",
+ "vocab_size": 49408
+ }
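A quick consistency check across the configs in this commit (values copied from the files shown here): the CLIP text encoder's `hidden_size` must match the UNet's `cross_attention_dim`, and its `max_position_embeddings` must match the tokenizer's `model_max_length`.

```python
# Values copied from text_encoder/config.json, unet/config.json and
# tokenizer/tokenizer_config.json in this commit.
text_hidden_size = 1024
unet_cross_attention_dim = 1024
max_position_embeddings = 77
tokenizer_model_max_length = 77

assert text_hidden_size == unet_cross_attention_dim
assert max_position_embeddings == tokenizer_model_max_length
print("text-encoder / UNet / tokenizer dims are consistent")
```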
text_encoder/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc1827c465450322616f06dea41596eac7d493f4e95904dcb51f0fc745c4e13f
+ size 680820392
tokenizer/merges.txt ADDED
The diff for this file is too large to render.
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "!",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "!",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49406": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49407": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<|startoftext|>",
+ "clean_up_tokenization_spaces": false,
+ "do_lower_case": true,
+ "eos_token": "<|endoftext|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 77,
+ "pad_token": "!",
+ "tokenizer_class": "CLIPTokenizer",
+ "unk_token": "<|endoftext|>"
+ }
tokenizer/vocab.json ADDED
The diff for this file is too large to render.
unet/config.json ADDED
@@ -0,0 +1,84 @@
+ {
+ "_class_name": "UNetMVMM2DConditionModel",
+ "_diffusers_version": "0.32.0",
+ "_name_or_path": "/alluxio/training/experiments/zhenqing/diffsplat/out/abla_sd21_8rgbsscm_256_wlay_ww/pipeline/pipeline-016000",
+ "act_fn": "silu",
+ "addition_embed_type": null,
+ "addition_embed_type_num_heads": 64,
+ "addition_time_embed_dim": null,
+ "attention_head_dim": [
+ 5,
+ 10,
+ 20,
+ 20
+ ],
+ "attention_type": "default",
+ "block_out_channels": [
+ 320,
+ 640,
+ 1280,
+ 1280
+ ],
+ "cd_attention_mid": true,
+ "center_input_sample": false,
+ "class_embed_type": "projection",
+ "class_embeddings_concat": false,
+ "conv_in_kernel": 3,
+ "conv_out_kernel": 3,
+ "cross_attention_dim": 1024,
+ "cross_attention_norm": null,
+ "disable_mv_attention_in_64x64": true,
+ "down_block_types": [
+ "CrossAttnDownBlockMVMM2D",
+ "CrossAttnDownBlockMVMM2D",
+ "CrossAttnDownBlockMVMM2D",
+ "DownBlock2D"
+ ],
+ "downsample_padding": 1,
+ "dropout": 0.0,
+ "dual_cross_attention": false,
+ "encoder_hid_dim": null,
+ "encoder_hid_dim_type": null,
+ "flip_sin_to_cos": true,
+ "freq_shift": 0,
+ "in_channels": 25,
+ "input_concat_binary_mask": true,
+ "input_concat_plucker": true,
+ "input_concat_warpped_image": true,
+ "layers_per_block": 2,
+ "mid_block_only_cross_attention": null,
+ "mid_block_scale_factor": 1,
+ "mid_block_type": "UNetMidBlockMVMM2DCrossAttn",
+ "multiview_attention": true,
+ "norm_eps": 1e-05,
+ "norm_num_groups": 32,
+ "num_attention_heads": null,
+ "num_class_embeds": null,
+ "num_input_views": 1,
+ "num_output_views": 7,
+ "num_tasks": 5,
+ "only_cross_attention": false,
+ "out_channels": 4,
+ "projection_class_embeddings_input_dim": 8,
+ "resnet_out_scale_factor": 1.0,
+ "resnet_skip_time_act": false,
+ "resnet_time_scale_shift": "default",
+ "reverse_transformer_layers_per_block": null,
+ "sample_size": 64,
+ "sparse_mv_attention": false,
+ "time_cond_proj_dim": null,
+ "time_embedding_act_fn": null,
+ "time_embedding_dim": null,
+ "time_embedding_type": "positional",
+ "timestep_post_act": null,
+ "transformer_layers_per_block": 1,
+ "up_block_types": [
+ "UpBlock2D",
+ "CrossAttnUpBlockMVMM2D",
+ "CrossAttnUpBlockMVMM2D",
+ "CrossAttnUpBlockMVMM2D"
+ ],
+ "upcast_attention": true,
+ "use_linear_projection": true,
+ "view_concat_condition": true
+ }
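The three `input_concat_*` flags together with `in_channels: 25` suggest the following channel accounting (an editorial assumption pieced together from the configs in this commit, not repo code): 4 noisy latent channels, plus 4 latent channels for the warped conditioning image (`input_concat_warpped_image`), plus 1 binary mask channel (`input_concat_binary_mask`), plus the 16-channel ray embedding produced by `RayMapEncoder` (`input_concat_plucker`, `out_channel: 16`).

```python
# Hypothesized breakdown of the UNet's 25 input channels,
# based on the flags in unet/config.json above.
noisy_latent = 4         # VAE latent_channels
warped_image_latent = 4  # warped conditioning image, also in latent space
binary_mask = 1          # input_concat_binary_mask
ray_embedding = 16       # RayMapEncoder out_channel (input_concat_plucker)

in_channels = noisy_latent + warped_image_latent + binary_mask + ray_embedding
print(in_channels)  # 25, matching "in_channels" in the config
```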
unet/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ba49c80ed2b92db8afddaa4528da0df3eec6c6535e7be0f053b896fd308de8e
+ size 1834542936
vae/config.json ADDED
@@ -0,0 +1,38 @@
+ {
+ "_class_name": "AutoencoderKL",
+ "_diffusers_version": "0.32.0",
+ "_name_or_path": "/alluxio/training/experiments/zhenqing/diffsplat/out/abla_sd21_8rgbsscm_256_wlay_ww/pipeline/pipeline-016000",
+ "act_fn": "silu",
+ "block_out_channels": [
+ 128,
+ 256,
+ 512,
+ 512
+ ],
+ "down_block_types": [
+ "DownEncoderBlock2D",
+ "DownEncoderBlock2D",
+ "DownEncoderBlock2D",
+ "DownEncoderBlock2D"
+ ],
+ "force_upcast": true,
+ "in_channels": 3,
+ "latent_channels": 4,
+ "latents_mean": null,
+ "latents_std": null,
+ "layers_per_block": 2,
+ "mid_block_add_attention": true,
+ "norm_num_groups": 32,
+ "out_channels": 3,
+ "sample_size": 768,
+ "scaling_factor": 0.18215,
+ "shift_factor": null,
+ "up_block_types": [
+ "UpDecoderBlock2D",
+ "UpDecoderBlock2D",
+ "UpDecoderBlock2D",
+ "UpDecoderBlock2D"
+ ],
+ "use_post_quant_conv": true,
+ "use_quant_conv": true
+ }
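A short sketch of the latent geometry implied by the VAE config above: four encoder blocks give 2^(4-1) = 8x spatial downsampling, so a 768x768 input maps to 96x96x4 latents, which are multiplied by `scaling_factor = 0.18215` before entering the UNet (the standard Stable Diffusion convention).

```python
# Values copied from vae/config.json above.
sample_size = 768
num_blocks = 4        # len(block_out_channels)
latent_channels = 4
scaling_factor = 0.18215

downsample = 2 ** (num_blocks - 1)   # each block after the first halves H and W
latent_size = sample_size // downsample
print(latent_size, latent_channels)  # 96 4
```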
vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e4c08995484ee61270175e9e7a072b66a6e4eeb5f0c266667fe1f45b90daf9a
+ size 167335342