Update README.md

README.md

---
language:
- en
library_name: mir
---

<div align="center"><img src="https://github.com/darkshapes/entity-statement/raw/main/png/mir/mir300_dark.png" width="25%"></div>

#
Example:

> [!NOTE]
> # mir : model . transformer . clip-l : stable-diffusion-xl

```
mir : model . lora . hyper : flux-1
 ↑      ↑      ↑       ↑       ↑
[URI]:[Domain].[Architecture].[Series]:[Compatibility]
```

Code for this project can be found at [darkshapes/MIR on GitHub](https://github.com/darkshapes/MIR)
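The identifier layout above can be split mechanically on its `:` and `.` delimiters. A minimal sketch in Python, assuming the four-field form shown; the `MirId` type and `parse_mir` function are illustrative, not part of the mir package:

```python
from typing import NamedTuple


class MirId(NamedTuple):
    """The four fields of a MIR identifier, broad to narrow."""
    domain: str
    architecture: str
    series: str
    compatibility: str


def parse_mir(identifier: str) -> MirId:
    """Split 'mir : domain . arch . series : compat' into its fields."""
    # Colons separate scheme, body, and compatibility; whitespace is cosmetic.
    scheme, body, compatibility = (part.strip() for part in identifier.split(":"))
    if scheme != "mir":
        raise ValueError(f"not a MIR identifier: {identifier!r}")
    # Dots separate the three body fields.
    domain, architecture, series = (part.strip() for part in body.split("."))
    return MirId(domain, architecture, series, compatibility)


print(parse_mir("mir : model . lora . hyper : flux-1"))
# MirId(domain='model', architecture='lora', series='hyper', compatibility='flux-1')
```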
## Definitions:

Like other URI schemas, the order of the identifiers roughly indicates their specificity, from left (broad) to right (narrow).

### Domains

- `dev`: Varying local neural network layers; in-training, pre-release, or under-evaluation items, likely in unexpected formats<br>
- `model`: Static local neural network layers; publicly released machine learning models with an identifier in the database<br>
- `operations`: Varying global neural network attributes; algorithms, optimizations, and procedures applied to models<br>
- `info`: Static global neural network attributes; metadata with an identifier in the database<br>

### Architecture

Broad and general terms for system architectures:

- `dit`: Diffusion transformer, typically vision synthesis
- `unet`: U-Net diffusion structure
- `art`: Autoregressive transformer, typically LLMs
- `lora`: Low-Rank Adapter (may work with dit or transformer)
- `vae`: Variational Autoencoder

etc.
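Since the domain and architecture fields are drawn from small controlled vocabularies, validating them is a cheap set lookup. A sketch under the assumption that the vocabularies are exactly the terms listed above (the real list is open-ended, and `validate_fields` is a hypothetical helper):

```python
# Vocabularies limited to the terms listed in this README; the actual
# architecture set is open-ended ("etc.").
DOMAINS = {"dev", "model", "operations", "info"}
ARCHITECTURES = {"dit", "unet", "art", "lora", "vae"}


def validate_fields(domain: str, architecture: str) -> bool:
    """Check the two broadest identifier fields against the known vocabularies."""
    return domain in DOMAINS and architecture in ARCHITECTURES


print(validate_fields("model", "lora"))   # True
print(validate_fields("model", "brand"))  # False
```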
### Series

Foundational network and technique types.

### Compatibility

Implementation details based on version-breaking changes, configuration inconsistencies, or other conflicting indicators that have practical application.

### Goals

- Standard identification scheme for **ALL** fields of ML-related development
- Simplification of code for model-related logistics
- Rapid retrieval of resources and metadata
- Efficient and reliable compatibility checks
- Organized hyperparameter management
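The retrieval and compatibility goals can be pictured with a toy registry keyed by MIR identifiers. Everything below — the entries, the `requires` field, and the `compatible` helper — is invented for illustration and is not the real MIR database:

```python
# Hypothetical registry: MIR identifiers map to metadata records.
REGISTRY = {
    "mir:model.lora.hyper:flux-1": {"requires": "mir:model.dit.flux:flux-1"},
    "mir:model.dit.flux:flux-1": {"format": "safetensors"},
}


def compatible(adapter_id: str, base_id: str) -> bool:
    """An adapter is considered usable when its declared base matches the target model."""
    return REGISTRY.get(adapter_id, {}).get("requires") == base_id


print(compatible("mir:model.lora.hyper:flux-1", "mir:model.dit.flux:flux-1"))  # True
```

Because the identifier itself carries the compatibility field, a check like this stays a constant-time lookup rather than an inspection of model weights.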
> <details><summary>Why not use `diffusion`/`sgm`/`ldm`/`text`/hf.co folder structure/brand or trade name/preprint paper/development house/algorithm?</summary>
>
> - The format here isn't finalized, but overlapping resource definitions and complicated categories that are difficult to narrow have been pruned
> - Likewise, definitions that are too specific have been trimmed
> - HF.CO layouts become inconsistent across folders/files, and metadata enforcement for many important developments is neglected
> - Development credit is often shared ([paper heredity tree](https://www.connectedpapers.com/search?q=generative%20diffusion)), which gets super complicated
> - Algorithms (especially their application) are less common knowledge and vague, ~~and I'm too smooth-brain~~
> - Overall, an attempt at impartiality and neutrality with regard to brand/territory origins
> </details>

> <details><summary>Why `unet`, `dit`, `lora` over alternatives?</summary>
>
> - UNET/DiT/Transformer are shared enough to be genre-ish but not too narrowly specific
> - Very similar technical processes at this level
> - Functional and efficient for random lookups
> - Short to type
> </details>

> <details><summary>Roadmap</summary>
>
> - Decide on `@` or `:` delimiters (like `@8cfg` for an otherwise indistinguishable 8-step LoRA that requires CFG)
>   - Crucial spec element, or an optional, MIR app-determined feature?
> - Proof-of-concept generative model registry
> - Ensure compatibility/integration/cross-pollination with [OECD AI Classifications](https://oecd.ai/en/classification)
> - Ensure compatibility/integration/cross-pollination with [NIST AI 200-1 Trustworthy and Responsible AI](https://www.nist.gov/publications/ai-use-taxonomy-human-centered-approach)
> </details>

Massive thank you to [@silveroxides](https://huggingface.co/silveroxides) for phenomenal work collecting pristine state dicts and related information.