---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
- Delta-Vector/Shimamura-70B
- TheDrummer/Anubis-70B-v1.1
base_model_relation: merge
license: unknown
thumbnail: https://huggingface.co/ddh0/Cassiopeia-70B/resolve/main/cassiopeia.png
---

# Cassiopeia-70B

![Cassiopeia-70B](cassiopeia.png)

**Cassiopeia-70B** is the result of an experimental multi-step SLERP merge of [Llama-3.3-70B-Instruct](https://hf.co/meta-llama/Llama-3.3-70B-Instruct), [Shimamura-70B](https://hf.co/Delta-Vector/Shimamura-70B), and [Anubis-70B-v1.1](https://hf.co/TheDrummer/Anubis-70B-v1.1). It is a coherent, unaligned model intended for creative tasks such as storywriting, brainstorming, and interactive roleplay.

## Merge composition

### Intermediate model

Stay close to Anubis v1.1, but step 20% of the way toward stock Llama-3.3-70B-Instruct:

```yaml
models:
  - model: /opt/workspace/hf/Anubis-70B-v1.1
  - model: /opt/workspace/hf/Llama-3.3-70B-Instruct
merge_method: slerp
base_model: /opt/workspace/hf/Anubis-70B-v1.1
parameters:
  t: 0.2
dtype: bfloat16
```

### Final model

Pull the middle layers of the intermediate model toward Shimamura while leaving the first and last layers unchanged:

```yaml
models:
  - model: /opt/workspace/hf/Anubis-70B-v1.1-0.8x
  - model: /opt/workspace/hf/Shimamura-70B
merge_method: slerp
base_model: /opt/workspace/hf/Anubis-70B-v1.1-0.8x
parameters:
  t: [0.0, 0.5, 1.0, 0.5, 0.0]
dtype: bfloat16
```

## Feedback

If you like this model, please support one of the original model creators:

- [Delta-Vector on ko-fi](https://ko-fi.com/deltavector)
- [TheDrummer on Patreon](https://www.patreon.com/TheDrummer)

Feedback on this merge is very welcome, good or bad! Please leave a comment in this discussion with your thoughts: [Cassiopeia-70B/discussions/1](https://huggingface.co/ddh0/Cassiopeia-70B/discussions/1)
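
## How the SLERP steps work

As a rough intuition for the two merge steps above, here is a minimal NumPy sketch. This is **not** mergekit's actual code: the function names, the even spacing of the anchor points, and the 80-layer count are illustrative assumptions. It shows plain spherical interpolation between two flattened weight tensors, and how a `t` curve like `[0.0, 0.5, 1.0, 0.5, 0.0]` can be read as anchor points stretched across the layer stack, so the first and last layers keep the base model (t=0) while the middle layers take the other model fully (t=1).

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.
    Illustrative sketch only: real implementations handle more edge cases."""
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    if abs(dot) > 0.9995:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

def layer_t(layer_idx, num_layers, anchors=(0.0, 0.5, 1.0, 0.5, 0.0)):
    """Map a layer index to an interpolation weight by linearly
    interpolating between evenly spaced anchor t-values (an assumption
    about how the t-curve is applied, for illustration)."""
    pos = layer_idx / max(num_layers - 1, 1) * (len(anchors) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(anchors) - 1)
    frac = pos - lo
    return anchors[lo] * (1 - frac) + anchors[hi] * frac

# First and last layers of an (assumed) 80-layer stack stay at t=0,
# i.e. they keep the base model untouched; the middle approaches t=1.
print(layer_t(0, 80), layer_t(40, 80), layer_t(79, 80))
```

The intermediate model corresponds to calling `slerp(0.2, anubis_w, llama_w)` on every tensor, while the final model varies `t` per layer via the anchor curve, which is why the "ends" of the network are left as pure intermediate-model weights.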