HybriKo-117M Function Calling

A HybriKo-117M (checkpoint 1962) model fine-tuned on function-calling data.

Training Details

  • Base Model: Yaongi/hybridko-exp6
  • Dataset: heegyu/glaive-function-calling-v2-ko (5,000 samples)
  • Epochs: 2
  • Final Loss: ~0.14
  • Performance: κΈ°λ³Έ 포맷 ν•™μŠ΅ μ™„λ£Œ (Calculation, Search, Weather λ“± 지원)

Usage (Colab)
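
In Colab, install the dependencies first (torch is preinstalled in the Colab runtime; versions are left unpinned):

!pip install -q transformers sentencepiece huggingface_hub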

import torch
import torch.nn.functional as F
import sentencepiece as spm
from transformers import AutoModelForCausalLM
from huggingface_hub import hf_hub_download

# 1. λͺ¨λΈ λ‘œλ“œ
print("πŸ“₯ Model loading...")
model = AutoModelForCausalLM.from_pretrained(
    "Yaongi/HybriKo-117M-Exp6-FunctionCall",
    trust_remote_code=True,
    torch_dtype=torch.float32
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
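
# Optional sanity check: the parameter count should land near 117M
# (computed from the loaded weights, not hard-coded).
print(f"Parameters: {sum(p.numel() for p in model.parameters()) / 1e6:.1f}M")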

# 2. ν† ν¬λ‚˜μ΄μ € λ‘œλ“œ
print("πŸ“₯ Tokenizer loading...")
sp_path = hf_hub_download("Yaongi/HybriKo-117M-Exp6-FunctionCall", "HybriKo_tok.model")
sp = spm.SentencePieceProcessor()
sp.Load(sp_path)
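
# Optional sanity check: report the tokenizer's vocabulary size, which
# should match the model's embedding table.
print(f"Vocab size: {sp.GetPieceSize()}")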

# 3. Generation function (with stop logic)
def generate(text, max_len=200, temp=0.01, top_k=1):
    input_ids = torch.tensor([[sp.bos_id()] + sp.EncodeAsIds(text)]).to(device)
    prompt_len = input_ids.size(1)  # remember where the prompt ends
    
    # 쀑지 ν…μŠ€νŠΈ 리슀트
    stop_sequences = ["<|im_end|>", "</tool_code>"]
    
    print("πŸ€– Generating...", end="", flush=True)
    with torch.no_grad():
        for _ in range(max_len):
            outputs = model(input_ids[:, -512:])  # cap the context at the last 512 tokens
            logits = outputs.logits[:, -1] / temp
            
            if top_k:
                # Keep only the top-k logits; with the defaults (top_k=1,
                # temp=0.01) the sampling below is effectively greedy.
                v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
                logits[logits < v[:, [-1]]] = float("-inf")
            
            probs = F.softmax(logits, dim=-1)
            next_token = torch.multinomial(probs, 1)
            
            # Stop on the EOS token
            if next_token.item() == sp.eos_id():
                break
            
            input_ids = torch.cat([input_ids, next_token], dim=1)
            
            # πŸ’‘ Check the stop sequences by decoding only the newly generated
            # tokens. Slicing the full decoded text by len(text) is unreliable
            # (SentencePiece does not round-trip the prompt exactly), and
            # searching the full text would never fire for <|im_end|>, which
            # already appears in the prompt itself.
            gen_text = sp.DecodeIds(input_ids[0, prompt_len:].tolist())
            if any(seq in gen_text for seq in stop_sequences):
                break
                
    return sp.DecodeIds(input_ids[0].tolist())

# 4. Example run: the system prompt declares the available tool; the user
#    turn asks (in Korean) for the latest news headlines from Korea.
prompt = '''<|im_start|>system
당신은 도ꡬ 호좜(function calling)이 κ°€λŠ₯ν•œ AI μ–΄μ‹œμŠ€ν„΄νŠΈμž…λ‹ˆλ‹€.
<tools>
{"name": "get_news_headlines", "parameters": {"country": "string"}}
</tools><|im_end|>
<|im_start|>user
ν•œκ΅­μ˜ μ΅œμ‹  λ‰΄μŠ€ μ•Œλ €μ€˜<|im_end|>
<|im_start|>assistant
'''

print("\nPrompt:")
print(prompt)

result = generate(prompt, max_len=200)

# 좜λ ₯ κΉ”λ”ν•˜κ²Œ 정리
print("\n" + "="*50)
print("Result:")
print(result)
print("="*50)
