SMOLTRACE Logo

Tiny Agents. Total Visibility.

GitHub PyPI Documentation


SMOLTRACE Leaderboard

This dataset contains aggregated evaluation metrics for comparing model performance across SMOLTRACE benchmark runs.

Dataset Information

| Field | Value |
|---|---|
| Owner | Kiy-K |
| Updated | 2025-11-25 14:02:29 UTC |
| Purpose | Model comparison and ranking |

Schema

Identification

| Column | Type | Description |
|---|---|---|
| run_id | string | Unique run identifier |
| model | string | Model name/identifier |
| agent_type | string | Agent type ("tool", "code", "both") |
| provider | string | Model provider (litellm, openai, etc.) |
| timestamp | string | Evaluation timestamp |
| submitted_by | string | HuggingFace username |
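
Since every leaderboard row is keyed by these columns, a quick way to slice the table is to filter on them. A minimal sketch, assuming the dataset ID from the Usage section below; the model and agent_type values shown here are just ones that appear in the leaderboard at the time of writing:

from datasets import load_dataset

# Load the leaderboard split and keep rows for one model / agent type
ds = load_dataset("Kiy-K/smoltrace-leaderboard", split="train")
subset = ds.filter(
    lambda row: row["model"] == "Kiy-K/Fyodor-Q3-8B-Instruct"
    and row["agent_type"] == "both"
)
for row in subset:
    print(row["run_id"], row["timestamp"], row["submitted_by"])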

Dataset References

| Column | Type | Description |
|---|---|---|
| results_dataset | string | Link to results dataset |
| traces_dataset | string | Link to traces dataset |
| metrics_dataset | string | Link to metrics dataset |
| dataset_used | string | Source benchmark dataset |
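
Each of these reference columns holds a Hugging Face dataset ID, so you can drill down from an aggregated leaderboard row into its per-test artifacts. A hedged sketch; it assumes the referenced results dataset exposes a "train" split, which this card does not guarantee:

from datasets import load_dataset

# Pick a leaderboard row, then load the detailed results dataset it references
leaderboard = load_dataset("Kiy-K/smoltrace-leaderboard", split="train")
row = leaderboard[0]

results = load_dataset(row["results_dataset"], split="train")
print(f"run {row['run_id']}: {results.num_rows} per-test result rows")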

Performance Metrics

| Column | Type | Description |
|---|---|---|
| total_tests | int | Number of test cases |
| successful_tests | int | Passed tests |
| failed_tests | int | Failed tests |
| success_rate | float | Success percentage (0-100) |
| avg_steps | float | Average agent steps per test |
| avg_duration_ms | float | Average execution time (ms) |
| total_duration_ms | float | Total evaluation time (ms) |
| total_tokens | int | Total tokens consumed |
| avg_tokens_per_test | int | Average tokens per test |
| total_cost_usd | float | Total API cost (USD) |
| avg_cost_per_test_usd | float | Average cost per test (USD) |
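
Because the table stores both totals and per-test averages, simple efficiency metrics can be derived directly from these columns. A small sketch using the same loading pattern as the Usage section; the derived column names are arbitrary, not part of the schema:

from datasets import load_dataset

df = load_dataset("Kiy-K/smoltrace-leaderboard", split="train").to_pandas()

# Cost and token efficiency relative to tests that actually passed
# (runs with zero successful tests become NaN instead of dividing by zero)
passed = df["successful_tests"].replace(0, float("nan"))
df["cost_per_success_usd"] = df["total_cost_usd"] / passed
df["tokens_per_success"] = df["total_tokens"] / passed

print(df[["run_id", "success_rate", "cost_per_success_usd", "tokens_per_success"]])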

Environmental Impact

| Column | Type | Description |
|---|---|---|
| co2_emissions_g | float | Total CO2 emissions (gCO2e) |
| power_cost_total_usd | float | Total power cost (USD) |
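
These totals can be normalized per test in the same way, for example to compare the power cost of runs with different test counts. A minimal sketch under the same assumptions as above (it also assumes total_tests is nonzero):

from datasets import load_dataset

ds = load_dataset("Kiy-K/smoltrace-leaderboard", split="train")
for row in ds:
    # Per-test power cost; total_tests is assumed to be > 0
    per_test_power = row["power_cost_total_usd"] / row["total_tests"]
    print(f"{row['run_id']}: {row['co2_emissions_g']} gCO2e, "
          f"${per_test_power:.4f} power cost per test")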

GPU Metrics (if available)

| Column | Type | Description |
|---|---|---|
| gpu_utilization_avg | float | Average GPU utilization (%) |
| gpu_utilization_max | float | Peak GPU utilization (%) |
| gpu_memory_avg_mib | float | Average GPU memory (MiB) |
| gpu_memory_max_mib | float | Peak GPU memory (MiB) |
| gpu_temperature_avg | float | Average GPU temperature (°C) |
| gpu_temperature_max | float | Peak GPU temperature (°C) |
| gpu_power_avg_w | float | Average GPU power (W) |
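
Since these columns are only populated when GPU telemetry was collected, any aggregation should tolerate missing values. A hedged sketch; it assumes the columns are present but null for runs without GPU metrics, which this card does not specify:

from datasets import load_dataset

df = load_dataset("Kiy-K/smoltrace-leaderboard", split="train").to_pandas()

gpu_cols = [
    "gpu_utilization_avg", "gpu_memory_avg_mib",
    "gpu_temperature_avg", "gpu_power_avg_w",
]
# Keep only rows that actually report GPU telemetry before summarizing
gpu_runs = df.dropna(subset=gpu_cols)
print(gpu_runs[gpu_cols].describe())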

Usage

from datasets import load_dataset
import pandas as pd

# Load leaderboard
ds = load_dataset("Kiy-K/smoltrace-leaderboard")
df = pd.DataFrame(ds['train'])

# Rank by success rate
df_ranked = df.sort_values('success_rate', ascending=False)
print(df_ranked[['model', 'success_rate', 'avg_duration_ms', 'total_cost_usd']])

# Compare models
top_models = df_ranked.head(10)
print("Top 10 Models by Success Rate:")
for i, row in top_models.iterrows():
    print(f"  {row['model']}: {row['success_rate']:.1f}%")

Contributing Results

Run your own evaluations to add to this leaderboard:

pip install smoltrace
smoltrace-eval --model your-model --provider litellm

About SMOLTRACE

SMOLTRACE is a benchmarking and evaluation framework for Smolagents, HuggingFace's lightweight agent library.

Key Features

  • Automated agent evaluation with customizable test cases
  • OpenTelemetry-based tracing for detailed execution insights
  • GPU metrics collection (utilization, memory, temperature, power)
  • CO2 emissions and power cost tracking
  • Leaderboard aggregation and comparison

Quick Links

Installation

pip install smoltrace

Citation

If you use SMOLTRACE in your research, please cite:

@software{smoltrace,
  title = {SMOLTRACE: Benchmarking Framework for Smolagents},
  author = {Thakkar, Kshitij},
  url = {https://github.com/Mandark-droid/SMOLTRACE},
  year = {2025}
}

Generated by SMOLTRACE