AI & ML interests

None defined yet.

Recent Activity

ronantakizawa 
posted an update 3 days ago
Introducing the github-top-projects dataset: 423,098 GitHub trending repository entries spanning 12+ years (August 2013 to November 2025).

This dataset captures the evolution of GitHub's trending repositories, providing insights into software development trends across programming languages and domains, popular open-source projects and their trending patterns, and shifts in developer focus over those 12 years.

ronantakizawa/github-top-projects
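
For a quick look, a minimal loading sketch (assuming the default config and a "train" split):

```python
from datasets import load_dataset

# Pull the dataset; the "train" split name is an assumption.
ds = load_dataset("ronantakizawa/github-top-projects", split="train")
print(ds[0])
```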

#github #softwareengineering
sergiopaniego 
posted an update 3 days ago
Want to get started with fine-tuning but don’t know where to begin? 🤓☝️

We’re expanding our collection of beginner-friendly free Colab notebooks so you can learn and fine-tune models using TRL at no cost.

🔬 Check out the full list of free notebooks: https://huggingface.co/docs/trl/main/en/example_overview#notebooks

🔬 If you want more advanced content, we also have a lot to cover in the community tutorials: https://huggingface.co/docs/trl/community_tutorials
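
If you'd rather see the shape of a run before opening a notebook, here's a minimal SFT sketch with TRL (the model and dataset are placeholders, not recommendations):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A minimal SFT run: small instruct model + a public demo dataset.
# Swap in your own model and data.
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    args=SFTConfig(output_dir="sft-demo"),
    train_dataset=dataset,
)
trainer.train()
```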

And now the obvious question: what would you like us to add next?
ronantakizawa 
posted an update 5 days ago
Introducing the twitter-trending-hashtags dataset, a compilation of 12,000+ unique trending hashtags on Twitter / X from 2020 to 2025. This dataset captures viral and cultural moments on Twitter / X and is perfect for researchers studying viral content patterns on social media.

ronantakizawa/twitter-trending-hashtags

#twitter #trends #socialmedia
sergiopaniego 
posted an update 5 days ago
NEW: @mistralai released a fantastic family of multimodal models, Ministral 3.

You can fine-tune them for free on Colab using TRL ⚡️, with support for both SFT and GRPO.

Link to the notebooks:
- SFT: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_ministral3_vl.ipynb
- GRPO: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_ministral3_vl.ipynb
- TRL and more examples: https://huggingface.co/docs/trl/index
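
For a feel of the GRPO side, a minimal text-only sketch (toy reward, placeholder model and dataset; the notebooks above cover the actual Ministral 3 multimodal setup):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: favor completions close to 20 characters.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo"),
    train_dataset=dataset,
)
trainer.train()
```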
sergiopaniego 
posted an update 6 days ago
ronantakizawa 
posted an update 7 days ago
Introducing the tiktok-trending-hashtags dataset: a compilation of 1,830 unique trending hashtags on TikTok from 2022 to 2025. This dataset captures one-time and seasonal viral moments on TikTok and is perfect for researchers, marketers, and content creators studying viral content patterns on social media.

ronantakizawa/tiktok-trending-hashtags
#tiktok #trends #social-media
sergiopaniego 
posted an update 7 days ago
Want to use open models easily through an API?

Inference Providers might be exactly what you're looking for, so here's a complete beginner-friendly walkthrough 🧐

https://www.youtube.com/watch?v=oxwsizy1Spw
flozi00 
posted an update 7 days ago
We recently discussed how Tensor Parallelism slices matrices to reduce latency within a single node. But what happens when you need to scale beyond that, where the bandwidth drops?

That is where Pipeline Parallelism (PP) takes over.

Instead of slicing the operation, PP slices the model depth. It turns your GPU cluster into an assembly line: GPU 0 handles layers 1-12, GPU 1 handles 13-24, and so on.

The hardware challenge here isn't the interconnect speed—it is the "Pipeline Bubble." In a naive setup, expensive H100s sit idle for most of the cycle waiting for data to flow through the chain.

My latest guide breaks down the scheduling strategies used to minimize this idle silicon time.

In this deep dive, we cover:

The Hardware Mechanics: Vertical Slicing
Unlike TP which requires "chatty" All-Reduce operations, PP relies on lightweight Point-to-Point (Send/Recv) communication. This makes it the only viable strategy for crossing node boundaries over Ethernet or InfiniBand.
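
Conceptually, a stage's step is just a recv, a local forward, and a send. A minimal sketch, assuming an initialized process group (e.g. via torchrun), one stage per rank, and pre-agreed activation shapes; load_microbatch is hypothetical:

```python
import torch
import torch.distributed as dist

def forward_stage(stage_module, rank, world_size, act_shape):
    if rank == 0:
        x = load_microbatch()          # hypothetical input loader
    else:
        x = torch.empty(act_shape)     # pre-allocated receive buffer
        dist.recv(x, src=rank - 1)     # blocking point-to-point receive
    y = stage_module(x)                # run this stage's slice of layers
    if rank < world_size - 1:
        dist.send(y, dst=rank + 1)     # blocking point-to-point send
    return y
```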

Fighting the Bubble: 1F1B vs. GPipe
We analyze the scheduling algorithms that keep the GPUs fed:

- GPipe: The "flush and fill" approach. Simple, but memory-intensive.
- 1F1B (One-Forward-One-Backward): The industry standard. By interleaving forward and backward passes, we aggressively free up memory and reduce the bubble size.

The Math of Efficiency
The "Bubble" is a mathematical inevitability. We look at the efficiency formula
M+N−1
M

to understand why you need massive global batch sizes to make PP worth the effort.

The article includes a conceptual PyTorch implementation of the 1F1B state machine to illustrate exactly how the data is handed off between stages.
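
To give a flavor, here is a minimal single-process simulation of the 1F1B ordering for one stage (a sketch, not the article's implementation):

```python
# Emit the 1F1B instruction stream for one stage.
# stage: this rank's position (0-indexed), num_stages: pipeline depth (N),
# num_microbatches: micro-batches per optimizer step (M).
def one_f_one_b(stage, num_stages, num_microbatches):
    # Warmup: run enough forwards that the last stage can start its
    # first backward immediately.
    warmup = min(num_stages - stage - 1, num_microbatches)
    steady = num_microbatches - warmup
    schedule = [("F", mb) for mb in range(warmup)]
    # Steady state: alternate one forward with one backward.
    for mb in range(steady):
        schedule.append(("F", warmup + mb))
        schedule.append(("B", mb))
    # Cooldown: drain the remaining backwards.
    schedule += [("B", steady + mb) for mb in range(warmup)]
    return schedule

print(one_f_one_b(stage=0, num_stages=4, num_microbatches=8))
# [('F', 0), ('F', 1), ('F', 2), ('F', 3), ('B', 0), ('F', 4), ('B', 1), ...]
```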

Read the full breakdown here:
https://flozi.net/en/guides/ai/scaling/pipeline_parallel
ronantakizawa 
posted an update 10 days ago
Reached 2500+ total downloads across my models and datasets! 🎉

Follow me for more @ronantakizawa
sergiopaniego 
posted an update 11 days ago
nanochat is now in transformers!

The LLM by @karpathy is officially in the library, and we wrote a blog covering how we ported the model, how it differs from the original, and how to run or train it.

go read it 🤓

nanochat-students/transformers
ronantakizawa 
posted an update 12 days ago
Introducing the india-trending-words dataset: a compilation of 900 trending Google searches from 2006 to 2024, based on https://trends.withgoogle.com. This dataset captures search trends across 80 categories and is perfect for analyzing cultural shifts and predicting future trends in India.

#india #indiadataset #googlesearches

ronantakizawa/india-trending-words
Nymbo 
posted an update 12 days ago
🚀 I've just shipped a major update to the Nymbo/Tools MCP server: the Agent_Terminal, a single "master tool" that cuts token usage by over 90%!

Anthropic found 98.7% context savings using code execution with MCP, and Cloudflare published similar findings. This is my open-source implementation of the same idea.

# The Problem

Traditional MCP exposes every tool definition directly to the model. With 12 tools, that's thousands of tokens consumed *before the conversation even starts*. Each tool call also passes intermediate results through the context window — a 10,000-row spreadsheet? That's all going into context just to sum a column.

# The Solution: One Tool to Rule Them All

Agent_Terminal wraps all 12 tools (Web_Search, Web_Fetch, File_System, Generate_Image, Generate_Speech, Generate_Video, Deep_Research, Memory_Manager, Obsidian_Vault, Shell_Command, Code_Interpreter) into a single Python code execution gateway.

Instead of the model making individual tool calls, it writes Python code that orchestrates the tools directly:

```python
# Search for Bitcoin price
result = Web_Search("current price of bitcoin", max_results=3)
print(result)
```


Don't know what tools are available? The agent can discover them at runtime:

```python
print(search_tools('image'))    # Find tools by keyword
print(usage('Generate_Image'))  # Get full docs for a specific tool
```
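
The payoff is composition: intermediate data stays inside the sandbox, and only the final answer re-enters the context. A hypothetical example (the File_System call shape here is a guess, not the server's documented API):

```python
# Sum a column from a large CSV without the rows ever entering the model's
# context. File_System's signature is hypothetical.
rows = File_System("read", path="sales.csv").splitlines()
total = sum(float(r.split(",")[2]) for r in rows[1:])  # skip the header row
print(f"Total sales: {total:,.2f}")
```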


The individual direct tool calls are all still there, but they can be disabled when using the Agent_Terminal. Try it now - https://www.nymbo.net/nymbot
sergiopaniego 
posted an update 13 days ago
flozi00 
posted an update 13 days ago
When models get too large for a single GPU, simply stacking layers vertically (Pipeline Parallelism) isn't always the answer. Sometimes, you need to slice the matrices themselves.

My latest guide breaks down the hardware mechanics of Tensor Parallelism (TP). We look at how to shard individual operations across devices to make a cluster function as one massive accelerator.

This isn't high-level theory—it is a look at the bare metal implementation.

Here is what is covered in the deep dive:

The Strategies: Column vs. Row Parallelism
We analyze how to split weight matrices (W) and inputs (X).

- Column-Linear: Splits weights by columns. Requires an All-Gather to reconstruct the output.
- Row-Linear: Splits weights by rows. Requires an All-Reduce to sum partial results.
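
A single-process sketch that checks both reconstructions numerically (two tensors stand in for two devices; no process group needed):

```python
import torch

torch.manual_seed(0)
X = torch.randn(4, 8)   # input  (batch, d_in)
W = torch.randn(8, 6)   # weight (d_in, d_out)

# Column-linear: each "device" owns half the output columns; concatenating
# the local results plays the role of the All-Gather.
W_c0, W_c1 = W.chunk(2, dim=1)
Y_col = torch.cat([X @ W_c0, X @ W_c1], dim=1)

# Row-linear: each "device" owns half the input dimension (plus the matching
# slice of X); summing the partial products plays the role of the All-Reduce.
W_r0, W_r1 = W.chunk(2, dim=0)
X0, X1 = X.chunk(2, dim=1)
Y_row = X0 @ W_r0 + X1 @ W_r1

assert torch.allclose(Y_col, X @ W, atol=1e-5)
assert torch.allclose(Y_row, X @ W, atol=1e-5)
```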
The "Megatron-LM" Optimization
Efficiency comes from minimizing communication. By sandwiching the non-linearity (GeLU) between a Column-Parallel layer and a Row-Parallel layer, we can skip synchronization entirely during the activation phase. This cuts communication events by 50% per block.
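
In code, the whole block needs exactly one collective. A sketch, assuming an initialized process group where each rank already holds its column-shard of W1 and row-shard of W2:

```python
import torch.distributed as dist
import torch.nn.functional as F

def megatron_mlp(x, w1_shard, w2_shard):
    h = F.gelu(x @ w1_shard)  # column-parallel; GeLU is element-wise, so
                              # each rank applies it locally with no sync
    y = h @ w2_shard          # row-parallel: a partial sum of the output
    dist.all_reduce(y)        # the single sync point per MLP block
    return y
```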

The Hardware Reality: The Bandwidth Wall
In TP, the dist.all_reduce operation sits on the critical path. The CUDA cores effectively stall while waiting for the ring-reduce to finish.

- Intra-Node: Works well because NVLink provides enough bandwidth to hide this latency.
- Inter-Node: Fails at scale. Standard networking (Ethernet/InfiniBand) is too slow for the high-frequency syncs required by TP.

The article includes a raw PyTorch implementation using torch.distributed primitives to show exactly where the data moves and where the bottlenecks sit.

Read the full hardware-centric guide here:
https://flozi.net/en/guides/ai/scaling/tensor_parallel
ronantakizawa 
posted an update 14 days ago
Introducing the japanese-trending-words dataset: a dataset of 593 words from Japan's annual trending word rankings (流行語大賞) from 2006 to 2025. It provides the top 30 words from each year along with their meanings in Japanese and English. This resource is awesome for NLP tasks that touch on recent Japanese culture and history.

ronantakizawa/japanese-trending-words

#japanese #japanesedataset #trending


sergiopaniego 
posted an update 14 days ago
sergiopaniego 
posted an update 18 days ago
sergiopaniego 
posted an update 19 days ago
We've just added several example scripts to TRL showing how to train models with GRPO using some of the new OpenEnv environments.

train a model to interact with a browser (🎮 BrowserGym Env), play Wordle (🎮 Wordle Env) and moooore!

TRL (GRPO + vLLM) + OpenEnv! ⚡️

📝 go play with them: https://github.com/huggingface/trl/tree/main/examples/scripts/openenv

📝 examples list: https://huggingface.co/docs/trl/main/en/example_overview#scripts
ronantakizawa 
posted an update 19 days ago
Introducing the google-trending-words dataset: a compilation of 2,784 trending Google searches from 2001 to 2024, based on https://trends.withgoogle.com. This dataset captures search trends across 93 categories and is perfect for analyzing cultural shifts, predicting future trends, and understanding how global events shape online behavior.

#trends #google #googlesearches

ronantakizawa/trending-words-google