All HF Hub posts

umarbutler posted an update 2 days ago
Isaacus, the AI research company building legal superintelligence, is hiring!

We're looking for passionate engineers who love to build and tinker and want to have an impact on the world. Specifically, we're hiring:
• ML engineers (Australia).
• Data engineers (Australia).
• Full-stack engineers (Australia).
• DevRel engineers (Australia, San Francisco, and London).
• DevOps engineers (Australia, San Francisco, and London).

If you'd like to be a founding employee at one of the few VC-backed LLM research labs in the world, receive generous equity compensation, and work alongside other highly motivated, highly skilled engineers, get in touch: https://isaacus.com/careers
salma-remyx posted an update 1 day ago
We built an OpenClaw 🦞 skill that sends daily ranked research recommendations to Slack using the Remyx AI CLI.

You define Research Interests (topics, HF models, GitHub repos, blogs, etc.), Remyx ranks new arXiv papers and repos to find the most relevant resources, and an OpenClaw cron job posts a formatted digest to your team's #research channel every weekday morning.
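The last step of that pipeline, posting the digest to Slack, can be sketched minimally. The ranked-paper list and its fields below are assumptions for illustration (not the Remyx CLI's actual output format); the `{"text": ...}` payload is the standard Slack incoming-webhook format.

```python
# Hypothetical sketch of the digest-posting step. The paper fields
# (title/url/score) are made up; the Slack payload shape is standard.
import json
import urllib.request

def format_digest(papers):
    """Render ranked papers as a Slack-friendly text digest."""
    lines = ["*Daily research digest*"]
    for rank, p in enumerate(papers, start=1):
        lines.append(f"{rank}. <{p['url']}|{p['title']}> (score {p['score']:.2f})")
    return "\n".join(lines)

def post_to_slack(webhook_url, text):
    """POST the digest to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

papers = [{"title": "Example Paper",
           "url": "https://arxiv.org/abs/0000.00000",
           "score": 0.91}]
digest = format_digest(papers)
print(digest)
```

A weekday-morning cron job would then call something like `post_to_slack(webhook_url, digest)`.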

The tutorial covers the full setup end-to-end: installing the CLI, creating interests, connecting OpenClaw to Slack, installing the Remyx skill, and scheduling the cron. About 15 minutes start to finish.

Tutorial: https://docs.remyx.ai/tutorials/daily-research-digest-slack

Would love to hear how folks are tracking research for their projects. If you give this a try, let us know what you think!
prithivMLmods posted an update 3 days ago
Map-Anything v1 (Universal Feed-Forward Metric 3D Reconstruction) demo is now available on Hugging Face Spaces. Built with Gradio and integrated with Rerun, it performs multi-image and video-based 3D reconstruction, depth and normal-map estimation, and interactive measurement.

🤗 Demo: prithivMLmods/Map-Anything-v1
🤗 Model: facebook/map-anything-v1
🤗 Hf-Papers: MapAnything: Universal Feed-Forward Metric 3D Reconstruction (2509.13414)
Shrijanagain posted an update about 16 hours ago

We are thrilled to announce the launch of SKT-OMNI-CORPUS-146T-V1, a massive-scale, high-quality dataset designed to power the next generation of Foundation Models (LLMs) from scratch.
Developed at SKT AI LABS, this corpus is not just a collection of data; it's a mission to decentralize high-grade AI training for regional languages and global knowledge.

💎 Key Highlights:

• Massive Scale: Targeting a multi-terabyte architecture for 146T-level tokenization.

• Pure Quality: Curated from 500+ Elite Sources.

• Structured for MoE: Perfectly sharded into 3.5GB standardized units (SKT-𝕻 series) for seamless distributed training.

🤝 Open for Collaboration!

We are looking for AI researchers, CUDA engineers, and data scientists to join us in this journey of building Project Surya and the ST-X Series models. Whether it's optimization, custom tokenization, or architecture design, let's build the future together.

Explore the Dataset on Hugging Face:

🔗 Shrijanagain/SKT-OMNI-CORPUS-146T-V1

DSR: 🔗 Shrijanagain/SKT-DSRx10000

#AI #MachineLearning #OpenSource #IndicAI #SKTAILABS #LLM #BigData #HuggingFace #InnovationIndia
DedeProGames posted an update about 22 hours ago
Introducing GRM2, a powerful 3-billion-parameter model designed for long-term reasoning and high performance on complex tasks.

Even with only 3 billion parameters, it outperforms qwen3-32b on several benchmarks and complex reasoning tasks.

It can also generate extensive, complex code of over 1000 lines, use tools comparably to larger models, and is perfect for agentic tasks.

GRM2 is licensed under Apache 2.0, making it an ideal base for fine-tuning on other tasks.
You can see more here: OrionLLM/GRM2-3b
unmodeled-tyler posted an update 1 day ago
Here's a demo of Vessel Browser in action!

Minimax M2.7 was challenged to navigate to a large e-commerce site, curate a selection of 5 different products, and add them all to the cart, with reasoning included behind each choice. (Or try it yourself: open source, MIT license, and BYOK!)
npm i @quanta-intellect/vessel-browser

Vessel is a browser that I've been working on which is designed specifically for agents with human-in-the-loop visibility. It comes with a local MCP server allowing any harness that supports custom MCP to control the browser. Additionally, you can BYOK to 8+ different providers (including custom OAI compatible endpoints and local models).

One of my favorite features of the browser is persistent, bi-directional highlighting - meaning that both you AND the agent can highlight anything on the screen and the agent receives the context.

Vessel Browser is unique in that it surfaces available tools contextually to the agent, meaning the agent doesn't have to decide between 80+ tools at any given time, but rather is focused on a subset of tools most applicable to the current state.
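The contextual tool-surfacing idea can be illustrated with a toy sketch. This is not Vessel's actual code; the tool names and page states below are invented to show the technique of exposing only the subset of tools applicable to the current state.

```python
# Toy illustration (not Vessel's implementation): each tool declares the
# page state it applies to, and the agent only ever sees the relevant subset.
FULL_TOOLSET = {
    "navigate": "any", "highlight": "any", "screenshot": "any",
    "fill_field": "form", "submit_form": "form",
    "add_to_cart": "product", "read_price": "product",
}

def surface_tools(page_state):
    """Return only the tools applicable to the current page state."""
    return sorted(
        name for name, ctx in FULL_TOOLSET.items()
        if ctx in ("any", page_state)
    )

# On a product page the agent chooses among a focused subset,
# not the full catalogue of 80+ tools.
print(surface_tools("product"))
```

The payoff is that the model's tool-selection problem shrinks from the full catalogue to a handful of state-relevant options at each step.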

Give it a try!

https://github.com/unmodeled-tyler/vessel-browser
salma-remyx posted an update 3 days ago
Looking to execute on your next great idea? 💡

Search for relevant papers and find pre-built Docker images to interactively explore the code with Remyx!

Check out the new space 🔍
remyxai/remyx-explorer
cahlen posted an update 4 days ago
It’s wild to me how you can just make shit now.

You can take a weekend with a Raspberry Pi 5, a Pi camera, a 3D printer, and a smidgen of custom fine-tuning (wakeword, Whisper, TinyBERT, and Piper TTS) and you have a physical device that works as a talking personal assistant.
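The assistant loop implied by that component list can be sketched end to end. The four stage functions here are stand-ins invented for illustration; in a real build they would wrap a wakeword engine, Whisper for speech-to-text, a TinyBERT classifier for intent, and Piper for TTS.

```python
# Minimal sketch of the wakeword -> STT -> intent -> TTS loop.
# All four stages are stand-ins, not the real models.
def detect_wakeword(audio):        # wakeword engine stand-in
    return "hey pi" in audio

def transcribe(audio):             # Whisper stand-in
    return audio.replace("hey pi ", "")

def classify_intent(text):         # TinyBERT classifier stand-in
    return "weather" if "weather" in text else "chat"

def speak(text):                   # Piper TTS stand-in
    return f"[tts] {text}"

def assistant_loop(audio):
    """Run one pass of the pipeline; stay silent without the wakeword."""
    if not detect_wakeword(audio):
        return None
    text = transcribe(audio)
    intent = classify_intent(text)
    return speak(f"intent={intent}: {text}")

print(assistant_loop("hey pi what is the weather"))
```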

What a time to be alive.

Edge AI, physical AI, AI-augmented animatronics… tiny models. Tiny agents.

Going to be a wild year.
kanaria007 posted an update 2 days ago
✅ Article highlight: *Personal SI-Core* (art-60-047, v0.1)

TL;DR:
What would it mean for a person or household to have an *SI-Core of their own*?

This note sketches *Personal SI-Core / PersonalOS*: a personal-scale runtime where *you remain the primary principal*, goals are explicit, delegations to apps and providers are scoped and revocable, memory is governed, and effectful actions still pass through structured *ID / OBS / MEM / ETH / EVAL / RML* boundaries.

Read:
kanaria007/agi-structural-intelligence-protocols

Why it matters:
• avoids the “50 assistants, 50 KPIs” problem
• keeps platforms from silently optimizing your life for their own goals
• makes delegation visible, capability-scoped, and revocable
• brings consent, auditability, rollout modes, and rollback into everyday life systems

What’s inside:
• *CityOS → PersonalOS* mapping for person/household-scale governance
• personal GoalSurfaces for wellbeing, finances, learning, and schedules
• identity / role / delegation models for apps, employers, banks, and providers
• governed memory across devices, accounts, and external services
• *Personal Genius Traces (PGT)* for replaying good life patterns
• personal *PoLB* modes for safe experiments in routines, budget rules, and automation

Key idea:
Personal SI-Core is not “an AI that runs your life.” It is a way to make personal coordination, delegation, memory, and experimentation *structural, reviewable, and governed*.
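The "scoped and revocable" delegation idea above can be made concrete with a small sketch. Everything here is hypothetical; the names and fields are not from the protocol documents, they just show what a capability grant the principal can withdraw might look like.

```python
# Hypothetical sketch of a scoped, revocable delegation: an app holds an
# explicit capability grant, and the principal can withdraw it at any time.
from dataclasses import dataclass, field

@dataclass
class Delegation:
    delegate: str                          # e.g. a calendar app
    scopes: set = field(default_factory=set)
    revoked: bool = False

    def allows(self, action):
        """An action passes only if the grant is live and in scope."""
        return not self.revoked and action in self.scopes

grant = Delegation("calendar-app", {"read_schedule", "propose_event"})
assert grant.allows("read_schedule")
assert not grant.allows("read_finances")   # outside the granted scope

grant.revoked = True                       # principal withdraws consent
assert not grant.allows("read_schedule")
```

The point of the structure is that delegation stops being an implicit platform default and becomes a visible object the principal can inspect, narrow, or revoke.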
DedeProGames posted an update 4 days ago
Introducing GRM2, a powerful 3-billion-parameter model designed for long-term reasoning and high performance on complex tasks.

Even with only 3 billion parameters, it outperforms qwen3-32b on several benchmarks.

It can also generate large, complex code of over 1000 lines, use tools comparably to large models, and is perfect for agentic tasks.

GRM2 is licensed under Apache 2.0, making it a perfect base for fine-tuning on other tasks.

OrionLLM/GRM2-3b