Highlights
- OpenAI releases GPT OSS-120b and 20b as free, open-weight language models under Apache 2.0.
- The models are optimized for efficiency and can even run on laptops with 16GB RAM.
- Strong performance in coding tasks, but factual hallucination remains a major drawback.

OpenAI just released something called GPT OSS, and honestly, this changes a lot. After spending years on locked-down, closed AI tools, now they’re like, “Hey, here’s something you all can use.”
I didn’t really expect this, especially with GPT-5 around the corner. But here we are. They gave us two models, GPT OSS-120b and GPT OSS-20b, and they’re totally open-weight and free to use.
OpenAI drops GPT OSS models, and it’s kind of a big deal
What’s this GPT OSS thing?
Basically, GPT OSS is OpenAI’s way of stepping back into the open-source world. The models are released under the Apache 2.0 license, which means no drama, no weird rules.
Anyone can use them, tweak them, or build on them. Doesn’t matter if you’re a solo developer or a big company. These are language models only, by the way. No images or audio. Just pure text.
Why it matters
The real reason this is a big move is that OpenAI is clearly trying to win back the dev community. There are a lot of open models coming out lately, and if OpenAI stays closed forever, they risk falling behind in some areas.
So with GPT OSS , they’re showing that they still care about transparency and sharing. Also, it helps with all the pressure from governments and policymakers who keep asking for more open and clear AI development.
How it performs
So performance-wise, not bad at all. The bigger model, GPT OSS-120b, runs on a single high-end Nvidia GPU.
The smaller one, 20b, can literally run on a laptop with 16GB of RAM. That’s actually crazy when you think about it. That’s real accessibility.
They’re using something called a Mixture-of-Experts (MoE) architecture. Not gonna go too deep into it, but long story short, the model doesn’t use all its parameters every time. A small gating network picks the parts it needs for each token, so it runs faster and lighter.
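If you want a feel for what that routing looks like, here’s a minimal toy sketch of top-k MoE routing in NumPy. To be clear, this is not OpenAI’s actual implementation — the expert count, the linear-layer “experts,” and the `moe_forward` helper are all made up for illustration — but it shows the core idea: score every expert, run only the best k, and mix their outputs.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Toy MoE layer: route input x through only the top-k experts."""
    logits = x @ gate_w                      # one gating score per expert
    top_k = np.argsort(logits)[-k:]          # indices of the k best experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Only the selected experts actually run; the rest stay idle for this token.
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is just a small linear layer, standing in for a real FFN.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

The payoff is that compute per token scales with k, not with the total number of experts — which is roughly why a model with 120b total parameters can still be cheap enough to serve on one GPU.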
Strengths and issues
On the Codeforces benchmark (which measures competitive coding skill), GPT OSS-120b scored a rating of 2622 and the 20b version scored 2516. Pretty solid, to be honest. Beats a few other models like DeepSeek R1.
But yeah, it’s not perfect. The hallucination rate is kind of bad: the big model gives wrong answers about 49% of the time on factual tests, and the smaller one does worse at 53%. That’s a lot, but it’s expected — these models are smaller and don’t have the deep world knowledge of GPT-4.
To keep it simple, GPT OSS is OpenAI’s comeback into the open space. It’s not perfect, but it’s powerful, fast, and more accessible than you’d expect.
If you’re someone who builds or tests AI stuff, this is a good time to jump in. At the end of the day, it’s not about beating GPT-4, it’s about giving people a starting point to build smarter tools.