OpenAI returns to open source with powerful new models
OpenAI is getting back to its roots as an open source AI company, announcing and releasing two new open source frontier large language models (LLMs) today: gpt-oss-120b and gpt-oss-20b.
The 120-billion-parameter model can run on a single Nvidia H100 GPU, while the 20-billion-parameter version is small enough to run locally on consumer hardware. Both are text-only models, available for free download with full weights on Hugging Face and GitHub.
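As a rough illustration, not an official recipe, running the smaller model locally with the Hugging Face transformers library might look like the sketch below. The model id openai/gpt-oss-20b is the published Hub repository; the precision and device settings are assumptions that will depend on your hardware and transformers version.

```python
# Minimal sketch: running gpt-oss-20b locally with Hugging Face transformers.
# Assumes a recent transformers release and enough GPU (or CPU) memory;
# "openai/gpt-oss-20b" is the model id on the Hugging Face Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let transformers pick a suitable precision
    device_map="auto",    # place weights on available GPUs, or CPU otherwise
)

messages = [
    {"role": "user", "content": "Summarize mixture-of-experts in two sentences."},
]
result = generator(messages, max_new_tokens=128)
# With chat-style input, generated_text is the full conversation; the last
# message is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```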
Impressive performance benchmarks
According to OpenAI’s testing, gpt-oss-120b matches or exceeds its proprietary o4-mini model on key benchmarks:
- Competition mathematics (AIME 2024 & 2025)
- General problem solving (MMLU and HLE)
- Agentic evaluations (TauBench)
- Health-specific evaluations (HealthBench)

The smaller gpt-oss-20b model performs comparably to o3-mini, surpassing it in some tests. Both models are multilingual, though OpenAI hasn't specified which languages are supported.
Enterprise-friendly Apache license
The models use the Apache 2.0 license, which is more permissive than Meta's Llama license and its restrictions on large-scale commercial use. This allows:
- Free commercial use without restrictions
- Local deployment for maximum privacy
- Modification and customization
- Revenue generation without paying OpenAI
This makes the models particularly attractive for regulated industries like healthcare, finance, and government where data privacy is critical.
Why OpenAI is going open source now
After six years of exclusively proprietary models since GPT-2's release, OpenAI's shift comes as:
- Open source competitors like DeepSeek R1 offer near-parity performance
- Many API customers already use mixed proprietary/open source models
- CEO Sam Altman admitted being on the “wrong side of history” regarding open source
OpenAI hopes to keep users in its ecosystem even if it doesn’t directly monetize these models.
Technical specifications
The models feature:
- Mixture-of-Experts (MoE) architecture with Transformer backbone
- 128,000-token context length
- Locally banded sparse attention
- Rotary Positional Embeddings
- New “o200k_harmony” tokenizer
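For a concrete sense of the new tokenizer, here is a minimal token-counting sketch. It assumes a tiktoken release that ships the o200k_harmony encoding (added around the gpt-oss launch); on older installs, the code falls back to the o200k_base encoding.

```python
# Minimal sketch: tokenizing text with the o200k_harmony encoding.
# Assumes a tiktoken version that includes this encoding; older
# releases raise ValueError for unknown encoding names.
import tiktoken

try:
    enc = tiktoken.get_encoding("o200k_harmony")
except ValueError:
    # Fallback for tiktoken installs that predate gpt-oss.
    enc = tiktoken.get_encoding("o200k_base")

text = "gpt-oss supports a 128,000-token context window."
tokens = enc.encode(text)
print(f"{len(tokens)} tokens: {tokens}")
```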
Developers can select low, medium, or high reasoning-effort settings, trading response speed and cost against depth of reasoning.
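In the harmony chat format, the effort level is conveyed through the system message. The sketch below follows the published "Reasoning: low|medium|high" convention, but the exact prompt wording is an assumption; consult the gpt-oss model card for the authoritative template.

```python
# Minimal sketch: requesting a reasoning-effort level via the system message,
# following the harmony format's "Reasoning: low|medium|high" convention.
# The exact wording is an assumption; check the gpt-oss model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Reasoning: high"},  # trade latency for deeper reasoning
    {"role": "user", "content": "How many primes are there below 100?"},
]
result = generator(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```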
Safety measures
OpenAI conducted extensive safety evaluations including:
- Filtering harmful training data
- Deliberative alignment techniques
- Instruction hierarchy for refusal behavior
- Malicious fine-tuning simulations
- Third-party expert reviews
Testing showed the models remained below “High” risk thresholds in sensitive domains like biosecurity.
Availability
The models are available through:
- Hugging Face
- Major cloud platforms (Azure, AWS, Databricks)
- Hardware partners (NVIDIA, AMD, Cerebras)
- Windows via ONNX Runtime
OpenAI also announced a $500,000 Red Teaming Challenge on Kaggle to further test model safety.
Competitive landscape
OpenAI enters a crowded open source market competing with:
- China’s DeepSeek-R1 and Alibaba’s Qwen 3
- Europe’s Mistral
- Meta’s Llama 3.1
- UAE’s Falcon
Whether enterprises will pay for OpenAI’s proprietary models when comparable open options exist remains the key business question.
Enterprises can now run a powerful, near-top-of-the-line OpenAI LLM entirely on their own hardware, privately and securely, without sending data to the cloud.