
DeepSeek: R1 Distill Llama 70B

deepseek/deepseek-r1-distill-llama-70b


DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, fine-tuned on outputs from DeepSeek R1. The distillation transfers R1's reasoning ability to the smaller base model, yielding strong results across multiple benchmarks, including:

  • AIME 2024 pass@1: 70.0
  • MATH-500 pass@1: 94.5
  • CodeForces Rating: 1633

Because it is fine-tuned on DeepSeek R1's outputs, the model delivers performance comparable to much larger frontier models.
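The model is served through OpenRouter's OpenAI-compatible chat completions endpoint under the ID shown above. The sketch below builds such a request using only the standard library; the `OPENROUTER_API_KEY` environment variable and the prompt are assumptions, and the helper name `build_request` is hypothetical.

```python
import json
import os
import urllib.request

MODEL_ID = "deepseek/deepseek-r1-distill-llama-70b"
API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for OpenRouter."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # key is assumed to be set
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:  # only send when a real key is available
        with urllib.request.urlopen(build_request("What is 2 + 2?", key)) as resp:
            reply = json.loads(resp.read())
            print(reply["choices"][0]["message"]["content"])
```

Any OpenAI-style client can be pointed at the same base URL instead of hand-building the request; only the model ID and API key change.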

  • In / Out Price: $0.70 / $0.80 per 1M tokens
  • Context: 131K
  • Weekly Rank: #276 on OpenRouter
  • Knowledge Cutoff: Jul 31, 2024
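From the listed per-1M-token rates ($0.70 input / $0.80 output), a request's cost is a simple weighted sum. A minimal sketch, with illustrative token counts:

```python
# Listed rates for this model, in USD per 1M tokens.
INPUT_PER_M = 0.70
OUTPUT_PER_M = 0.80


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from prompt and completion token counts."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M


# e.g. a 10,000-token prompt with a 2,000-token completion:
# 0.01 * 0.70 + 0.002 * 0.80 = 0.007 + 0.0016 = $0.0086
cost = estimate_cost(10_000, 2_000)
```

At these rates, even a million output tokens costs under a dollar, which is the main appeal of distilled models over their frontier teachers.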


Recent activity on R1 Distill Llama 70B

Total usage per day on OpenRouter: not enough data to display yet.