DeepSeek – Telegram
Unravel the mystery of AGI with curiosity. Answer the essential questions with long-termism. https://www.deepseek.com
Key Features of DeepSeek App:

🔐 Easy login: E-mail/Google Account/Apple ID
☁️ Cross-platform chat history sync
🔍 Web search & Deep-Think mode
📄 File upload & text extraction

🌟 2/3

via Twitter @DeepSeek
⚠️ Important Notice:

100% FREE - No ads, no in-app purchases
🛡️ Download only from official channels to avoid being misled
📲 Search "DeepSeek" in your app store or visit our website for direct links

🌟 3/3

via Twitter @DeepSeek
🚀 DeepSeek-R1 is here!

Performance on par with OpenAI-o1
📖 Fully open-source model & technical report
🏆 MIT licensed: Distill & commercialize freely!

🌐 Website & API are live now! Try DeepThink at http://chat.deepseek.com today!

🐋 1/n

via Twitter @DeepSeek
🛠️ DeepSeek-R1: Technical Highlights

📈 Large-scale RL in post-training
🏆 Significant performance boost with minimal labeled data
🔢 Math, code, and reasoning tasks on par with OpenAI-o1
📄 More details: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf

🐋 4/n

via Twitter @DeepSeek
🌐 API Access & Pricing

⚙️ Use DeepSeek-R1 by setting model=deepseek-reasoner
💰 $0.14 / million input tokens (cache hit)
💰 $0.55 / million input tokens (cache miss)
💰 $2.19 / million output tokens

📖 API guide: https://api-docs.deepseek.com/guides/reasoning_model
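The prices above are per million tokens, with input split by context-cache hit/miss. A back-of-the-envelope cost check in plain Python (the helper is hypothetical, not an SDK call; it assumes reasoning tokens bill as output, per the API guide linked above):

```python
# Prices from the post, in USD per million tokens.
PRICE_PER_M = {
    "input_cache_hit": 0.14,
    "input_cache_miss": 0.55,
    "output": 2.19,
}

def r1_cost_usd(cache_hit_tokens, cache_miss_tokens, output_tokens):
    """Estimate the cost of one deepseek-reasoner request in USD."""
    return (
        cache_hit_tokens * PRICE_PER_M["input_cache_hit"]
        + cache_miss_tokens * PRICE_PER_M["input_cache_miss"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# 10k cached + 2k uncached input tokens, 4k output (including any reasoning)
print(round(r1_cost_usd(10_000, 2_000, 4_000), 4))  # 0.0113
```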

🐋 5/n

via Twitter @DeepSeek
To prevent any potential harm, we reiterate that @deepseek_ai is our sole official account on Twitter/X.

Any accounts:
- representing us
- using identical avatars
- using similar names
are impersonations.

Please stay vigilant to avoid being misled!
📢 Terminology Correction: DeepSeek-R1’s code and models are released under the MIT License.
🎉 Excited to see everyone’s enthusiasm for deploying DeepSeek-R1! Here are our recommended settings for the best experience:

• No system prompt
• Temperature: 0.6
• Official prompts for search & file upload: bit.ly/4hyH8np
• Guidelines to mitigate the model bypassing thinking: bit.ly/4gJrhkF

The official DeepSeek deployment runs the same model as the open-source version—enjoy the full DeepSeek-R1 experience! 🚀
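Framed as an OpenAI-style chat request body, the recommended settings look roughly like this sketch (the field names follow that common convention and the model name is a placeholder, not DeepSeek-specific):

```python
# Sketch of the recommended R1 deployment settings from the post.
request_body = {
    "model": "deepseek-r1",    # placeholder: whatever your server registers
    "temperature": 0.6,        # recommended sampling temperature
    "messages": [
        # no system prompt: put all instructions in the user turn
        {"role": "user", "content": "Prove that sqrt(2) is irrational."},
    ],
}
```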
🚀 Introducing NSA: A Hardware-Aligned and Natively Trainable Sparse Attention mechanism for ultra-fast long-context training & inference!

Core components of NSA:
• Dynamic hierarchical sparse strategy
• Coarse-grained token compression
• Fine-grained token selection

💡 With optimized design for modern hardware, NSA speeds up inference while reducing pre-training costs—without compromising performance. It matches or outperforms Full Attention models on general benchmarks, long-context tasks, and instruction-based reasoning.

📖 For more details, check out our paper here: https://arxiv.org/abs/2502.11089
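The coarse/fine two-stage structure can be illustrated with a toy single-query sketch in NumPy. This is only a loose illustration of "compress blocks, then attend over selected tokens", not the trainable, hardware-aligned NSA kernel itself:

```python
import numpy as np

def toy_nsa_selection(q, K, V, block=4, topk=2):
    """Toy sketch of NSA's two-level idea for a single query vector:
    coarse-grained block compression picks candidate regions, then
    fine-grained attention runs only over tokens in the chosen blocks.
    (Illustrative only; real NSA learns these components end to end.)"""
    T, d = K.shape
    nb = T // block
    # coarse stage: mean-pool each block of keys into one summary key
    Kc = K[: nb * block].reshape(nb, block, d).mean(axis=1)
    block_scores = Kc @ q                      # score blocks against the query
    keep = np.argsort(block_scores)[-topk:]    # top-k blocks survive
    # fine stage: gather tokens from selected blocks, attend normally
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    scores = K[idx] @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx]                          # sparse attention output

rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
out = toy_nsa_selection(q, K, V)
print(out.shape)  # (8,)
```

With block=4 and topk=2, only 8 of the 16 key/value rows enter the softmax, which is where the speedup comes from at long context.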
🚀 Day 0: Warming up for #OpenSourceWeek!

We're a tiny team @deepseek_ai exploring AGI. Starting next week, we'll be open-sourcing 5 repos, sharing our small but sincere progress with full transparency.

These humble building blocks in our online service have been documented, deployed and battle-tested in production.

As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey.

Daily unlocks are coming soon. No ivory towers - just pure garage-energy and community-driven innovation.
🚀 Day 1 of #OpenSourceWeek: FlashMLA

Honored to share FlashMLA - our efficient MLA decoding kernel for Hopper GPUs, optimized for variable-length sequences and now in production.

BF16 support
Paged KV cache (block size 64)
⚡️ 3000 GB/s memory-bound & 580 TFLOPS compute-bound on H800

🔗 Explore on GitHub: https://github.com/deepseek-ai/FlashMLA
🚀 Day 2 of #OpenSourceWeek: DeepEP

Excited to introduce DeepEP - the first open-source EP communication library for MoE model training and inference.

Efficient and optimized all-to-all communication
Both intranode and internode support with NVLink and RDMA
High-throughput kernels for training and inference prefilling
Low-latency kernels for inference decoding
Native FP8 dispatch support
Flexible GPU resource control for computation-communication overlapping

🔗 GitHub: github.com/deepseek-ai/DeepEP
🚀 Day 3 of #OpenSourceWeek: DeepGEMM

Introducing DeepGEMM - an FP8 GEMM library that supports both dense and MoE GEMMs, powering V3/R1 training and inference.

⚡️ Up to 1350+ FP8 TFLOPS on Hopper GPUs
No heavy dependency, as clean as a tutorial
Fully Just-In-Time compiled
Core logic at ~300 lines - yet outperforms expert-tuned kernels across most matrix sizes
Supports dense layout and two MoE layouts

🔗 GitHub: https://github.com/deepseek-ai/DeepGEMM
🚨 Off-Peak Discounts Alert!

Starting today, enjoy off-peak discounts on the DeepSeek API Platform from 16:30–00:30 UTC daily:

🔹 DeepSeek-V3 at 50% off
🔹 DeepSeek-R1 at a massive 75% off

Stretch your resources further: save more during these high-value hours!
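A small sketch of the discount logic. The API model names ("deepseek-chat" for V3, "deepseek-reasoner" for R1) are assumed from the API docs, and note the window crosses midnight UTC:

```python
from datetime import time

# Discounts from the notice above, keyed by assumed API model name.
DISCOUNT = {"deepseek-chat": 0.50, "deepseek-reasoner": 0.75}

def is_off_peak(t: time) -> bool:
    """True inside 16:30-00:30 UTC; the window wraps past midnight."""
    return t >= time(16, 30) or t < time(0, 30)

def effective_price(model: str, list_price: float, t: time) -> float:
    """Price per million tokens after any off-peak discount."""
    return list_price * (1 - DISCOUNT[model]) if is_off_peak(t) else list_price

# R1 output tokens at $2.19/M drop to $0.5475/M off-peak
print(effective_price("deepseek-reasoner", 2.19, time(20, 0)))  # 0.5475
```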
🚀 Day 4 of #OpenSourceWeek: Optimized Parallelism Strategies

DualPipe - a bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training.
🔗 https://github.com/deepseek-ai/DualPipe

EPLB - an expert-parallel load balancer for V3/R1.
🔗 https://github.com/deepseek-ai/eplb

📊 Profile data - analyze computation-communication overlap in V3/R1.
🔗 https://github.com/deepseek-ai/profile-data
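EPLB's job is spreading uneven expert load across GPUs. A toy greedy packer gives the flavor; the real EPLB also replicates hot experts and is hierarchy-aware, while this sketch only packs:

```python
import heapq

def balance_experts(loads, n_gpus):
    """Toy greedy balancer in the spirit of EPLB: place experts
    (heaviest first) on the currently least-loaded GPU."""
    heap = [(0.0, g, []) for g in range(n_gpus)]  # (total load, gpu id, experts)
    heapq.heapify(heap)
    for e, load in sorted(enumerate(loads), key=lambda x: -x[1]):
        total, g, members = heapq.heappop(heap)   # least-loaded GPU
        members.append(e)
        heapq.heappush(heap, (total + load, g, members))
    return {g: members for _, g, members in heap}

placement = balance_experts([10, 1, 1, 8], n_gpus=2)
print({g: sorted(placement[g]) for g in sorted(placement)})  # {0: [0], 1: [1, 2, 3]}
```

Here the hot expert (load 10) gets a GPU to itself while the other three share one, leaving both GPUs at total load 10.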
🚀 Day 5 of #OpenSourceWeek: 3FS, Thruster for All DeepSeek Data Access

Fire-Flyer File System (3FS) - a parallel file system that utilizes the full bandwidth of modern SSDs and RDMA networks.

6.6 TiB/s aggregate read throughput in a 180-node cluster
3.66 TiB/min throughput on GraySort benchmark in a 25-node cluster
40+ GiB/s peak throughput per client node for KVCache lookup
🧬 Disaggregated architecture with strong consistency semantics
Training data preprocessing, dataset loading, checkpoint saving/reloading, embedding vector search & KVCache lookups for inference in V3/R1

📥 3FS → github.com/deepseek-ai/3FS
Smallpond - data processing framework on 3FS → github.com/deepseek-ai/smallpond
🚀 Day 6 of #OpenSourceWeek: One More Thing – DeepSeek-V3/R1 Inference System Overview

Optimized throughput and latency via:
🔧 Cross-node EP-powered batch scaling
🔄 Computation-communication overlap
⚖️ Load balancing

Statistics of DeepSeek's Online Service:
73.7k/14.8k input/output tokens per second per H800 node
🚀 Cost profit margin 545%

💡 We hope this week's insights offer value to the community and contribute to our shared AGI goals.
📖 Deep Dive: bit.ly/4ihZUiO
🚀 DeepSeek-V3-0324 is out now!

🔹 Major boost in reasoning performance
🔹 Stronger front-end development skills
🔹 Smarter tool-use capabilities

For non-complex reasoning tasks, we recommend using V3 — just turn off “DeepThink”
🔌 API usage remains unchanged
📜 Models are now released under the MIT License, just like DeepSeek-R1!
🔗 Open-source weights: https://huggingface.co/deepseek-ai/DeepSeek-V3-0324
🚀 DeepSeek-R1-0528 is here!

🔹 Improved benchmark performance
🔹 Enhanced front-end capabilities
🔹 Reduced hallucinations
🔹 Supports JSON output & function calling

Try it now: https://chat.deepseek.com
🔌 No change to API usage — docs here: https://api-docs.deepseek.com/guides/reasoning_model
🔗 Open-source weights: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528
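JSON output is requested through the OpenAI-compatible `response_format` field described in the DeepSeek API docs. A minimal request-body sketch (no network call is made here):

```python
# Request-body sketch for JSON mode on DeepSeek-R1-0528, following the
# OpenAI-compatible schema from the DeepSeek API docs.
request_body = {
    "model": "deepseek-reasoner",
    "response_format": {"type": "json_object"},  # forces syntactically valid JSON
    "messages": [
        {
            "role": "user",
            # in JSON mode the prompt should mention "json" and describe
            # the shape you expect back
            "content": 'Give two prime numbers as JSON: {"primes": [..]}',
        }
    ],
}
```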
Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀

🧠 Hybrid inference: Think & Non-Think — one model, two modes
⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528
🛠️ Stronger agent skills: Post-training boosts tool use and multi-step agent tasks

Try it now — toggle Think/Non-Think via the "DeepThink" button: chat.deepseek.com
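On the API side, the two modes of the hybrid model map onto the two existing model names per the DeepSeek API docs; treat this mapping as an assumption if your deployment differs:

```python
# Hedged sketch: V3.1's Think / Non-Think modes selected by model name.
def v31_model_for(think: bool) -> str:
    """Pick the API model name for V3.1's two inference modes."""
    return "deepseek-reasoner" if think else "deepseek-chat"

print(v31_model_for(True))   # deepseek-reasoner
print(v31_model_for(False))  # deepseek-chat
```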