DeepSpeed enables ChatGPT-like model training with a single click, offering a 15x speedup over SOTA RLHF systems and unprecedented cost reduction at all scales; learn how.
DeepSpeed enabled the world's most powerful language models such as MT-530B and BLOOM. It is an easy-to-use deep learning optimization software suite that powers unprecedented scale and speed for both training and inference. With DeepSpeed you can:
<img src=docs/assets/images/DeepSpeed-pillars.png width=800px>
DeepSpeed offers a confluence of system innovations that have made large-scale deep learning training efficient and effective, greatly improved ease of use, and redefined the training scale that is possible. These innovations, such as ZeRO, 3D-Parallelism, DeepSpeed-MoE, and ZeRO-Infinity, fall under the training pillar. Learn more: DeepSpeed-Training
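As a rough illustration of how these training features are typically enabled from user code, the sketch below wraps a plain PyTorch model with `deepspeed.initialize` and turns on fp16 and ZeRO through the config dictionary; the model, batch size, and hyperparameters are placeholders, not recommended settings.

```python
# Minimal sketch (placeholder model and hyperparameters): enabling fp16 and ZeRO
# by passing a DeepSpeed config dict to deepspeed.initialize.
import torch
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer states and gradients across data-parallel ranks
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# The returned engine handles data parallelism, mixed precision, and ZeRO
# partitioning behind the usual forward/backward/step calls.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Scripts written this way are usually launched with the `deepspeed` launcher, which sets up the distributed environment across GPUs and nodes.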
DeepSpeed brings together innovations in parallelism technologies such as tensor, pipeline, expert, and ZeRO parallelism, and combines them with high-performance custom inference kernels, communication optimizations, and heterogeneous memory technologies to enable inference at unprecedented scale while delivering unparalleled latency, throughput, and cost reduction. This systematic composition of inference technologies falls under the inference pillar. Learn more: DeepSpeed-Inference
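As a rough illustration of the inference pillar, the sketch below applies `deepspeed.init_inference` to an existing Hugging Face model; the model name, dtype, and generation settings are placeholders, and options such as the tensor-parallel degree are configured through additional arguments.

```python
# Minimal sketch (placeholder model and settings): injecting DeepSpeed's optimized
# inference kernels into a Hugging Face causal language model.
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# replace_with_kernel_inject swaps supported modules for fused inference kernels.
engine = deepspeed.init_inference(
    model,
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(engine.module.device)
outputs = engine.module.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```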
To further increase inference efficiency, DeepSpeed offers easy-to-use and flexible-to-compose compression techniques that help researchers and practitioners compress their models while delivering faster speed, smaller model size, and significantly reduced compression cost. Cutting-edge compression innovations such as ZeroQuant and XTC are included under the compression pillar. Learn more: DeepSpeed-Compression
In line with Microsoft's mission to solve humanity's most pressing challenges, the DeepSpeed team at Microsoft is responding to this opportunity by launching a new initiative called DeepSpeed4Science, aiming to build unique capabilities through AI system technology innovations to help domain experts unlock today's biggest science mysteries. Learn more: DeepSpeed4Science website and tutorials
The DeepSpeed library (this repository) implements and packages the innovations and technologies in the DeepSpeed training, inference, and compression pillars into a single easy-to-use, open-source repository. It allows for easy composition of a multitude of features within a single training, inference, or compression pipeline. The DeepSpeed library is heavily adopted by the deep learning community and has been used to enable some of the most powerful models (see DeepSpeed Adoption).
Model Implementations for Inference (MII) is an open-source repository for making low-latency and high-throughput inference accessible to all data scientists by alleviating the need to apply complex system optimization techniques themselves. Out of the box, MII offers support for thousands of widely used deep learning models, optimized using DeepSpeed-Inference, that can be deployed with a few lines of code while achieving significant latency reduction compared to their vanilla open-source versions.
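For example, with the DeepSpeed-MII package installed (`pip install deepspeed-mii`), a supported model can be served through MII's pipeline interface roughly as in the sketch below; the model identifier and generation settings are placeholders.

```python
# Minimal sketch (placeholder model id and settings) of the DeepSpeed-MII
# pipeline API, which serves models optimized with DeepSpeed-Inference.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")  # placeholder model id
responses = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=64)
for response in responses:
    print(response)
```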
DeepSpeed users are diverse and have access to different environments. We recommend trying DeepSpeed on Azure as it is the simplest and easiest method. The recommended way to try DeepSpeed on Azure is through the AzureML recipes. The job submission and data preparation scripts can be found here. For more details on how to use DeepSpeed on Azure, please follow the Azure tutorial.
DeepSpeed is an important part of Microsoft's new AI at Scale initiative to enable next-generation AI capabilities at scale; you can find more information here.
DeepSpeed has been used to train many different large-scale models. Below are several examples that we are aware of (if you would like to include your model, please submit a PR):
DeepSpeed has been integrated with several popular open-source deep learning frameworks, such as:
| | Documentation |
---|---|
<img src=docs/assets/images/transformers-light.png#gh-light-mode-only width=250px> | Transformers with DeepSpeed |
<img src=docs/assets/images/accelerate-light.png#gh-light-mode-only width=250px> | Accelerate with DeepSpeed |
<img src=docs/assets/images/lightning-light.svg#gh-light-mode-only width=200px> | Lightning with DeepSpeed |
<img src=docs/assets/images/mosaicml.svg width=200px> | MosaicML with DeepSpeed |
<img src=docs/assets/images/determined.svg width=225px> | Determined with DeepSpeed |
<img src=https://user-images.githubusercontent.com/58739961/187154444-fce76639-ac8d-429b-9354-c6fac64b7ef8.jpg width=150> | MMEngine with DeepSpeed |
Description | Status |
---|---|
NVIDIA | |
AMD | |
CPU | |
Intel Gaudi | |
Intel XPU | |
PyTorch Nightly | |
Integrations | |
Misc | |
Huawei Ascend NPU | |
The quickest way to get started with DeepSpeed is via pip; this will install the latest release of DeepSpeed, which is not tied to specific PyTorch or CUDA versions. DeepSpeed includes several C++/CUDA extensions that we commonly refer to as our "ops". By default, all of these extensions/ops will be built just-in-time (JIT) using torch's JIT C++ extension loader.
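For example, simply constructing one of DeepSpeed's fused ops is typically enough to trigger the JIT build on first use, roughly as in the sketch below (assuming a compatible CUDA/C++ build toolchain is available; the model and learning rate are placeholders).

```python
# Minimal sketch (placeholder model and hyperparameters): the fused Adam C++/CUDA op
# is JIT-compiled the first time it is needed if it was not pre-built.
import torch
from deepspeed.ops.adam import FusedAdam

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = FusedAdam(model.parameters(), lr=1e-4)  # triggers the JIT build if needed
```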
Contributor | Hardware | Accelerator Name | Contributor validated | Upstream validated |
---|---|---|---|---|
Huawei | Huawei Ascend NPU | npu | Yes | No |
Intel | Intel(R) Gaudi(R) 2 AI accelerator | hpu | Yes | Yes |
Intel | Intel(R) Xeon(R) Processors | cpu | Yes | Yes |
Intel | Intel(R) Data Center GPU Max series | xpu | Yes | Yes |
We regularly push releases to PyPI and encourage users to install from there in most cases.
```bash
pip install deepspeed
```
After installation, you can validate your install and see which extensions/ops your machine is compatible with via the DeepSpeed environment report.
```bash
ds_report
```
If you would like to pre-install any of the DeepSpeed extensions/ops (instead of JIT compiling them) or install pre-compiled ops via PyPI, please see our advanced installation instructions.
DeepSpeed is only partially supported on Windows. On Windows you can build a wheel with the following step; currently only inference mode is supported:
```bash
python setup.py bdist_wheel
```
This builds the wheel in the `dist` folder.
Please check out the DeepSpeed-Training, DeepSpeed-Inference, and DeepSpeed-Compression pages for the full set of features offered along each of these three pillars.
All DeepSpeed documentation, tutorials, and blogs can be found on our website: deepspeed.ai
| | Description |
---|---|
Getting Started | First steps with DeepSpeed |
DeepSpeed JSON Configuration | Configuring DeepSpeed |
API Documentation | Generated DeepSpeed API documentation |
Tutorials | Tutorials |
Blogs | Blogs |
DeepSpeed welcomes your contributions! Please see our contributing guide for more details on formatting, testing, etc.
Thanks so much to all of our amazing contributors!
<a href=https://github.com/microsoft/DeepSpeed/graphs/contributors> <img src=https://contrib.rocks/image?repo=microsoft/DeepSpeed&r= width=800px/></a>
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: memory optimizations toward training trillion parameter models. arXiv:1910.02054 and In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC 20).
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. (2020) DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 20, Tutorial).
Minjia Zhang, Yuxiong He. (2020) Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. arXiv:2010.13369 and NeurIPS 2020.
Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, Yuxiong He. (2021) ZeRO-Offload: Democratizing Billion-Scale Model Training. arXiv:2101.06840 and USENIX ATC 2021. [paper] [slides] [blog]
Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He. (2021) 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed. arXiv:2102.02888 and ICML 2021.
Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, Yuxiong He. (2021) ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning. arXiv:2104.07857 and SC 2021. [paper] [slides] [blog]
Conglong Li, Ammar Ahmad Awan, Hanlin Tang, Samyam Rajbhandari, Yuxiong He. (2021) 1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed. arXiv:2104.06069 and HiPC 2022.
Conglong Li, Minjia Zhang, Yuxiong He. (2021) The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models. arXiv:2108.06084 and NeurIPS 2022.
Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He. (2022) Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam. arXiv:2202.06009.
Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He. (2022) DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale arXiv:2201.05596 and ICML 2022. [pdf] [slides] [blog]
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro. (2022) Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model arXiv:2201.11990.
Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He. (2022) Extreme Compression for Pre-trained Transformers Made Simple and Efficient. arXiv:2206.01859 and NeurIPS 2022.
Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He. (2022) ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. arXiv:2206.01861 and NeurIPS 2022 [slides] [blog]
Reza Yazdani Aminabadi, Samyam Rajbhandari, Minjia Zhang, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Jeff Rasley, Shaden Smith, Olatunji Ruwase, Yuxiong He. (2022) DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale. arXiv:2207.00032 and SC 2022. [paper] [slides] [blog]
Zhewei Yao, Xiaoxia Wu, Conglong Li, Connor Holmes, Minjia Zhang, Cheng Li, Yuxiong He. (2022) Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers. arXiv:2211.11586.
Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Yuxiong He. (2022) DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. arXiv:2212.03597 ENLSP2023 Workshop at NeurIPS2023
Xiaoxia Wu, Cheng Li, Reza Yazdani Aminabadi, Zhewei Yao, Yuxiong He. (2023) Understanding INT4 Quantization for Transformer Models: Latency Speedup, Composability, and Failure Cases. arXiv:2301.12017 and ICML2023.
Syed Zawad, Cheng Li, Zhewei Yao, Elton Zheng, Yuxiong He, Feng Yan. (2023) DySR: Adaptive Super-Resolution via Algorithm and System Co-design. ICLR:2023.
Sheng Shen, Zhewei Yao, Chunyuan Li, Trevor Darrell, Kurt Keutzer, Yuxiong He. (2023) Scaling Vision-Language Models with Sparse Mixture of Experts. arXiv:2303.07226 and Finding at EMNLP2023.
Quentin Anthony, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He, Aamir Shafi, Mustafa Abduljabbar, Hari Subramoni, Dhabaleswar Panda. (2023) MCR-DL: Mix-and-Match Communication Runtime for Deep Learning arXiv:2303.08374 and will appear at IPDPS 2023.
Siddharth Singh, Olatunji Ruwase, Ammar Ahmad Awan, Samyam Rajbhandari, Yuxiong He, Abhinav Bhatele. (2023) A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training arXiv:2303.06318 and will appear at ICS 2023.
Guanhua Wang, Heyang Qin, Sam Ade Jacobs, Xiaoxia Wu, Connor Holmes, Zhewei Yao, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He. (2023) ZeRO++: Extremely Efficient Collective Communication for Giant Model Training arXiv:2306.10209 and ML for Sys Workshop at NeurIPS2023 [blog]
Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, Yuxiong He. (2023) ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation arXiv:2303.08302 and ENLSP2023 Workshop at NeurIPS2023 [slides]
Pareesa Ameneh Golnari, Zhewei Yao, Yuxiong He. (2023) Selective Guidance: Are All the Denoising Steps of Guided Diffusion Important? arXiv:2305.09847
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He. (2023) DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales arXiv:2308.01320.
Xiaoxia Wu, Zhewei Yao, Yuxiong He. (2023) ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats arXiv:2307.09782 and ENLSP2023 Workshop at NeurIPS2023 [slides]
Zhewei Yao, Xiaoxia Wu, Conglong Li, Minjia Zhang, Heyang Qin, Olatunji Ruwase, Ammar Ahmad Awan, Samyam Rajbhandari, Yuxiong He. (2023) DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention arXiv:2309.14327
Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, et al. (2023) DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies arXiv:2310.04610 [blog]
Zhewei Yao, Reza Yazdani Aminabadi, Stephen Youn, Xiaoxia Wu, Elton Zheng, Yuxiong He. (2023) ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers arXiv:2310.17723
Xiaoxia Wu, Haojun Xia, Stephen Youn, Zhen Zheng, Shiyang Chen, Arash Bakhtiari, Michael Wyatt, Reza Yazdani Aminabadi, Yuxiong He, Olatunji Ruwase, Leon Song, Zhewei Yao (2023) ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks arXiv:2312.08583
Haojun Xia, Zhen Zheng, Xiaoxia Wu, Shiyang Chen, Zhewei Yao, Stephen Youn, Arash Bakhtiari, Michael Wyatt, Donglin Zhuang, Zhongzhu Zhou, Olatunji Ruwase, Yuxiong He, Shuaiwen Leon Song. (2024) FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design arXiv:2401.14112
Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Reza Yazdani Aminadabi, Shuaiwen Leon Song, Samyam Rajbhandari, Yuxiong He. (2024) System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
Xinyu Lian, Sam Ade Jacobs, Lev Kurilenko, Masahiro Tanaka, Stas Bekman, Olatunji Ruwase, Minjia Zhang. (2024) Universal Checkpointing: Efficient and Flexible Checkpointing for Large Scale Distributed Training arXiv:2406.18820