DeepSpeed

From Wikipedia, the free encyclopedia

DeepSpeed
Original author(s): Microsoft Research
Developer(s): Microsoft
Initial release: May 18, 2020
Stable release: v0.3.16 / April 30, 2021
Repository: github.com/microsoft/DeepSpeed
Written in: Python, CUDA, C++
Type: Software library
License: MIT License
Website: deepspeed.ai

DeepSpeed is an open-source deep learning optimization library for PyTorch.[1] The library is designed to reduce computing power and memory use and to train large distributed models with better parallelism on existing computer hardware.[2][3] DeepSpeed is optimized for low-latency, high-throughput training. It includes the Zero Redundancy Optimizer (ZeRO) for training models with 100 billion parameters or more.[4] Features include mixed-precision training, single-GPU, multi-GPU, and multi-node training, as well as custom model parallelism. The DeepSpeed source code is licensed under the MIT License and available on GitHub.[5]
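
The library is used by wrapping an existing PyTorch model with a DeepSpeed engine that manages distributed setup, mixed precision, and ZeRO partitioning. The following is a minimal sketch, assuming a single CUDA-capable GPU; the TinyModel class and all configuration values (batch size, ZeRO stage, learning rate) are illustrative placeholders rather than settings from any particular project.

# Minimal sketch of training a PyTorch model with DeepSpeed.
# TinyModel and the config values below are illustrative assumptions.
import torch
import deepspeed

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(128, 10)

    def forward(self, x):
        return self.linear(x)

# Illustrative configuration: mixed-precision (fp16) training with
# ZeRO stage 2 partitioning of optimizer states and gradients.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

model = TinyModel()

# deepspeed.initialize returns an engine that wraps the model and
# handles optimizer creation, mixed precision, and ZeRO behind the
# familiar PyTorch interface.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# One training step: the engine replaces loss.backward() and
# optimizer.step() with its own backward() and step() methods.
inputs = torch.randn(32, 128).to(model_engine.device)
labels = torch.randint(0, 10, (32,)).to(model_engine.device)
outputs = model_engine(inputs)
loss = torch.nn.functional.cross_entropy(outputs, labels)
model_engine.backward(loss)
model_engine.step()

Such a script is typically launched with the deepspeed command-line launcher, which starts one process per GPU and handles multi-node coordination.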

References

  1. ^ "Microsoft Updates Windows, Azure Tools with an Eye on The Future". PCMag UK. May 22, 2020.
  2. ^ Yegulalp, Serdar (February 10, 2020). "Microsoft speeds up PyTorch with DeepSpeed". InfoWorld.
  3. ^ "Microsoft unveils 'fifth most powerful' supercomputer in the world". Neowin.
  4. ^ "Microsoft trains world's largest Transformer language model". February 10, 2020.
  5. ^ "microsoft/DeepSpeed". July 10, 2020 – via GitHub.
