
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for simplifying ML compute.

Learn more about Ray AI Libraries (a short Ray Data example follows the list):

  • Data: Scalable Datasets for ML
  • Train: Distributed Training
  • Tune: Scalable Hyperparameter Tuning
  • RLlib: Scalable Reinforcement Learning
  • Serve: Scalable and Programmable Serving
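
As a concrete taste of the libraries, here is a minimal Ray Data sketch. It assumes Ray 2.x with the Data extra installed (pip install "ray[data]"), where rows are dicts with an "id" column; the "square" column name is purely illustrative.

```python
# Minimal Ray Data sketch (assumes Ray 2.x; install with `pip install "ray[data]"`).
import ray

ray.init()  # start a local Ray instance

# Build a distributed dataset of 10,000 rows and transform it in parallel.
ds = ray.data.range(10_000)                          # rows look like {"id": 0}, {"id": 1}, ...
squared = ds.map(lambda row: {"square": row["id"] ** 2})

print(squared.take(3))  # e.g. [{'square': 0}, {'square': 1}, {'square': 4}]
```
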
Or more about Ray Core and its key abstractions (a short sketch follows the list):

  • Tasks: Stateless functions executed in the cluster.
  • Actors: Stateful worker processes created in the cluster.
  • Objects: Immutable values accessible across the cluster.
Monitor and debug Ray applications and clusters using the Ray dashboard.
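
The sketch below exercises all three abstractions with Ray's @ray.remote API; the function and class names are invented purely for illustration.

```python
# Minimal Ray Core sketch: a task, an actor, and object references.
import ray

ray.init()

@ray.remote
def square(x):          # task: a stateless function executed in the cluster
    return x * x

@ray.remote
class Counter:          # actor: a stateful worker process created in the cluster
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# Calling .remote() returns object refs immediately; ray.get() fetches the values.
refs = [square.remote(i) for i in range(4)]   # objects: immutable values in the cluster
print(ray.get(refs))                          # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get([counter.increment.remote() for _ in range(3)]))  # [1, 2, 3]
```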

Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Why Ray?

Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.

Ray is a unified way to scale Python and AI applications from a laptop to a cluster.

With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.
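
As a rough illustration of that claim: the only line that typically changes between a laptop run and a cluster run is how ray.init() is called. The addresses in the comments below are placeholders and depend on how the cluster was launched.

```python
import ray

# Laptop: start a local Ray instance.
ray.init()

# Cluster: the same script, with only the init line changed, e.g.
#   ray.init(address="auto")                      # run from a node of an existing cluster
#   ray.init(address="ray://<head-node>:10001")   # connect via Ray Client (10001 is the default port)

@ray.remote
def double(i):
    return i * 2

print(ray.get([double.remote(i) for i in range(8)]))  # [0, 2, 4, 6, 8, 10, 12, 14]
```
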

More Information

  • Documentation
  • Ray Architecture whitepaper
  • Exoshuffle: large-scale data shuffle in Ray
  • Ownership: a distributed futures system for fine-grained tasks
  • RLlib paper
  • Tune paper
  • Older documents:
      • Ray paper
      • Ray HotOS paper
      • Ray Architecture v1 whitepaper
  • Getting Involved