...one of the most highly regarded and expertly designed C++ library projects in the world. (Herb Sutter and Andrei Alexandrescu, C++ Coding Standards)

The fibers in a thread are coordinated by a fiber manager. Fibers trade control cooperatively, rather than preemptively: the currently-running fiber retains control until it invokes some operation that passes control to the manager. Each time a fiber suspends (or yields), the fiber manager consults a scheduler to determine which fiber will run next. Boost.Fiber provides the fiber manager, but the scheduler is a customization point. (See Customization.) Each thread has its own scheduler. Different threads in a process may use different schedulers. By default, Boost.Fiber implicitly instantiates round_robin as the scheduler for each thread. You are explicitly permitted to code your own algorithm subclass. For the most part, your algorithm subclass need not defend against cross-thread calls: the fiber manager intercepts and defers such calls. Most algorithm methods are only ever directly called from the thread whose fibers they are managing, with exceptions as documented below. Your algorithm subclass is engaged on a particular thread by calling use_scheduling_algorithm():
void thread_fn() {
    boost::fibers::use_scheduling_algorithm< my_fiber_scheduler >();
    // ...
}

A scheduler class must implement the interface algorithm. Boost.Fiber provides four schedulers: round_robin, work_stealing, numa::work_stealing and shared_work.
void thread( std::uint32_t thread_count) {
    // thread registers itself at work-stealing scheduler
    boost::fibers::use_scheduling_algorithm< boost::fibers::algo::work_stealing >( thread_count);
    // ...
}

// count of logical cpus
std::uint32_t thread_count = std::thread::hardware_concurrency();
// start-thread registers itself at work-stealing scheduler
boost::fibers::use_scheduling_algorithm< boost::fibers::algo::work_stealing >( thread_count);
std::vector< std::thread > threads;
for ( std::uint32_t i = 1 /* count start-thread */; i < thread_count; ++i) {
    // spawn thread
    threads.emplace_back( thread, thread_count);
}
The example spawns as many threads as std::thread::hardware_concurrency() returns. Each thread runs a work_stealing scheduler. Each instance of this scheduler needs to know how many threads run the work-stealing scheduler in the program. If the local queue of one thread runs out of ready fibers, the thread tries to steal a ready fiber from another thread running this scheduler.
      algorithm is the abstract base
      class defining the interface that a fiber scheduler must implement.
#include <boost/fiber/algo/algorithm.hpp>
namespace boost {
namespace fibers {
namespace algo {
struct algorithm {
    virtual ~algorithm();
    virtual void awakened( context *) noexcept = 0;
    virtual context * pick_next() noexcept = 0;
    virtual bool has_ready_fibers() const noexcept = 0;
    virtual void suspend_until( std::chrono::steady_clock::time_point const&) noexcept = 0;
    virtual void notify() noexcept = 0;
};

}}}
Effects:
Informs the scheduler that fiber f is ready to run. Fiber f might be newly launched, or it might have been blocked but has just been awakened, or it might have called this_fiber::yield().
Note:
This method advises the scheduler to add fiber f to its collection of fibers ready to run. A typical scheduler implementation places f into a queue.
See also:
round_robin

pick_next() is where the scheduler actually specifies the fiber which is to run next. A typical scheduler implementation chooses the head of the ready queue.
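As a minimal illustration of this queue discipline, here is a toy round-robin ready queue using plain ints in place of context pointers; `toy_round_robin` is an invented name, not a Boost.Fiber type.

```cpp
#include <deque>

// awakened() appends at the tail, pick_next() pops from the head, so every
// ready fiber gets its turn in FIFO order.
struct toy_round_robin {
    std::deque<int> rqueue;                  // stand-in for a queue of context*

    void awakened(int f) { rqueue.push_back(f); }   // fiber f became ready

    int pick_next() {                        // -1 plays the role of nullptr
        if (rqueue.empty()) return -1;
        int f = rqueue.front();
        rqueue.pop_front();
        return f;
    }

    bool has_ready_fibers() const { return !rqueue.empty(); }
};
```

A fiber that yields is passed back to awakened() and so rejoins the tail, giving the round-robin rotation.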
See also:
round_robin

suspend_until() allows a custom scheduler to yield control to the containing environment in whatever way makes sense. The fiber manager is stating that suspend_until() need not return until abs_time, or until algorithm::notify() is called, whichever comes first. The interaction with notify() means that, for instance, calling std::this_thread::sleep_until(abs_time) would be too simplistic. round_robin::suspend_until() uses a std::condition_variable to coordinate with round_robin::notify().
Note:
Given that notify() might be called from another thread, your suspend_until() implementation — like the rest of your algorithm implementation — must guard any data it shares with your notify() implementation.
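The coordination pattern round_robin uses can be sketched with std types alone; the mutex-guarded flag below is exactly the kind of shared state the note refers to. `toy_suspender` is an invented name, a model rather than Boost.Fiber's implementation.

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

struct toy_suspender {
    std::mutex mtx;
    std::condition_variable cv;
    bool flag = false;                        // "a fiber became ready" signal

    // Block until abs_time, or until notify() fires, whichever comes first.
    // A bare sleep_until() could not be interrupted by notify().
    void suspend_until(std::chrono::steady_clock::time_point abs_time) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait_until(lk, abs_time, [this] { return flag; });
        flag = false;                         // consume the wakeup
    }

    // May be called from another thread; the mutex guards the shared flag.
    void notify() {
        { std::lock_guard<std::mutex> lk(mtx); flag = true; }
        cv.notify_all();
    }
};
```

The predicate overload of wait_until() also guards against spurious wakeups, which a naive wait loop would mishandle.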
Effects:
Requests the scheduler to return from a pending call to algorithm::suspend_until().
Note:
Alone among the algorithm methods, notify() may be called from another thread. Your notify() implementation must guard any data it shares with the rest of your algorithm implementation.

class round_robin : public algorithm {
public:
    virtual void awakened( context *) noexcept;
    virtual context * pick_next() noexcept;
    virtual bool has_ready_fibers() const noexcept;
    virtual void suspend_until( std::chrono::steady_clock::time_point const&) noexcept;
    virtual void notify() noexcept;
};

awakened() places ready fibers onto the tail of a queue; pick_next() returns them from the head of that queue, sharing the thread between ready fibers in round-robin fashion.

suspend_until() informs round_robin that no ready fiber will be available until time-point abs_time. This implementation blocks in std::condition_variable::wait_until().
Throws:
Nothing.

notify() wakes up a pending call to round_robin::suspend_until(); some fibers might be ready. This implementation wakes suspend_until() via std::condition_variable::notify_all().
Throws:
Nothing.

This class implements algorithm; if the local ready-queue runs out of ready fibers, ready fibers are stolen from other schedulers. The victim scheduler (from which a ready fiber is stolen) is selected at random.

Note:
Worker threads are stored in a static variable; dynamically adding or removing worker threads is not supported.
#include <boost/fiber/algo/work_stealing.hpp>
namespace boost {
namespace fibers {
namespace algo {
class work_stealing : public algorithm {
public:
    work_stealing( std::uint32_t thread_count, bool suspend = false);
    work_stealing( work_stealing const&) = delete;
    work_stealing( work_stealing &&) = delete;
    work_stealing & operator=( work_stealing const&) = delete;
    work_stealing & operator=( work_stealing &&) = delete;
    virtual void awakened( context *) noexcept;
    virtual context * pick_next() noexcept;
    virtual bool has_ready_fibers() const noexcept;
    virtual void suspend_until( std::chrono::steady_clock::time_point const&) noexcept;
    virtual void notify() noexcept;
};

}}}
      Constructor
work_stealing( std::uint32_t thread_count, bool suspend = false);
Effects:
Constructs the work-stealing scheduling algorithm. thread_count represents the number of threads running this algorithm.
Throws:
system_error
Note:
If suspend is set to true, the scheduler suspends when no ready fiber can be stolen. The scheduler is woken up when a sleeping fiber times out or when it is notified from remote (another thread or fiber scheduler).

awakened() places ready fibers onto the tail of the shared queue; pick_next() returns them from the head of that queue, sharing the thread between ready fibers in round-robin fashion.

suspend_until() informs work_stealing that no ready fiber will be available until time-point abs_time. This implementation blocks in std::condition_variable::wait_until().
Throws:
Nothing.

notify() wakes up a pending call to work_stealing::suspend_until(); some fibers might be ready. This implementation wakes suspend_until() via std::condition_variable::notify_all().
Throws:
Nothing.

shared_work implements algorithm, scheduling fibers in round-robin fashion. Ready fibers are shared between all instances of shared_work (running on different threads), so the work is distributed equally over all threads.

Note:
Worker threads are stored in a static variable; dynamically adding or removing worker threads is not supported.
#include <boost/fiber/algo/shared_work.hpp>
namespace boost {
namespace fibers {
namespace algo {
class shared_work : public algorithm {
    virtual void awakened( context *) noexcept;
    virtual context * pick_next() noexcept;
    virtual bool has_ready_fibers() const noexcept;
    virtual void suspend_until( std::chrono::steady_clock::time_point const&) noexcept;
    virtual void notify() noexcept;
};

}}}
awakened() places ready fibers onto the tail of the shared queue; pick_next() returns them from the head of that queue, sharing the thread between ready fibers in round-robin fashion.

suspend_until() informs shared_work that no ready fiber will be available until time-point abs_time. This implementation blocks in std::condition_variable::wait_until().
Throws:
Nothing.

notify() wakes up a pending call to shared_work::suspend_until(); some fibers might be ready. This implementation wakes suspend_until() via std::condition_variable::notify_all().
Throws:
Nothing.

A scheduler class directly derived from algorithm can use any information available from context to implement the algorithm interface. But a custom scheduler might need to track additional properties for a fiber. For instance, a priority-based scheduler would need to track a fiber’s priority. Boost.Fiber provides a mechanism by which your custom scheduler can associate custom properties with each fiber. A custom fiber-properties class must be derived from fiber_properties.
#include <boost/fiber/properties.hpp>
namespace boost {
namespace fibers {
class fiber_properties {
public:
    fiber_properties( context *) noexcept;
    virtual ~fiber_properties();
protected:
    void notify() noexcept;
};

}}
Constructor

fiber_properties( context * f) noexcept;

Effects:
Constructs the fiber_properties base-class subobject for fiber f.

notify() (protected)

Effects:
Passes control to the custom algorithm_with_properties<> subclass’s algorithm_with_properties::property_change() method.
Throws:
Nothing.
Note:
A custom scheduler’s algorithm_with_properties::pick_next() method might dynamically select from the ready fibers, or algorithm_with_properties::awakened() might instead insert each ready fiber into some form of ready queue for pick_next(). In the latter case, if application code modifies a fiber property (e.g. priority) that should affect that fiber’s relationship to other ready fibers, the custom scheduler must be given the opportunity to reorder its ready queue. The custom property subclass should implement an access method to modify such a property; that access method should call notify() once the new property value has been stored. This passes control to the custom scheduler’s property_change() method, allowing the custom scheduler to reorder its ready queue appropriately. Use at your discretion. Of course, if you define a property which does not affect the behavior of the pick_next() method, you need not call notify() when that property is modified.

A custom scheduler that depends on a custom properties class PROPS should be derived from algorithm_with_properties<PROPS>. PROPS should be derived from fiber_properties.
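A minimal model of this pattern, using invented names (`toy_priority_props`, `toy_priority_scheduler`) and plain pointers instead of Boost.Fiber's context and fiber_properties: the priority setter plays the role of the access method that calls notify(), which hands control to the scheduler's property_change() so it can re-sort its ready queue.

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct toy_priority_scheduler;

// A "property" object: a priority, plus a back-pointer to the scheduler that
// must be told when the priority changes.
struct toy_priority_props {
    std::string name;
    int priority = 0;
    toy_priority_scheduler* sched = nullptr;

    void set_priority(int p);                 // setter calls the notify step
};

struct toy_priority_scheduler {
    std::vector<toy_priority_props*> rqueue;  // highest priority first

    void awakened(toy_priority_props* f) {
        rqueue.push_back(f);
        property_change();                    // keep queue ordered
    }
    toy_priority_props* pick_next() {
        if (rqueue.empty()) return nullptr;
        toy_priority_props* f = rqueue.front();
        rqueue.erase(rqueue.begin());
        return f;
    }
    // Called when a property changed: reorder the ready queue.
    void property_change() {
        std::stable_sort(rqueue.begin(), rqueue.end(),
            [](auto* a, auto* b) { return a->priority > b->priority; });
    }
};

void toy_priority_props::set_priority(int p) {
    priority = p;
    if (sched) sched->property_change();      // the notify() step
}
```

Note the setter stores the new value before notifying the scheduler, matching the order described above.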
#include <boost/fiber/algorithm.hpp>
namespace boost {
namespace fibers {
namespace algo {
template< typename PROPS >
struct algorithm_with_properties {
    virtual void awakened( context *, PROPS &) noexcept = 0;
    virtual context * pick_next() noexcept;
    virtual bool has_ready_fibers() const noexcept;
    virtual void suspend_until( std::chrono::steady_clock::time_point const&) noexcept = 0;
    virtual void notify() noexcept = 0;
    PROPS & properties( context *) noexcept;
    virtual void property_change( context *, PROPS &) noexcept;
    virtual fiber_properties * new_properties( context *);
};

}}}
Effects:
Informs the scheduler that fiber f is ready to run, like algorithm::awakened(), and passes the fiber’s associated PROPS instance.
Throws:
Nothing.
Note:
An algorithm_with_properties<> subclass must override this method instead of algorithm::awakened().
Effects:
Requests the scheduler to return from a pending call to algorithm_with_properties::suspend_until().
Note:
Same as algorithm::notify().

properties() returns the PROPS instance associated with a given context. The fiber’s associated PROPS instance is already passed to algorithm_with_properties::awakened() and algorithm_with_properties::property_change(). However, every algorithm subclass is expected to track a collection of ready context instances. This method allows your custom scheduler to retrieve the fiber_properties subclass instance for any context in its collection.

property_change() notifies the custom scheduler of a possibly-relevant change to a property belonging to fiber f; properties contains the new values of all relevant properties.
Throws:
Nothing.
Note:
This method is only called when a custom fiber_properties subclass explicitly calls fiber_properties::notify().

By default, algorithm_with_properties<>::new_properties() simply returns new PROPS(f), placing the PROPS instance on the heap. Override this method to allocate PROPS some other way. The returned fiber_properties pointer must point to the PROPS instance to be associated with fiber f.

While you are free to treat context* as an opaque token, certain context members may be useful to a custom scheduler implementation. Of particular note is the fact that context contains a hook to participate in a boost::intrusive::list typedef’ed as boost::fibers::scheduler::ready_queue_t. This hook is reserved for use by algorithm implementations. (For instance, round_robin contains a ready_queue_t instance to manage its ready fibers.) See context::ready_is_linked(), context::ready_link(), context::ready_unlink(). Your algorithm implementation may use any container you desire to manage passed context instances; ready_queue_t avoids some of the overhead of typical STL containers.
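To see why an intrusive hook avoids the overhead of typical STL containers, here is a hand-rolled model: the link fields live inside the element itself, so linking and unlinking allocate nothing. `toy_context` and `toy_ready_queue` are invented stand-ins for context and ready_queue_t, not the Boost.Intrusive machinery itself.

```cpp
#include <cstddef>

struct toy_context {
    toy_context* prev = nullptr;              // the embedded "hook"
    toy_context* next = nullptr;
    int id = 0;
    // Linked into a queue iff the hook pointers are set (cf. ready_is_linked()).
    bool ready_is_linked() const { return prev != nullptr || next != nullptr; }
};

// Circular doubly-linked list with a sentinel node; no heap allocation.
struct toy_ready_queue {
    toy_context head;                         // sentinel
    toy_ready_queue() { head.prev = head.next = &head; }

    void push_back(toy_context& c) {          // cf. ready_link()
        c.prev = head.prev; c.next = &head;
        head.prev->next = &c; head.prev = &c;
    }
    toy_context* pop_front() {                // head of the ready queue
        if (head.next == &head) return nullptr;
        toy_context* c = head.next;
        head.next = c->next; c->next->prev = &head;
        c->prev = c->next = nullptr;          // cf. ready_unlink()
        return c;
    }
};
```

Because the element carries its own links, enqueueing a context costs only a few pointer writes, with no per-node allocation as in std::list.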
#include <boost/fiber/context.hpp>
namespace boost {
namespace fibers {
enum class type {
  none               = unspecified,
  main_context       = unspecified, // fiber associated with thread's stack
  dispatcher_context = unspecified, // special fiber for maintenance operations
  worker_context     = unspecified, // fiber not special to the library
  pinned_context     = unspecified  // fiber must not be migrated to another thread
};
class context {
public:
    class id;
    static context * active() noexcept;
    context( context const&) = delete;
    context & operator=( context const&) = delete;
    id get_id() const noexcept;
    void detach() noexcept;
    void attach( context *) noexcept;
    bool is_context( type) const noexcept;
    bool is_terminated() const noexcept;
    bool ready_is_linked() const noexcept;
    bool remote_ready_is_linked() const noexcept;
    bool wait_is_linked() const noexcept;
    template< typename List >
    void ready_link( List &) noexcept;
    template< typename List >
    void remote_ready_link( List &) noexcept;
    template< typename List >
    void wait_link( List &) noexcept;
    void ready_unlink() noexcept;
    void remote_ready_unlink() noexcept;
    void wait_unlink() noexcept;
    void suspend() noexcept;
    void schedule( context *) noexcept;
};

bool operator<( context const& l, context const& r) noexcept;

}}
Returns:
If *this refers to a fiber of execution, an instance of fiber::id that represents that fiber; otherwise, a default-constructed fiber::id.

Throws:
Nothing

See also:
fiber::get_id()

attach() preconditions: f must not be the running fiber’s context. It must not be blocked or terminated. It must not be a pinned_context. It must be currently detached. It must not currently be linked into an algorithm implementation’s ready queue. Most of these conditions are implied by f being owned by an algorithm implementation: that is, it has been passed to algorithm::awakened() but has not yet been returned by algorithm::pick_next(). Typically a pick_next() implementation would call attach() with the context* it is about to return. It must first remove f from its ready queue. You should never pass a pinned_context to attach(), because you should never have called its detach() method in the first place.

detach() must be called on the thread with which the fiber is currently associated. Its preconditions: *this must not be the running fiber’s context. It must not be blocked or terminated. It must not be a pinned_context. It must not be detached already. It must not already be linked into an algorithm implementation’s ready queue. Most of these conditions are implied by *this being passed to algorithm::awakened(); an awakened() implementation must, however, test for pinned_context. It must call detach() before linking *this into its ready queue.
Note:
In particular, it is erroneous to attempt to migrate a fiber from one thread to another by calling both detach() and attach() in the algorithm::pick_next() method. pick_next() is called on the intended destination thread. detach() must be called on the fiber’s original thread. You must call detach() in the corresponding awakened() method.
Note:
Unless you intend to make a fiber available for potential migration to a different thread, you should call neither detach() nor attach() with its context.

is_context(): type::worker_context here means any fiber not special to the library. For type::main_context the context is associated with the main fiber of the thread: the one implicitly created by the thread itself, rather than one explicitly created by Boost.Fiber. For type::dispatcher_context the context is associated with a dispatching fiber, responsible for dispatching awakened fibers to a scheduler’s ready-queue. The dispatching fiber is an implementation detail of the fiber manager. The context of the main or dispatching fiber, or of any fiber for which is_context(pinned_context) is true, must never be passed to context::detach().

ready_is_linked() specifically indicates whether context::ready_link() has been called on *this; it has no information about participation in any other containers.

remote_ready_is_linked(): a context signaled as ready by another thread is first stored in the fiber manager’s remote-ready-queue. This is the mechanism by which the fiber manager protects an algorithm implementation from cross-thread algorithm::awakened() calls.

wait_is_linked(): the context of a fiber waiting on a synchronization object (e.g. mutex, condition_variable, etc.) is stored in the wait-queue of that synchronization object.

ready_link(), remote_ready_link(), wait_link(): argument lst must be a doubly-linked list from Boost.Intrusive, e.g. an instance of boost::fibers::scheduler::ready_queue_t. Specifically, it must be a boost::intrusive::list compatible with the list_member_hook stored in the context object.
Effects:
Suspends the running fiber (the fiber associated with *this) until some other fiber passes this to context::schedule(). *this is marked as not-ready, and control passes to the scheduler to select another fiber to run.
Throws:
Nothing
Note:
This is a low-level API potentially useful for integration with other frameworks. It is not intended to be directly invoked by a typical application program.
Note:
The burden is on the caller to arrange for a call to schedule() with a pointer to this at some future time.

schedule() marks the fiber associated with context *ctx as being ready to run. This does not immediately resume that fiber; rather it passes the fiber to the scheduler for subsequent resumption. If the scheduler is idle (has not returned from a call to algorithm::suspend_until()), algorithm::notify() is called to wake it up.
Throws:
Nothing
Note:
This is a low-level API potentially useful for integration with other frameworks. It is not intended to be directly invoked by a typical application program.
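The suspend()/schedule() handshake can be modeled with std types; a bool per context plays the role of the ready mark. `toy_fiber_ctx` is an invented stand-in and this is a sketch of the contract, not Boost.Fiber internals.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// suspend() blocks the caller until some other thread passes this context
// to schedule(); the mutex makes the cross-thread handoff safe.
struct toy_fiber_ctx {
    std::mutex mtx;
    std::condition_variable cv;
    bool scheduled = false;                   // the "ready to run" mark

    void suspend() {                          // blocks until schedule(this)
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this] { return scheduled; });
        scheduled = false;                    // consume the mark
    }
};

void schedule(toy_fiber_ctx* ctx) {           // may run on another thread
    { std::lock_guard<std::mutex> lk(ctx->mtx); ctx->scheduled = true; }
    ctx->cv.notify_all();
}
```

As in the real API, the burden is on the caller to arrange the schedule() call; if no one ever calls it, suspend() blocks forever.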
Note:
It is explicitly supported to call schedule(ctx) from a thread other than the one on which *ctx is currently suspended; the corresponding fiber will be resumed on its original thread in due course.

Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)