What are coroutines in C++?

In what ways is it different from "Parallelism TS 2" and/or "Concurrency TS 2" (see the image below)?

The image below is from ISOCPP:

https://isocpp.org/files/img/wg21-timeline-2017-03.png




1 Answer

At an abstract level, coroutines split the idea of having an execution state off from the idea of having a thread of execution.

SIMD (single instruction, multiple data) has multiple "threads of execution" but only one execution state (it just works on multiple data). Arguably parallel algorithms are a bit like this, in that you have one "program" run on different data.

Threading has multiple "threads of execution" and multiple execution states. You have more than one program, and more than one thread of execution.

Coroutines have multiple execution states but do not own a thread of execution. You have a program, and the program has state, but it has no thread of execution.


The easiest example of coroutines is the generators or enumerables found in other languages.

In pseudocode:

function Generator() {
  for (i = 0 to 100)
    produce i
}

The Generator is called, and the first time it is called it returns 0. Its state is remembered (how much state is kept varies with the implementation of coroutines), and the next time you call it, it continues where it left off. So it returns 1 the next time. Then 2.

Finally it reaches the end of the loop and falls off the end of the function; the coroutine is finished. (What happens here varies based on the language we are talking about; in Python, it raises an exception.)

Coroutines bring this capability to C++.

There are two kinds of coroutines: stackful and stackless.

A stackless coroutine stores only its local variables and its location of execution in its state.

A stackful coroutine stores an entire stack (like a thread).

Stackless coroutines can be extremely lightweight. The last proposal I read involved basically rewriting your function into something a bit like a lambda; all local variables go into the state of an object, and labels are used to jump to/from the location where the coroutine "produces" intermediate results.
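
Very roughly, that transformation turns the pseudocode generator above into a hand-written state machine along these lines; the struct and member names here are invented for illustration, and a real compiler's rewrite is considerably more involved:

struct generator_frame {
    int i = 0;      // the generator's local variable, hoisted into the frame
    int state = 0;  // records where to resume on the next call

    // Each call resumes where the previous one left off, produces the next
    // value through `out`, and returns false once the loop has finished.
    bool resume(int& out) {
        switch (state) {
        case 0:
            for (i = 0; i < 100; ++i) {
                out = i;        // "produce i"
                state = 1;      // remember the resume point...
                return true;    // ...and suspend back to the caller
            case 1:;            // the next call jumps straight back in here
            }
            state = 2;
            return false;       // fell off the end: the coroutine is finished
        default:
            return false;
        }
    }
};

Calling resume repeatedly plays the role of resuming the coroutine; the switch-into-loop trick is one way of using labels to jump back to the suspension point.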

The process of producing a value is called "yield", as coroutines are a bit like cooperative multithreading; you are yielding the point of execution back to the caller.

Boost has an implementation of stackful coroutines; it lets you call a function to yield for you. Stackful coroutines are more powerful, but also more expensive.
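
As a rough illustration (not from the original answer), here is what the generator above might look like with Boost.Coroutine2, one of Boost's stackful coroutine libraries; the body runs on its own stack, and calling the sink suspends it and hands the value back to the caller:

#include <boost/coroutine2/all.hpp>
#include <iostream>

int main() {
    using coro_t = boost::coroutines2::coroutine<int>;

    // The lambda runs on its own stack; each call to sink(i) suspends it
    // and hands i back to the caller.
    coro_t::pull_type source([](coro_t::push_type& sink) {
        for (int i = 0; i < 100; ++i)
            sink(i);
    });

    for (int value : source)   // each iteration resumes the coroutine once
        std::cout << value << '\n';
}

Boost.Coroutine2 builds on Boost.Context, which supplies the stack switching.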


There is more to coroutines than a simple generator. You can await a coroutine in a coroutine, which lets you compose coroutines in a useful manner.

Coroutines, like if, loops and function calls, are another kind of "structured goto" that lets you express certain useful patterns (like state machines) in a more natural way.


The specific implementation of coroutines in C++ is a bit interesting.

At its most basic level, it adds a few keywords to C++: co_return, co_await, and co_yield, together with some library types that work with them.

A function becomes a coroutine by using one of those keywords in its body, so from its declaration it is indistinguishable from an ordinary function.

When one of those three keywords is used in a function body, some standard-mandated examination of the return type and arguments occurs, and the function is transformed into a coroutine. This examination tells the compiler where to store the function's state while the function is suspended.

The simplest coroutine is a generator:

generator<int> get_integers( int start=0, int step=1 ) {
  for (int current=start; true; current+= step)
    co_yield current;
}

co_yield suspends the function's execution, stores that state in the generator<int>, and then returns the value of current through the generator<int>.

You can loop over the integers returned.
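
generator<int> above is a library type, not something built into the language; the standard only mandates the hooks the compiler looks for. Below is a minimal sketch of what such a type might look like on top of the C++20 <coroutine> header. The promise_type member names are the ones the compiler actually looks for; the rest (next, value) is invented for this sketch, and production implementations such as C++23's std::generator or cppcoro::generator are far more careful:

#include <coroutine>
#include <exception>
#include <utility>

template <typename T>
class generator {
public:
    // The compiler finds this nested type when a function returning
    // generator<T> uses co_yield; its members define what the keywords do.
    struct promise_type {
        T current_{};

        generator get_return_object() {
            return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T v) {   // called by co_yield
            current_ = std::move(v);             // stash the value...
            return {};                           // ...and suspend
        }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };

    bool next() {                   // resume until the next co_yield
        handle_.resume();
        return !handle_.done();
    }
    T value() const { return handle_.promise().current_; }

    generator(generator&& other) noexcept
        : handle_(std::exchange(other.handle_, {})) {}
    ~generator() { if (handle_) handle_.destroy(); }

private:
    explicit generator(std::coroutine_handle<promise_type> h) : handle_(h) {}
    std::coroutine_handle<promise_type> handle_;
};

// Looping over the integers then looks like:
//   auto ints = get_integers();
//   while (ints.next()) use(ints.value());   // 0, 1, 2, ...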

co_await, meanwhile, lets you splice one coroutine onto another. If you are in one coroutine and you need the result of an awaitable thing (often a coroutine) before progressing, you co_await on it. If it is ready, you proceed immediately; if not, you suspend until the awaitable you are waiting on is ready.

std::future<std::expected<std::string>> load_data( std::string resource )
{
  // open_resource and read_line are assumed to be awaitable async operations;
  // std::expected's error type parameter is omitted here for brevity.
  auto handle = co_await open_resource(resource);
  while( auto line = co_await read_line(handle)) {
    if (std::optional<std::string> r = parse_data_from_line( line ))
       co_return *r;
  }
  co_return std::unexpected( resource_lacks_data(resource) );
}

load_data is a coroutine that generates a std::future which becomes ready once the named resource has been opened and we have parsed far enough to find the requested data.

open_resource and read_line are probably async coroutines that open a file and read lines from it. The co_await connects the suspended and ready states of load_data to their progress.
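
What counts as "an awaitable thing" is itself defined in user space: the compiler looks for await_ready, await_suspend, and await_resume on the awaited object (possibly obtained via an operator co_await). Here is a minimal sketch of a trivially ready awaitable; the struct name is invented for illustration:

#include <coroutine>

struct already_ready {
    int value;

    bool await_ready() const noexcept { return true; }       // never suspend
    void await_suspend(std::coroutine_handle<>) noexcept {}  // would receive the suspended caller
    int await_resume() const noexcept { return value; }      // the result of the co_await expression
};

// Inside a coroutine: int x = co_await already_ready{42};   // x == 42, no suspension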

C++ coroutines are much more flexible than this, as they were implemented as a minimal set of language features on top of user-space types. The user-space types effectively define what co_return, co_await, and co_yield mean -- I've seen people use them to implement monadic optional expressions, such that a co_await on an empty optional automatically propagates the empty state to the outer optional:

modified_optional<int> add( modified_optional<int> a, modified_optional<int> b ) {
  co_return (co_await a) + (co_await b);
}

instead of

std::optional<int> add( std::optional<int> a, std::optional<int> b ) {
  if (!a) return std::nullopt;
  if (!b) return std::nullopt;
  return *a + *b;
}
