DEV Community

Discussion on: Are async frameworks really worth it?

 
jD91mZM2

Sure, use it the way you described. But in that case you might as well just use a blocking API. In fact, blocking is probably better, since it's easier on the CPU.

My use case was running it in gtk::idle_add/timeout_add, which only supports non-blocking calls (it runs on the same thread). I was also reading from it over a mutex, which means I could not keep the mutex locked for a whole blocking read.

But what if you need to wait on multiple IOs, handled by different code? How will your approach work there?

for thing in things {
    if let Some(read_thing) = try_read(thing) {
        return read_thing;
    }
}

?

 
Idan Arye

My use case was running it in gtk::idle_add/timeout_add, which only supports non-blocking calls (it runs on the same thread). I was also reading from it over a mutex, which means I could not keep the mutex locked for a whole blocking read.

A GUI's event loop is very similar to an async framework's reactor, so you were already using one behind the scenes.

for thing in things...

I said "handled by different code" - in your case they all go to the same code, the one that called that function.

Consider this:

This small program takes the numbers from 1 to 9, uses the math.js webservice to square them, and then uses that webservice again to multiply each result by 100. This, of course, I could do inside Rust - but I want to demonstrate an async flow, so I make them web requests - and I also added a random delay to each request to simulate real-world scenarios where requests can take different amounts of time.

For each number, this program sends two requests - one for squaring it and one for multiplying it by 100. If I only wanted one request, your approach could work (pseudo-Rust):

fn next_available() -> Option<(usize, i64)> {
    for (i, (orig_index, request)) in remaining_requests.iter().enumerate() {
        if let Some(response) = request.try_read() {
            remaining_requests.remove(i);
            return Some((orig_index, response));
        }
    }
    None
}

fn timeout_callback() {
    if let Some((i, response)) = next_available() {
        result[i] = response.data;
    }
    timeout_add(timeout_callback);
}

But... I do two different requests for each number, and handle their responses differently - for the first I send another request based on its result, and for the second I store the result in a vector (actually join_all does that for me - I just log it and pass it on). How would you do that? Use two callbacks?

 
jD91mZM2

Alright, you're making some good points. I suppose I should see if I can use GTK+'s reactor to read a connection. I doubt it, but perhaps.

In your example though, I don't see why you couldn't just make the connection vector hold a type parameter and add the second request with another type.

 
Idan Arye

In your example though, I don't see why you couldn't just make the connection vector hold a type parameter and add the second request with another type.

You mean something like this?

fn next_available() -> Option<(usize, Response)> {
    for (i, (orig_index, request)) in remaining_requests.iter().enumerate() {
        if let Some(response) = request.try_read() {
            remaining_requests.remove(i);
            return Some((orig_index, match request {
                FirstRequest(..) => FirstResponse(response),
                SecondRequest(..) => SecondResponse(response),
            }));
        }
    }
    None
}

if let Some((i, response)) = next_available() {
    match response {
        FirstResponse(response) => {
            send_second_request(response.data);
        },
        SecondResponse(response) => {
            result[i] = response.data;
        },
    }
}

It doesn't scale very well:

  • For each IO in the algorithm, you need to add a branch to the enum.
  • Code sequences that normally go together need to be broken into different places.
  • If you want to add other algorithms on the same loop, it's hard to keep them from tangling with each other.
 
jD91mZM2

Alright. Thanks for a nice discussion!