
I have the following protobuf definition and server-side code. While SayHello is resolving a long-running task, all other calls to the server (any SayHello or SayHello2) are put on a queue; the other calls are only resolved when the first call is complete.

I am likely missing a setting or config that would allow tonic to handle multiple simultaneous calls. I tried googling a lot but failed to find any resource on handling (resolving) multiple calls at the same time with tonic.

I know Flask (a Python framework) runs on a single core and needs gunicorn or some other server/worker manager to spin up instances with workers to handle multiple calls. Am I missing something like that?

My question is: how can I handle (resolve) multiple simultaneous calls made to a tonic server?

Thank you in advance for your help.

message HelloRequest {
  string name = 1;
}

message HelloResponse {
  string message = 1;
}

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse);
  rpc SayHello2 (HelloRequest) returns (HelloResponse); 
}

I have the following server side code

use tonic::{transport::Server, Request, Response, Status};

use greeter::greeter_server::{Greeter, GreeterServer};
use greeter::{HelloResponse, HelloRequest};

use std::io::{stdout, Write};


// Import the generated proto-rust file into a module
pub mod greeter {
    tonic::include_proto!("greeter");
}

// Implement the service skeleton for the "Greeter" service
// defined in the proto
#[derive(Debug, Default)]
pub struct MyGreeter {}

// Implement the service function(s) defined in the proto
// for the Greeter service (SayHello...)
#[tonic::async_trait]
impl Greeter for MyGreeter {
    async fn say_hello(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<HelloResponse>, Status> {
        println!("Received request from: {:?}", request);

        let response = greeter::HelloResponse {
            message: format!("Hello {}!", request.into_inner().name).into(),
        };
        println!("Sleeping ");
        let mut stdout = stdout();
        let mut print_count = 0;
        let mut flush_screen = 0;
        let mut print_char = ".";
        let mut char_set = 0;

        // Busy loop simulating a long-running task; prints a spinner to stdout
        for _x in 0..999_999_999 {
            print_count += 1;
            if print_count > 10000 {
                stdout.write_all(format!("\r{}", print_char).as_bytes()).unwrap();
                print_count = 0;
                flush_screen += 1;
            }

            if flush_screen > 100 {
                if char_set == 0 {
                    char_set = 1;
                    print_char = "<>";
                } else if char_set == 1 {
                    char_set = 0;
                    print_char = ".";
                }
                flush_screen = 0;
            }
        }

        Ok(Response::new(response))
    }

    async fn say_hello2(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<HelloResponse>, Status> {
        println!("Received request from: {:?}", request);

        let response = greeter::HelloResponse {
            message: format!("Hello {}!", request.into_inner().name).into(),
        };

        Ok(Response::new(response))
    }
}

// Runtime to run our server
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "0.0.0.0:50051".parse()?;
    let greeter = MyGreeter::default();

    println!("Starting gRPC Server...");
    Server::builder().concurrency_limit_per_connection(500)
        .add_service(GreeterServer::new(greeter))
        .serve(addr)
        .await?;

    Ok(())
}

  • What happens if you add `tokio::task::yield_now().await;` to the bottom of the `for _x` loop body? Also, using blocking I/O in an async function is bad; consider using `tokio::io::stdout` instead. – cdhowie Jul 20 '22 at 16:18
  • Thank you so much! Adding it at the beginning of the fn is allowing this task to get to the bottom of the queue and allowing the other tasks to move forward! I will be using `tokio::io::stdout`. – Sami Asimovich Jul 20 '22 at 17:35

0 Answers