
In a Go gRPC service I have a receiver (subscriber) event loop, and the receiver can detect that it wants the sender (publisher) to stop. But channel principles say that we should not close a channel on the receiver side, only on the sender side. How should this be handled?

The situation is as follows. Imagine a chat. The 1st client - the subscriber - receives messages, and its streaming cannot be done without a goroutine due to gRPC limitations. The 2nd client - the publisher - sends a message to the chat, so it runs in another goroutine. A message has to be passed from the publisher to the subscriber, but ONLY if the subscriber has not closed its connection (which seemingly forces closing the channel from the receiver side).

The problem in code:

//1st client goroutine - subscriber
func (s *GRPCServer) WatchMessageServer(req *WatchMessageRequest, stream ExampleService_WatchMessageServer) error {
    ch := s.NewClientChannel()
    // blocks the goroutine in this loop, sending to its stream, until a send fails or ch is closed
    for {
        msg, ok := <-ch
        if !ok {
            return nil
        }
        err := stream.Send(msg) // if this fails, we would need to close ch from the receiver side to "propagate" the closing signal
        if err != nil {
            return err
        }
    }
}

//2nd client goroutine - publisher
func (s *GRPCServer) SendMessage(ctx context.Context, req *SendMessageRequest) (*emptypb.Empty, error) {
    for i := range s.clientChannels {
        s.clientChannels[i] <- req
        // no way other than panic, to know when to remove channel from list. or need to make a wrapper with recover..
    }
    return nil
}
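
The closest channel-only workaround I can think of is to pair each message channel with a done channel that only the subscriber ever closes, so the publisher can select on both instead of relying on panic/recover. A rough sketch (the clientChan type and sendTo helper are made up for illustration):

// hypothetical wrapper pairing a message channel with a "subscriber gone" signal
type clientChan struct {
    msgs chan *SendMessageRequest // publisher -> subscriber messages
    done chan struct{}            // closed only by the subscriber when it stops reading
}

// publisher side: never closes msgs, it just stops sending once done is closed
func (s *GRPCServer) sendTo(c *clientChan, req *SendMessageRequest) bool {
    select {
    case c.msgs <- req:
        return true
    case <-c.done:
        return false // subscriber is gone, safe to drop it from the list
    }
}

The subscriber would then close only done (never msgs) when stream.Send fails. But maybe there is a cleaner way to do this with gRPC itself?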
xakepp35
  • A send operation on a closed channel panics, period. You either have to coordinate this between the senders and receivers, or design a different solution. How you should do this is hard to say without an actual example. – JimB Oct 12 '21 at 15:25
  • @JimB maybe the solution is just to handle the panic? I have provided an example in a PS2 section. – xakepp35 Oct 12 '21 at 15:28
  • don't "solve" this via panic recovery - follow @JimB 's advice and think of a redesign. – colm.anseo Oct 12 '21 at 15:31
  • You probably want a pair of channels. One for the server to transmit on, and one for the client to signal that its connection is closed. – user229044 Oct 12 '21 at 15:38
  • gRPC `client` and `server` can be running (and probably are) on entirely different networks/hardware/memory-space. `Go` channel communication makes no sense here. If the client wants to abort a stream - close the stream. The server will see this via the connection `context.Context` - so stop serving when this event is detected. – colm.anseo Oct 12 '21 at 15:43
  • Oh great, I can block streaming goroutine with `<-ServerStream.Context.Done()` – xakepp35 Oct 12 '21 at 15:44
  • A gRPC stream is not a channel. A stream *can* be closed on either side. If you had shown some code, the question might even have answered itself by now ;) – rustyx Oct 12 '21 at 16:11
  • @rustyx got a solution code – xakepp35 Oct 12 '21 at 16:47

1 Answer


I initially got a clue by searching, and the solution idea was provided in an answer here - thanks to that answer.

Here is the streaming solution sample code; I guess it's an implementation of the generic pub-sub problem:

//1st client goroutine - subscriber
func (s *GRPCServer) WatchMessageServer(req *WatchMessageRequest, stream ExampleService_WatchMessageServer) error {
    s.AddClientToBroadcastList(stream)
    select {
    case <-stream.Context().Done(): // as stackoverflow promised, this signals when the client closes its stream
        return stream.Context().Err() // the stream will be closed immediately after return
    case <-s.ctx.Done(): // program shutdown
        return s.ctx.Err()
    }
}

//2nd client goroutine - publisher
func (s *GRPCServer) SendMessage(ctx context.Context, req *SendMessageRequest) (*emptypb.Empty, error) {
    for _, clientStream := range s.clientStreams {
        err := clientStream.Send(req)
        if err != nil {
            // the subscriber closed its stream (or it broke), so drop it from the broadcast list
            s.RemoveClientFromBroadcastList(clientStream)
        }
    }
    return &emptypb.Empty{}, nil
}
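
The `AddClientToBroadcastList` / `RemoveClientFromBroadcastList` helpers are not shown above; a minimal sketch of what they could look like, assuming `clientStreams` is a plain slice guarded by a mutex (the `mu` field name is illustrative):

// assumed fields on GRPCServer:
//   mu            sync.Mutex
//   clientStreams []ExampleService_WatchMessageServer

func (s *GRPCServer) AddClientToBroadcastList(stream ExampleService_WatchMessageServer) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.clientStreams = append(s.clientStreams, stream)
}

func (s *GRPCServer) RemoveClientFromBroadcastList(stream ExampleService_WatchMessageServer) {
    s.mu.Lock()
    defer s.mu.Unlock()
    for i, cs := range s.clientStreams {
        if cs == stream {
            s.clientStreams = append(s.clientStreams[:i], s.clientStreams[i+1:]...)
            return
        }
    }
}

Since subscribers are added and removed concurrently with SendMessage, the publisher loop should also take the lock (or range over a copy of the slice taken under the lock).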
xakepp35