
I'm trying to make a proxy to download server files with an authentication layer.

I'm using Golang (1.21.0) and Echo (4.11.1)

Problem description

When a user is downloading a big file and I kill the Echo server (Ctrl+C), the download is just marked as "terminated" instead of being "canceled" or put in an "interrupt error" state.

So the user can't retry and resume the download (with Range headers) when the server comes back up later...

Question

Is there a way to wait for every connection when stopping the server, so it is interrupted gracefully without killing user downloads?

Or, better, is there a way to mark the download as "failed" and not just "terminated" ?

Code used

package main

import (
    "github.com/labstack/echo/v4"
)

func main() {
    e := echo.New()
    e.GET("/file", func(c echo.Context) error {
        return c.Attachment("big_file.zip", "original file name.zip")
    })
    e.Logger.Fatal(e.Start(":1323"))
}
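
For context on the Range-based resume mentioned above: it only works if the handler honours Range requests. http.ServeContent handles Range and If-Range automatically (and, as far as I can tell, c.Attachment ultimately serves the file through the same machinery), so a minimal sketch of the same route written directly against net/http would look roughly like this:

package main

import (
    "net/http"
    "os"

    "github.com/labstack/echo/v4"
)

func main() {
    e := echo.New()
    // Serve the file through http.ServeContent, which honours Range and
    // If-Range headers, so an interrupted client can resume the download.
    e.GET("/file", echo.WrapHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("big_file.zip")
        if err != nil {
            http.Error(w, "file not found", http.StatusNotFound)
            return
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            http.Error(w, "stat failed", http.StatusInternalServerError)
            return
        }

        w.Header().Set("Content-Disposition", `attachment; filename="original file name.zip"`)
        http.ServeContent(w, r, "big_file.zip", info.ModTime(), f)
    })))
    e.Logger.Fatal(e.Start(":1323"))
}

With that in place, an interrupted client can resume with e.g. curl -C - once the server is back up.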

1 Answer


I guess the issue you're facing is that when you stop the Echo server while a user is downloading a file, the download is marked as "terminated" instead of being "canceled" or in an "interrupt error" state. This prevents the user from retrying the download or resuming it later using Range headers.

To address this issue, you can create a custom implementation of the http.Handler interface and use it in Echo to handle file downloads. This custom handler can keep track of active downloads and gracefully interrupt them when the server is stopped.

Here's an example of how you can modify your code to achieve this:


package main

import (
    "context"
    "errors"
    "io"
    "log"
    "net/http"
    "os"
    "os/signal"
    "strconv"
    "sync"
    "time"

    "github.com/labstack/echo/v4"
)

// downloadHandler keeps track of every in-flight download so the server
// can tell each of them to stop when it is shutting down.
type downloadHandler struct {
    activeDownloads map[chan struct{}]struct{}
    mu              sync.Mutex
}

func (h *downloadHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // Register a per-download channel that is closed when the server shuts down.
    done := make(chan struct{})
    h.mu.Lock()
    h.activeDownloads[done] = struct{}{}
    h.mu.Unlock()

    // Unregister the download when the handler returns, unless the
    // shutdown code already removed (and closed) it.
    defer func() {
        h.mu.Lock()
        if _, ok := h.activeDownloads[done]; ok {
            delete(h.activeDownloads, done)
            close(done)
        }
        h.mu.Unlock()
    }()

    // Handle the file download. Once streaming has started the headers
    // are already sent, so a failure here can only be logged.
    if err := h.handleDownload(w, r, done); err != nil {
        log.Printf("download aborted: %v", err)
    }
}

func (h *downloadHandler) handleDownload(w http.ResponseWriter, r *http.Request, done <-chan struct{}) error {
    filePath := "big_file.zip"
    file, err := os.Open(filePath)
    if err != nil {
        http.Error(w, "file not found", http.StatusNotFound)
        return err
    }
    defer file.Close()

    // Set the headers for the file download. Announcing the full size lets
    // the client detect a short body and resume later with a Range request.
    w.Header().Set("Content-Disposition", `attachment; filename="original file name.zip"`)
    w.Header().Set("Content-Type", "application/octet-stream")
    if info, statErr := file.Stat(); statErr == nil {
        w.Header().Set("Content-Length", strconv.FormatInt(info.Size(), 10))
    }

    // Copy the file in chunks so the loop can notice a shutdown between writes.
    buf := make([]byte, 32*1024)
    for {
        select {
        case <-done:
            // The server is shutting down: stop mid-transfer so the client
            // sees an incomplete download instead of a "finished" one.
            return errors.New("download interrupted by server shutdown")
        default:
        }

        n, readErr := file.Read(buf)
        if n > 0 {
            if _, writeErr := w.Write(buf[:n]); writeErr != nil {
                return writeErr
            }
            if f, ok := w.(http.Flusher); ok {
                f.Flush()
            }
        }
        if readErr == io.EOF {
            return nil
        }
        if readErr != nil {
            return readErr
        }
    }
}

func main() {
    // Listen for the interrupt signal (Ctrl+C).
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, os.Interrupt)

    // Initialize the download handler and its map of active downloads.
    handler := &downloadHandler{
        activeDownloads: make(map[chan struct{}]struct{}),
    }

    // Create a new Echo server and register the custom download handler.
    e := echo.New()
    e.GET("/file", echo.WrapHandler(handler))

    // Start the server in a goroutine; ErrServerClosed is the normal
    // result of a graceful shutdown, not a fatal error.
    go func() {
        if err := e.Start(":1323"); err != nil && err != http.ErrServerClosed {
            e.Logger.Fatal(err)
        }
    }()

    // Wait for the stop signal.
    <-stop

    // Tell every active download to stop...
    handler.mu.Lock()
    for done := range handler.activeDownloads {
        delete(handler.activeDownloads, done)
        close(done)
    }
    handler.mu.Unlock()

    // ...then shut the server down gracefully, giving the handlers a
    // moment to return before the listener is closed.
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    if err := e.Shutdown(ctx); err != nil {
        e.Logger.Fatal(err)
    }
}

In this modified code, a custom downloadHandler struct is implemented to handle file downloads. It keeps track of active downloads in a map keyed by a per-download done channel, which is closed to signal that the download should stop.

When a request comes in, a new done channel is created and added to the map. The request and response are then passed to the handleDownload method, where you can implement your file downloading logic.

Inside handleDownload, the file is copied in chunks and the loop checks the done channel between writes; when the channel is closed, the handler stops mid-transfer so the client sees an incomplete download rather than an apparently successful one. A deferred cleanup in ServeHTTP removes the entry from the map (and closes its channel) when a download finishes normally.

When the server receives a stop signal (e.g., Ctrl+C), it first notifies all active downloads about the interruption by closing their respective done channels, and then shuts down gracefully with e.Shutdown(ctx), which waits for the in-flight handlers to return.

This implementation ensures that the download is interrupted cleanly when the server is stopped: because the Content-Length header announces the full file size, the client can tell the transfer is incomplete and retry or resume the download later.
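
For completeness, here is a rough, hypothetical sketch of the client side (not part of the answer above): once the server is back up, a client that kept its partial file can resume with a standard Range request. Note that the handler above streams the whole file and does not itself implement Range, so actual resuming additionally requires serving through something Range-aware such as http.ServeContent (as in the sketch after the question's code). The partial file name below is made up for the example.

package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
)

// Resume an interrupted download by appending to the partial file,
// asking the server only for the bytes we don't have yet.
func main() {
    const partial = "big_file.zip.part" // hypothetical name of the partially downloaded file

    f, err := os.OpenFile(partial, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    info, err := f.Stat()
    if err != nil {
        panic(err)
    }

    req, err := http.NewRequest(http.MethodGet, "http://localhost:1323/file", nil)
    if err != nil {
        panic(err)
    }
    // Ask for everything after the bytes we already have.
    req.Header.Set("Range", fmt.Sprintf("bytes=%d-", info.Size()))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // 206 Partial Content means the server honoured the Range header.
    if resp.StatusCode != http.StatusPartialContent {
        fmt.Println("server did not honour the Range request, status:", resp.Status)
        return
    }

    if _, err := io.Copy(f, resp.Body); err != nil {
        panic(err)
    }
}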

It's a little long, sorry for that.
