
I am running goroutines in my code. Say I set my thread count to 50: it will not run the first 49 requests, but it will run the 50th request and then continue with the rest. I am not really sure how to describe the issue I am having, and it gives no errors. This has only happened while using fasthttp; it works fine with net/http. Could it be an issue with fasthttp? (This is not my whole code, just the area where I think the issue is occurring.)

    threads := 50
    var Lock sync.Mutex
    semaphore := make(chan bool, threads)

    for len(userArray) != 0 {
        semaphore <- true
        go func() {
            Lock.Lock()
            var values []byte
            defer func() { <-semaphore }()
            fmt.Println(len(userArray))
            if len(userArray) == 0 {
                return
            }
            values, _ = json.Marshal(userArray[0])
            currentArray := userArray[0]
            userArray = userArray[1:]
            client := &fasthttp.Client{
                Dial: fasthttpproxy.FasthttpHTTPDialerTimeout(proxy, time.Second * 5),
            }
            time.Sleep(1 * time.Nanosecond)
            Lock.Unlock()

This is the output I get (the numbers are the number of requests left):

200
199
198
(countdown continues one number per line, with no request output, down to 152)
151
(10 lines of output from req 151)
150
(10 lines of output from req 150)
cont.

Sorry if my explanation is confusing; I honestly don't know how to explain this error.

tanpug
  • Not familiar with `fasthttp`, but most clients (including net/http's `http.Client`) are designed to be created once and used concurrently. In your code you are creating a client for each goroutine. Create one client and reuse it. – colm.anseo May 22 '21 at 20:41
  • The goroutine may terminate without unlocking the mutex. That may or may not be related to your problem. If it works with the standard HTTP library, use that; HTTP is very rarely the bottleneck in a program. – Burak Serdar May 22 '21 at 20:44
  • @colm.anseo I have to create a client for each goroutine because I need to use a different proxy for each request, and with fasthttp I need to define the client every time to grab a new proxy. – tanpug May 22 '21 at 20:51
  • @Burak Serdar I am not sure why it would terminate without unlocking, but I will look into that. I cannot use net/http because of separate issues with it. – tanpug May 22 '21 at 20:52
  • @tanpug there have been race issues related to the fasthttp library; it uses unsafe to cut corners. Use the standard HTTP library. If len(userArray) == 0, the goroutine terminates without unlocking (see the sketch below). – Burak Serdar May 22 '21 at 20:56
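
To illustrate that last point, here is a minimal sketch of the goroutine body from the question (reusing the question's userArray, Lock, and semaphore, with the request code still omitted) in which the mutex is released on every exit path:

    go func() {
        defer func() { <-semaphore }() // always free the semaphore slot

        Lock.Lock()
        if len(userArray) == 0 {
            Lock.Unlock() // without this, the early return keeps the mutex locked forever
            return
        }
        values, _ := json.Marshal(userArray[0])
        currentArray := userArray[0]
        userArray = userArray[1:]
        Lock.Unlock()

        // ... create the per-request client and send the request here,
        // outside the critical section ...
        _, _ = values, currentArray
    }()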

1 Answer


I think the problem is with the scoping of the variables. To represent the queueing, I'd have a pool of parallel worker goroutines that all pull from the same channel, and then wait for them with a WaitGroup. The exact code might need to be adapted, as I don't have a Go compiler at hand, but the idea is like this:

    threads := 50
    queueSize := 100 // trying to add more than this into the queue will block

    jobQueue := make(chan MyItemType, queueSize)

    var wg sync.WaitGroup

    processQueue := func(jobQueue <-chan MyItemType) {
        defer wg.Done()
        for item := range jobQueue {
            values, _ := json.Marshal(item) // doesn't seem to be used?
            client := &fasthttp.Client{
                Dial: fasthttpproxy.FasthttpHTTPDialerTimeout(proxy, time.Second*5),
            }
            // ... build and send the request with client and values here ...
            _, _ = values, client
        }
    }

    for i := 0; i < threads; i++ {
        wg.Add(1)
        go processQueue(jobQueue)
    }

    for _, item := range userArray { // assuming userArray holds MyItemType values
        jobQueue <- item
    }
    close(jobQueue) // no more work: lets the workers' range loops finish
    wg.Wait()

Now you can put items into jobQueue and they will be processed by one of these threads.
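
Since each request needs to go through a different proxy (per the comments under the question), one way to keep that with this worker pool is to let every queued item carry its own proxy, so the worker builds a fresh client per job. The fields on MyItemType below are hypothetical placeholders, not something from the original code:

    // Hypothetical job type: each queued item carries the payload to send
    // and the proxy that this particular request should go through.
    type MyItemType struct {
        User  map[string]string // placeholder for the real user record type
        Proxy string            // proxy address for this request, e.g. "host:port"
    }

Inside the worker loop, the client is then built from the job itself:

    client := &fasthttp.Client{
        Dial: fasthttpproxy.FasthttpHTTPDialerTimeout(item.Proxy, time.Second*5),
    }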

Marcus Ilgner
  • I cannot do this, because the array variables need to be accessed from every goroutine, since it is a queue system. – tanpug May 22 '21 at 21:51
  • Ah, now I see... In that case, I think I have a better idea. It's already quite late here, though, so I can't help but feel like I'm still missing something. I'll update my answer with a different approach... – Marcus Ilgner May 22 '21 at 22:01