
Is it concurrency-safe to stop a goroutine after a period of time, like this?

  1. Code (data race: ok is written by another goroutine while the loop reads it):
package main

import (
    "fmt"
    "time"
)

func main() {
    var ok byte
    time.AfterFunc(1000*time.Millisecond, func() {
        ok = 1
    })

    var i uint64
    for ok == 0 {
        i++ // CPU intensive task
    }
    fmt.Println(i) // 2_776_813_033
}

Terminal:

go run -race .

==================
WARNING: DATA RACE
Write at 0x00c000132010 by goroutine 8:
  main.main.func1()
      ./main.go:11 +0x46

Previous read at 0x00c000132010 by main goroutine:
  main.main()
      ./main.go:15 +0xf4

Goroutine 8 (running) created at:
  time.goFunc()
      go/src/time/sleep.go:180 +0x51
==================
80849692
Found 1 data race(s)

  2. Code (no data race, using sync/atomic):
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

func main() {
    var ok int32
    time.AfterFunc(1000*time.Millisecond, func() {
        atomic.StoreInt32(&ok, 1)
    })

    var i uint64
    for atomic.LoadInt32(&ok) == 0 {
        i++ // CPU intensive task
    }
    fmt.Println(i) // 2_835_935_488
}

Terminal:

go run -race .

31934042
Comment: https://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong – kostix Dec 28 '19 at 22:05

2 Answers


There is no guarantee that the busy-wait for loop will ever terminate, even after ok is set by the other goroutine. There is no explicit synchronization around the write and the read of ok, so the main goroutine is not guaranteed to observe the change. In other words, nothing establishes a happens-before relationship between the two goroutines.

https://golang.org/ref/mem

The second version of the code is safe with respect to ok: the atomic store and load provide the memory barriers necessary to establish a happens-before relationship (even though, at the time, the Go memory model did not spell out the guarantees of sync/atomic). Note, however, that such a tight loop may still prevent other goroutines from running. To guarantee the happens-before relationship you should use one of the synchronization primitives (mutex, channel), as sketched below.
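For instance, here is a minimal sketch (not from the original answer) of the channel-based variant of the question's loop; closing the done channel in the timer callback happens before the receive that observes it:

package main

import (
    "fmt"
    "time"
)

func main() {
    done := make(chan struct{})
    time.AfterFunc(1000*time.Millisecond, func() { close(done) })

    var i uint64
loop:
    for {
        select {
        case <-done: // the close synchronizes with this receive
            break loop
        default:
            i++ // CPU-intensive task
        }
    }
    fmt.Println(i)
}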

Burak Serdar

The second snippet is OK with Go 1.14+:

From the Go 1.14 release notes:

Goroutines are now asynchronously preemptible. As a result, loops without function calls no longer potentially deadlock the scheduler or significantly delay garbage collection. This is supported on all platforms except windows/arm, darwin/arm, js/wasm, and plan9/*.
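On older runtimes (or on the excluded platforms), the loop can be made cooperative by yielding explicitly from time to time. A hedged sketch, not part of the original answer; the yield interval of one million iterations is arbitrary:

package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
    "time"
)

func main() {
    var ok int32
    time.AfterFunc(1000*time.Millisecond, func() {
        atomic.StoreInt32(&ok, 1)
    })

    var i uint64
    for atomic.LoadInt32(&ok) == 0 {
        i++ // CPU-intensive task
        if i%1_000_000 == 0 {
            runtime.Gosched() // yield so other goroutines (and the GC) can run
        }
    }
    fmt.Println(i)
}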


Benchmarks of several ways to stop a goroutine after a period of time:

BenchmarkAfterFunc-8        1000000000     0.4468 ns/op    0 B/op    0 allocs/op
BenchmarkDoneChannel-8       121966824     9.855 ns/op     0 B/op    0 allocs/op
BenchmarkTimeSince-8          89790115     12.95 ns/op     0 B/op    0 allocs/op
BenchmarkContextErr-8         58508900     19.78 ns/op     0 B/op    0 allocs/op
BenchmarkAfterFuncMutex-8     58323207     20.00 ns/op     0 B/op    0 allocs/op
BenchmarkContext-8            48947625     27.43 ns/op     0 B/op    0 allocs/op

Benchmark code:

package main

import (
    "context"
    "sync"
    "sync/atomic"
    "testing"
    "time"
)

const d = 200 * time.Millisecond // to stop a task after a period of time

func BenchmarkTimeSince(b *testing.B) {
    t0 := time.Now()
    var count = 0
    for i := 0; i < b.N; i++ {
        if time.Since(t0) < d {
            count++
        }
    }
    _ = count
}

func BenchmarkContext(b *testing.B) {
    var ctx, cancel = context.WithTimeout(context.Background(), d)
    defer cancel()
    var count = 0
    for i := 0; i < b.N; i++ {
        select {
        case <-ctx.Done():
            // break
        default:
            count++
        }
    }
    _ = count
}
func BenchmarkContextErr(b *testing.B) {
    var ctx, cancel = context.WithTimeout(context.Background(), d)
    defer cancel()
    var count = 0
    for i := 0; i < b.N; i++ {
        if ctx.Err() == nil {
            count++
        }
    }
    _ = count
}

func BenchmarkAfterFunc(b *testing.B) {
    var done uint32
    time.AfterFunc(d, func() { atomic.StoreUint32(&done, 1) })
    var count = 0
    for i := 0; i < b.N; i++ {
        if atomic.LoadUint32(&done) == 0 {
            count++
        }
    }
    _ = count
}

func BenchmarkDoneChannel(b *testing.B) {
    var done = make(chan struct{})
    time.AfterFunc(d, func() { close(done) })
    var count = 0
    for i := 0; i < b.N; i++ {
        select {
        case <-done:
            // break
        default:
            count++
        }
    }
    _ = count
}

type foo struct {
    sync.Mutex
    state bool
}

// end marks the task as done, under the mutex.
func (p *foo) end() {
    p.Lock()
    p.state = true
    p.Unlock()
}

// isDone reports whether the deadline has passed, under the mutex.
func (p *foo) isDone() bool {
    p.Lock()
    b := p.state
    p.Unlock()
    return b
}

func BenchmarkAfterFuncMutex(b *testing.B) {
    var it = foo{}
    time.AfterFunc(d, func() { it.end() })
    var count = 0
    for i := 0; i < b.N; i++ {
        if !it.isDone() { // count while the deadline has not passed, as in the other benchmarks
            count++
        }
    }
    _ = count
}
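The numbers above can be reproduced with the standard benchmark tooling (results vary by machine):

Terminal:

go test -bench=. -benchmem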


https://medium.com/a-journey-with-go/go-asynchronous-preemption-b5194227371c

The preemption is an important part of the scheduler that lets it distribute the running time among the goroutines. Indeed, without preemption, a long-running goroutine that hogs the CPU would block the other goroutines from being scheduled. The version 1.14 introduces a new technique of asynchronous preemption, giving more power and control to the scheduler.

See:
Goroutine preemption
At which point a goroutine can yield?
https://github.com/golang/go/issues/10958
https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md

wasmup