Goroutine Collaboration You Should Know In Golang

Advanced Techniques for Efficient Concurrency In Go

(Illustration from github.com/MariaLetta/free-gophers-pack)

2024 has passed, and this is my first article of 2025. Happy new year, everyone! 2024 was a memorable year, and Chinese New Year is about a month away, so I'll take some time then to look back on the past year!

Now let’s take a look at the first article of 2025!


Although Go has made Goroutines remarkably lightweight, they still carry some overhead, and the largest cost comes from stack growth.

Each Goroutine starts with a 2KB stack. The compiler inserts a check in function prologues that calls morestack when the remaining stack is too small; if the stack must grow, copystack allocates a larger stack and copies the entire old one over. In practice, most production programs follow fairly fixed patterns: a server, for example, pushes every request through the same chain of function calls. If that chain ultimately grows the stack to 8KB, the program pays the copystack cost again and again, once per request.
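For a rough illustration, the sketch below forces a fresh Goroutine's initial 2KB stack to grow by recursing with a sizable per-frame buffer. The buffer size and recursion depth here are arbitrary choices of mine, and the actual growth thresholds are runtime internals that vary across Go versions:

package main

import (
	"fmt"
	"sync"
)

//go:noinline
func deep(n int, v byte) byte {
	var buf [256]byte                // per-frame stack usage keeps each frame large
	buf[int(v)%len(buf)] = v         // touch the buffer so it stays on the stack
	if n == 0 {
		return buf[int(v)%len(buf)]
	}
	return deep(n-1, v+1)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // a fresh goroutine starts with a small (2KB) stack
		defer wg.Done()
		// 64 frames x 256B of locals exceed the initial stack, so the
		// runtime must grow (copy) the stack along the way.
		fmt.Println(deep(64, 0))
	}()
	wg.Wait()
}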

Based on this, we would like each request to reuse a Goroutine whose stack was already expanded by previous requests.

Reusable Goroutines

type Pool struct {
	tasks chan func()
}

func (p *Pool) Go(task func()) {
	select {
	case p.tasks <- task: // reuse an existing worker
		return
	default:
	}
	go func() { // start a new worker
		for {
			task()
			task = <-p.tasks // wait for the next task
		}
	}()
}

This is a bare-bones Goroutine pool in Go, leaving out logic such as idle-timeout teardown. If the concurrency level is 3, only 3 resident Goroutines are needed to complete all tasks, and each of them keeps its already-grown stack between tasks.

For example:
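Here is a minimal usage sketch (the task count is an arbitrary choice). Note that tasks should be an unbuffered channel: a send then succeeds only when some worker is actually blocked waiting for work; otherwise the select falls through to default and Go spawns a new worker.

package main

import (
	"fmt"
	"sync"
)

func main() {
	// Unbuffered on purpose: a send succeeds only if a worker is waiting.
	p := &Pool{tasks: make(chan func())}

	var wg sync.WaitGroup
	for i := 0; i < 9; i++ {
		wg.Add(1)
		i := i // capture the loop variable (needed before Go 1.22)
		p.Go(func() {
			defer wg.Done()
			fmt.Println("task", i)
		})
	}
	wg.Wait() // a few resident workers handle all 9 tasks, reusing their stacks
}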

Cooperative Goroutines

“Don’t communicate by sharing memory; instead, share memory by communicating.”

Because Goroutines switch contexts cheaply, we can simply hand memory objects “across” to whichever Goroutine needs them next, without using locks to guarantee concurrency safety.

In concurrent programming, the producer-consumer model is one of the most common scenarios. We’ll take this as an example to see how to achieve cooperative concurrency in Go. By leveraging Go’s built-in channel data structure, we can implement a simple message queue with just a few lines of code:

A Simple Producer-Consumer Model

type Factory struct {
	queue chan string
}

// Produce places a message into the shared buffer,
// blocking if the buffer is full.
func (f *Factory) Produce(msg string) {
	f.queue <- msg
}

// Consume takes messages out of the buffer one by one
// until the channel is closed.
func (f *Factory) Consume() {
	for msg := range f.queue {
		handler(msg)
	}
}

In this model, the producer generates data and places it into a shared buffer, while the consumer takes data out of the buffer and processes it. The buffer decouples the two sides, letting producer and consumer run independently and improving the system's throughput and responsiveness.

For example:
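A minimal sketch, assuming handler simply prints each message and the producer closes the channel when done:

package main

import "fmt"

func handler(msg string) { fmt.Println("consumed:", msg) }

func main() {
	// A buffered queue lets the producer run ahead of the consumer.
	f := &Factory{queue: make(chan string, 16)}

	go func() {
		for i := 0; i < 5; i++ {
			f.Produce(fmt.Sprintf("msg-%d", i))
		}
		close(f.queue) // no more messages; ends the consumer's range loop
	}()

	f.Consume()
}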

Batch Consumption

The channel-based message queue has a drawback: every receive wakes the consumer for just one element, and there is no way to take everything currently buffered in a single operation.

In many scenarios, we want to consume all currently queued elements in one batch. For that, we can implement the queue as a lock-protected slice, while still using a channel as a wakeup signal between Goroutines:

type ListFactory struct {
	mu     sync.Mutex
	queue  []string
	wakeup chan struct{} // capacity 1: at most one pending "data available" signal
}

func NewListFactory() *ListFactory {
	f := new(ListFactory)
	f.wakeup = make(chan struct{}, 1)
	return f
}

func (f *ListFactory) Produce(msg string) {
	f.mu.Lock()
	f.queue = append(f.queue, msg)
	if len(f.queue) == 1 { // the queue just went from empty to non-empty
		select {
		case f.wakeup <- struct{}{}: // notify the consumer
		default: // a signal is already pending; don't block
		}
	}
	f.mu.Unlock()
}

func (f *ListFactory) Consume() {
	for {
		<-f.wakeup // block until there is something to consume

		f.mu.Lock()
		cache := f.queue // grab the whole batch at once
		f.queue = nil
		f.mu.Unlock()

		batchHandler(cache)
	}
}

For example:
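A minimal sketch, assuming batchHandler simply reports how many messages it received; bursts of Produce calls collapse into a handful of batches:

package main

import "fmt"

func batchHandler(msgs []string) { fmt.Println("batch of", len(msgs)) }

func main() {
	f := NewListFactory()

	go func() {
		for i := 0; i < 100; i++ {
			f.Produce(fmt.Sprintf("msg-%d", i))
		}
	}()

	f.Consume() // loops forever in this sketch; shutdown is covered below
}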

To further understand the code, we can look at some error cases.

1. Error case 1: Deadlock
// Close the wakeup channel to signal the consumer to stop.
// close(factory.wakeup)

If the close(factory.wakeup) call is missing, then after the consumer drains the last batch it blocks forever on the next <-f.wakeup. The exit check below can never run, so the program neither makes progress nor terminates, and the runtime reports a deadlock:

if len(f.queue) == 0 {
	f.mu.Unlock()
	break
}
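For completeness, here is one way to make this Consume exit cleanly, assuming the producer calls close(f.wakeup) after its final Produce. Receiving from a closed channel never blocks, so the empty-queue check is always reached:

func (f *ListFactory) Consume() {
	for {
		<-f.wakeup // returns immediately once wakeup is closed

		f.mu.Lock()
		if len(f.queue) == 0 {
			f.mu.Unlock()
			break // nothing left and no further signals: done
		}
		cache := f.queue
		f.queue = nil
		f.mu.Unlock()

		batchHandler(cache)
	}
}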
2. Error case 2: Not Consuming in Time
func (f *ListFactory) Consume() {
	defer f.done.Done() // done: a sync.WaitGroup field assumed on ListFactory

	for {
		f.mu.Lock()
		if len(f.queue) == 0 {
			f.mu.Unlock()
			break // the queue looks empty, so the consumer gives up immediately
		}

		cache := f.queue
		f.queue = nil // clear the queue
		f.mu.Unlock()

		<-f.wakeup
		batchHandler(cache)
		time.Sleep(1 * time.Second) // simulate batch processing time
	}
}

Because this version checks the queue before waiting on a wakeup signal, the consumer usually runs before the producer has appended anything, sees an empty queue, and breaks immediately; the program then only prints All tasks completed. without consuming any messages.

Improving Consumption Efficiency

In the previous implementation, the consumer Goroutine may wake up as soon as a single message lands in the queue. In some scenarios, however, we want the consumer to accumulate a number of messages and process them as one batch; typical examples are asynchronous log writing and data merging.

The simplest approach is to sleep for a window of time, letting messages accumulate before handling them. The cost is that every batch, even the last one with nothing behind it, is delayed by at least one full window.
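A sketch of that sleep-based variant, with an arbitrary 10ms window:

func (f *ListFactory) Consume() {
	for {
		<-f.wakeup
		// Let more messages pile up for one window. Even the last batch,
		// with nothing behind it, is delayed by the full window.
		time.Sleep(10 * time.Millisecond)

		f.mu.Lock()
		cache := f.queue
		f.queue = nil
		f.mu.Unlock()

		batchHandler(cache)
	}
}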

In such cases, we can instead use runtime.Gosched() to implement a bounded, load-adaptive wait:

func (f *ListFactory) Consume() {
	for {
		<-f.wakeup
		runtime.Gosched() // yield once so producers can append more messages

		f.mu.Lock()
		cache := f.queue
		f.queue = nil
		f.mu.Unlock()

		batchHandler(cache)
	}
}

func main() {
	N := 32
	f := NewListFactory() // note: new(ListFactory) would leave wakeup nil and deadlock
	go func() {
		for i := 0; i < N; i++ {
			f.Produce("hello")
			runtime.Gosched()
		}
	}()
	f.Consume()
}

The effect of runtime.Gosched(): this function from the runtime package yields the current Goroutine's time slice, letting other Goroutines get a chance to run.

In a busy program, many (say N) Goroutines keep calling Produce while there is only one consumer Goroutine. Assuming every Goroutine gets an equal share of scheduling, by the time the consumer is scheduled again the producers have appended on the order of N messages, so one wakeup can consume a batch of roughly N messages (in the ideal case).

If the system is idle, with few producer Goroutines and no other runnable Goroutines, the consumer regains the CPU almost immediately after calling runtime.Gosched(), so messages are still consumed promptly.

This active yielding of execution rights allows for more proactive scheduling between coroutines, thereby realizing the true “cooperation” ability of coroutines.

To further understand the code, we can look at some error cases.

1. Case 1: Deadlock

// Consume processes messages from the queue in batches.
func (f *ListFactory) Consume() {
	defer f.done.Done() // done: a sync.WaitGroup field assumed on ListFactory
	for {
		// Wait for a signal from the producer.
		<-f.wakeup
		// Yield CPU time to allow the producer to run.
		runtime.Gosched()

		f.mu.Lock()
		if len(f.queue) == 0 {
			f.mu.Unlock()
			break // exit the loop once the queue is empty
		}

		cache := f.queue
		f.queue = nil // clear the queue
		f.mu.Unlock()

		// Process the batch of messages.
		batchHandler(cache)
	}
}

func main() {
	N := 32 // number of messages to produce
	f := NewListFactory()

	// Start the consumer in the main goroutine.
	f.done.Add(1)
	f.Consume() // blocks forever: the producer below has not started yet

	// Start a producer goroutine (never reached).
	go func() {
		for i := 0; i < N*3; i++ {
			f.Produce(fmt.Sprintf("Message %d", i+1))
			runtime.Gosched() // simulate production delay
		}

		// Signal the consumer that no more tasks will come.
		close(f.wakeup)
	}()

	f.done.Wait()
	fmt.Println("All tasks completed.")
}

The consumer blocks in the main goroutine waiting for a wakeup signal, but the producer goroutine is only launched after f.Consume() returns, so it never starts. Everyone ends up waiting, and the runtime reports a deadlock.
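One sketch of a fix is simply to reorder main so the producer goroutine is launched before the blocking f.Consume() call:

func main() {
	N := 32
	f := NewListFactory()
	f.done.Add(1)

	// Launch the producer first so both sides can make progress.
	go func() {
		for i := 0; i < N*3; i++ {
			f.Produce(fmt.Sprintf("Message %d", i+1))
			runtime.Gosched()
		}
		close(f.wakeup) // signal: no more messages will come
	}()

	f.Consume() // now safe to block in the main goroutine
	f.done.Wait()
	fmt.Println("All tasks completed.")
}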

2. Case 2: Exit without Consuming

// Consume processes messages from the queue in batches.
func (f *ListFactory) Consume() {
	defer f.done.Done() // done: a sync.WaitGroup field assumed on ListFactory
	for {
		// Yield CPU time to allow the producer to run.
		runtime.Gosched()

		f.mu.Lock()
		if len(f.queue) == 0 {
			f.mu.Unlock()
			break // exit the loop once the queue is empty
		}

		cache := f.queue
		f.queue = nil // clear the queue
		f.mu.Unlock()

		// Process the batch of messages.
		batchHandler(cache)

		// Wait for a signal from the producer.
		<-f.wakeup
	}
}

func main() {
	N := 32 // number of messages to produce
	f := NewListFactory()

	// Start the consumer in the main goroutine.
	f.done.Add(1)
	f.Consume() // returns immediately: the queue is still empty

	// Start a producer goroutine.
	go func() {
		for i := 0; i < N*3; i++ {
			f.Produce(fmt.Sprintf("Message %d", i+1))
			runtime.Gosched() // simulate production delay
		}

		// Signal the consumer that no more tasks will come.
		close(f.wakeup)
	}()

	f.done.Wait()
	fmt.Println("All tasks completed.")
}

Here runtime.Gosched() achieves nothing: when Consume runs, the producer goroutine has not been started yet (main only launches it after f.Consume() returns), so the consumer sees an empty queue and exits immediately.

More


More Series Articles about You Should Know In Golang:

https://wesley-wei.medium.com/list/you-should-know-in-golang-e9491363cd9a

And I’m Wesley, delighted to share knowledge from the world of programming. 

Don’t forget to follow me for more informative content, or feel free to share this with others who may also find it beneficial. It would be a great help to me.

Give me some free claps, highlights, or replies, and I’ll pay attention to those reactions, which will determine whether or not I continue to post this type of article.

See you in the next article. 👋

中文文章: https://programmerscareer.com/zh-cn/golang-advanced-01/
Author: Medium, LinkedIn, Twitter
Note: Originally written at https://programmerscareer.com/golang-advanced-01/ at 2025-01-03 00:59.
Copyright: BY-NC-ND 3.0

