· Techtribe team · Tutorials  · 13 min read

Concurrency vs Parallelism in Go: A Hands-On Comparison

Master the differences between concurrency and parallelism in Go through relatable examples and useful code snippets

Hi everyone! Today, we’re diving into a super important topic that every programmer should understand: concurrency vs parallelism. We’ll be talking about these concepts a lot in future videos, so it’s crucial to get a clear grasp of them now.

To make things more exciting, I’ve got a fun and easy-to-understand example lined up. Plus, if you stick around until the end, I’ll share some practical tips and examples on how to work with concurrency and parallelism in Go.

Think about cooking a meal. Imagine you’re making a three-course dinner: a salad, a main dish, and a dessert.

Concurrency is like prepping the ingredients for all three courses at the same time. You chop vegetables for the salad, then while they marinate, you start boiling water for pasta. As the water heats up, you mix ingredients for the dessert. You’re not finishing each dish one by one, but rather working on parts of each dish. You’re managing multiple tasks by switching between them, which is essentially multitasking.

The following image illustrates what a concurrent meal preparation looks like.

Parallelism, on the other hand, is like having three friends in the kitchen, each one dedicated to making one course. One friend makes the salad, another one cooks the main dish, and the third prepares the dessert—all at the same time. Here, multiple tasks are being completed simultaneously, not just managed concurrently.

However, concurrency is more than just multitasking: it's about structuring your workflow to handle tasks efficiently. Think of the entire dinner preparation as a single process, with each task (chopping vegetables, boiling water, and mixing the dessert) acting as a separate thread within this single process. These threads run independently but need to be coordinated to ensure everything is ready on time.

For example, while mixing the dessert ingredients, you need to keep an eye on the boiling water to prevent it from boiling over. This coordination is the essence of multi-threading: managing multiple independent tasks concurrently within a single process to optimize your workflow and ensure a seamless cooking process.

In programming, concurrency is about structuring your code so it can handle multiple operations at once, like you multitasking in the kitchen. It's about managing multiple tasks that all make progress without necessarily finishing at the same time.

Parallelism, on the other hand, is about executing multiple operations at the exact same time, similar to your friends each cooking a different part of the meal simultaneously.

But to ensure all tasks are completed efficiently and in sync, we need a way for these tasks to communicate their progress and a method to wait for all of them to finish. In Go, the most commonly used tools for this purpose are channels and WaitGroups.

Channels allow different tasks, or "threads", to communicate with each other, making sure that each part of our dinner is prepared in the right order. Meanwhile, WaitGroups provide a mechanism to wait until all these tasks are done, ensuring everything is ready at the same time.

Take a look at the following gif from Level Up Coding to get a clearer picture of what happens inside your machine in both cases.

As the gif shows, for our code to be executed in parallel, the machine needs multiple cores or CPUs to process it simultaneously!

So when writing a concurrent program, your code may or may not be executed in parallel; in reality, it depends on the machine it runs on.

But we can improve this system even more. Imagine if, instead of you alone multitasking, you assign specific tasks to different threads. One thread is responsible for chopping vegetables, another for boiling water, and another for mixing the dessert. These threads need to communicate and coordinate to ensure the meal is ready simultaneously. For example, the vegetable-chopping thread should notify the salad-mixing thread once it’s done, so the salad can be assembled while the pasta is cooking.

Let's now see what each of these pieces looks like in Go. The tasks, or threads, would be goroutines, which are essentially lightweight threads. The process would be the currently running program. The communication would most likely happen through channels.

Let's see how this looks in Go code. We will use a simple time.Sleep to simulate some work, but you get the idea.

package main

import (
   "sync"
   "time"
)

func main() {
   now := time.Now()
   var wg sync.WaitGroup
   wg.Add(3) // Adding 3 tasks to wait for

   go func() {
       defer wg.Done() // Marking the task as done
       chopVegetables()
   }()

   go func() {
       defer wg.Done() // Marking the task as done
       boilWater()
   }()

   go func() {
       defer wg.Done() // Marking the task as done
       mixDessert()
   }()

   // Waiting for all tasks to be done
   wg.Wait()

   println("All tasks are done in", time.Since(now).Milliseconds(), "ms")
   prepareDinner()
   println("Dinner is served in", time.Since(now).Milliseconds(), "ms")
}

func chopVegetables() {
   now := time.Now()
   println("Chopping vegetables...")
   time.Sleep(400 * time.Millisecond)
   println("Chopped vegetables in", time.Since(now).Milliseconds(), "ms")
}

func boilWater() {
   now := time.Now()
   println("Boiling water...")
   time.Sleep(200 * time.Millisecond)
   println("Boiled water in", time.Since(now).Milliseconds(), "ms")
}

func mixDessert() {
   now := time.Now()
   println("Mixing dessert...")
   time.Sleep(300 * time.Millisecond)
   println("Mixed dessert in", time.Since(now).Milliseconds(), "ms")
}

func prepareDinner() {
   now := time.Now()
   println("Preparing dinner...")
   time.Sleep(500 * time.Millisecond)
   println("Dinner is ready", time.Since(now).Milliseconds(), "ms")
}

If we run this code, we will see that the total time to run all these tasks concurrently is roughly the time of the longest single task, not the sum of them all.

However, the previous code isn't using any communication mechanism; we are just printing messages to the console. Let's modify our code to use channels to communicate.

There are plenty of ways to do that. I will return a string and an error from each function. The following examples just simulate real-world tasks; in a real-world scenario you would have to handle errors properly, especially in Go, a language that encourages you to handle errors all the time.

package main

import (
   "fmt"
   "sync"
   "time"
)

func main() {
   start := time.Now()
   var wg sync.WaitGroup
   wg.Add(3) // Adding 3 tasks to wait for

   // Using a channel to signal when a task is done
   done := make(chan string, 3)
   errch := make(chan error, 3) // buffered for all 3 tasks, so a failing goroutine never blocks on send

   go func() {
       defer wg.Done() // Marking the task as done
       veg, err := chopVegetables()
       if err != nil {
           errch <- err
           return
       }
       done <- veg // Signaling that the task is done. We are sending a message to the channel
   }()

   go func() {
       defer wg.Done() // Marking the task as done
       boil, err := boilWater()
       if err != nil {
           errch <- err
           return
       }
       done <- boil // Signaling that the task is done. We are sending a message to the channel
   }()

   go func() {
       defer wg.Done() // Marking the task as done
       dessert, err := mixDessert()
       if err != nil {
           errch <- err
           return
       }
       done <- dessert // Signaling that the task is done. We are sending a message to the channel
   }()

   // Using a separate goroutine to wait for all tasks to be done and to close our communication channel
   go func() {
       wg.Wait()
       close(done)
   }()

   // Process messages from done and errch channels using select
   for completedTasks := 0; completedTasks < 3; {
       select {
       case msg := <-done:
           println(msg)
           completedTasks++
       case err := <-errch:
           if err != nil {
               println("Error while preparing dinner:", err.Error())
               println("Error took", time.Since(start).Milliseconds(), "ms to occur")
               return // here we exit the program, in a real world scenario you would handle / return the error properly
           }
       }
   }

   println("All tasks are done in", time.Since(start).Milliseconds(), "ms")
   prepareDinner()
   println("Dinner is served in", time.Since(start).Milliseconds(), "ms")
}

// the following examples are just simulating real world tasks, in a real world scenario you would have to handle errors properly
// especially in go, a language that you are almost forced to handle errors all the time
func chopVegetables() (string, error) {
   now := time.Now()
   println("Chopping vegetables...")
   time.Sleep(400 * time.Millisecond)
   vegetablePart := fmt.Sprintf("Chopped vegetables in %d ms", time.Since(now).Milliseconds())
   return vegetablePart, nil
}

func boilWater() (string, error) {
   now := time.Now()
   println("Boiling water...")
   time.Sleep(200 * time.Millisecond)
   boilWaterPart := fmt.Sprintf("Boiled water in %d ms", time.Since(now).Milliseconds())
   return boilWaterPart, nil
}

func mixDessert() (string, error) {
   now := time.Now()
   println("Mixing dessert...")
   time.Sleep(300 * time.Millisecond)
   dessertPart := fmt.Sprintf("Mixed dessert in %d ms", time.Since(now).Milliseconds())
   return dessertPart, nil
}

func prepareDinner() {
   now := time.Now()
   println("Preparing dinner...")
   time.Sleep(500 * time.Millisecond)
   println("Dinner is ready", time.Since(now).Milliseconds(), "ms")
}

If we run this code now, we will have the same output as the last one.

However, the two versions are not equivalent, for a number of reasons. Let's break it down for a moment.

  1. By returning the content instead of printing it, we can use it anywhere else, such as returning it as JSON or passing it to another service.

  2. By checking and returning errors, we can handle them correctly, which makes our life easier when debugging and also makes our codebase more robust and reliable.

  3. We could have used a loop over each channel to receive and handle its messages, but the select statement not only gives us more idiomatic code, it also lets us handle both completion signals and errors in one place.

So using the select statement helps us write more responsive and easier-to-understand code. When an error occurs, it is handled immediately, so we don't have to wait for all the other goroutines to finish.

Let's see this in action: let's return an error from the boilWater function, which is the fastest task. That way we can check whether the error is handled immediately, exiting our select loop.

func boilWater() (string, error) {
   now := time.Now()
   println("Boiling water...")
   time.Sleep(200 * time.Millisecond)
   boilWaterPart := fmt.Sprintf("Boiled water in %d ms", time.Since(now).Milliseconds())
   return boilWaterPart, fmt.Errorf("error while boiling water")
}

Let’s run our code

Perfect! It didn't wait for all goroutines to finish; instead it returned the error instantly, just as we wanted.

Let's look at some more key concepts about low-level programming that will set you apart from the crowd.

Now that you have a basic understanding of what a process is and how it differs from a thread, let's take a look at the differences between threads and goroutines.

First and foremost, goroutines are not OS threads; they are lightweight threads managed by the Go runtime. That said, let's see some of the differences between them.

Size

The first thing we should notice is the size. A goroutine starts with a stack of only a few kilobytes, a tiny fraction of the megabytes a typical OS thread reserves! There are a number of reasons for that: OS threads have to deal with many operating system needs, and goroutines don't.

Because goroutines are so small, you can spawn thousands of them without significant memory overhead. This lets us build highly concurrent systems without much hardware.

OS threads are much larger because they need to handle capabilities and protections provided by the operating system, which makes them more resource-intensive. Some things OS threads have to deal with that goroutines don't include:

  1. Stack size: OS threads typically have a larger stack, which is essentially a region of memory that stores temporary data such as function parameters, return addresses, and local variables. An OS thread is usually created with a relatively large default stack, between 1 and 2 MB; the size is usually predefined and cannot easily grow beyond this limit, and if the thread needs more stack space than initially allocated, it can cause a stack overflow. Goroutines, on the other hand, start with a small stack that can easily grow when needed.

  2. Interrupt Vectors: These are used to handle interrupts in a system; an interrupt signals the processor that an event needs immediate attention, like someone pressing a key on the keyboard, which the OS must handle right away. If goroutines had interrupt vectors, their memory footprint would grow significantly, since each interrupt vector requires memory for the address of its handler and any associated context or state information.

And many more…

Hardware dependence

OS threads must manage thread states, context switches, and interactions with hardware, making them dependent on the specific hardware and the OS itself. Goroutines, on the other hand, are managed by the Go runtime, which abstracts away the underlying hardware, so goroutines do not need to interact directly with the CPU or the operating system. This abstraction lets the Go runtime optimize the scheduling and execution of goroutines without being constrained by hardware specifics.

Easy communication

Goroutines primarily communicate using channels, which are built into the Go language. Channels provide a simple, safe, and efficient way to pass messages and synchronize between goroutines, and the syntax for using them is straightforward. Go also gives us powerful tools on top of them, like the select statement, which lets us handle multiple communication channels simultaneously and efficiently. Goroutines can also share memory directly, but that is the less common practice.

OS threads communicate in a much more complex way, relying on locks (mutexes), semaphores, or condition variables, and managing shared memory and synchronization between them can become really complex. Message passing between threads may even require IPC (inter-process communication) and involve the kernel, which adds further complexity.

Context switching

Switching between goroutines is quicker than switching between OS threads; the Go runtime minimizes the overhead of switching between tasks, which improves performance when running many goroutines.

OS threads, on the other hand, are much slower to switch between, because the OS has to save and restore more state information and ensure fairness and security, which increases the time each switch takes.

Scheduling

In Go, the runtime handles scheduling goroutines efficiently without the need for complex prioritization, making it easier to manage concurrent tasks.

OS threads, on the other hand, use a more complex scheduling model. The operating system manages thread prioritization and fairness across all running applications, which adds complexity and overhead.

Trust

The lack of trust is worth paying extra attention to. The Go runtime makes several assumptions: it assumes goroutines will behave well, since they are part of the same program, compiled by a trusted compiler. So the Go runtime skips many safety checks and their overhead to improve performance; it doesn't have to worry about a goroutine misbehaving, because if one does, it's probably just a bug in our program, since all goroutines run inside the same program.

OS threads, on the other hand, can't afford this luxury, so they are untrusted: they were not compiled by a trusted compiler, may not even have been compiled by the same compiler, and could have been written by malicious people trying to break into our system. For this reason the OS must enforce strict rules and protections, which adds overhead and complexity.

That's it for today! I hope you learned something useful.

If you found this content helpful, share it with someone who might like it too!
