By Vaughn Geber

Swift Concurrency: Dispatch Queues & Combine Unleashed

Concurrency is an essential aspect of modern programming, allowing developers to perform multiple tasks simultaneously and improve the performance and responsiveness of their applications. In Swift, one of the primary ways to implement concurrency is through the use of dispatch queues. In this article, we will explore the different types of dispatch queues, Quality of Service (QoS) levels, and best practices for writing concurrent code.


Serial vs Concurrent Queues

Dispatch queues can be either serial or concurrent, depending on how they execute their tasks. Serial queues execute tasks one at a time in the order they are submitted, while concurrent queues execute tasks simultaneously, with no guarantee of the order in which they will complete.


Here is an example of how to create a serial queue in Swift:

let serialQueue = DispatchQueue(label: "com.example.serialQueue")

And here is an example of how to create a concurrent queue:

let concurrentQueue = DispatchQueue(label: "com.example.concurrentQueue", attributes: .concurrent)

When submitting tasks to a serial queue, you can be sure that they will be executed in the order they were submitted. This can be useful for tasks that depend on each other, as you can guarantee that they will be executed in the correct order.


On the other hand, submitting tasks to a concurrent queue allows them to execute simultaneously, which can improve performance when tasks are independent of each other.
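As a small sketch of the guarantee described above (the queue label is illustrative), a sync call can be used to observe the order in which a serial queue ran its tasks:

```swift
import Dispatch

let serialQueue = DispatchQueue(label: "com.example.serial")
var order: [Int] = []

// On a serial queue, tasks run strictly in submission order.
for i in 1...3 {
    serialQueue.async {
        order.append(i)
    }
}

// A sync call on the same serial queue acts as a flush: by the time
// its block runs, all earlier tasks on the queue have finished.
serialQueue.sync {
    print(order) // [1, 2, 3]
}
```

Because `order` is only ever touched from the serial queue, no extra synchronization is needed to read it inside the final sync block.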


Quality of Service

Another important aspect of dispatch queues is Quality of Service (QoS), which allows you to prioritize tasks based on their importance. There are five commonly used QoS levels:

  • .userInteractive: Tasks that are user-facing and require immediate response, such as UI updates or animations.

  • .userInitiated: Tasks that are initiated by the user and require a quick response, such as opening a file or starting a network request.

  • .default: Tasks that are initiated by the application and are not time-sensitive.

  • .utility: Tasks that are not time-sensitive and can run in the background, such as file downloads or data processing.

  • .background: Tasks that are not time-sensitive and can run in the background for extended periods, such as backups or sync operations.

When creating a dispatch queue, you can specify a QoS level using the qos parameter. Here is an example of how to create a queue with a .userInteractive QoS:

let queue = DispatchQueue(label: "com.example.queue", qos: .userInteractive)

By using the appropriate QoS level for your tasks, you can ensure that they are executed in the most efficient and responsive manner possible.
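For instance, a queue created with a lower QoS is a natural home for deferrable work; a minimal sketch, with an illustrative label and workload:

```swift
import Dispatch

// Deferrable work goes on a .utility queue; the system schedules it
// behind more urgent .userInteractive / .userInitiated work.
let worker = DispatchQueue(label: "com.example.worker", qos: .utility)
var result = 0

worker.async {
    result = (1...1_000).reduce(0, +) // stand-in for real background work
}

// A sync call on the same serial queue flushes the earlier task,
// so result is fully computed by the time this block runs.
worker.sync {
    print("Result:", result) // Result: 500500
}
```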


Deadlocks

Deadlocks are a common issue in concurrent programming, where two or more threads are blocked waiting for each other to complete a task. This can cause your application to freeze and become unresponsive. To avoid deadlocks, you should be careful when using nested dispatch queue calls.


Here is an example of a potential deadlock:

let queue1 = DispatchQueue(label: "com.example.queue1")
let queue2 = DispatchQueue(label: "com.example.queue2")

queue1.async {
    queue2.sync {
        // Perform some task synchronously
    }
}

In this example, we have two dispatch queues, queue1 and queue2. We submit a task to queue1 asynchronously, and inside that task we submit another task to queue2 synchronously. The sync call blocks queue1's task until queue2's task completes; if queue2 is itself waiting, directly or indirectly, on queue1, neither task can ever proceed and the program deadlocks. Calling sync on the queue a task is already running on is the classic single-queue case of the same problem.


To avoid deadlocks, you should avoid nesting dispatch queue calls whenever possible, or use a different approach, such as asynchronous calls.
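A minimal sketch of the asynchronous alternative (queue labels are illustrative): hopping between queues with async never blocks the current task, so no cycle of waits can form.

```swift
import Dispatch

let queue1 = DispatchQueue(label: "com.example.queue1")
let queue2 = DispatchQueue(label: "com.example.queue2")

// Safe variant of the nested example above: async hands the follow-up
// work to queue2 and returns immediately, so queue1 is never stuck
// waiting on queue2.
queue1.async {
    // work on queue1 ...
    queue2.async {
        // follow-up work on queue2 ...
        print("follow-up finished on queue2")
    }
}
```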


When it comes to deadlocks and Combine, the framework has the potential to introduce new deadlocking scenarios, but it also provides tools to avoid them.


One potential cause of trouble with Combine is performing blocking work inside operators whose closures run synchronously, such as map. If the closure blocks its thread, no value can travel further down the pipeline until the closure returns, and on a concurrent dispatch queue the blocked threads are drawn from a shared, finite pool.


Here is an example of how a synchronous operator can lead to a deadlock:

let queue = DispatchQueue(label: "com.example.queue", attributes: .concurrent)
let subject = PassthroughSubject<Int, Never>()

let cancellable = subject
    .receive(on: queue)
    .map { value in
        // Perform a blocking operation here
        return value * 2
    }
    .sink { value in
        print(value)
    }

subject.send(1)

In this example, we create a concurrent dispatch queue and a PassthroughSubject of Int values. We move downstream delivery onto the concurrent queue using the receive(on:) operator, apply a map whose closure performs a blocking operation on that queue, and handle the emitted value using the sink operator. Note that the subscription returned by sink must be stored (here in cancellable), or it is torn down immediately.


In this case, the blocking closure ties up the thread on which the value is being delivered. The sink operator cannot receive the value until map returns, so the pipeline stalls for as long as the closure blocks; if many values arrive and each one blocks a thread from the shared pool, the pool can be exhausted and the stream effectively deadlocks.


To avoid stalling the pipeline, either move the slow work into a publisher that completes asynchronously and flatten it with flatMap, or run the synchronous work on a dedicated serial queue so it cannot tie up the concurrent queue's thread pool. Either way, map no longer blocks the delivery thread, and sink can handle values as soon as they are produced.
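Here is one way that advice can look in practice, as a sketch that assumes the slow work can be wrapped in a Future completing on a background queue:

```swift
import Combine
import Dispatch

let queue = DispatchQueue(label: "com.example.queue", attributes: .concurrent)
let subject = PassthroughSubject<Int, Never>()

// Instead of blocking inside map, wrap the slow work in a Future and
// flatten it with flatMap. The pipeline stays free while the work runs.
let cancellable = subject
    .flatMap { value in
        Future<Int, Never> { promise in
            queue.async {
                // slow work runs here without blocking the pipeline
                promise(.success(value * 2))
            }
        }
    }
    .sink { value in
        print(value)
    }

subject.send(1)
```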


Thread Safety

Thread safety is an important consideration when working with concurrent code. It is essential to ensure that your code is thread-safe to avoid race conditions and other issues.


Here is an example of how to safely access shared data using a dispatch queue:


let queue = DispatchQueue(label: "com.example.queue")
var sharedData: Int = 0

queue.async {
    // All access to sharedData goes through this serial queue,
    // so reads and writes can never overlap.
    sharedData += 1
}

In this example, we create a serial dispatch queue and a shared variable called sharedData, and route every access to the variable through the queue. Because a serial queue executes one task at a time, no two accesses can overlap. Be careful not to call queue.sync from inside a task that is already running on the same serial queue: the sync call would wait for the queue to become free while the queue is busy running the enclosing task, which is itself a deadlock.
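Another common thread-safety pattern worth sketching is a concurrent queue with barrier writes: reads run in parallel with each other, while each write waits for in-flight reads and then runs exclusively. The Counter type here is illustrative, not a standard type.

```swift
import Dispatch

final class Counter {
    private let queue = DispatchQueue(label: "com.example.counter",
                                      attributes: .concurrent)
    private var value = 0

    func increment() {
        // .barrier waits for in-flight reads, then runs exclusively.
        queue.async(flags: .barrier) {
            self.value += 1
        }
    }

    var current: Int {
        // Reads may run concurrently with each other, never with a write.
        queue.sync { value }
    }
}

let counter = Counter()
for _ in 1...100 { counter.increment() }
print(counter.current) // 100
```

Because the sync read is enqueued after all of the barrier writes, it waits for them to finish, so the final read is deterministic.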


Combine

Apple's SDKs also provide a powerful framework called Combine, which allows you to work with asynchronous data streams in a declarative way. One of the key features of Combine is the ability to subscribe to and receive data on different dispatch queues, allowing you to control where the processing of the data stream occurs.


Here is an example of how to subscribe to a data stream on one queue and receive the results on another queue using Combine:

let dataStream = Just("Hello, World!")
    .subscribe(on: DispatchQueue.global(qos: .userInteractive))
    .receive(on: DispatchQueue.main)

// Keep a reference to the subscription, or it is cancelled immediately.
let cancellable = dataStream.sink { value in
    // Handle the received value on the main queue
}

In this example, we create a data stream using the Just operator, which emits a single value and then completes. We then subscribe to the data stream on a global dispatch queue with a .userInteractive QoS using the subscribe(on:) operator, and receive the results on the main dispatch queue using the receive(on:) operator. Finally, we handle the received value using the sink operator.


Throttling and Debouncing

Throttling and debouncing are two techniques for managing the frequency of task execution. Throttling limits the number of times a task can execute within a certain time period, while debouncing delays the execution of a task until a certain amount of time has passed since the last time it was executed.


Here is an example of how to throttle the execution of a task using a dispatch queue:

let queue = DispatchQueue(label: "com.example.queue")
var lastExecutionTime = DispatchTime.now()

queue.async {
    let now = DispatchTime.now()
    let elapsedSeconds = Double(now.uptimeNanoseconds - lastExecutionTime.uptimeNanoseconds) / 1_000_000_000
    if elapsedSeconds > 1.0 {
        // Perform some task
        lastExecutionTime = now
    }
}

In this example, we create a dispatch queue and a variable to store the last execution time of the task. We then submit a task to the queue that checks the elapsed time since the last execution and only performs the task if it has been more than one second since the last execution.
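Debouncing can be sketched in a similar dispatch-only style with a cancellable DispatchWorkItem; the Debouncer type and its API below are illustrative, not a library type:

```swift
import Dispatch

// Each call cancels the previously scheduled work item, so the action
// only runs once calls have been quiet for the full delay.
// Note: call(_:) is expected to be invoked from a single thread.
final class Debouncer {
    private let queue: DispatchQueue
    private let delay: Double
    private var pending: DispatchWorkItem?

    init(delay: Double, queue: DispatchQueue = .main) {
        self.delay = delay
        self.queue = queue
    }

    func call(_ action: @escaping () -> Void) {
        pending?.cancel()
        let item = DispatchWorkItem(block: action)
        pending = item
        queue.asyncAfter(deadline: .now() + delay, execute: item)
    }
}

let debouncer = Debouncer(delay: 0.3)
debouncer.call { print("search") }       // cancelled by the next call
debouncer.call { print("search query") } // only this one runs, 0.3 s later
```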


Throttling and Debouncing with Combine

Combine provides operators that implement both throttling and debouncing, as defined above, in a declarative way, with no need to track timestamps yourself.


Throttling with Combine

Throttling can be achieved using the throttle(for:scheduler:latest:) operator. Here's an example of how to throttle a stream of events using Combine:

import Combine
import Foundation

let publisher = PassthroughSubject<Int, Never>()

let cancellable = publisher
    .throttle(for: .seconds(1), scheduler: DispatchQueue.main, latest: true)
    .sink { value in
        print("Received value:", value)
    }

// Send from a background queue so the main run loop stays free to
// deliver the throttled values on DispatchQueue.main.
DispatchQueue.global().async {
    for i in 0...10 {
        publisher.send(i)
        Thread.sleep(forTimeInterval: 0.5)
    }
}
RunLoop.main.run(until: Date().addingTimeInterval(7))

In this example, we create a PassthroughSubject and use the throttle(for:scheduler:latest:) operator to ensure that events are emitted no more frequently than once every second. The latest parameter determines whether the latest value should be emitted at the end of the throttle period.


Debouncing with Combine

Debouncing can be achieved using the debounce(for:scheduler:) operator. Here's an example of how to debounce a stream of events using Combine:

import Combine
import Foundation

let publisher = PassthroughSubject<Int, Never>()

let cancellable = publisher
    .debounce(for: .seconds(1), scheduler: DispatchQueue.main)
    .sink { value in
        print("Received value:", value)
    }

// Send from a background queue so the main run loop stays free to
// deliver the debounced value on DispatchQueue.main.
DispatchQueue.global().async {
    for i in 0...10 {
        publisher.send(i)
        Thread.sleep(forTimeInterval: 0.5)
    }
}
RunLoop.main.run(until: Date().addingTimeInterval(7))

In this example, we create a PassthroughSubject and use the debounce(for:scheduler:) operator to delay delivery until one second has passed without a new value arriving. Because a new value is sent every half second, each send cancels the previous one, and only the final value (10) reaches the sink, one second after the last send.


By using the throttle(for:scheduler:latest:) and debounce(for:scheduler:) operators, you can implement throttling and debouncing declaratively, improving the efficiency and responsiveness of your applications.


Conclusion

In conclusion, understanding the different types of dispatch queues, Quality of Service levels, and best practices such as avoiding deadlocks, ensuring thread safety, and managing task execution frequency is essential for writing efficient and robust concurrent code in Swift. Additionally, Combine provides a powerful tool for working with asynchronous data streams in a declarative way, further improving the capabilities of concurrent programming in Swift.
