
Graceful Shutdown Issue with Taskiq in K8s #380

@Bohdan-Ilchyshyn

Description


I am using Taskiq with Kubernetes and KEDA autoscaling, and I am running into issues implementing a graceful shutdown process for my workers.

Problem Description

When a pod receives a SIGTERM signal:

  • I need to stop the worker from receiving new tasks.
  • The worker should wait for currently running tasks to complete before shutting down.

I have configured the following settings:

shutdown-timeout = 70
wait-tasks-timeout = 60

However, this does not work as expected, because `wait-tasks-timeout` is only used when the listener receives the `QUEUE_DONE` message, and the `QUEUE_DONE` message is sent exclusively in `Listener.prefetcher`.
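To make the problem concrete, here is a minimal sketch of that mechanism in plain asyncio (not Taskiq's actual internals; the `QUEUE_DONE` sentinel and the loop structure are illustrative). The wait phase is only ever reached after the sentinel arrives, so a SIGTERM that merely stops the process never triggers `wait-tasks-timeout`:

```python
import asyncio

# Stand-in for Taskiq's QUEUE_DONE message (hypothetical sentinel).
QUEUE_DONE = object()

async def listener(queue: asyncio.Queue, wait_tasks_timeout: float) -> int:
    """Consume messages until QUEUE_DONE, then wait for in-flight tasks.

    Returns the number of tasks still pending after the timeout.
    Note: the timeout only applies once QUEUE_DONE has been seen.
    """
    in_flight: set = set()
    while True:
        message = await queue.get()
        if message is QUEUE_DONE:
            break  # only now does wait_tasks_timeout come into play
        in_flight.add(asyncio.ensure_future(message()))
    if not in_flight:
        return 0
    done, pending = await asyncio.wait(in_flight, timeout=wait_tasks_timeout)
    return len(pending)

async def main() -> int:
    queue: asyncio.Queue = asyncio.Queue()

    async def job():
        await asyncio.sleep(0.01)  # simulated in-progress task

    await queue.put(job)
    await queue.put(QUEUE_DONE)  # without this, the wait phase never runs
    return await listener(queue, wait_tasks_timeout=1.0)

print(asyncio.run(main()))
```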

Expected Behavior

Upon receiving a SIGTERM signal, the worker should:

  • Immediately stop accepting new tasks.
  • Wait for all in-progress tasks to complete within the wait-tasks-timeout duration.
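The two steps above can be sketched with plain asyncio and the `signal` module, independent of any Taskiq API (the worker loop, task body, and timeout value are all illustrative; a real worker would replace the busy loop with its broker receive call):

```python
import asyncio
import signal

async def worker(wait_tasks_timeout: float) -> int:
    """Run until SIGTERM, then stop intake and drain in-flight tasks.

    Returns the number of tasks still pending after the timeout.
    """
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    loop.add_signal_handler(signal.SIGTERM, stop.set)

    in_flight: set = set()

    async def handle():
        await asyncio.sleep(0.01)  # simulated task work

    # 1. Accept new work only while no shutdown has been requested.
    while not stop.is_set():
        task = asyncio.ensure_future(handle())
        in_flight.add(task)
        task.add_done_callback(in_flight.discard)
        await asyncio.sleep(0)  # yield; stands in for the receive call

    # 2. Intake stopped; wait for in-progress tasks, bounded by the timeout.
    if not in_flight:
        return 0
    done, pending = await asyncio.wait(in_flight, timeout=wait_tasks_timeout)
    return len(pending)

async def main() -> int:
    # Deliver SIGTERM to ourselves shortly after startup, simulating
    # the signal Kubernetes sends when the pod is terminated.
    asyncio.get_running_loop().call_later(
        0.05, signal.raise_signal, signal.SIGTERM
    )
    return await worker(wait_tasks_timeout=1.0)

print(asyncio.run(main()))
```

With a pattern like this, Kubernetes' `terminationGracePeriodSeconds` would need to exceed the drain timeout so the pod is not killed mid-wait.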

Questions

  • How can I properly implement this behavior for a graceful shutdown?
  • Are there existing hooks or configurations I should use to address this issue?
  • Should QUEUE_DONE messages be triggered elsewhere to support this scenario?
