Before starting to dive into the Kubernetes Executor, let me first give you a quick reminder about Kubernetes.

So what is Kubernetes? Kubernetes is an open-source platform for managing containerized applications. It orchestrates computing, networking and storage infrastructure, and offers very nice features such as deployment, scaling, load balancing, logging and monitoring. In very simple terms, Kubernetes orchestrates the different containers composing your application so that they can work smoothly together.

A very important concept to understand in Kubernetes is the concept of a Pod. A Pod is the smallest deployable object in Kubernetes. It encapsulates an application's container (or multiple containers) as well as storage resources (shared volumes), a unique network IP, and options controlling how the container(s) should run. Basically, in the most common Kubernetes use case (and in the case of Airflow), a Pod runs a single Docker container corresponding to a component of your application. In the context of Airflow and the Kubernetes Executor, you can think of Kubernetes as a pool of resources exposing a simple but powerful API to dynamically launch complex deployments.

Now that your memories about Kubernetes are fresh, let's move on to Airflow executors. Basically, an executor defines how your tasks are going to be executed. A task corresponds to a node in your DAG where an action must be performed, such as executing a bash shell command, running a Python script, kicking off a Spark job and so on. Before getting executed, a task is always scheduled first and pushed into a queue, implemented as an OrderedDict, in order to keep tasks sorted by their addition order.
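To make the Pod concept concrete, here is a minimal Pod manifest (a config fragment, not from the original post; the Pod name, container name and image are illustrative). It shows the pieces mentioned above: a container image and options controlling how the container should run.

```yaml
# Minimal illustrative Pod manifest -- names and image are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: python:3.11-slim
      command: ["python", "-c", "print('hello from the pod')"]
  restartPolicy: Never
```

Applying such a manifest with `kubectl apply -f pod.yaml` asks Kubernetes to schedule a single-container Pod, which is exactly the pattern the Kubernetes Executor relies on: one Pod per task.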
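The queue behaviour described above can be sketched with Python's `OrderedDict`. This is an illustrative sketch of the idea, not Airflow's actual scheduler code; the task ids and states are made up.

```python
from collections import OrderedDict

# Illustrative FIFO task queue keyed by task id: an OrderedDict
# preserves the order in which tasks were added.
task_queue = OrderedDict()
task_queue["extract"] = "queued"
task_queue["transform"] = "queued"
task_queue["load"] = "queued"

# popitem(last=False) pops from the front, i.e. the oldest entry first,
# so tasks come out in their addition order.
task_id, state = task_queue.popitem(last=False)
print(task_id)  # → extract
```

Because insertion order is preserved, the executor can always pick up the task that was scheduled earliest simply by popping from the front of the queue.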