
Ansible

Polycrate offers a dedicated integration with Ansible. The workspace snapshot, which is exported in YAML format and added to the Polycrate container, is automatically processed by Ansible. As a result, the snapshot is directly available as top-level variables in Ansible and can be used in playbooks and templates.

The following example shows:

  • the default configuration (block.poly) of a block named traefik
  • the user-provided configuration for the block in workspace.poly
  • the Ansible playbook that uses the exposed variables (block.config...)
  • an Ansible template that uses the exposed variables (templates/docker-compose.yml.j2)
  • the resulting file /polycrate/docker-compose.yml that is transferred to a remote host

Differing images

In block.poly the image traefik:v2.6 is defined, but workspace.poly specifies traefik:2.7. In the generated docker-compose.yml, Polycrate therefore uses traefik:2.7, because workspace values override block defaults.
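This override behavior amounts to a recursive merge of the workspace configuration over the block defaults. The following Python sketch is illustrative only (Polycrate's actual merge logic lives inside the tool itself); it shows why traefik:2.7 wins while the unset resolver keeps its default:

```python
def deep_merge(defaults, overrides):
    """Recursively overlay workspace values onto block defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Values taken from the block.poly / workspace.poly example below.
block_defaults = {
    "image": "traefik:v2.6",
    "letsencrypt": {"email": "", "resolver": "letsencrypt"},
}
workspace_config = {
    "image": "traefik:2.7",
    "letsencrypt": {"email": "info@example.com"},
}

config = deep_merge(block_defaults, workspace_config)
print(config["image"])                    # traefik:2.7 (workspace wins)
print(config["letsencrypt"]["resolver"])  # letsencrypt (default kept)
```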

The block variable contains the configuration of the current block, which is invoked via

polycrate run traefik install

In addition, there is a variable named workspace that contains the fully compiled workspace, including any other blocks available in the workspace.

Polycrate uses a special Ansible vars plugin to read in the YAML snapshot and expose it as top-level variables alongside the Ansible facts.

# block.poly

name: traefik

config:
  image: "traefik:v2.6"
  letsencrypt:
    email: ""
    resolver: letsencrypt

actions:
  - name: install
    script:
      - ansible-playbook install.yml
  - name: uninstall
    script:
      - ansible-playbook uninstall.yml
  - name: prune
    script:
      - ansible-playbook prune.yml
# workspace.poly

name: ansible-traefik-demo

blocks:
  - name: traefik
    inventory:
      from: inventory-block
    config:
      letsencrypt:
        email: info@example.com
      image: traefik:2.7
# install.yml
- name: "install"
  hosts: all
  gather_facts: yes
  tasks:
    - name: Create remote block directory
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: '0755'
      with_items:
        - "/polycrate/{{ block.name }}"

    - name: Copy compose file
      ansible.builtin.template:
        src: docker-compose.yml.j2
        dest: "/polycrate/{{ block.name }}/docker-compose.yml"

    - name: Deploy compose stack
      community.docker.docker_compose:
        project_src: "/polycrate/{{ block.name }}"
        remove_orphans: true
        files:
          - docker-compose.yml
# templates/docker-compose.yml.j2

version: "3.9"

services:
  traefik:
    image: "{{ block.config.image }}"
    container_name: "traefik"
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.{{ block.config.letsencrypt.resolver }}.acme.email={{ block.config.letsencrypt.email }}"
      - "--certificatesresolvers.{{ block.config.letsencrypt.resolver }}.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.{{ block.config.letsencrypt.resolver }}.acme.tlschallenge=true"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "traefik-letsencrypt:/letsencrypt"
    networks:
      - traefik

networks:
  traefik:
    name: traefik

volumes:
  traefik-letsencrypt:
# /polycrate/traefik/docker-compose.yml

version: "3.9"

services:
  traefik:
    image: "traefik:2.7"
    container_name: "traefik"
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.email=info@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "traefik-letsencrypt:/letsencrypt"
    networks:
      - traefik

networks:
  traefik:
    name: traefik

volumes:
  traefik-letsencrypt:

Ansible Inventory

Polycrate can process YAML-formatted Ansible inventory files in a block's artifacts directory. By default, Polycrate looks for a file named inventory.yml (this can be overridden with the inventory.filename stanza in the block configuration).

An inventory file can be created automatically by a block or provided manually (useful for pre-existing infrastructure).

Inventories can be consumed by the owning block itself or by other blocks using the inventory stanza in the block configuration:

workspace.poly

blocks:
  - name: block-a
    inventory:
      from: block-b
      filename: inventory.yml # defaults to inventory.yml

This adds an environment variable (ANSIBLE_INVENTORY=path/to/inventory/of/block-b) to the container, pointing Ansible at the correct inventory to work with.

The inventory of block-b could look like this:

all:
  hosts:
    host-1:
      ansible_host: 1.2.3.4
      ansible_ssh_port: 22
      ansible_python_interpreter: "/usr/bin/python3"
      ansible_user: root
  children:
    master:
      hosts:
        host-1:

Ansible for Docker

Ansible provides comprehensive modules for managing Docker and Docker Compose. With Polycrate, you can manage Docker containers and Docker Compose stacks elegantly via Ansible playbooks.

Deploying Docker Compose with Ansible

The Traefik example above already shows a typical Docker Compose workflow with Ansible. Here are further details:

A complete block with Docker Compose:

# block.poly
name: my-docker-stack
kind: dockerapp

config:
  app:
    image: myapp:latest
    port: 8080
  database:
    image: postgres:14
    user: appuser
    password: "{{ vault_db_password }}"
    name: appdb

actions:
  - name: install
    playbook: playbooks/docker-compose.yml

  - name: uninstall
    playbook: playbooks/docker-compose-down.yml

Playbook:

# playbooks/docker-compose.yml
---
- name: Deploy Docker Compose Stack
  hosts: all
  become: yes

  tasks:
    - name: Create App Directory
      file:
        path: "/opt/{{ block.name }}"
        state: directory
        mode: '0755'

    - name: Copy docker-compose.yml
      template:
        src: ../templates/docker-compose.yml.j2
        dest: "/opt/{{ block.name }}/docker-compose.yml"

    - name: Start Docker Compose Stack
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        state: present
        pull: yes
        remove_orphans: yes

Template:

# templates/docker-compose.yml.j2
version: '3.8'

services:
  web:
    image: {{ block.config.app.image }}
    ports:
      - "{{ block.config.app.port }}:80"
    environment:
      - DATABASE_URL=postgresql://{{ block.config.database.user }}:{{ block.config.database.password }}@db:5432/{{ block.config.database.name }}
    depends_on:
      - db

  db:
    image: {{ block.config.database.image }}
    environment:
      - POSTGRES_USER={{ block.config.database.user }}
      - POSTGRES_PASSWORD={{ block.config.database.password }}
      - POSTGRES_DB={{ block.config.database.name }}
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:

Ansible for Kubernetes

Ansible provides powerful modules for managing Kubernetes. With Polycrate, you can manage Kubernetes resources elegantly via Ansible playbooks, whether with raw manifests, Helm charts, or the Kubernetes module.

1. Kubernetes Manifests with Ansible

Deploying a service with Kubernetes manifests:

# block.poly
name: my-k8s-app
kind: k8sapp

config:
  namespace: production
  app:
    name: myapp
    image: myapp:1.0.0
    replicas: 3
    port: 8080
    domain: myapp.example.com # referenced in templates/ingress.yml.j2

actions:
  - name: install
    playbook: playbooks/install.yml

  - name: uninstall
    playbook: playbooks/uninstall.yml

Playbook:

# playbooks/install.yml
---
- name: Deploy Application to Kubernetes
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Create Namespace
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ block.config.namespace }}"

    - name: Deploy Application from Template
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        template: "../templates/deployment.yml.j2"

    - name: Deploy Service from Template
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        template: "../templates/service.yml.j2"

    - name: Deploy Ingress from Template
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        template: "../templates/ingress.yml.j2"

    - name: Wait for Deployment to be ready
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: "{{ block.config.namespace }}"
        name: "{{ block.config.app.name }}"
      register: deployment
      until: (deployment.resources[0].status.readyReplicas | default(0)) == block.config.app.replicas
      retries: 30
      delay: 10
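The until/retries/delay combination above implements a bounded readiness poll: re-run the check, wait between attempts, and give up after a fixed number of tries. As a tool-agnostic illustration of that pattern (the check callable and its simulated state are made up for the example), the same logic in plain Python:

```python
import time

def wait_until(check, expected, retries=30, delay=10):
    """Poll check() until it returns expected, or give up after `retries` attempts."""
    for _ in range(retries):
        if check() == expected:
            return True
        time.sleep(delay)
    return False

# Simulated readiness: the "deployment" reports one more ready replica per poll.
state = {"readyReplicas": 0}

def ready_replicas():
    state["readyReplicas"] = min(state["readyReplicas"] + 1, 3)
    return state["readyReplicas"]

print(wait_until(ready_replicas, 3, retries=5, delay=0))  # True
```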

Templates:

# templates/deployment.yml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ block.config.app.name }}
  labels:
    app: {{ block.config.app.name }}
spec:
  replicas: {{ block.config.app.replicas }}
  selector:
    matchLabels:
      app: {{ block.config.app.name }}
  template:
    metadata:
      labels:
        app: {{ block.config.app.name }}
    spec:
      containers:
      - name: {{ block.config.app.name }}
        image: {{ block.config.app.image }}
        ports:
        - containerPort: {{ block.config.app.port }}
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
# templates/service.yml.j2
apiVersion: v1
kind: Service
metadata:
  name: {{ block.config.app.name }}
spec:
  selector:
    app: {{ block.config.app.name }}
  ports:
  - protocol: TCP
    port: 80
    targetPort: {{ block.config.app.port }}
  type: ClusterIP
# templates/ingress.yml.j2
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ block.config.app.name }}
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  rules:
  - host: {{ block.config.app.domain }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: {{ block.config.app.name }}
            port:
              number: 80
  tls:
  - hosts:
    - {{ block.config.app.domain }}
    secretName: {{ block.config.app.name }}-tls

2. Helm Charts with Ansible

Deployment via Helm:

# block.poly
name: postgres-helm
kind: k8sapp

config:
  namespace: databases
  chart:
    name: postgresql
    repo:
      name: bitnami
      url: https://charts.bitnami.com/bitnami
    version: 12.12.0
  app:
    auth:
      username: pguser
      password: "{{ vault_pg_password }}"
      database: mydb
    primary:
      persistence:
        size: 50Gi
        storageClass: fast-ssd
    replication:
      enabled: true
      replicas: 2

actions:
  - name: install
    playbook: playbooks/helm-install.yml

  - name: uninstall
    playbook: playbooks/helm-uninstall.yml

Playbook:

# playbooks/helm-install.yml
---
- name: Install Helm Chart
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Add Helm Repository
      kubernetes.core.helm_repository:
        name: "{{ block.config.chart.repo.name }}"
        repo_url: "{{ block.config.chart.repo.url }}"

    - name: Create Namespace
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ block.config.namespace }}"

    - name: Install Helm Chart
      kubernetes.core.helm:
        name: "{{ block.name }}"
        chart_ref: "{{ block.config.chart.repo.name }}/{{ block.config.chart.name }}"
        chart_version: "{{ block.config.chart.version }}"
        release_namespace: "{{ block.config.namespace }}"
        values:
          auth:
            username: "{{ block.config.app.auth.username }}"
            password: "{{ block.config.app.auth.password }}"
            database: "{{ block.config.app.auth.database }}"
          primary:
            persistence:
              size: "{{ block.config.app.primary.persistence.size }}"
              storageClass: "{{ block.config.app.primary.persistence.storageClass }}"
          readReplicas:
            replicaCount: "{{ block.config.app.replication.replicas }}"
        wait: yes
        wait_timeout: 10m

    - name: Wait for PostgreSQL to be ready
      kubernetes.core.k8s_info:
        kind: StatefulSet
        namespace: "{{ block.config.namespace }}"
        name: "{{ block.name }}-postgresql"
      register: statefulset
      until: (statefulset.resources[0].status.readyReplicas | default(0)) >= 1
      retries: 30
      delay: 10

3. Complex Multi-Resource Deployments

PostgreSQL with backup and monitoring:

# block.poly
name: postgres-complete
kind: k8sapp

config:
  namespace: databases
  postgres:
    version: "14"
    storage: 100Gi
    replicas: 3
  backup:
    enabled: true
    schedule: "0 2 * * *"
    retention: 7
  monitoring:
    enabled: true
    prometheus:
      enabled: true

actions:
  - name: install
    playbook: playbooks/install-complete.yml

Playbook:

# playbooks/install-complete.yml
---
- name: Deploy Complete PostgreSQL Stack
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Create Namespace
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ block.config.namespace }}"
            labels:
              monitoring: "true"

    - name: Deploy CloudNativePG Operator (if not exists)
      kubernetes.core.k8s:
        state: present
        src: "https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.21/releases/cnpg-1.21.0.yaml"

    - name: Deploy PostgreSQL Cluster
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        definition:
          apiVersion: postgresql.cnpg.io/v1
          kind: Cluster
          metadata:
            name: "{{ block.name }}"
          spec:
            instances: "{{ block.config.postgres.replicas }}"
            imageName: "ghcr.io/cloudnative-pg/postgresql:{{ block.config.postgres.version }}"
            storage:
              size: "{{ block.config.postgres.storage }}"
            monitoring:
              enabled: "{{ block.config.monitoring.enabled }}"
            backup:
              barmanObjectStore:
                destinationPath: "s3://my-backup-bucket/{{ block.name }}"
                s3Credentials:
                  accessKeyId:
                    name: backup-s3-creds
                    key: ACCESS_KEY_ID
                  secretAccessKey:
                    name: backup-s3-creds
                    key: SECRET_ACCESS_KEY
              retentionPolicy: "{{ block.config.backup.retention }}d"

    - name: Create Scheduled Backup (if enabled)
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        definition:
          apiVersion: postgresql.cnpg.io/v1
          kind: ScheduledBackup
          metadata:
            name: "{{ block.name }}-backup"
          spec:
            schedule: "{{ block.config.backup.schedule }}"
            backupOwnerReference: self
            cluster:
              name: "{{ block.name }}"
      when: block.config.backup.enabled

    - name: Deploy ServiceMonitor for Prometheus (if enabled)
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        definition:
          apiVersion: monitoring.coreos.com/v1
          kind: ServiceMonitor
          metadata:
            name: "{{ block.name }}-metrics"
          spec:
            selector:
              matchLabels:
                cnpg.io/cluster: "{{ block.name }}"
            endpoints:
            - port: metrics
      when: block.config.monitoring.prometheus.enabled

    - name: Wait for PostgreSQL Cluster to be ready
      kubernetes.core.k8s_info:
        api_version: postgresql.cnpg.io/v1
        kind: Cluster
        namespace: "{{ block.config.namespace }}"
        name: "{{ block.name }}"
      register: pg_cluster
      until: pg_cluster.resources[0].status.phase == "Cluster in healthy state"
      retries: 60
      delay: 10

4. ConfigMaps and Secrets Management

# playbooks/deploy-with-config.yml
---
- name: Deploy Application with Config
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Create ConfigMap from Template
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: "{{ block.config.app.name }}-config"
          data:
            app.conf: |
              {{ lookup('template', '../templates/app.conf.j2') }}
            database.url: "postgresql://{{ block.config.database.host }}:5432/{{ block.config.database.name }}"

    - name: Create Secret from Vault
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: "{{ block.config.app.name }}-secrets"
          type: Opaque
          stringData:
            db-password: "{{ vault_db_password }}"
            api-key: "{{ vault_api_key }}"

    - name: Deploy Application with Config
      kubernetes.core.k8s:
        state: present
        namespace: "{{ block.config.namespace }}"
        template: "../templates/deployment-with-config.yml.j2"
# templates/deployment-with-config.yml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ block.config.app.name }}
spec:
  replicas: {{ block.config.app.replicas }}
  selector:
    matchLabels:
      app: {{ block.config.app.name }}
  template:
    metadata:
      labels:
        app: {{ block.config.app.name }}
    spec:
      containers:
      - name: {{ block.config.app.name }}
        image: {{ block.config.app.image }}
        envFrom:
        - configMapRef:
            name: {{ block.config.app.name }}-config
        - secretRef:
            name: {{ block.config.app.name }}-secrets
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: {{ block.config.app.name }}-config

5. Multi-Cluster Deployments

With Polycrate, you can deploy to multiple Kubernetes clusters using different kubeconfigs:

# workspace.poly
name: multi-cluster-workspace

blocks:
  # Cluster 1: Production EU
  - name: prod-eu-cluster
    kind: k8scluster
    # generates a kubeconfig at artifacts/blocks/prod-eu-cluster/kubeconfig.yml

  # Cluster 2: Production US
  - name: prod-us-cluster
    kind: k8scluster
    # generates a kubeconfig at artifacts/blocks/prod-us-cluster/kubeconfig.yml

  # App deployment on the EU cluster
  - name: app-eu
    from: my-app-base
    kubeconfig:
      from: prod-eu-cluster
    config:
      namespace: production
      region: eu

  # App deployment on the US cluster
  - name: app-us
    from: my-app-base
    kubeconfig:
      from: prod-us-cluster
    config:
      namespace: production
      region: us

workflows:
  - name: deploy-global
    steps:
      - name: deploy-eu
        block: app-eu
        action: deploy

      - name: deploy-us
        block: app-us
        action: deploy

Ansible automatically uses the correct kubeconfig for each block.

Advanced Ansible Examples

Ansible Vault for Secrets

# encrypt a secret
ansible-vault encrypt_string 'secret-password' --name 'vault_db_password'

# run the action and prompt for the vault password
polycrate run my-app install --ask-vault-pass
# block.poly with vault variables
config:
  database:
    password: "{{ vault_db_password }}"
    api_key: "{{ vault_api_key }}"

Dynamic Inventories

# playbooks/create-inventory.yml
---
- name: Create Dynamic Inventory from Cloud Provider
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Get EC2 Instances
      amazon.aws.ec2_instance_info:
        region: eu-central-1
        filters:
          "tag:Environment": "production"
          "instance-state-name": "running"
      register: ec2_instances

    - name: Build Inventory
      set_fact:
        dynamic_inventory:
          all:
            hosts: "{{ ec2_instances.instances | map(attribute='private_ip_address') | list }}"

    - name: Write Inventory to Artifacts
      copy:
        content: "{{ dynamic_inventory | to_nice_yaml }}"
        dest: "{{ workspace.config.artifacts_root }}/blocks/{{ block.name }}/inventory.yml"

Rollback Playbook

# playbooks/rollback.yml
---
- name: Rollback Kubernetes Deployment
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Get Deployment History
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: "{{ block.config.namespace }}"
        name: "{{ block.config.app.name }}"
      register: deployment

    - name: Rollback to Previous Revision
      kubernetes.core.k8s:
        state: present
        kind: Deployment
        namespace: "{{ block.config.namespace }}"
        name: "{{ block.config.app.name }}"
        definition:
          spec:
            template:
              metadata:
                annotations:
                  kubectl.kubernetes.io/restartedAt: "{{ ansible_date_time.iso8601 }}"
      when: deployment.resources | length > 0

    - name: Wait for Rollback
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: "{{ block.config.namespace }}"
        name: "{{ block.config.app.name }}"
      register: deployment_status
      until: deployment_status.resources[0].status.updatedReplicas == block.config.app.replicas
      retries: 30
      delay: 10

Advanced Docker Examples

Multi-Container Applications

Orchestrating complex multi-container stacks with Ansible:

# block.poly
name: microservices-docker
kind: dockerapp

config:
  app:
    frontend:
      image: myapp-frontend:latest
      port: 3000
    backend:
      image: myapp-backend:latest
      port: 8080
    database:
      image: postgres:14
      user: appuser
      password: "{{ vault_db_password }}"
      database: appdb
    redis:
      image: redis:7-alpine
    nginx:
      port: 80

actions:
  - name: install
    playbook: playbooks/docker-deploy.yml

  - name: update
    playbook: playbooks/docker-update.yml

  - name: backup
    playbook: playbooks/docker-backup.yml

  - name: uninstall
    playbook: playbooks/docker-down.yml

Deployment Playbook:

# playbooks/docker-deploy.yml
---
- name: Deploy Microservices Stack
  hosts: all
  become: yes

  tasks:
    - name: Create application directory structure
      file:
        path: "{{ item }}"
        state: directory
        mode: '0755'
      loop:
        - "/opt/{{ block.name }}"
        - "/opt/{{ block.name }}/nginx"
        - "/opt/{{ block.name }}/data/postgres"
        - "/opt/{{ block.name }}/data/redis"

    - name: Copy NGINX configuration
      template:
        src: ../templates/nginx.conf.j2
        dest: "/opt/{{ block.name }}/nginx/nginx.conf"
      notify: Reload NGINX

    - name: Copy docker-compose file
      template:
        src: ../templates/docker-compose.yml.j2
        dest: "/opt/{{ block.name }}/docker-compose.yml"

    - name: Copy environment file
      template:
        src: ../templates/env.j2
        dest: "/opt/{{ block.name }}/.env"
        mode: '0600'

    - name: Pull latest images
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        pull: yes
      register: pull_result

    - name: Start Docker Compose stack
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        state: present
        remove_orphans: yes
      register: compose_result

    - name: Wait for backend to be healthy
      uri:
        url: "http://localhost:{{ block.config.app.backend.port }}/health"
        status_code: 200
      register: result
      until: result.status == 200
      retries: 30
      delay: 5

    - name: Wait for frontend to be healthy
      uri:
        url: "http://localhost:{{ block.config.app.frontend.port }}"
        status_code: 200
      register: result
      until: result.status == 200
      retries: 30
      delay: 5

  handlers:
    - name: Reload NGINX
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        services:
          - nginx
        restarted: yes

Docker Compose Template:

# templates/docker-compose.yml.j2
version: '3.8'

services:
  frontend:
    image: {{ block.config.app.frontend.image }}
    container_name: {{ block.name }}-frontend
    ports:
      - "{{ block.config.app.frontend.port }}:3000"
    environment:
      - BACKEND_URL=http://backend:8080
      - NODE_ENV=production
    depends_on:
      - backend
    networks:
      - app-network
    restart: unless-stopped

  backend:
    image: {{ block.config.app.backend.image }}
    container_name: {{ block.name }}-backend
    ports:
      - "{{ block.config.app.backend.port }}:8080"
    environment:
      - DATABASE_URL=postgresql://{{ block.config.app.database.user }}:{{ block.config.app.database.password }}@database:5432/{{ block.config.app.database.database }}
      - REDIS_URL=redis://redis:6379
    depends_on:
      - database
      - redis
    networks:
      - app-network
    restart: unless-stopped

  database:
    image: {{ block.config.app.database.image }}
    container_name: {{ block.name }}-database
    environment:
      - POSTGRES_USER={{ block.config.app.database.user }}
      - POSTGRES_PASSWORD={{ block.config.app.database.password }}
      - POSTGRES_DB={{ block.config.app.database.database }}
    volumes:
      - /opt/{{ block.name }}/data/postgres:/var/lib/postgresql/data
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U {{ block.config.app.database.user }}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: {{ block.config.app.redis.image }}
    container_name: {{ block.name }}-redis
    volumes:
      - /opt/{{ block.name }}/data/redis:/data
    networks:
      - app-network
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    container_name: {{ block.name }}-nginx
    ports:
      - "{{ block.config.app.nginx.port }}:80"
    volumes:
      - /opt/{{ block.name }}/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - frontend
      - backend
    networks:
      - app-network
    restart: unless-stopped

networks:
  app-network:
    driver: bridge

volumes:
  postgres-data:
  redis-data:

Building Docker Images with Ansible

# playbooks/docker-build-and-push.yml
---
- name: Build and Push Docker Images
  hosts: localhost
  gather_facts: no

  tasks:
    - name: Build Frontend Image
      community.docker.docker_image:
        name: "{{ block.config.app.frontend.image }}"
        source: build
        build:
          path: "{{ workspace.root }}/src/frontend"
          dockerfile: Dockerfile
          args:
            NODE_ENV: production
        push: yes
        force_source: yes

    - name: Build Backend Image
      community.docker.docker_image:
        name: "{{ block.config.app.backend.image }}"
        source: build
        build:
          path: "{{ workspace.root }}/src/backend"
          dockerfile: Dockerfile
        push: yes
        force_source: yes

    - name: Tag images with version
      community.docker.docker_image:
        name: "{{ item.name }}"
        repository: "{{ item.name }}"
        tag: "{{ block.version }}"
        source: local
        push: yes
      loop:
        - { name: "{{ block.config.app.frontend.image }}" }
        - { name: "{{ block.config.app.backend.image }}" }

Docker Backup and Restore

# playbooks/docker-backup.yml
---
- name: Backup Docker Volumes
  hosts: all
  become: yes

  vars:
    backup_dir: "/backup/{{ block.name }}"
    backup_timestamp: "{{ ansible_date_time.epoch }}"

  tasks:
    - name: Create backup directory
      file:
        path: "{{ backup_dir }}"
        state: directory
        mode: '0700'

    - name: Stop containers for consistent backup
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        stopped: yes
      when: block.config.backup.stop_containers | default(true)

    - name: Backup PostgreSQL data
      archive:
        path: "/opt/{{ block.name }}/data/postgres"
        dest: "{{ backup_dir }}/postgres-{{ backup_timestamp }}.tar.gz"
        format: gz

    - name: Backup Redis data
      archive:
        path: "/opt/{{ block.name }}/data/redis"
        dest: "{{ backup_dir }}/redis-{{ backup_timestamp }}.tar.gz"
        format: gz

    - name: Create PostgreSQL dump
      community.docker.docker_container_exec:
        container: "{{ block.name }}-database"
        command: >
          pg_dump -U {{ block.config.app.database.user }}
          {{ block.config.app.database.database }}
      register: pg_dump

    - name: Save PostgreSQL dump
      copy:
        content: "{{ pg_dump.stdout }}"
        dest: "{{ backup_dir }}/database-{{ backup_timestamp }}.sql"

    - name: Restart containers
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        state: present
      when: block.config.backup.stop_containers | default(true)

    - name: Clean old backups (keep last 7 runs)
      shell: |
        cd {{ backup_dir }}
        # each run writes 3 files (postgres, redis, sql dump): keep the 21 newest
        ls -t | tail -n +22 | xargs -r rm --
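The cleanup one-liner relies on ls -t sorting by modification time, newest first, and deleting everything after the first N entries. The same keep-the-newest-N rule, sketched in Python on a throwaway directory (the file names are hypothetical):

```python
import os
import pathlib
import tempfile

def prune_backups(directory, keep):
    """Delete all but the `keep` newest files (by mtime); return surviving names."""
    files = sorted(pathlib.Path(directory).iterdir(),
                   key=lambda p: p.stat().st_mtime,
                   reverse=True)
    for stale in files[keep:]:
        stale.unlink()
    return sorted(p.name for p in pathlib.Path(directory).iterdir())

# Demo with fake backup files whose mtimes increase with i.
demo = tempfile.mkdtemp()
for i in range(5):
    path = pathlib.Path(demo) / f"postgres-{i}.tar.gz"
    path.write_text("backup")
    os.utime(path, (i, i))  # postgres-4 ends up newest

print(prune_backups(demo, keep=2))  # ['postgres-3.tar.gz', 'postgres-4.tar.gz']
```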
# playbooks/docker-restore.yml
---
- name: Restore Docker Volumes
  hosts: all
  become: yes

  vars:
    backup_dir: "/backup/{{ block.name }}"

  tasks:
    - name: List available backups
      find:
        paths: "{{ backup_dir }}"
        patterns: "postgres-*.tar.gz"
      register: backup_files

    - name: Get latest backup
      set_fact:
        latest_backup: "{{ backup_files.files | sort(attribute='mtime') | last }}"

    - name: Stop Docker Compose stack
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        state: absent

    - name: Remove old data
      file:
        path: "/opt/{{ block.name }}/data/postgres"
        state: absent

    - name: Extract backup
      unarchive:
        src: "{{ latest_backup.path }}"
        dest: "/opt/{{ block.name }}/data/"
        remote_src: yes

    - name: Start Docker Compose stack
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        state: present

Docker Health Checks and Monitoring

# playbooks/docker-health-check.yml
---
- name: Check Docker Container Health
  hosts: all
  become: yes

  tasks:
    - name: Get container status
      community.docker.docker_container_info:
        name: "{{ item }}"
      register: container_info
      loop:
        - "{{ block.name }}-frontend"
        - "{{ block.name }}-backend"
        - "{{ block.name }}-database"
        - "{{ block.name }}-redis"
        - "{{ block.name }}-nginx"

    - name: Check if all containers are running
      assert:
        that:
          - item.container.State.Running
        fail_msg: "Container {{ item.item }} is not running!"
      loop: "{{ container_info.results }}"

    - name: Check container health
      assert:
        that:
          - item.container.State.Health is not defined or item.container.State.Health.Status == "healthy"
        fail_msg: "Container {{ item.item }} is unhealthy!"
      loop: "{{ container_info.results }}"

    - name: Read configured resource limits
      community.docker.docker_container_info:
        name: "{{ item }}"
      register: resource_info
      loop:
        - "{{ block.name }}-backend"
        - "{{ block.name }}-database"

    - name: Display resource limits
      debug:
        msg: "{{ item.container.Name }}: CPU shares: {{ item.container.HostConfig.CpuShares }}, Memory limit: {{ item.container.HostConfig.Memory }}"
      loop: "{{ resource_info.results }}"

    - name: Check disk usage of volumes
      shell: |
        du -sh /opt/{{ block.name }}/data/*
      register: disk_usage

    - name: Display disk usage
      debug:
        var: disk_usage.stdout_lines
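Note that the `State.Health` field only exists for containers that define a healthcheck. A minimal sketch of such a definition in the Compose template (service name, image variable, and endpoint are assumptions):

```yaml
# templates/docker-compose.yml.j2 (excerpt, hypothetical service)
services:
  backend:
    image: "{{ block.config.app.backend.image }}"
    healthcheck:
      # Marks the container "healthy" once the endpoint responds
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

Containers without a `healthcheck` entry pass the assertion above via the `is not defined` branch.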

Zero-Downtime Updates

# playbooks/docker-update.yml
---
- name: Zero-Downtime Docker Update
  hosts: all
  become: yes

  tasks:
    - name: Pull latest images
      community.docker.docker_compose:
        project_src: "/opt/{{ block.name }}"
        pull: yes
      register: pull_result

    - name: Check if images were updated
      set_fact:
        images_updated: "{{ pull_result.changed }}"

    - name: Create backup before update
      include_tasks: docker-backup.yml
      when: images_updated

    - name: Update backend (rolling)
      block:
        - name: Scale backend to 2 replicas
          community.docker.docker_compose:
            project_src: "/opt/{{ block.name }}"
            services:
              - backend
            scale:
              backend: 2

        - name: Wait for new backend to be healthy
          uri:
            url: "http://localhost:{{ block.config.app.backend.port }}/health"
            status_code: 200
          register: result
          until: result.status == 200
          retries: 30
          delay: 5

        - name: Remove old backend container
          community.docker.docker_compose:
            project_src: "/opt/{{ block.name }}"
            services:
              - backend
            scale:
              backend: 1
      when: images_updated

    - name: Update frontend (rolling)
      block:
        - name: Scale frontend to 2 replicas
          community.docker.docker_compose:
            project_src: "/opt/{{ block.name }}"
            services:
              - frontend
            scale:
              frontend: 2

        - name: Wait for new frontend to be healthy
          uri:
            url: "http://localhost:{{ block.config.app.frontend.port }}"
            status_code: 200
          register: result
          until: result.status == 200
          retries: 30
          delay: 5

        - name: Remove old frontend container
          community.docker.docker_compose:
            project_src: "/opt/{{ block.name }}"
            services:
              - frontend
            scale:
              frontend: 1
      when: images_updated

    - name: Verify all services are running
      include_tasks: docker-health-check.yml

Docker Network Management

# playbooks/docker-network-setup.yml
---
- name: Setup Docker Networks
  hosts: all
  become: yes

  tasks:
    - name: Create custom networks
      community.docker.docker_network:
        name: "{{ item.name }}"
        driver: "{{ item.driver | default('bridge') }}"
        ipam_config:
          - subnet: "{{ item.subnet }}"
        state: present
      loop: "{{ block.config.networks }}"

    - name: Connect containers to networks
      community.docker.docker_container:
        name: "{{ item.container }}"
        networks:
          - name: "{{ item.network }}"
        state: started
      loop: "{{ block.config.container_network_mapping }}"
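The two loops above expect list structures in the block configuration. A possible shape in block.poly (names, subnets, and container names are made-up examples):

```yaml
# block.poly (excerpt, hypothetical values)
config:
  networks:
    - name: frontend-net
      driver: bridge
      subnet: "172.20.0.0/24"
    - name: backend-net
      subnet: "172.20.1.0/24"   # driver defaults to 'bridge' in the playbook
  container_network_mapping:
    - container: myapp-backend
      network: backend-net
    - container: myapp-nginx
      network: frontend-net
```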

Best Practices for Ansible with Polycrate

1. Write Idempotent Playbooks

Make sure playbooks can be run multiple times without unwanted side effects:

- name: Ensure directory exists
  file:
    path: "/opt/{{ block.name }}"
    state: directory  # Idempotent

- name: Ensure service is running
  service:
    name: myapp
    state: started  # Idempotent
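Shell and command tasks, by contrast, are not idempotent by default; the `creates` argument is one way to make them safe to re-run (URL and paths are illustrative):

```yaml
# Not idempotent: downloads on every run
- name: Download installer (runs every time)
  shell: curl -o /tmp/installer.sh https://example.com/installer.sh

# Idempotent: skipped once the target file exists
- name: Download installer once
  shell: curl -o /tmp/installer.sh https://example.com/installer.sh
  args:
    creates: /tmp/installer.sh
```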

2. Error Handling

- name: Deploy with error handling
  block:
    - name: Deploy application
      kubernetes.core.k8s:
        state: present
        template: deployment.yml.j2

  rescue:
    - name: Rollback on failure
      kubernetes.core.k8s:
        state: absent
        name: "{{ block.config.app.name }}"

    - name: Notify team
      uri:
        url: "{{ block.config.slack_webhook }}"
        method: POST
        body_format: json
        body:
          text: "Deployment of {{ block.name }} failed!"

  always:
    - name: Clean up temporary files
      file:
        path: "/tmp/{{ block.name }}"
        state: absent

3. Use Ansible Tags

# playbooks/install.yml
- name: Complete Installation
  hosts: all

  tasks:
    - name: Install dependencies
      tags: [dependencies, install]
      apt:
        name: "{{ item }}"
      loop: "{{ block.config.dependencies }}"

    - name: Deploy application
      tags: [deploy, install]
      include_tasks: deploy.yml

    - name: Configure monitoring
      tags: [monitoring]
      include_tasks: monitoring.yml
# Run only specific tags
polycrate run my-app install -- --tags deploy

4. Use Ansible Facts

- name: Use gathered facts
  hosts: all
  gather_facts: yes

  tasks:
    - name: Install packages based on OS
      package:
        name: "{{ item }}"
      loop: "{{ block.config.packages[ansible_os_family] }}"

    - name: Configure based on available memory
      template:
        src: config.j2
        dest: /etc/app/config.yml
      vars:
        memory_limit: "{{ (ansible_memtotal_mb * 0.8) | int }}Mi"
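The lookup `block.config.packages[ansible_os_family]` assumes the package list is keyed by OS family in the block configuration, for example:

```yaml
# block.poly (excerpt, hypothetical package lists)
config:
  packages:
    Debian:
      - nginx
      - postgresql-client
    RedHat:
      - nginx
      - postgresql
```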

5. Handle Secrets Securely

# Never put secrets directly into block.poly
# ❌ Bad:
config:
  database:
    password: "mysecretpassword"

# ✅ Good: use Ansible Vault
config:
  database:
    password: "{{ vault_db_password }}"
# Use Vault-encrypted secrets
polycrate run my-app install --ask-vault-pass
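The variable `vault_db_password` can be created with the standard Ansible Vault tooling; a sketch (file names are assumptions, and the vars file must be loaded by the playbook):

```shell
# Encrypt a single value and paste the output into a vars file
ansible-vault encrypt_string 'mysecretpassword' --name 'vault_db_password'

# Or maintain a fully encrypted vars file
ansible-vault create vault.yml
ansible-vault edit vault.yml
```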