ohMyHelm Usage

Complete reference for all ohMyHelm configuration options.

Two Modes: Chart vs. Helper

ohMyHelm can be used in two modes:

1. Chart mode (chart.enabled: true)

Creates a complete Kubernetes deployment with a Deployment/StatefulSet, Service, Ingress, etc.

myapp:
  chart:
    enabled: true
    # ... chart configuration

2. Helper mode (chart.enabled: false or unset)

Uses only helper functions such as Secrets, ConfigMaps, Namespaces, etc.

myhelpers:
  # chart is not enabled
  secrets:
    - name: my-secret
      # ... secret configuration

Chart Configuration

Basic Settings

chart:
  enabled: true                    # Enables chart mode
  statefulset: false               # false = Deployment, true = StatefulSet
  nameOverride: ""                 # Partially overrides the name
  fullnameOverride: "my-app"       # Overrides the full name
  applabel: true                   # Creates an app=<RELEASENAME> label
  replicaCount: 1                  # Number of replicas

Container Configuration

chart:
  container:
    image: nginx:latest            # Container Image
    ports:
      - name: http
        containerPort: 80
        protocol: TCP

    imageConfig:
      pullPolicy: IfNotPresent     # Always, Never, IfNotPresent

    securityContext: {}            # Container security context

    command: []                    # Overrides the entrypoint
    #  - "/bin/sh"
    #  - "-c"

    args: []                       # Overrides CMD
    #  - "echo hello"

Environment Variables

chart:
  container:
    # Static values
    staticEnv:
      - name: DATABASE_HOST
        value: postgres.default.svc.cluster.local

      - name: DATABASE_USER
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: username

    # Default environment variables (rendered with Helm values)
    env:
      - name: SERVICE_URL
        value: "https://{{ .Release.Name }}.example.com"

    # Extra environment variables
    extraEnv: []

    # Environment variables from a ConfigMap/Secret
    envFrom:
      - configMapRef:
          name: app-config
      - secretRef:
          name: app-secrets
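
The difference between staticEnv and env is that env entries are passed through Helm's template engine before being injected. As a rough illustration of that substitution step, the hypothetical render function below handles only simple dotted lookups (Helm's actual tpl supports the full Go template language):

```python
import re

# Minimal sketch of Helm-style template substitution, assuming only
# simple {{ .Path.To.Value }} lookups (real Helm supports far more).
def render(template: str, context: dict) -> str:
    def lookup(match: re.Match) -> str:
        # Resolve the dotted path, e.g. ".Release.Name" -> context["Release"]["Name"]
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{\s*\.([\w.]+)\s*\}\}", lookup, template)

env_value = "https://{{ .Release.Name }}.example.com"
print(render(env_value, {"Release": {"Name": "myapp"}}))  # https://myapp.example.com
```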

Health Checks

chart:
  container:
    livenessProbe:
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: http

    readinessProbe:
      initialDelaySeconds: 10
      periodSeconds: 5
      timeoutSeconds: 3
      failureThreshold: 3
      httpGet:
        path: /ready
        port: http
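
With the values above, the worst-case time before an unhealthy container is restarted follows from standard kubelet liveness-probe semantics: roughly the initial delay plus one probe period per allowed failure.

```python
# Approximate worst-case time (seconds) before the kubelet restarts an
# unhealthy container: initial delay plus one period per allowed failure.
# (Per-probe timeouts can stretch this slightly.)
def time_to_restart(initial_delay: int, period: int, failure_threshold: int) -> int:
    return initial_delay + period * failure_threshold

# Liveness probe from the example above: 30 + 10 * 3
print(time_to_restart(30, 10, 3))  # 60
```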

Image Pull Secrets

chart:
  imagePullSecrets:
    - name: registry-secret
    - name: another-secret

Sidecar Container

An additional container in the same pod:

chart:
  sidecar:
    enabled: true
    image: busybox:latest
    useContainerVolume: false      # true = mounts the main container's volumes

    command:
      - sh
      - -c
      - "while true; do echo hello; sleep 10; done"

    staticEnv: []
    env: []
    extraEnv: []
    envFrom: []

    livenessProbe: {}
    readinessProbe: {}

Init Container

A container that runs before the main container:

chart:
  initContainer:
    enabled: true
    image: busybox:latest

    command:
      - sh
      - -c

    args:
      - "echo Initializing && sleep 5"

    staticEnv: []
    env: []
    extraEnv: []
    envFrom: []

A special init container that waits for a job to complete:

chart:
  initContainer:
    enabled: true
    image: registry.gitlab.com/ayedocloudsolutions/ohmyhelm-job-helper:1.0.0
    args:
      - "job"
      - "migration-job"  # Waits until this job has completed

Jobs

Standalone Job (Helper)

job:
  enabled: true
  name: my-migration-job
  namespace: default
  annotations: {}

  image: migrate/migrate:latest
  imageConfig:
    pullPolicy: IfNotPresent

  command:
    - migrate
  args:
    - -path=/migrations
    - -database=postgres://...
    - up

  env: []
  staticEnv: []
  extraEnv: []
  envFrom: []

  # Volumes
  extraVolumeMounts: []
  extraVolumes: []

  # Job-specific settings
  restartPolicy: Never             # Never or OnFailure
  activeDeadlineSeconds: 1200      # Timeout in seconds
  backoffLimit: 20                 # Number of retries on failure

  # Auto-cleanup after completion
  removejob:
    enabled: true
    ttlSecondsAfterFinished: 60    # Deletes the job 60 seconds after it finishes
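
Kubernetes retries a failed Job pod with exponential backoff (starting at 10s, doubling each time, capped at six minutes) until backoffLimit is exhausted. A quick way to see the delay schedule implied by a given backoffLimit:

```python
# Exponential backoff delays Kubernetes applies between Job pod retries:
# starting at 10s, doubling each time, capped at 6 minutes (360s).
def retry_delays(backoff_limit: int, base: int = 10, cap: int = 360) -> list[int]:
    return [min(base * 2 ** i, cap) for i in range(backoff_limit)]

print(retry_delays(5))  # [10, 20, 40, 80, 160]
```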

Chart-Integrated Job

chart:
  job:
    enabled: true
    name: my-app-migration
    annotations:
      "helm.sh/hook": pre-install,pre-upgrade
      "helm.sh/hook-weight": "-5"

    #image: custom/image           # Optional; falls back to container.image
    imageConfig:
      pullPolicy: IfNotPresent

    command: []
    args: []

    staticEnv: []
    env: []
    extraEnv: []
    envFrom: []

    extraVolumeMounts: []
    extraVolumes: []

    restartPolicy: Never
    activeDeadlineSeconds: 1200
    backoffLimit: 20

    removejob:
      enabled: false
      ttlSecondsAfterFinished: 60

Service

chart:
  service:
    name: my-service               # Optional; falls back to fullnameOverride
    type: ClusterIP                # ClusterIP, NodePort, LoadBalancer, ExternalName

    ports:
      - port: 80                   # Service port
        targetPort: http           # Port name or number
        protocol: TCP
        name: http
        #nodePort: 30080           # Only with type: NodePort

      - port: 443
        targetPort: https
        protocol: TCP
        name: https

Ingress

Simple Ingress

chart:
  ingressSimple:
    enabled: true
    host: myapp.example.com
    path: /
    tlsSecretName: myapp-tls       # Optional
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"

Advanced Ingress

chart:
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
      cert-manager.io/cluster-issuer: letsencrypt-prod

    tls:
      - secretName: myapp-tls
        hosts:
          - myapp.example.com
          - www.myapp.example.com

    hosts:
      - host: myapp.example.com
        http:
          paths:
            - path: /
              backend:
                serviceName: my-service
                servicePort: http

            - path: /api
              backend:
                serviceName: my-api-service
                servicePort: 8080

Volumes

StatefulSet Volumes

chart:
  statefulset: true
  statefulsetVolume:
    volumeMounts:
      - name: data
        mountPath: /var/lib/app/data
      - name: config
        mountPath: /etc/app

    volumeClaimTemplates:
      - metadata:
          name: data
          #labels:
          #  type: data
        spec:
          accessModes:
            - ReadWriteOnce
          #storageClassName: fast-ssd
          resources:
            requests:
              storage: 10Gi

    volumes:
      - name: config
        configMap:
          name: app-config

Deployment Volumes

chart:
  statefulset: false
  deploymentVolume:
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
        #subPath: app.conf
        #readOnly: true

      - name: temp-storage
        mountPath: /tmp

    volumes:
      - name: config-volume
        configMap:
          name: app-config
          #items:
          #  - key: app.conf
          #    path: app.conf

      - name: temp-storage
        emptyDir: {}

Service Account & RBAC

chart:
  serviceAccount:
    create: true
    annotations: {}
    #  eks.amazonaws.com/role-arn: arn:aws:iam::123456789:role/my-role
    name: ""                       # Empty = auto-generated

  rbac:
    enabled: true
    roleRules:
      - apiGroups: [""]
        resources: ["pods", "services"]
        verbs: ["get", "list", "watch"]

      - apiGroups: ["batch"]
        resources: ["jobs"]
        verbs: ["create", "delete", "get", "list"]

Autoscaling

chart:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
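
The HPA scales according to the standard formula desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the configured bounds. A small sketch of that arithmetic with the values from the example above:

```python
import math

# Standard HorizontalPodAutoscaler scaling formula, clamped to the
# configured min/max replica bounds.
def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int = 2, max_replicas: int = 10) -> int:
    desired = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# 4 replicas at 120% average CPU against an 80% target -> scale to 6
print(desired_replicas(4, 120, 80))  # 6
```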

Resources

chart:
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 100m
      memory: 128Mi
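
Kubernetes resource quantities mix units: 1000m means 1000 millicores (one full core), and Mi/Gi are binary units. A small parser, assuming only the suffixes used in this document:

```python
# Parse the Kubernetes quantity suffixes used in this document.
# CPU: plain cores or "m" (millicores). Memory: "Mi"/"Gi" (binary units).
def cpu_to_cores(q: str) -> float:
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def mem_to_bytes(q: str) -> int:
    if q.endswith("Gi"):
        return int(q[:-2]) * 1024 ** 3
    if q.endswith("Mi"):
        return int(q[:-2]) * 1024 ** 2
    return int(q)

print(cpu_to_cores("1000m"))   # 1.0
print(mem_to_bytes("128Mi"))   # 134217728
```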

Pod Configuration

chart:
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"

  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000

  nodeSelector:
    disktype: ssd
    environment: production

  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "app"
      effect: "NoSchedule"

  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - myapp
            topologyKey: kubernetes.io/hostname

Chart-Integrated Helpers

Secrets (within the Chart)

chart:
  secrets:
    - name: app-secret
      values:
        api-key: "my-secret-key"
        db-password: ""            # Auto-generated

ConfigMaps (within the Chart)

chart:
  configs:
    - name: app-config
      dontOverrideOnUpdate: false  # true = config is not overwritten on upgrade
      values:
        app.yaml: |
          server:
            host: 0.0.0.0
            port: 8080
          features:
            auth: enabled

Monitoring (within the Chart)

chart:
  monitoring:
    - enabled: true
      name: my-app-metrics
      namespace: monitoring
      release: prometheus
      endpoints:
        - port: metrics
          interval: 30s
          path: /metrics
          scheme: http

Helper Functions

These work independently of chart mode.

Namespaces

namespaces:
  setPreInstallHook: true          # Creates namespaces via a pre-install hook
  spaces:
    - name: development
    - name: staging
    - name: production

Plain Manifests

Arbitrary Kubernetes resources:

manifests:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    content:
      metadata:
        name: my-pvc
        namespace: default
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi

  - apiVersion: v1
    kind: Service
    content:
      metadata:
        name: external-service
        namespace: default
      spec:
        type: ExternalName
        externalName: api.external-service.com

Image Credentials (Docker Registry Secrets)

imageCredentials:
  - name: docker-registry
    namespace: default
    registry: https://index.docker.io/v1/
    username: "myusername"
    accessToken: "mytoken"

  - name: private-registry
    namespace: default
    registry: https://registry.example.com
    username: "user"
    accessToken: "token"
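
Each imageCredentials entry renders a kubernetes.io/dockerconfigjson secret. The payload format is fixed by Docker/Kubernetes, not by ohMyHelm; the sketch below reconstructs it for illustration:

```python
import base64
import json

# Build the .dockerconfigjson payload carried by a
# kubernetes.io/dockerconfigjson secret (format defined by Docker/Kubernetes).
def docker_config_json(registry: str, username: str, access_token: str) -> str:
    # "auth" is the base64 of "username:password"
    auth = base64.b64encode(f"{username}:{access_token}".encode()).decode()
    config = {"auths": {registry: {"username": username,
                                   "password": access_token,
                                   "auth": auth}}}
    return json.dumps(config)

payload = docker_config_json("https://registry.example.com", "user", "token")
print(payload)
```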

Secrets (Helper)

secrets:
  - name: database-credentials
    namespace: default
    values:
      username: "dbuser"
      password: "secretpass"

  - name: api-keys
    namespace: default
    values:
      github: ""                   # Auto-generated
      gitlab: ""                   # Auto-generated
      aws: "AKIAIOSFODNN7EXAMPLE"

On helm upgrade, empty values are NOT regenerated; existing secrets are preserved.
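
The auto-generation behavior can be pictured as a random alphanumeric value combined with a lookup of any existing secret. The sketch below is purely illustrative (the chart implements this with Helm template helpers; names and the length of 32 are assumptions):

```python
import secrets
import string

# Illustration of "generate once, keep on upgrade": an empty value gets a
# random alphanumeric string, but an existing value is never replaced.
# (Hypothetical sketch, not ohMyHelm internals.)
def fill_secret_values(values: dict, existing: dict, length: int = 32) -> dict:
    alphabet = string.ascii_letters + string.digits
    out = {}
    for key, value in values.items():
        if value:                      # explicit value wins
            out[key] = value
        elif key in existing:          # keep what a previous release created
            out[key] = existing[key]
        else:                          # first install: generate
            out[key] = "".join(secrets.choice(alphabet) for _ in range(length))
    return out

first = fill_secret_values({"github": "", "aws": "AKIA..."}, existing={})
second = fill_secret_values({"github": "", "aws": "AKIA..."}, existing=first)
print(second["github"] == first["github"])  # True
```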

TLS Secrets

tlsSecrets:
  - name: myapp-tls
    namespace: default
    values:
      tls.crt: |
        -----BEGIN CERTIFICATE-----
        MIIDXTCCAkWg...
        -----END CERTIFICATE-----
      tls.key: |
        -----BEGIN PRIVATE KEY-----
        MIIEvQIBADANBg...
        -----END PRIVATE KEY-----

ConfigMaps (Helper)

configs:
  - name: app-configuration
    namespace: default
    dontOverrideOnUpdate: false
    values:
      config.yaml: |
        database:
          host: postgres.default.svc.cluster.local
          port: 5432
          name: myapp
      app.properties: |
        server.port=8080
        logging.level=INFO

  - name: nginx-config
    namespace: default
    values:
      nginx.conf: |
        server {
          listen 80;
          server_name _;
          location / {
            proxy_pass http://backend:8080;
          }
        }

Single Config (Helper)

For a single ConfigMap:

config:
  enabled: true
  name: my-config
  namespace: default
  dontOverrideOnUpdate: false
  values:
    app.yaml: |
      setting1: value1
      setting2: value2

Ingress (Helper)

ingress:
  - name: my-ingress
    namespace: default
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
      cert-manager.io/cluster-issuer: letsencrypt-prod

    tls:
      - secretName: myapp-tls
        hosts:
          - myapp.example.com

    hosts:
      - host: myapp.example.com
        http:
          paths:
            - path: /
              backend:
                serviceName: my-service
                servicePort: 80

ServiceMonitor (Helper)

For the Prometheus Operator:

monitoring:
  - name: my-app-monitor
    namespace: monitoring
    release: prometheus              # Label selector for the Prometheus Operator
    endpoints:
      - port: metrics
        interval: 30s
        path: /metrics
        scheme: http

      - port: health
        interval: 60s
        path: /health
        scheme: http

Complete Example

Combining multiple features:

# Chart.yaml
apiVersion: v2
name: full-featured-app
version: 1.0.0

dependencies:
  - name: ohmyhelm
    alias: app
    repository: https://gitlab.com/ayedocloudsolutions/ohmyhelm
    version: 1.13.0

# values.yaml
app:
  # Create namespaces
  namespaces:
    spaces:
      - name: myapp

  # Image Pull Secrets
  imageCredentials:
    - name: registry-secret
      namespace: myapp
      registry: https://registry.example.com
      username: "user"
      accessToken: "token"

  # Helper Secrets
  secrets:
    - name: db-credentials
      namespace: myapp
      values:
        username: app
        password: ""  # Auto-generated

  # Helper ConfigMaps
  configs:
    - name: app-config
      namespace: myapp
      values:
        app.yaml: |
          database:
            host: postgres
            port: 5432

  # Enable the chart
  chart:
    enabled: true
    statefulset: true
    fullnameOverride: "myapp"

    imagePullSecrets:
      - name: registry-secret

    # Main container
    container:
      image: registry.example.com/myapp:latest
      ports:
        - name: http
          containerPort: 8080

      env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password

      envFrom:
        - configMapRef:
            name: app-config

      livenessProbe:
        httpGet:
          path: /health
          port: http
        initialDelaySeconds: 30

      readinessProbe:
        httpGet:
          path: /ready
          port: http
        initialDelaySeconds: 10

    # Volumes
    statefulsetVolume:
      volumeMounts:
        - name: data
          mountPath: /data

      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi

    # Service
    service:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: http

    # Ingress
    ingressSimple:
      enabled: true
      host: myapp.example.com
      path: /
      tlsSecretName: myapp-tls
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod

    # Resources
    resources:
      limits:
        cpu: 1000m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi

    # Autoscaling
    autoscaling:
      enabled: true
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 80

Best Practices

  1. Use fullnameOverride for predictable resource names
  2. Set resource limits for production deployments
  3. Implement health checks (liveness and readiness probes)
  4. Use Secrets for sensitive data, not ConfigMaps
  5. Pin image tags (avoid latest in production)
  6. Use autoscaling for variable workloads
  7. Use StatefulSets only when truly necessary
  8. Test with --dry-run --debug before deploying

See also