# GitLab CI/CD Cheatsheet

## Installation

### GitLab Runner Installation

### Verify

---

## Basic Commands

### Runner Management

### Pipeline Operations (GitLab CLI)

### API-Based Pipeline Triggers

---

## Advanced Usage

### Advanced Runner Registration

### Advanced Pipeline Operations

### Monitoring and Debugging

### Configuration Management

---

## Configuration

### Basic .gitlab-ci.yml Structure

```yaml
# Define pipeline stages
stages:
  - build
  - test
  - deploy

# Global variables
variables:
  DOCKER_DRIVER: overlay2
  DATABASE_URL: "postgres://localhost/db"

# Global scripts executed before each job
before_script:
  - echo "Pipeline started at $(date)"
  - export PATH=$PATH:/custom/bin

# Global scripts executed after each job
after_script:
  - echo "Cleaning up..."
```
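A configuration like the one above can be validated before committing, either with the `glab` CLI or with GitLab's project-scoped CI Lint API endpoint. This is a sketch: it assumes `glab` is installed and authenticated, and the token, project ID, and sample content below are placeholders.

```shell
# Validate the .gitlab-ci.yml in the current repository (glab CLI)
glab ci lint

# Or validate arbitrary CI YAML via the API's CI Lint endpoint
# (<YOUR_TOKEN> and <PROJECT_ID> are placeholders)
curl --header "PRIVATE-TOKEN: <YOUR_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{"content": "stages:\n  - build"}' \
  "https://gitlab.com/api/v4/projects/<PROJECT_ID>/ci/lint"
```

Linting locally or in a pre-commit hook catches syntax and keyword errors before they produce a failed pipeline.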
```yaml
# Basic job definition
build_app:
  stage: build
  image: node:16-alpine
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  only:
    - main
    - merge_requests
  tags:
    - docker
```

### Advanced Pipeline Configuration

```yaml
# Include external configurations
include:
  - project: 'my-group/ci-templates'
    ref: main
    file: '/templates/.gitlab-ci-template.yml'
  - remote: 'https://example.com/ci-template.yml'
  - local: '/templates/security-scan.yml'
  - template: Security/SAST.gitlab-ci.yml

# Workflow rules for pipeline execution
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: '$CI_COMMIT_TAG'
    - when: never

# Job with complex rules
deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
    - if: '$CI_COMMIT_TAG =~ /^v[0-9]+\.[0-9]+\.[0-9]+$/'
      when: on_success
  environment:
    name: production
    url: https://prod.example.com
    on_stop: stop_production
```

### Parallel and Matrix Jobs

```yaml
# Parallel execution
test:
  stage: test
  parallel: 5
  script:
    - bundle exec rspec

# Matrix builds
test_matrix:
  parallel:
    matrix:
      - NODE_VERSION: ['14', '16', '18']
        OS: ['linux', 'windows']
  image: node:${NODE_VERSION}
  script:
    - npm test
```

### Artifacts and Cache Configuration

```yaml
build:
  stage: build
  script:
    - make build
  artifacts:
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
      - build/
    exclude:
      - binaries/**/*.tmp
    reports:
      junit: test-results.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
    expire_in: 30 days
    when: on_success
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
    policy: pull-push
```

### Docker and Services Configuration

```yaml
build_docker:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```

### Dynamic Child Pipelines

```yaml
generate_config:
  stage: build
  script:
    - ./generate-ci-config.sh > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

trigger_child:
  stage: deploy
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate_config
    strategy: depend
```

### Runner Configuration File (config.toml)

```toml
concurrent = 10
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-runner"
  url = "https://gitlab.com/"
  token = "TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    Path = "cache"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "runner-cache"
      BucketLocation = "us-east-1"
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
```

---

## Common Use Cases

### Use Case 1: Build and Test a Node.js Application

```yaml
stages:
  - build
  - test
  - deploy

variables:
  NODE_ENV: production

build:
  stage: build
  image: node:16-alpine
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
      - node_modules/
    expire_in: 1 hour
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/

test:unit:
  stage: test
  image: node:16-alpine
  dependencies:
    - build
  script:
    - npm run test:unit
  coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
  artifacts:
    reports:
      junit: junit.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

test:integration:
  stage: test
  image: node:16-alpine
  services:
    - postgres:13
  variables:
    POSTGRES_DB: test_db
    POSTGRES_USER: test_user
    POSTGRES_PASSWORD: test_password
  script:
    - npm run test:integration
```

### Use Case 2: Docker Build and Push to Registry

```yaml
stages:
  - build
  - scan
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE_TAG .
    - docker tag $IMAGE_TAG $CI_REGISTRY_IMAGE:latest
    - docker push $IMAGE_TAG
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - main
    - tags

scan_image:
  stage: scan
  image: aquasec/trivy:latest
  script:
    - trivy image --severity HIGH,CRITICAL $IMAGE_TAG
  allow_failure: true

deploy_k8s:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context $KUBE_CONTEXT
    - kubectl set image deployment/myapp myapp=$IMAGE_TAG
    - kubectl rollout status deployment/myapp
  environment:
    name: production
    url: https://myapp.example.com
  only:
    - main
```

### Use Case 3: Multi-Environment Deployment with Manual Approval

```yaml
stages:
  - build
  - deploy_staging
  - deploy_production

build:
  stage: build
  script:
    - ./build.sh
  artifacts:
    paths:
      - build/

deploy_staging:
  stage: deploy_staging
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
    on_stop: stop_staging
  only:
    - main

stop_staging:
  stage: deploy_staging
  script:
    - ./cleanup.sh staging
  environment:
    name: staging
    action: stop
  when: manual

deploy_production:
  stage: deploy_production
  script:
    - ./deploy.sh production
  environment:
    name: production
    url: https://www.example.com
  when: manual
  only:
    - main
  needs:
    - deploy_staging
```

### Use Case 4: Terraform Infrastructure Deployment

```yaml
stages:
  - validate
  - plan
  - apply

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/terraform
  TF_STATE_NAME: default

.terraform:
  image: hashicorp/terraform:latest
  before_script:
    - cd ${TF_ROOT}
    - terraform init -backend-config="address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${TF_STATE_NAME}"

validate:
  extends: .terraform
  stage: validate
  script:
    - terraform validate
    - terraform fmt -check

plan:
  extends: .terraform
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - ${TF_ROOT}/plan.tfplan
    expire_in: 1 day

apply:
  extends: .terraform
  stage: apply
  script:
    - terraform apply -auto-approve plan.tfplan
  dependencies:
    - plan
  when: manual
  only:
    - main
  environment:
    name: production
```

### Use Case 5: Monorepo with Selective Job Execution

```yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'

variables:
  FRONTEND_PATH: "apps/frontend"
  BACKEND_PATH: "apps/backend"

.changes_frontend: &changes_frontend
  changes:
    - "${FRONTEND_PATH}/**/*"
    - "package.json"
    - ".gitlab-ci.yml"

.changes_backend: &changes_backend
  changes:
    - "${BACKEND_PATH}/**/*"
    - "requirements.txt"
    - ".gitlab-ci.yml"

build_frontend:
  stage: build
  image: node:16
  script:
    - cd $FRONTEND_PATH
    - npm ci
    - npm run build
  rules:
    - <<: *changes_frontend

build_backend:
  stage: build
  image: python:3.9
  script:
    - cd $BACKEND_PATH
    - pip install -r requirements.txt
    - python -m pytest
  rules:
    - <<: *changes_backend

deploy_all:
  stage: deploy
  script:
    - ./deploy-all.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      changes:
        - "${FRONTEND_PATH}/**/*"
        - "${BACKEND_PATH}/**/*"
```

---

## Best Practices

- **Use `cache` for dependencies and `artifacts` for build outputs**: Caching speeds up subsequent pipelines by storing dependencies such as `node_modules/`, while artifacts pass build outputs between stages. Never cache build artifacts.
- **Implement appropriate workflow rules to avoid unnecessary pipelines**: Use `workflow:rules` to control pipeline execution and prevent wasted runs on draft merge requests or documentation-only changes. This saves resources and reduces cost.
- **Tag runners and jobs appropriately**: Use specific tags (e.g. `kubernetes`, `gpu`) to route jobs to the right runners. This ensures jobs run on infrastructure with the required capabilities and prevents resource contention.
- **Use the `needs` keyword for DAG pipelines**: Instead of strictly sequential stages, use `needs` to build directed acyclic graphs (DAGs) so jobs start as soon as their dependencies finish, significantly reducing total pipeline time.
- **Store sensitive data in CI/CD variables, never in code**: Use protected and masked variables for secrets such as API keys, passwords, and tokens. Enable protection to restrict access to protected branches/tags only.
- **Implement security scanning early in the pipeline**: Include SAST, dependency scanning, and container scanning in the early stages. Use `allow_failure: true` at first so development isn't blocked while teams triage findings.
- **Use `only:changes` or `rules:changes` for monorepos**: Trigger jobs only when relevant files change, preventing unnecessary builds and tests. This is critical for large monorepos with multiple applications.
- **Set appropriate artifact expiry times**: Default artifacts to a short expiry (1-7 days) to save storage costs. Use `expire_in: never` only for release artifacts that need permanent retention.
- **Leverage include and templates for DRY configuration**: Create reusable templates in separate repositories and include them with `include:project` or `include:remote`. This keeps configuration consistent across projects.
- **Monitor runner capacity and scale appropriately**: Track runner queue times and job wait times. Configure `concurrent` in `config.toml` based on the available resources, and scale runners horizontally during peak periods.
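The `needs`-based DAG pattern described above can be sketched as follows; job and stage names are illustrative:

```yaml
stages:
  - build
  - test

build_a:
  stage: build
  script:
    - ./build-a.sh

build_b:
  stage: build
  script:
    - ./build-b.sh

# Starts as soon as build_a finishes, without waiting for build_b
test_a:
  stage: test
  needs: ["build_a"]
  script:
    - ./test-a.sh
```

Without `needs`, `test_a` would wait for the entire `build` stage; with it, only the listed dependency gates the job.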
---

## Troubleshooting

---

## Important CI/CD Variables

| Variable | Description |
|----------|-------------|
| `CI_COMMIT_SHA` | Full commit SHA that triggered the pipeline |
| `CI_COMMIT_SHORT_SHA` | First 8 characters of the commit SHA |
| `CI_COMMIT_REF_NAME` | Branch or tag name |
| `CI_COMMIT_REF_SLUG` | Lowercase branch/tag name, suitable for URLs |
| `CI_PROJECT_ID` | Unique project ID in GitLab |
| `CI_PROJECT_NAME` | Project name |
| `CI_PROJECT_PATH` | Project namespace with project name |
| `CI_PIPELINE_ID` | Unique pipeline ID |
| `CI_JOB_ID` | Unique job ID |
| `CI_JOB_TOKEN` | Token for authenticating with the GitLab API |
| `CI_REGISTRY` | GitLab Container Registry address |
| `CI_REGISTRY_IMAGE` | Full image path for the project's container registry |
| `CI_REGISTRY_USER` | Username for container registry authentication |
| `CI_REGISTRY_PASSWORD` | Password for container registry authentication |
| `CI_ENVIRONMENT_NAME` | Environment name (if the job defines an environment) |
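As an illustration of how `CI_COMMIT_REF_SLUG` relates to `CI_COMMIT_REF_NAME`, the slug can be approximated with standard shell tools. This is a rough sketch of the documented rules (lowercase, non-alphanumerics replaced by `-`, truncated to 63 characters, leading/trailing `-` stripped), not GitLab's exact implementation:

```shell
# Approximate CI_COMMIT_REF_SLUG from a ref name (illustrative)
ref="feature/My_Branch"
slug=$(printf '%s' "$ref" \
  | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9]/-/g' \
  | cut -c1-63 \
  | sed 's/^-*//; s/-*$//')
echo "$slug"   # prints: feature-my-branch
```

This is why `$CI_COMMIT_REF_SLUG` is the safe choice for hostnames, Kubernetes namespaces, and cache keys, while `$CI_COMMIT_REF_NAME` may contain `/` and uppercase characters.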