Problem Statement
You need to automate the entire software delivery process from code commit to production deployment, including testing, security scanning, building containers, and deploying to Kubernetes with zero-downtime updates.
Pipeline Architecture
┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐
│  Code   │───▶│  Test   │───▶│  Build  │───▶│ Staging │───▶│  Prod   │
│ Commit  │    │   QA    │    │  Image  │    │ Deploy  │    │ Deploy  │
└─────────┘    └────┬────┘    └────┬────┘    └─────────┘    └─────────┘
                    │              │
               ┌────▼────┐    ┌────▼───┐
               │Security │    │  Push  │
               │  Scan   │    │Registry│
               └─────────┘    └────────┘
Prerequisites
- GitLab instance with CI/CD runners
- Container registry (GitLab Registry, Docker Hub, or private)
- Kubernetes cluster with GitLab Agent installed
- kubectl access to target clusters
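A quick local preflight can confirm the client-side tooling before touching the pipeline; a minimal sketch (the tool list is an assumption about your stack):

```shell
#!/bin/sh
# Preflight check: report which required CLI tools are on PATH.
# Missing tools are reported but do not abort the check.
missing=""
for tool in git docker kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing="$missing $tool"
  fi
done
echo "missing tools:${missing:- none}"
```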
Complete .gitlab-ci.yml
stages:
- test
- security
- build
- deploy-staging
- deploy-production
variables:
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: "/certs"
DOCKER_TLS_VERIFY: 1
DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
# Application settings
APP_NAME: myapp
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
IMAGE_TAG_LATEST: $CI_REGISTRY_IMAGE:latest
# Cache dependencies between jobs
.node_cache: &node_cache
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- node_modules/
- .npm/
# ============================================
# TEST STAGE
# ============================================
lint:
stage: test
image: node:20-alpine
<<: *node_cache
script:
- npm ci --cache .npm --prefer-offline
- npm run lint
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
unit-tests:
stage: test
image: node:20-alpine
<<: *node_cache
script:
- npm ci --cache .npm --prefer-offline
- npm run test:unit -- --coverage
coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'
artifacts:
when: always
reports:
junit: junit.xml
coverage_report:
coverage_format: cobertura
path: coverage/cobertura-coverage.xml
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
integration-tests:
stage: test
image: node:20-alpine
services:
- postgres:15-alpine
- redis:7-alpine
variables:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
DATABASE_URL: postgres://testuser:testpass@postgres:5432/testdb
REDIS_URL: redis://redis:6379
<<: *node_cache
script:
- npm ci --cache .npm --prefer-offline
- npm run test:integration
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# ============================================
# SECURITY STAGE
# ============================================
sast:
stage: security
image: registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep:latest
script:
- semgrep --config=auto --json --output=gl-sast-report.json || true
artifacts:
reports:
sast: gl-sast-report.json
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
dependency-scan:
stage: security
image: node:20-alpine
script:
- npm audit --json > npm-audit.json || true
- |
if [ -f npm-audit.json ]; then
        HIGH_VULNS=$(grep -o '"high":[0-9]*' npm-audit.json | head -n1 | grep -o '[0-9]*' || echo "0")
        CRITICAL_VULNS=$(grep -o '"critical":[0-9]*' npm-audit.json | head -n1 | grep -o '[0-9]*' || echo "0")
if [ "$CRITICAL_VULNS" -gt 0 ]; then
echo "Critical vulnerabilities found: $CRITICAL_VULNS"
exit 1
fi
fi
artifacts:
paths:
- npm-audit.json
when: always
allow_failure: true
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
container-scan:
  # Runs in the build stage so the image exists before scanning:
  # `needs` cannot reference a job in a later stage, and build-image
  # runs after the security stage. Same-stage needs requires GitLab 14.2+.
  stage: build
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  variables:
    TRIVY_USERNAME: $CI_REGISTRY_USER
    TRIVY_PASSWORD: $CI_REGISTRY_PASSWORD
  script:
    - trivy image --exit-code 0 --severity HIGH,CRITICAL --format json -o container-scan.json $IMAGE_TAG
  artifacts:
    paths:
      - container-scan.json
  needs:
    - build-image
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# ============================================
# BUILD STAGE
# ============================================
build-image:
stage: build
image: docker:24
services:
- docker:24-dind
before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
script:
# Build with cache
- docker pull $IMAGE_TAG_LATEST || true
- |
docker build \
--cache-from $IMAGE_TAG_LATEST \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--tag $IMAGE_TAG \
--tag $IMAGE_TAG_LATEST \
.
# Push both tags
- docker push $IMAGE_TAG
- docker push $IMAGE_TAG_LATEST
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
- if: $CI_COMMIT_TAG
# ============================================
# DEPLOYMENT STAGES
# ============================================
.deploy_template: &deploy_template
image:
name: bitnami/kubectl:latest
entrypoint: [""]
before_script:
# Configure kubectl with GitLab Agent
- kubectl config use-context $KUBE_CONTEXT
script:
- echo "Deploying $IMAGE_TAG to $KUBE_NAMESPACE"
# Apply configurations
- kubectl apply -f k8s/namespace.yml --namespace=$KUBE_NAMESPACE
- kubectl apply -f k8s/configmap.yml --namespace=$KUBE_NAMESPACE
- kubectl apply -f k8s/secrets.yml --namespace=$KUBE_NAMESPACE
# Update image in deployment
- |
sed -i "s|IMAGE_TAG|$IMAGE_TAG|g" k8s/deployment.yml
- kubectl apply -f k8s/deployment.yml --namespace=$KUBE_NAMESPACE
- kubectl apply -f k8s/service.yml --namespace=$KUBE_NAMESPACE
- kubectl apply -f k8s/ingress.yml --namespace=$KUBE_NAMESPACE
# Wait for rollout
- kubectl rollout status deployment/$APP_NAME --namespace=$KUBE_NAMESPACE --timeout=300s
after_script:
# Show deployment status
- kubectl get pods --namespace=$KUBE_NAMESPACE -l app=$APP_NAME
deploy-staging:
<<: *deploy_template
stage: deploy-staging
variables:
KUBE_NAMESPACE: staging
    KUBE_CONTEXT: mygroup/myproject:staging-cluster  # format: <agent-config-project-path>:<agent-name>
environment:
name: staging
url: https://staging.example.com
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
deploy-production:
<<: *deploy_template
stage: deploy-production
variables:
KUBE_NAMESPACE: production
    KUBE_CONTEXT: mygroup/myproject:production-cluster  # format: <agent-config-project-path>:<agent-name>
environment:
name: production
url: https://example.com
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
when: manual
- if: $CI_COMMIT_TAG
when: manual
  needs:
    - job: deploy-staging
      optional: true  # tag pipelines skip deploy-staging; don't fail on the missing job
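The grep-based vulnerability counting in dependency-scan breaks if npm emits more than one match per severity. A slightly sturdier sketch, demonstrated against a fabricated report (the metadata layout assumes npm 8+):

```shell
#!/bin/sh
# Count high/critical vulnerabilities from `npm audit --json` output.
# head -n1 guards against multiple matches; the final || echo 0 guards empty input.
count_vulns() {  # $1 = severity, $2 = report file
  grep -o "\"$1\":[0-9]*" "$2" | head -n1 | grep -o '[0-9]*' || echo 0
}

# Demo with a fabricated report (illustrative numbers only):
printf '{"metadata":{"vulnerabilities":{"low":2,"high":3,"critical":1}}}' > npm-audit.json
HIGH=$(count_vulns high npm-audit.json)
CRITICAL=$(count_vulns critical npm-audit.json)
echo "high=$HIGH critical=$CRITICAL"
if [ "${CRITICAL:-0}" -gt 0 ]; then
  echo "critical vulnerabilities found: $CRITICAL"
fi
```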
Kubernetes Manifests
k8s/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
      annotations:
        # Force a rollout on config changes; the pipeline substitutes
        # CONFIG_CHECKSUM (Helm-style templating does not work with plain kubectl apply)
        checksum/config: "CONFIG_CHECKSUM"
spec:
containers:
- name: myapp
image: IMAGE_TAG
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: myapp-config
- secretRef:
name: myapp-secrets
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3
readinessProbe:
httpGet:
path: /ready
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
failureThreshold: 3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 10"]
terminationGracePeriodSeconds: 30
imagePullSecrets:
- name: gitlab-registry
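The checksum/config annotation only helps if something actually computes the checksum. With plain kubectl (no Helm), the deploy job can substitute it before applying; a minimal sketch (the CONFIG_CHECKSUM placeholder name is an assumption):

```shell
#!/bin/sh
# Compute the configmap manifest's checksum; the deploy job would then
# substitute it into the deployment so config changes trigger a rollout.
printf 'apiVersion: v1\nkind: ConfigMap\n' > configmap.yml   # stand-in manifest for the demo
CHECKSUM=$(sha256sum configmap.yml | cut -d' ' -f1)
echo "checksum/config: $CHECKSUM"
# In the deploy template this would be followed by something like:
#   sed -i "s|CONFIG_CHECKSUM|$CHECKSUM|g" k8s/deployment.yml
```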
k8s/service.yml
apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
selector:
app: myapp
ports:
- port: 80
targetPort: 3000
k8s/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
- hosts:
- example.com
secretName: myapp-tls
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp
port:
number: 80
Optimized Dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Install dependencies first (layer caching)
# Install all dependencies first (layer caching); the build step needs devDependencies
COPY package*.json ./
RUN npm ci
# Copy source, build, then drop devDependencies from node_modules
COPY . .
RUN npm run build && npm prune --omit=dev
# Production stage
FROM node:20-alpine AS production
# Security: non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
WORKDIR /app
# Copy only necessary files
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "dist/main.js"]
GitLab Agent Configuration
.gitlab/agents/production-cluster/config.yaml
gitops:
manifest_projects:
- id: mygroup/myproject
default_namespace: production
paths:
- glob: 'k8s/**/*.yml'
ci_access:
projects:
- id: mygroup/myproject
Advanced Pipeline Features
Canary Deployments
deploy-canary:
stage: deploy-production
script:
- |
# Deploy canary with 10% traffic
kubectl apply -f k8s/deployment-canary.yml --namespace=production
kubectl set image deployment/myapp-canary app=$IMAGE_TAG --namespace=production
# Wait and monitor
sleep 300
      # Check error rate (jq -r emits the bare number, not a quoted string)
      ERROR_RATE=$(curl -s "http://prometheus:9090/api/v1/query?query=..." | jq -r '.data.result[0].value[1]')
      if awk -v r="$ERROR_RATE" 'BEGIN { exit !(r > 0.05) }'; then
echo "Canary failed, rolling back"
kubectl delete deployment/myapp-canary --namespace=production
exit 1
fi
when: manual
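The error-rate gate in the canary job can be exercised locally with a stubbed value; a minimal sketch using awk for the float comparison (the sample rate and 0.05 threshold are illustrative):

```shell
#!/bin/sh
# Stubbed canary gate: compare an error rate against a threshold.
# In the real job, ERROR_RATE comes from the Prometheus query.
ERROR_RATE="0.02"   # stand-in value for the demo
THRESHOLD="0.05"
if awk -v r="$ERROR_RATE" -v t="$THRESHOLD" 'BEGIN { exit !(r > t) }'; then
  echo "canary failed: error rate $ERROR_RATE exceeds $THRESHOLD"
  CANARY_OK=0
else
  echo "canary healthy: error rate $ERROR_RATE within $THRESHOLD"
  CANARY_OK=1
fi
```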
Database Migrations
migrate-database:
  stage: deploy-staging
  image: node:20-alpine
  variables:
    # Example name; define STAGING_DATABASE_URL as a protected CI/CD variable
    DATABASE_URL: $STAGING_DATABASE_URL
  script:
    - npm ci
    - npm run db:migrate
needs:
- build-image
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
changes:
- migrations/**/*
Rollback Job
rollback:
  stage: deploy-production
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config use-context mygroup/myproject:production-cluster
    - kubectl rollout undo deployment/myapp --namespace=production
    - kubectl rollout status deployment/myapp --namespace=production
  when: manual
  environment:
    name: production
Secrets Management
Using GitLab CI/CD Variables
- Go to Settings > CI/CD > Variables
- Add protected variables for production secrets
- Reference in .gitlab-ci.yml:
variables:
DATABASE_URL: $PRODUCTION_DATABASE_URL
Using External Secrets Operator
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: myapp-secrets
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: myapp-secrets
data:
- secretKey: database-password
remoteRef:
key: secret/myapp/production
property: database_password
Pipeline at Scale: Senior DevOps Patterns
Security Scanning Templates
Add these templates to enable automatic vulnerability scanning:
include:
- template: Security/SAST.gitlab-ci.yml
- template: Security/Container-Scanning.gitlab-ci.yml
- template: Security/Secret-Detection.gitlab-ci.yml
By default these template jobs only report findings (they run with allow_failure: true); to block the pipeline on high-severity CVEs, add a scan result policy or override allow_failure.
Tagging Strategy
Critical Rule: Use the commit SHA ($CI_COMMIT_SHORT_SHA) for immutable tags. Never deploy latest to production.
variables:
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
# Bad: IMAGE_TAG: $CI_REGISTRY_IMAGE:latest
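The same immutable tag can be derived outside GitLab; CI_COMMIT_SHORT_SHA is the first eight characters of the full commit SHA. A small sketch (the SHA and registry path are made up for the demo):

```shell
#!/bin/sh
# Derive an immutable image tag from a full commit SHA, mirroring
# $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA (short SHA = first 8 chars in GitLab).
FULL_SHA="2fa13c41f0f5c2a92e3f8dbe2a4b61b07c4e9a17"   # example SHA, illustrative only
REGISTRY_IMAGE="registry.example.com/mygroup/myapp"    # stand-in for $CI_REGISTRY_IMAGE
SHORT_SHA=$(printf '%.8s' "$FULL_SHA")
IMAGE_TAG="$REGISTRY_IMAGE:$SHORT_SHA"
echo "$IMAGE_TAG"
```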
Secrets Rotation Pattern
The deployment script should recreate the registry credential secret on every deploy. This ensures that if the password rotates, the cluster recovers automatically:
script:
- kubectl delete secret regcred --ignore-not-found -n $KUBE_NAMESPACE
- kubectl create secret docker-registry regcred \
--docker-server=$CI_REGISTRY \
--docker-username=$CI_REGISTRY_USER \
--docker-password=$CI_REGISTRY_PASSWORD \
-n $KUBE_NAMESPACE
Cache Security Warning
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- node_modules/
Warning: Caches can be poisoned. Only allow the default branch to update the cache; feature-branch jobs should use policy: pull.
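The split can be written as two cache definitions reusing the key and paths from this pipeline; feature-branch jobs extend the pull-only variant (a sketch, not the only possible layout):

```yaml
# Feature-branch jobs extend the pull-only cache; only default-branch jobs push.
.node_cache_pull:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .npm/
    policy: pull       # read-only: a feature branch cannot poison the cache

.node_cache_push:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .npm/
    policy: pull-push  # default branch refreshes the shared cache
```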
Pipeline Best Practices Checklist
- [ ] Use multi-stage Docker builds to minimize image size
- [ ] Cache dependencies between pipeline runs
- [ ] Run tests in parallel where possible
- [ ] Include security scanning (SAST, dependency, container)
- [ ] Use GitLab environments for deployment tracking
- [ ] Implement rolling deployments with health checks
- [ ] Require manual approval for production deployments
- [ ] Store secrets securely (never in code)
- [ ] Include rollback procedures
- [ ] Monitor deployment success/failure rates
- [ ] Tag releases for easy rollback (use commit SHA, not latest)
- [ ] Use merge request pipelines for faster feedback
- [ ] Don't copy-paste YAML; use templates and inheritance
Related Wiki Articles