Vakul Keshav
Building Scalable CI/CD Pipelines with Azure DevOps, Docker, and Private NPM Packages

 Over the past few days, I designed and implemented a robust CI/CD pipeline from scratch, tackling the challenges of:

  • Integrating Docker builds with private NPM registries (Azure Artifacts)
  • Managing secure, token-based authentication inside Docker containers
  • Automating deployments for a seamless developer experience

One key challenge was handling private NPM package authentication during Docker builds without exposing sensitive tokens. After multiple iterations, I settled on a scalable approach using Azure DevOps Pipelines, Azure Key Vault for secrets management, and a dynamically generated .npmrc at build time (shown below).

For the private npm registry I am using Azure Artifacts. If you want to know how to integrate Azure Artifacts as an npm registry, or how to publish your first package to it, you can refer to the official documentation.
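For local development against the same feed, the credentials live in an .npmrc. Below is a minimal sketch of generating one from a PAT; the {organization-name}/{project-name}/{feed-name} placeholders and the AZURE_PAT variable are assumptions, not values from this post. Note that Azure Artifacts expects the PAT base64-encoded in the _password field:

```shell
#!/bin/sh
# Sketch: generate an .npmrc for an Azure Artifacts feed.
# {organization-name}/{project-name}/{feed-name} are placeholders to replace.
FEED="pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/"
AZURE_PAT="${AZURE_PAT:-example-pat}"   # your PAT with Packaging > Read scope

# Azure Artifacts expects the PAT base64-encoded in the _password field
ENCODED_PAT=$(printf '%s' "$AZURE_PAT" | base64)

cat > .npmrc <<EOF
registry=https://${FEED}
always-auth=true
//${FEED}:username={organization-name}
//${FEED}:_password=${ENCODED_PAT}
//${FEED}:email=npm requires email to be set but doesn't use the value
EOF

wc -l < .npmrc   # five config lines written
```

npm picks this file up from the project root (or from $HOME for a user-level config) on the next npm install.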

Dockerfile for securely integrating private npm packages

When working with private NPM registries (like Azure Artifacts), integrating authentication into Docker builds can be tricky. A naive approach of passing tokens through ARG or ENV leaks them into image layers, posing a significant security risk. Here's how I tackled this problem: dynamically generating the .npmrc at build time without exposing sensitive information in the final image. If you want to know how to generate a PAT (Personal Access Token) in Azure DevOps, you can refer to the official documentation.

# Stage 1: Build Stage
FROM node:22-alpine AS builder

WORKDIR /app
ARG NPM_AUTH_TOKEN

# Dynamically create .npmrc to authenticate with the private NPM registry.
# Note: Azure Artifacts expects the PAT base64-encoded in the _password field,
# so pass the token already encoded (or base64-encode it here).
RUN echo "registry=https://pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/" > .npmrc && \
    echo "always-auth=true" >> .npmrc && \
    echo "//pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/:username={organization-name}" >> .npmrc && \
    echo "//pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/:_password=${NPM_AUTH_TOKEN}" >> .npmrc && \
    echo "//pkgs.dev.azure.com/{organization-name}/{project-name}/_packaging/{feed-name}/npm/registry/:email=npm requires email to be set but doesn't use the value" >> .npmrc

# Copy package files
COPY package.json ./

# Install dependencies
RUN npm install

# Copy application source code
COPY . .

# Generate Prisma Client
RUN npx prisma generate

# Clean up sensitive files so they are never copied into the runtime stage
RUN rm -rf .npmrc

# Stage 2: Runtime Stage
FROM node:22-alpine

WORKDIR /app

# Copy built application from builder stage
COPY --from=builder /app /app

EXPOSE 3000

CMD ["node", "src/app.js"]

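Locally, the Dockerfile above would be driven by a build command like the following. This is a dry-run sketch: the DOCKER variable defaults to an echo stub so it runs without a Docker daemon, and the ACR name, token, and build ID are placeholder assumptions.

```shell
#!/bin/sh
# Dry-run by default; export DOCKER=docker to actually build.
DOCKER="${DOCKER:-echo docker}"
NPM_AUTH_TOKEN="${NPM_AUTH_TOKEN:-example-pat}"   # placeholder PAT
ACR_NAME="your-acr-name"                          # placeholder ACR name
BUILD_ID="123"                                    # Azure DevOps supplies $(Build.BuildId)

# The token is passed as a build argument: it exists only in the builder
# stage and is never copied into the runtime image.
$DOCKER build \
  --build-arg NPM_AUTH_TOKEN="$NPM_AUTH_TOKEN" \
  -t "$ACR_NAME.azurecr.io/backend-auth:$BUILD_ID" .
```

In the dry run this just prints the assembled docker build command, which is exactly what the pipeline's build step executes on the hosted agent.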

Explanation:

  • When working with private NPM registries like Azure Artifacts, one common challenge is authenticating the Docker build process without leaking sensitive tokens into the final image. To solve this, I passed the NPM auth token as a build argument and generated an .npmrc file on the fly during the build stage. I also tried keeping a separate .npmrc.docker file (to keep the Dockerfile clean) and copying it in, but the token substitution did not work there, so I went with the current approach. After installing the dependencies, I made sure to delete the .npmrc file, ensuring that no secrets persist into the final runtime image.

  • To keep the image clean and secure, I used a multi-stage Docker build: the first stage builds the app and installs dependencies, while the second stage only copies the necessary build artifacts. This approach ensures that devDependencies, build caches, and sensitive files never reach production, keeping the image lightweight and secure.

Automating Docker Builds & VM Deployments with Azure DevOps Pipelines (CI/CD Workflow)

  • After containerizing the application securely, the next step was to automate the entire build → push → deploy workflow using Azure DevOps Pipelines. The CI/CD pipeline I designed builds the Docker image, pushes it to Azure Container Registry (ACR), and then deploys it to an Azure Virtual Machine, which also acts as a self-hosted agent for the CD part (discussed later).

  • One tricky part was handling environment variables securely during deployment. Instead of hardcoding them, I dynamically created a .env file on the VM during the deployment stage. The pipeline also ensures zero downtime deployments by stopping old containers, cleaning up stale files, pulling the latest image, and running it with updated configurations.

  • I'll walk through each step below.

trigger:
- none

pool:
  vmImage: ubuntu-latest

variables:
- group: Backend-Auth-Variables  # Azure DevOps Variable Group for secrets

stages:
# Build Stage
- stage: Build
  displayName: Build and Push Docker Image
  jobs:
  - job: BuildAndPushImage
    displayName: Build and Push Docker Image
    steps:
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          # NPM-AUTH-TOKEN is a secret, so it is mapped in via env: below.
          # Note $(NPM-AUTH-TOKEN:0:4)-style macro substring syntax does not
          # exist; bash substring expansion needs a real environment variable.
          echo "NPM_AUTH_TOKEN starts with: ${NPM_AUTH_TOKEN:0:4}..."
          docker build --build-arg NPM_AUTH_TOKEN="$NPM_AUTH_TOKEN" -t <your-acr-name>.azurecr.io/backend-auth:$(Build.BuildId) .
      env:
        NPM_AUTH_TOKEN: $(NPM-AUTH-TOKEN)  # secrets are not exposed to scripts automatically

    - task: Docker@2
      inputs:
        containerRegistry: 'ACR-Service-Connection'  # Azure DevOps Service Connection to ACR
        repository: 'backend-auth'
        command: 'push'
        tags: '$(Build.BuildId)'  # match the tag used in the build step

# Deploy Stage
- stage: Deploy
  displayName: Deploy to Azure VM
  dependsOn: Build
  jobs:
  - deployment: DeployApp
    displayName: SSH into VM and Deploy Container
    environment: 
      name: Azure_VM_Environment
      resourceName: backend-vm
    strategy:
      runOnce:
        deploy:
          steps:
          - task: Bash@3
            inputs:
              targetType: 'inline'
              script: |
                  #!/bin/bash
                  TARGET_DIR="$HOME/$(Build.Repository.Name)"

                  # Keep every line flush with the script indentation so the
                  # generated .env file has no leading whitespace
                  ENV_FILE_CONTENT="DATABASE_URL=$(DATABASE-URL)
                  API_KEY_SERVICE_1=$(API-KEY-SERVICE-1)
                  API_KEY_SERVICE_2=$(API-KEY-SERVICE-2)
                  JWT_SECRET=$(JWT-SECRET)
                  EMAIL_USER=$(EMAIL-USER)
                  EMAIL_PASS=$(EMAIL-PASS)
                  OAUTH_CLIENT_ID=$(OAUTH-CLIENT-ID)
                  OAUTH_CLIENT_SECRET=$(OAUTH-CLIENT-SECRET)
                  FRONTEND_URL=http://localhost:3001
                  REDIS_PASSWORD=$(REDIS-PASSWORD)
                  REDIS_HOST=$(REDIS-HOST)
                  REDIS_USERNAME=$(REDIS-USERNAME)
                  REDIS_PORT=$(REDIS-PORT)"

                  DOCKER_IMAGE_NAME="<your-acr-name>.azurecr.io/backend-auth:$(Build.BuildId)"
                  CONTAINER_NAME="backend-auth-container"

                  if [ -d "$TARGET_DIR" ]; then
                      echo "Directory exists. Clearing files..."
                      rm -rf "$TARGET_DIR"/*
                  else
                      echo "Directory does not exist. Creating it..."
                      mkdir -p "$TARGET_DIR"
                  fi

                  echo "Creating .env file..."
                  echo "$ENV_FILE_CONTENT" > "$TARGET_DIR/.env"

                  if [ "$(docker ps -aq -f name=$CONTAINER_NAME)" ]; then
                    echo "Stopping and removing old container..."
                    docker rm -f $CONTAINER_NAME
                  else
                    echo "No old container found."
                  fi

                  echo "Pulling Image from ACR..."
                  docker pull $DOCKER_IMAGE_NAME

                  echo "Running new container..."
                  cd $TARGET_DIR
                  docker run -d --name $CONTAINER_NAME --env-file .env -p 3000:3000 $DOCKER_IMAGE_NAME


Explanation:

  1. Service Connections: Secure Access to ACR & Azure VM
    • I created a Docker Registry Service Connection in Azure DevOps to authenticate and push images to Azure Container Registry (ACR).
    • For deployment, I utilized a self-hosted agent running directly on the Azure Virtual Machine, which eliminated the need for any SSH-based service connections or additional setup complexities. The deploy stage in the pipeline simply executes Bash scripts on the same VM post-build, allowing for seamless container deployment and environment configuration without remote access overhead.
    • You can refer to the official documentation for setting up a self-hosted agent and a Docker service connection.

  2. Pipeline Execution Flow:
  • Build & Push Stage
    • A Microsoft-hosted Ubuntu agent performs the Docker build.
    • The NPM token (NPM_AUTH_TOKEN) is passed securely as a build argument to handle private package installations.
    • The built Docker image is tagged with the current Build ID and pushed to Azure Container Registry (ACR) using a Docker Service Connection.
    • I have used two tasks : Bash@3 for custom Docker build steps and Docker@2 for pushing the image to ACR.
  • Deploy Stage: VM-based Self-Hosted Deployment

    • This stage connects to a self-hosted agent (Azure VM) configured as an Azure DevOps Environment Resource.
    • It performs the following actions in sequence:
      • Creates a .env file dynamically on the VM with all sensitive configurations using Azure DevOps variables.
      • Stops and removes any existing containers.
      • Pulls the latest image from ACR.
      • Runs the new container with the updated .env configuration.
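One subtle pitfall when generating the .env on the VM: leading whitespace from the YAML block scalar can leak into the file, and docker's --env-file parser may choke on indented lines. A defensive sketch that writes the file line by line (variable names match the pipeline; the values here are stubs for illustration):

```shell
#!/bin/sh
TARGET_DIR="${TARGET_DIR:-./deploy-demo}"
mkdir -p "$TARGET_DIR"

# Stub values for illustration; in the pipeline these come from the
# Backend-Auth-Variables group via macros like $(DATABASE-URL).
DATABASE_URL="postgres://user:pass@host:5432/db"
JWT_SECRET="stub-secret"
REDIS_PORT="6379"

# One printf per line so no stray indentation reaches the file
{
  printf 'DATABASE_URL=%s\n' "$DATABASE_URL"
  printf 'JWT_SECRET=%s\n'   "$JWT_SECRET"
  printf 'FRONTEND_URL=%s\n' "http://localhost:3001"
  printf 'REDIS_PORT=%s\n'   "$REDIS_PORT"
} > "$TARGET_DIR/.env"

cat "$TARGET_DIR/.env"
```

Every line lands in the file as a clean KEY=VALUE pair, so `docker run --env-file .env` parses it without surprises.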
