
GitHub Actions: The Complete CI/CD Guide for Developers

Master GitHub Actions CI/CD: workflow syntax, triggers, matrix builds, reusable workflows, composite actions, caching, secrets, security hardening, and performance optimization.

GitHub Actions is a CI/CD and workflow automation platform built directly into GitHub. It lets you build, test, and deploy code from your repository using YAML-defined workflows triggered by events like pushes, pull requests, schedules, or manual dispatch. Over 78% of developers now use CI/CD in their workflow, and GitHub Actions is the most popular platform — processing over 2 billion workflow runs per month across 14+ million repositories. Workflows run on GitHub-hosted runners (Ubuntu, Windows, macOS) or self-hosted machines. Public repositories get unlimited free minutes; private repositories get 2,000 free minutes/month on the Free plan.

How Does GitHub Actions Work?

GitHub Actions revolves around three core concepts: workflows, jobs, and steps. A workflow is a YAML file in .github/workflows/ that defines automated tasks. Each workflow contains one or more jobs that run on a runner. Each job contains steps — individual commands or reusable actions.

GitHub Actions execution model
Event (push, PR, schedule, manual)
  │
  ▼
Workflow (.github/workflows/*.yml)
  │
  ├── Job 1 (runs-on: ubuntu-latest)
  │   ├── Step 1: actions/checkout@v4
  │   ├── Step 2: actions/setup-node@v4
  │   └── Step 3: run: npm test
  │
  └── Job 2 (runs-on: ubuntu-latest, needs: job1)
      ├── Step 1: actions/checkout@v4
      └── Step 2: run: npm run deploy

Jobs run in parallel by default.
Use "needs:" to create sequential dependencies.

How Do You Write Your First Workflow?

Create a file at .github/workflows/ci.yml in your repository:

.github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npm test

This workflow runs on every push and pull request to main. It checks out the code, sets up Node.js with npm caching, installs dependencies, and runs tests. That's a production-ready CI pipeline in under 20 lines of YAML.

What Workflow Triggers (Events) Are Available?

The on: key defines what triggers your workflow. GitHub supports 30+ event types.

| Trigger | When it fires | Example |
| --- | --- | --- |
| push | Code pushed to branch/tag | on: push |
| pull_request | PR opened, synced, or reopened | on: pull_request |
| schedule | Cron-based schedule | on: schedule: [{cron: '0 8 * * 1'}] |
| workflow_dispatch | Manual trigger via UI/API | on: workflow_dispatch |
| release | Release published | on: release: types: [published] |
| workflow_call | Called by another workflow | on: workflow_call |
| repository_dispatch | External webhook event | on: repository_dispatch |
| issue_comment | Comment on issue/PR | on: issue_comment: types: [created] |
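
The schedule and workflow_dispatch triggers pair well for maintenance jobs you also want to run on demand. A sketch (the input name and options here are illustrative, not required keys):

```yaml
on:
  schedule:
    - cron: '0 8 * * 1'            # every Monday at 08:00 UTC
  workflow_dispatch:                # adds a "Run workflow" button in the Actions UI
    inputs:
      environment:
        description: Target environment
        type: choice
        options: [staging, production]
        default: staging
```

Inside the job, the chosen value is available as ${{ inputs.environment }}. Note that scheduled runs on the shortest intervals can be delayed or dropped during periods of high load, so treat cron times as approximate.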

Filtering triggers

Use branches, tags, and paths filters to narrow when workflows run:

Filtered triggers
on:
  push:
    branches: [main, 'release/**']
    paths:
      - 'src/**'
      - 'package.json'
    paths-ignore:
      - '**.md'
      - 'docs/**'
  pull_request:
    branches: [main]
    types: [opened, synchronize, reopened]

Path filters are critical for monorepos — they prevent unnecessary CI runs when unrelated files change, saving minutes and money.

What Does a Full Workflow File Look Like?

Most production workflows touch fewer than ten of the available top-level keys, but it helps to see the full anatomy in one place. The block below is the complete YAML schema for a single-file workflow with a real deploy job — name, run-name, triggers, permissions, env, concurrency, defaults, jobs with outputs, conditions, timeouts, and environments. Use it as a reference when you forget where a key lives.

Full workflow anatomy
name: Deploy                          # Display name in Actions tab
run-name: Deploy by @${{ github.actor }}  # Custom run name

on:
  push:
    branches: [main]

permissions:                          # Least-privilege GITHUB_TOKEN
  contents: read
  deployments: write

env:                                  # Workflow-level env vars
  NODE_ENV: production

concurrency:                          # Prevent parallel runs
  group: deploy-${{ github.ref }}
  cancel-in-progress: true

defaults:                             # Default shell and working dir
  run:
    shell: bash
    working-directory: ./app

jobs:
  build:
    runs-on: ubuntu-latest            # Runner environment
    timeout-minutes: 15               # Job timeout (default: 360)
    environment: production           # Deployment environment
    outputs:
      version: ${{ steps.ver.outputs.version }}

    steps:
      - uses: actions/checkout@v4     # Use a published action
        with:
          fetch-depth: 0              # Action inputs

      - name: Get version             # Step display name
        id: ver                       # Step ID for referencing outputs
        run: echo "version=$(cat version.txt)" >> "$GITHUB_OUTPUT"

      - run: npm ci                   # Inline shell command
      - run: npm run build

  deploy:
    needs: build                      # Sequential dependency
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'  # Conditional execution
    steps:
      - run: echo "Deploying v${{ needs.build.outputs.version }}"

What Are Actions?

Actions are reusable units of code that perform a specific task. You reference them with uses: in a step. There are 20,000+ actions on the GitHub Marketplace.

| Action | Purpose | Usage |
| --- | --- | --- |
| actions/checkout@v4 | Clone repo into runner | Almost every workflow |
| actions/setup-node@v4 | Install Node.js + cache | Node.js projects |
| actions/setup-python@v5 | Install Python + cache | Python projects |
| actions/cache@v4 | Cache deps between runs | Speed up builds |
| actions/upload-artifact@v4 | Save build outputs | Share between jobs |
| actions/download-artifact@v4 | Retrieve saved outputs | Downstream jobs |
| github/codeql-action@v3 | Security scanning | Code security |
| docker/build-push-action@v6 | Build + push images | Container workflows |

How Does the Matrix Strategy Work?

The strategy.matrix key runs a job multiple times with different configurations — essential for testing across Node versions, operating systems, or database versions:

Matrix build
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false              # Don't cancel other jobs if one fails
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        node-version: [20, 22]
        exclude:
          - os: windows-latest
            node-version: 20        # Skip this combination
        include:
          - os: ubuntu-latest
            node-version: 22
            coverage: true          # Add extra variable to one combo
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test
      - if: matrix.coverage
        run: npm run test:coverage

This creates 5 parallel jobs (3 OS × 2 Node versions, minus 1 exclusion). Matrix builds catch platform-specific bugs before users do.

How Do Secrets and Environment Variables Work?

GitHub provides three levels of variable storage: secrets (encrypted, write-only), variables (plaintext configuration), and environments (scoped secrets + protection rules). For a deeper walkthrough of when to use each — including environment vs repository scope, fork-PR access rules, and multi-line secret handling — see the GitHub Actions secrets vs environment variables deep-dive.

Using secrets and variables
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production           # Activate environment secrets + protection rules
    steps:
      - run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"ref": "${{ github.sha }}"}' \
            ${{ vars.DEPLOY_URL }}/api/deploy

| Feature | Secrets | Variables | Environments |
| --- | --- | --- | --- |
| Storage | Encrypted | Plaintext | Scoped secrets + vars |
| Visibility in logs | Auto-masked | Visible | Auto-masked (secrets) |
| Access syntax | ${{ secrets.NAME }} | ${{ vars.NAME }} | Same, scoped by environment |
| Scope levels | Org, repo | Org, repo | Per environment (staging, prod) |
| Protection rules | No | No | Yes (reviewers, wait timer, branches) |

How Do You Cache Dependencies in GitHub Actions?

Dependency installs are usually the longest single step in a CI pipeline. A cold npm ci on a real-world Node project takes 45-90 seconds; a warm cache hit drops it to 5-10 seconds. Most setup actions ship built-in caching, but for custom paths you reach for actions/cache directly. The companion GitHub Actions cheat sheet has the cache-key recipes for npm, pnpm, yarn, pip, Go modules, Cargo, Bun, and Docker BuildKit.

Built-in cache vs explicit cache
# Option 1: Built-in cache (preferred)
- uses: actions/setup-node@v4
  with:
    node-version: 22
    cache: npm                      # Automatically caches ~/.npm

# Option 2: Explicit cache (for custom paths)
- uses: actions/cache@v4
  with:
    path: |
      node_modules
      ~/.cache/turbo
    key: ${{ runner.os }}-deps-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      ${{ runner.os }}-deps-

Two limits to keep in mind. Each repository gets a hard 10 GB cache quota — when you exceed it, GitHub evicts entries oldest-first. Caches are also evicted after 7 days of inactivity, so a feature branch you haven't touched for a week starts cold. Engineer your keys around this: include the lockfile hash so a fresh install only happens on real dependency changes, and use restore-keys as a fallback to bootstrap from a partial match. Caches are scoped per branch with read-through from the default branch — feature branches get the main cache for free but write back into their own scope, which is why a brand new PR still benefits from a warm install.
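
The lockfile-hash behavior is easy to demonstrate outside Actions. A minimal shell sketch (file name and package lines are illustrative) of why a key like ${{ runner.os }}-deps-${{ hashFiles('**/pnpm-lock.yaml') }} only rotates on real dependency changes:

```shell
#!/bin/sh
# Simulate hashFiles(): the cache key is a digest of the lockfile bytes,
# so editing source code never invalidates the dependency cache.
printf 'lodash@4.17.21\n' > pnpm-lock.yaml
key1="linux-deps-$(sha256sum pnpm-lock.yaml | cut -d' ' -f1)"

# Same lockfile on the next run -> same key -> cache hit
key2="linux-deps-$(sha256sum pnpm-lock.yaml | cut -d' ' -f1)"

# A dependency bump changes the bytes -> new key -> fresh install, new cache entry
printf 'lodash@4.17.22\n' > pnpm-lock.yaml
key3="linux-deps-$(sha256sum pnpm-lock.yaml | cut -d' ' -f1)"

[ "$key1" = "$key2" ] && [ "$key1" != "$key3" ] && echo "keys behave as expected"
```

This is also why restore-keys matters: when the exact key misses after a dependency bump, the partial-match fallback restores the previous cache so the install only fetches what changed.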

What Are Reusable Workflows?

Reusable workflows let you define a workflow once and call it from multiple other workflows — DRY for CI/CD. They use the workflow_call trigger and support typed inputs, secrets, and outputs.

.github/workflows/reusable-test.yml (callee)
name: Reusable Test

on:
  workflow_call:
    inputs:
      node-version:
        type: number
        default: 22
    secrets:
      NPM_TOKEN:
        required: false
    outputs:
      coverage:
        description: Test coverage percentage
        value: ${{ jobs.test.outputs.coverage }}

jobs:
  test:
    runs-on: ubuntu-latest
    outputs:
      coverage: ${{ steps.cov.outputs.pct }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ inputs.node-version }}
      - run: npm ci
      - run: npm test
      - id: cov
        run: echo "pct=85" >> "$GITHUB_OUTPUT"

.github/workflows/ci.yml (caller)
name: CI

on: [push, pull_request]

jobs:
  test:
    uses: ./.github/workflows/reusable-test.yml    # Same repo
    with:
      node-version: 22
    secrets: inherit                                # Forward all secrets

  test-external:
    uses: org/shared-workflows/.github/workflows/test.yml@v1  # Cross-repo
    with:
      node-version: 20

Reusable workflows can nest up to four levels deep (the top-level workflow plus three nested calls), and a single workflow run can call a maximum of 20 unique reusable workflows.

What Are Composite Actions?

Composite actions bundle multiple steps into a single reusable action. Unlike reusable workflows, they run inline within the calling job — no separate runner. Use them for smaller, repeated step sequences:

.github/actions/setup-project/action.yml
name: Setup Project
description: Checkout, install Node.js, and install dependencies

inputs:
  node-version:
    default: '22'

runs:
  using: composite
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
        cache: npm
    - run: npm ci
      shell: bash

Using the composite action
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: ./.github/actions/setup-project
        with:
          node-version: '22'
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: ./.github/actions/setup-project
      - run: npm test

How Do You Share Data Between Jobs Using Artifacts?

Jobs run on separate runners, so they don't share a filesystem. Use artifacts to pass files between jobs:

Upload and download artifacts
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
          retention-days: 7           # Default is 90

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: ./deploy.sh dist/

CI/CD Pipeline Architectures

Every team needs a CI/CD pipeline, but the shape depends on your team size, release cadence, and risk tolerance. Below are three proven architectures — from the simplest PR-to-production flow to progressive multi-environment delivery.

Pipeline 1: Standard PR → Production

The most common pipeline for small-to-medium teams. Every PR triggers parallel CI jobs (lint, test, build). Merging to main triggers deployment. Simple, effective, and easy to debug.

Standard PR → Production pipeline

PR opened (pull_request)
  │
  ├── Lint + Format (biome, eslint)   ┐
  ├── Unit Tests (vitest, jest)       ├─ parallel jobs
  ├── Build (tsc, vite)               ┘
  └── Preview Deploy (Vercel / Cloudflare)

Merge to main (squash merge, ✓ all checks pass)
  │
  ▼
Prod Deploy (environment: production)

.github/workflows/ci.yml — Full PR pipeline
name: CI

on:
  pull_request:
    branches: [main]

permissions:
  contents: read

concurrency:
  group: ci-${{ github.event.pull_request.number }}
  cancel-in-progress: true

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22, cache: pnpm }
      - run: pnpm install --frozen-lockfile
      - run: pnpm run lint

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22, cache: pnpm }
      - run: pnpm install --frozen-lockfile
      - run: pnpm test

  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22, cache: pnpm }
      - run: pnpm install --frozen-lockfile
      - run: pnpm run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

.github/workflows/deploy.yml — Deploy on merge
name: Deploy

on:
  push:
    branches: [main]

permissions:
  contents: read
  deployments: write

concurrency:
  group: deploy-production
  cancel-in-progress: false        # Don't cancel in-progress deploys

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production          # Requires approval if configured
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22, cache: pnpm }
      - run: pnpm install --frozen-lockfile
      - run: pnpm run build
      - run: npx wrangler deploy    # Or: vercel deploy --prod
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CF_API_TOKEN }}

When to use: single-product teams, startups, projects with fast iteration cycles. Branch protection rules enforce that CI must pass before merging.

Pipeline 2: Trunk-Based Development

The fastest path to production. Short-lived branches (hours, not weeks) merge directly to main. Feature flags gate incomplete work. Merging to main triggers an automated deploy. Used by Google, Meta, and most high-velocity teams — the 2024 DORA report found elite performers deploy more than once a day, and trunk-based development is the only Git strategy that supports that cadence without nightly merge pain.

Trunk-based development with feature flags

feat/login, fix/cache-bug (short-lived branches off main)
  │
  ▼
PR CI (lint → test → build)
  │
  ▼
Merge gate (branch protection rules)
  │
  ▼
Push to main (triggers deploy workflow)
  │
  ▼
Auto deploy (staging → canary → prod)

Short-lived branches merge fast; main is always deployable.
Feature flags gate incomplete work — no long-lived branches.
Recommended for teams shipping to production daily.

The architectural decision behind this pipeline is collapsing the wall between CI and CD. One workflow handles both push and pull_request events, and the same job graph promotes from staging to production in sequence using needs: and environment:. The whole pipeline is a directed graph, not two separate workflows that have to coordinate via workflow_run:

trunk-ci-deploy.yml — essential pattern (abridged; steps elided)
jobs:
  ci:
    runs-on: ubuntu-latest
    steps: [...]                      # lint, test, build

  deploy-staging:
    needs: ci
    if: github.ref == 'refs/heads/main'
    environment: staging              # auto-deploy on merge
    runs-on: ubuntu-latest
    steps: [...]

  deploy-production:
    needs: deploy-staging
    environment: production           # manual approval gate
    runs-on: ubuntu-latest
    steps: [...]

Trade-offs: you trade safety for speed. Without feature flags you ship half-finished UI to production behind a flag default of off; without good test coverage, bugs hit users instead of a separate QA branch. The payoff is a measured 200x lead-time reduction over Gitflow-style branches in DORA's 2023 report — and a CI bill cut roughly in half because you stop running the same checks twice (once on the feature branch, once on the merge commit).

When to use: teams deploying to production daily or more frequently. Requires good test coverage and feature flag infrastructure.

Pipeline 3: Progressive Delivery (Multi-Environment)

The safest path to production. Changes flow through multiple environments with automated quality gates at each stage. Each environment validates a different dimension — functionality, performance, and real-world traffic.

Progressive deployment pipeline (PR → Preview → Staging → Canary → Prod)

PR Preview (per-PR deploy, on: pull_request)
  │
  ▼
Staging (auto on merge, on: push to main)
  │
  ▼
Canary ~5% (smoke tests, on: workflow_run)
  │
  ▼
Production (manual approval, environment: production)

On failure at any stage: roll back to the previous version.

Each stage acts as a quality gate. Failures halt promotion and trigger alerts.
Production requires manual approval via GitHub Environment protection rules.
Recommended for teams with SLAs, where downtime has business impact.

Two architectural decisions distinguish this pipeline from Pipeline 2. First, the build runs once and is passed forward as an artifact — every environment deploys byte-for-byte the same binary, so a passing staging deploy guarantees the production deploy is running the code you tested. Second, the canary stage acts as an automated quality gate: it deploys to ~5% of traffic, watches an error-rate or SLO metric for a few minutes, and aborts the promotion if the signal regresses. The pattern looks like this:

progressive-deploy.yml — canary gate pattern
jobs:
  build:
    steps:
      - run: pnpm run build
      - uses: actions/upload-artifact@v4
        with: { name: dist, path: dist/ }

  deploy-canary:
    needs: deploy-staging
    environment: canary
    steps:
      - uses: actions/download-artifact@v4   # same artifact, every stage
      - run: npx wrangler deploy --env canary
      - name: Monitor error rate (5 min)
        run: |
          sleep 300
          ERROR_RATE=$(curl -s https://api.example.com/metrics/error-rate)
          if (( $(echo "$ERROR_RATE > 0.01" | bc -l) )); then
            exit 1                            # halt promotion
          fi

  deploy-production:
    needs: deploy-canary
    environment: production                   # manual approval
    steps: [...]

Trade-offs: wall-clock time. A canary watch window of 5-15 minutes is the floor — anything shorter is theatre, because real regressions in latency or error rate need traffic to surface. End-to-end you typically spend 20-40 minutes from merge to full production rollout. The payoff is that a bad deploy hits 5% of users instead of 100% while someone notices and rolls back. For an e-commerce site doing $1M/day in revenue, a 50-minute bad deploy at full exposure is roughly a $35K incident; confined to a 5% canary, the same bug costs about $1.7K.

When to use: teams with SLAs, regulated industries, or user-facing products where downtime has measurable business impact. The extra stages add 10-15 min but catch issues before they reach all users.

Which Pipeline Architecture Should You Choose?

| Factor | Standard PR → Prod | Trunk-Based | Progressive Delivery |
| --- | --- | --- | --- |
| Team size | 1-10 devs | 5-50 devs | 10+ devs |
| Deploy frequency | Daily to weekly | Multiple per day | Daily with safety nets |
| Setup complexity | Low (1-2 workflow files) | Medium (feature flags) | High (multiple envs) |
| Risk tolerance | Medium | Low (fast rollback) | Very low (staged rollout) |
| Time to production | 5-10 min | 3-8 min | 15-30 min |
| Best for | Startups, small teams | High-velocity SaaS | Enterprise, regulated |

Where Can You Find Real-World Workflow Examples?

Synthetic snippets only get you so far. The fastest way to learn idiomatic GitHub Actions is to read production workflows in popular open-source repos — they handle scale, edge cases, and the long tail of platform quirks you only hit in real projects. Browsing the .github/workflows/ directory of any large, actively maintained project is a faster education than most tutorials.

How Do You Prevent Duplicate Workflow Runs?

Without a concurrency group, every push to an active PR queues a fresh workflow run, even if the previous one for that PR is still building. On a busy four-developer team that force-pushes a few times an hour, you can easily have four in-flight runs eating CI minutes for code that's already been replaced. Use concurrency to keep only the newest run alive:

Cancel outdated PR builds
concurrency:
  group: ci-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

This groups runs by PR number (or branch ref for pushes) and cancels any running workflow when a new one starts in the same group. The savings are concrete: a busy private repo on the Free plan can burn through its 2,000-minute monthly cap in under two weeks without it, while the same repo with concurrency enabled typically lands at 30-50% lower usage. For deploy workflows, set cancel-in-progress: false instead — you don't want to interrupt a production rollout midway through, even if a newer commit lands.

How Do Permissions and GITHUB_TOKEN Work?

Every workflow run gets a GITHUB_TOKEN with configurable permissions. Always follow least-privilege:

Explicit permissions
# Workflow-level (applies to all jobs)
permissions:
  contents: read
  pull-requests: write
  issues: read

# Or per-job (overrides workflow-level)
jobs:
  lint:
    permissions:
      contents: read
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run lint

  comment:
    permissions:
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      - run: gh pr comment ${{ github.event.number }} --body "All checks passed"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Set the repository default to read in Settings → Actions → General → Workflow permissions. GitHub itself made read-only the default for new repositories in February 2023, after a string of 2021-2022 token-stealing incidents — most notably the Codecov bash uploader compromise — taught the platform that a write-by-default token is a privilege most workflows never need. If you inherited a repo created before that change, the default is still read/write; the single most-impactful security change you can make is flipping that toggle and explicitly granting writes per workflow.

Security Best Practices

GitHub Actions has been the target of several large-scale supply-chain attacks. The tj-actions/changed-files compromise in March 2025 affected an estimated 23,000+ repositories: an attacker pushed a malicious commit and retagged every release from v1 through v45 to point at it, dumping every CI runner's secrets into workflow logs. The mitigation list below is built around the lessons from that incident and the 2021 Codecov breach — every line is something you can do today.

  • Pin actions to full commit SHAs — tags can be moved or compromised. Use uses: actions/checkout@<full-sha> in production. This is the single mitigation that would have neutralized the tj-actions attack.
  • Never use structured data as secrets — JSON/XML/YAML secrets cannot be properly redacted in logs
  • Avoid pull_request_target unless you understand the security implications — it runs with write access on code from forks
  • Don't use self-hosted runners for public repos — fork PRs can execute arbitrary code on your runner
  • Mask sensitive output — use echo "::add-mask::$VALUE" for dynamically generated secrets
  • Audit third-party actions — review source code before using, especially for actions that handle secrets
  • Use environments for deployments — protection rules enforce required reviewers, wait timers, and branch restrictions
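
In practice, SHA pinning looks like this (the SHA below is a placeholder, not a real checkout release commit; resolve the actual SHA from the action's release tags before copying):

```yaml
steps:
  # Full 40-character commit SHA instead of a movable tag.
  # The trailing comment records which tag the SHA was resolved from,
  # so automated update tools can keep both in sync.
  - uses: actions/checkout@0000000000000000000000000000000000000000 # v4.1.1
```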

Performance Optimization

For most teams, the bottleneck isn't compute speed — runners are fast — it's wasted work. A typical untuned Node.js pipeline spends 60-80% of its wall-clock time on npm install, repeated across three or four jobs that each re-install the same dependencies from scratch. Pull the lever on caching plus job parallelism and you can usually take a 12-minute pipeline down to 3-4 minutes without touching the test suite. The numbers below are what we measure on real-world Node and Python projects.

| Technique | Impact | How |
| --- | --- | --- |
| Dependency caching | 50-80% faster installs | cache: npm in setup-node or actions/cache |
| Concurrency + cancel-in-progress | 30-50% fewer minutes | concurrency group per PR/branch |
| Path filters | Skip irrelevant runs | paths: / paths-ignore: in on: push |
| Matrix fail-fast: false | Full test coverage | Don't cancel other matrix jobs on failure |
| Smaller runners | Lower cost | Use ubuntu-latest over macos-latest when possible |
| Timeout limits | Prevent runaway jobs | timeout-minutes: 15 on each job |
| Parallel jobs | Faster pipeline | Split lint, test, build into separate jobs |

How Do You Debug Failed Workflows?

When a workflow fails, use these techniques to diagnose the issue:

  • Enable debug logging — set repo secret ACTIONS_STEP_DEBUG to true for verbose step output
  • Re-run with debug — click "Re-run jobs" → "Enable debug logging" in the Actions UI
  • Use the gh CLI — gh run view --log-failed shows only failed step logs
  • Add diagnostic steps — temporarily add run: env | sort or run: cat $GITHUB_EVENT_PATH to inspect context
  • Check runner status — gh run view <run-id> shows which runner was assigned and its status

Useful gh CLI commands for debugging
# List recent workflow runs
gh run list --limit 5

# View a specific run
gh run view <run-id>

# Watch a run in real-time
gh run watch <run-id>

# View only the failed logs
gh run view <run-id> --log-failed

# Re-run failed jobs
gh run rerun <run-id> --failed

# Trigger a workflow manually
gh workflow run ci.yml --ref main

What Is the Difference Between Reusable Workflows and Composite Actions?

| Feature | Reusable Workflow | Composite Action |
| --- | --- | --- |
| Runs on | Separate runner | Same job (inline) |
| Trigger | workflow_call | uses: in a step |
| Can have jobs? | Yes (multiple) | No (steps only) |
| Secrets access | Explicit pass or inherit | Inherits from caller job |
| Environment support | Yes | No |
| Nesting depth | Up to 4 levels | Supported (keep it shallow) |
| Best for | Full pipelines (CI, deploy) | Setup steps, small reusable units |

Rule of thumb: use reusable workflows for big shared pipelines (build + test matrix, deploy). Use composite actions for smaller repeated steps (project setup, formatting, packaging).

References

Bookmark the GitHub Actions cheat sheet for syntax, contexts, expressions, and cache-key recipes you can copy in seconds, or grab the YAML cheat sheet when you're tired of debugging indentation errors in your workflow files.


Frequently Asked Questions

What is the difference between push and pull_request triggers?

The push trigger fires when commits are pushed to a branch. The pull_request trigger fires when a PR is opened, synchronized (new commits pushed), or reopened. pull_request runs in the context of the merge result and has limited secret access for fork PRs.
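
A sketch of the two side by side:

```yaml
on:
  push:
    branches: [main]      # fires after commits land on main (merges included)
  pull_request:
    branches: [main]      # fires on PRs targeting main, against the merge preview;
                          # for fork PRs, secrets are unavailable
```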

How much does GitHub Actions cost?

Public repositories get unlimited free minutes. Private repos get 2,000 free minutes/month on the Free plan, 3,000 on Team, and 50,000 on Enterprise. Linux runners cost $0.008/min, macOS $0.08/min (10x), and Windows $0.016/min (2x). Self-hosted runners have no per-minute charge.

What is the difference between secrets and variables?

Secrets are encrypted and write-only — you can set them but never read them back in the UI. They are automatically masked in logs. Variables are plaintext configuration values visible in the UI and in logs. Use secrets for API tokens, passwords, and keys. Use variables for non-sensitive config like URLs, feature flags, or version numbers.

How do I run a workflow only on specific file changes?

Use path filters in your trigger: on: push: paths: ['src/**', 'package.json']. You can also use paths-ignore to exclude files like documentation. Path filters work with both push and pull_request events. For monorepos, this prevents unnecessary CI runs when unrelated packages change.

Can I run GitHub Actions locally?

Yes, use the open-source tool "act" (github.com/nektos/act). It runs workflows locally using Docker containers that simulate GitHub runners. It supports most features but cannot perfectly replicate GitHub-hosted runner environments. It is useful for fast iteration during workflow development.

How do I pass data between jobs?

For small values (strings, numbers), use job outputs: write to $GITHUB_OUTPUT in a step, declare the output in the job's outputs map, then read it in downstream jobs via needs.job-id.outputs.name. For files, use actions/upload-artifact and actions/download-artifact to pass build outputs, test results, or other files between jobs.
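
A minimal sketch of the job-outputs path (step ID and version value are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.meta.outputs.version }}      # promote step output to job output
    steps:
      - id: meta
        run: echo "version=1.2.3" >> "$GITHUB_OUTPUT" # write the step output

  release:
    needs: build                                       # dependency edge makes outputs visible
    runs-on: ubuntu-latest
    steps:
      - run: echo "Releasing ${{ needs.build.outputs.version }}"
```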

Should I pin actions to tags or commit SHAs?

Pin to full commit SHAs for production workflows. Tags like @v4 can be moved or compromised by the action author, meaning a tag you trusted yesterday could point to different code today. SHAs are immutable. Use Dependabot or Renovate to automatically update pinned SHAs when new versions are released. The tj-actions/changed-files compromise in March 2025 — which exposed secrets in 23,000+ repos — was only possible because nearly all callers used mutable tags instead of SHAs.
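
The Dependabot side is a one-time config file. A sketch:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: github-actions
    directory: /                  # scans .github/workflows/
    schedule:
      interval: weekly
```

Dependabot updates SHA-pinned actions too, bumping both the pinned commit and the trailing version comment in the same PR.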
