GitHub Actions Testing Framework
A comprehensive testing framework for validating GitHub Actions in this monorepo using ShellSpec and Python-based input validation.
🚀 Quick Start
# Run all tests
make test
# Run only unit tests
make test-unit
# Run tests for specific action
make test-action ACTION=node-setup
# Run with coverage reporting
make test-coverage
Prerequisites
# Install ShellSpec (testing framework)
curl -fsSL https://github.com/shellspec/shellspec/releases/latest/download/shellspec-dist.tar.gz | tar -xz
sudo make -C shellspec-* install
# Install nektos/act (optional, for integration tests)
brew install act # macOS
# or: curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
📁 Framework Overview
Architecture
The testing framework uses a multi-level testing strategy:
- Unit Tests - Fast validation of action logic, inputs, and outputs using Python validation
- Integration Tests - Test actions in realistic workflow environments
- External Usage Tests - Validate actions work when referenced as ivuorinen/actions/action-name@main
Technology Stack
- Primary Framework: ShellSpec - BDD testing for shell scripts
- Validation: Python-based input validation via validate-inputs/validator.py
- Local Execution: nektos/act - Run GitHub Actions locally
- CI Integration: GitHub Actions workflows
Directory Structure
_tests/
├── README.md # This documentation
├── run-tests.sh # Main test runner script
├── unit/ # Unit tests by action
│ ├── spec_helper.sh # ShellSpec helper with validation functions
│ ├── version-file-parser/ # Example unit tests
│ ├── node-setup/ # Example unit tests
│ └── ... # One directory per action
├── framework/ # Core testing utilities
│ ├── setup.sh # Test environment setup
│ ├── utils.sh # Common testing functions
│ ├── validation.py # Python validation utilities
│ └── fixtures/ # Test fixtures
├── integration/ # Integration tests
│ ├── workflows/ # Test workflows for nektos/act
│ ├── external-usage/ # External reference tests
│ └── action-chains/ # Multi-action workflow tests
├── coverage/ # Coverage reports
└── reports/ # Test execution reports
✍️ Writing Tests
Basic Unit Test Structure
#!/usr/bin/env shellspec
# _tests/unit/my-action/validation.spec.sh
Describe "my-action validation"
ACTION_DIR="my-action"
ACTION_FILE="$ACTION_DIR/action.yml"
Context "when validating required inputs"
It "accepts valid input"
When call validate_input_python "my-action" "input-name" "valid-value"
The status should be success
End
It "rejects invalid input"
When call validate_input_python "my-action" "input-name" "invalid@value"
The status should be failure
End
End
Context "when validating boolean inputs"
It "accepts true"
When call validate_input_python "my-action" "dry-run" "true"
The status should be success
End
It "accepts false"
When call validate_input_python "my-action" "dry-run" "false"
The status should be success
End
It "rejects invalid boolean"
When call validate_input_python "my-action" "dry-run" "maybe"
The status should be failure
End
End
End
Integration Test Example
# _tests/integration/workflows/my-action-test.yml
name: Test my-action Integration

on: workflow_dispatch

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Test action locally
        id: test-local
        uses: ./my-action
        with:
          required-input: 'test-value'

      - name: Validate outputs
        run: |
          echo "Output: ${{ steps.test-local.outputs.result }}"
          [[ -n "${{ steps.test-local.outputs.result }}" ]] || exit 1

      - name: Test external reference
        uses: ivuorinen/actions/my-action@main
        with:
          required-input: 'test-value'
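Integration workflows like this can be exercised locally with nektos/act before pushing:
# Run the integration workflow locally with nektos/act
act workflow_dispatch -W _tests/integration/workflows/my-action-test.yml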
🛠️ Testing Functions
Primary Validation Function
The framework provides one main validation function that uses the Python validation system:
validate_input_python
Tests input validation using the centralized Python validator:
validate_input_python "action-name" "input-name" "test-value"
Examples:
# Boolean validation
validate_input_python "pre-commit" "dry-run" "true" # success
validate_input_python "pre-commit" "dry-run" "false" # success
validate_input_python "pre-commit" "dry-run" "maybe" # failure
# Version validation
validate_input_python "node-setup" "node-version" "18.0.0" # success
validate_input_python "node-setup" "node-version" "v1.2.3" # success
validate_input_python "node-setup" "node-version" "invalid" # failure
# Token validation
validate_input_python "npm-publish" "npm-token" "ghp_123..." # success
validate_input_python "npm-publish" "npm-token" "invalid" # failure
# Docker validation
validate_input_python "docker-build" "image-name" "myapp" # success
validate_input_python "docker-build" "tag" "v1.0.0" # success
# Path validation (security)
validate_input_python "pre-commit" "config-file" "config.yml" # success
validate_input_python "pre-commit" "config-file" "../etc/pass" # failure
# Injection detection
validate_input_python "common-retry" "command" "echo test" # success
validate_input_python "common-retry" "command" "rm -rf /; " # failure
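Under the hood, validate_input_python exports the value as an INPUT_* environment variable and hands it to the central Python validator. The real helper lives in _tests/unit/spec_helper.sh; the sketch below only illustrates the idea, and how validator.py receives the action name is an assumption here:
# Simplified sketch - see _tests/unit/spec_helper.sh for the real helper.
# Passing the action name as a positional argument is an assumption.
validate_input_python() {
  action="$1" input_name="$2" value="$3"
  # GitHub Actions exposes inputs as INPUT_<NAME> environment variables
  env_name="INPUT_$(echo "$input_name" | tr '[:lower:]-' '[:upper:]_')"
  env "$env_name=$value" \
    python3 "$PROJECT_ROOT/validate-inputs/validator.py" "$action"
}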
Helper Functions from spec_helper.sh
# Setup/cleanup
setup_default_inputs "action-name" "input-name" # Set required defaults
cleanup_default_inputs "action-name" "input-name" # Clean up defaults
shellspec_setup_test_env "test-name" # Setup test environment
shellspec_cleanup_test_env "test-name" # Cleanup test environment
# Mock execution
shellspec_mock_action_run "action-dir" key1 value1 key2 value2
shellspec_validate_action_output "expected-key" "expected-value"
# Action metadata
validate_action_yml "action.yml" # Validate YAML structure
get_action_inputs "action.yml" # Get action inputs
get_action_outputs "action.yml" # Get action outputs
get_action_name "action.yml" # Get action name
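The mock helpers can be combined in a spec to exercise an action's logic without running the real tooling. The input and output names below are illustrative, and whether the mocked $GITHUB_OUTPUT state carries between examples depends on spec_helper.sh:
# Illustrative only - input/output names are hypothetical
It "runs the action with mocked inputs"
  When call shellspec_mock_action_run "my-action" "input-name" "test-value"
  The status should be success
End

It "exposes the expected output"
  # Reads the mocked $GITHUB_OUTPUT written by the previous example
  When call shellspec_validate_action_output "result" "expected-value"
  The status should be success
End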
Complete Action Validation Example
Describe "comprehensive-action validation"
ACTION_DIR="comprehensive-action"
ACTION_FILE="$ACTION_DIR/action.yml"
Context "when validating all input types"
It "validates boolean inputs"
When call validate_input_python "$ACTION_DIR" "verbose" "true"
The status should be success
When call validate_input_python "$ACTION_DIR" "verbose" "false"
The status should be success
When call validate_input_python "$ACTION_DIR" "verbose" "invalid"
The status should be failure
End
It "validates numeric inputs"
When call validate_input_python "$ACTION_DIR" "max-retries" "3"
The status should be success
When call validate_input_python "$ACTION_DIR" "max-retries" "999"
The status should be failure
End
It "validates version inputs"
When call validate_input_python "$ACTION_DIR" "tool-version" "1.0.0"
The status should be success
When call validate_input_python "$ACTION_DIR" "tool-version" "v1.2.3-rc.1"
The status should be success
End
It "validates security patterns"
When call validate_input_python "$ACTION_DIR" "command" "echo test"
The status should be success
When call validate_input_python "$ACTION_DIR" "command" "rm -rf /; "
The status should be failure
End
End
Context "when validating action structure"
It "has valid YAML structure"
When call validate_action_yml "$ACTION_FILE"
The status should be success
End
End
End
🎯 Testing Patterns by Action Type
Setup Actions (node-setup, php-version-detect, etc.)
Focus on version detection and environment setup:
Context "when detecting versions"
It "detects version from config files"
When call validate_input_python "node-setup" "node-version" "18.0.0"
The status should be success
End
It "accepts default version"
When call validate_input_python "python-version-detect" "default-version" "3.11"
The status should be success
End
End
Linting Actions (eslint-fix, prettier-fix, etc.)
Focus on file processing and security:
Context "when processing files"
It "validates working directory"
When call validate_input_python "eslint-fix" "working-directory" "."
The status should be success
End
It "rejects path traversal"
When call validate_input_python "eslint-fix" "working-directory" "../etc"
The status should be failure
End
It "validates boolean flags"
When call validate_input_python "eslint-fix" "fix-only" "true"
The status should be success
End
End
Build Actions (docker-build, go-build, etc.)
Focus on build configuration:
Context "when building"
It "validates image name"
When call validate_input_python "docker-build" "image-name" "myapp"
The status should be success
End
It "validates tag format"
When call validate_input_python "docker-build" "tag" "v1.0.0"
The status should be success
End
It "validates platforms"
When call validate_input_python "docker-build" "platforms" "linux/amd64,linux/arm64"
The status should be success
End
End
Publishing Actions (npm-publish, docker-publish, etc.)
Focus on credentials and registry validation:
Context "when publishing"
It "validates token format"
When call validate_input_python "npm-publish" "npm-token" "ghp_123456789012345678901234567890123456"
The status should be success
End
It "rejects invalid token"
When call validate_input_python "npm-publish" "npm-token" "invalid-token"
The status should be failure
End
It "validates version"
When call validate_input_python "npm-publish" "package-version" "1.0.0"
The status should be success
End
End
🔧 Running Tests
Command Line Interface
# Basic usage
./_tests/run-tests.sh [OPTIONS] [ACTION_NAME...]
# Examples
./_tests/run-tests.sh # All tests, all actions
./_tests/run-tests.sh -t unit # Unit tests only
./_tests/run-tests.sh -a node-setup # Specific action
./_tests/run-tests.sh -t integration docker-build # Integration tests for docker-build
./_tests/run-tests.sh --format json --coverage # JSON output with coverage
Options
| Option | Description |
|---|---|
| `-t, --type TYPE` | Test type: unit, integration, e2e, all |
| `-a, --action ACTION` | Filter by action name pattern |
| `-j, --jobs JOBS` | Number of parallel jobs (default: 4) |
| `-c, --coverage` | Enable coverage reporting |
| `-f, --format FORMAT` | Output format: console, json, junit |
| `-v, --verbose` | Enable verbose output |
| `-h, --help` | Show help message |
Make Targets
make test # Run all tests
make test-unit # Unit tests only
make test-integration # Integration tests only
make test-coverage # Tests with coverage
make test-action ACTION=name # Test specific action
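The Make targets are thin wrappers around ./_tests/run-tests.sh; the exact recipes live in the repository Makefile, so the mapping below is only an approximation based on the CLI options documented above:
# Approximate equivalents (check the Makefile for the real recipes)
make test                     # ≈ ./_tests/run-tests.sh
make test-unit                # ≈ ./_tests/run-tests.sh -t unit
make test-integration         # ≈ ./_tests/run-tests.sh -t integration
make test-coverage            # ≈ ./_tests/run-tests.sh --coverage
make test-action ACTION=name  # ≈ ./_tests/run-tests.sh -a name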
🤝 Contributing Tests
Adding Tests for New Actions
1. Create a unit test directory:

   mkdir -p _tests/unit/new-action

2. Write unit tests:

   # _tests/unit/new-action/validation.spec.sh
   #!/usr/bin/env shellspec
   Describe "new-action validation"
     ACTION_DIR="new-action"
     ACTION_FILE="$ACTION_DIR/action.yml"

     Context "when validating inputs"
       It "validates required input"
         When call validate_input_python "new-action" "required-input" "value"
         The status should be success
       End
     End
   End

3. Create an integration test:

   # _tests/integration/workflows/new-action-test.yml
   # (See the integration test example above)

4. Test your tests:

   make test-action ACTION=new-action
Pull Request Checklist
- Tests use validate_input_python for input validation
- All test types pass locally (make test)
- Integration test workflow created
- Security testing included for user inputs
- Tests are independent and isolated
- Proper cleanup in test teardown
- Documentation updated if needed
💡 Best Practices
1. Use validate_input_python for All Input Testing
✅ Good:
When call validate_input_python "my-action" "verbose" "true"
The status should be success
❌ Avoid:
# Don't manually test validation - use the Python validator
export INPUT_VERBOSE="true"
python3 validate-inputs/validator.py
2. Group Related Validations
✅ Good:
Context "when validating configuration"
It "accepts valid boolean"
When call validate_input_python "my-action" "dry-run" "true"
The status should be success
End
It "accepts valid version"
When call validate_input_python "my-action" "tool-version" "1.0.0"
The status should be success
End
End
3. Always Include Security Testing
✅ Always include:
It "rejects command injection"
When call validate_input_python "common-retry" "command" "rm -rf /; "
The status should be failure
End
It "rejects path traversal"
When call validate_input_python "pre-commit" "config-file" "../etc/passwd"
The status should be failure
End
4. Write Descriptive Test Names
✅ Good:
It "accepts valid semantic version format"
It "rejects version with invalid characters"
It "falls back to default when no version file exists"
❌ Avoid:
It "validates input"
It "works correctly"
5. Keep Tests Independent
- Each test should work in isolation
- Don't rely on test execution order
- Clean up after each test
- Use proper setup/teardown (see the sketch below)
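A minimal sketch of per-example setup and teardown using ShellSpec's Before/After hooks together with the spec_helper.sh helpers (the hook bodies are illustrative):
Describe "my-action validation"
  setup() { shellspec_setup_test_env "my-action"; }
  cleanup() { shellspec_cleanup_test_env "my-action"; }
  Before "setup"
  After "cleanup"

  It "accepts valid input"
    When call validate_input_python "my-action" "input-name" "valid-value"
    The status should be success
  End
End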
🔍 Framework Features
Test Environment Setup
The framework automatically sets up test environments via spec_helper.sh:
# Automatic setup on load
- GitHub Actions environment variables
- Temporary directories
- Mock GITHUB_OUTPUT files
- Default required inputs for actions
# Available variables
$PROJECT_ROOT # Repository root
$TEST_ROOT # _tests/ directory
$FRAMEWORK_DIR # _tests/framework/
$FIXTURES_DIR # _tests/framework/fixtures/
$TEMP_DIR # Temporary test directory
$GITHUB_OUTPUT # Mock outputs file
$GITHUB_ENV # Mock environment file
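Because $GITHUB_OUTPUT points at a mock file, a spec can assert on what a step wrote without a real runner. A small illustrative check (the output key and helper are hypothetical):
# Illustrative helper - stands in for an action step writing an output
write_version() { echo "version=1.2.3" >> "$GITHUB_OUTPUT"; }

It "writes the detected version to GITHUB_OUTPUT"
  When call write_version
  The status should be success
  The contents of file "$GITHUB_OUTPUT" should include "version=1.2.3"
End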
Python Validation Integration
All input validation uses the centralized Python validation system from validate-inputs/:
- Convention-based automatic validation
- 9 specialized validators (Boolean, Version, Token, Numeric, File, Network, Docker, Security, CodeQL)
- Custom validator support per action
- Injection and security pattern detection
🚨 Troubleshooting
Common Issues
"ShellSpec command not found"
# Install ShellSpec globally
curl -fsSL https://github.com/shellspec/shellspec/releases/latest/download/shellspec-dist.tar.gz | tar -xz
sudo make -C shellspec-* install
"act command not found"
# Install nektos/act (macOS)
brew install act
# Install nektos/act (Linux)
curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
Tests time out
# Increase timeout for slow operations
export TEST_TIMEOUT=300
Permission denied on test scripts
# Make test scripts executable
find _tests/ -name "*.sh" -exec chmod +x {} \;
Debugging Tests
1. Enable verbose mode:

   ./_tests/run-tests.sh -v

2. Run a single test:

   shellspec _tests/unit/my-action/validation.spec.sh

3. Enable debug mode:

   export SHELLSPEC_DEBUG=1
   shellspec _tests/unit/my-action/validation.spec.sh

4. Check test output:

   # Test results are stored in _tests/reports/
   cat _tests/reports/unit/my-action.txt
📚 Resources
- ShellSpec Documentation
- nektos/act Documentation
- GitHub Actions Documentation
- Testing GitHub Actions Best Practices
- validate-inputs Documentation
Framework Development
Framework File Structure
_tests/
├── unit/
│ └── spec_helper.sh # ShellSpec configuration and helpers
├── framework/
│ ├── setup.sh # Test environment initialization
│ ├── utils.sh # Common utility functions
│ ├── validation.py # Python validation helpers
│ └── fixtures/ # Test fixtures
└── integration/
├── workflows/ # Integration test workflows
├── external-usage/ # External reference tests
└── action-chains/ # Multi-action tests
Available Functions
From spec_helper.sh (_tests/unit/spec_helper.sh):
- validate_input_python(action, input_name, value) - Main validation function
- setup_default_inputs(action, input_name) - Set default required inputs
- cleanup_default_inputs(action, input_name) - Clean up default inputs
- shellspec_setup_test_env(name) - Set up the test environment
- shellspec_cleanup_test_env(name) - Clean up the test environment
- shellspec_mock_action_run(action_dir, ...) - Mock action execution
- shellspec_validate_action_output(key, value) - Validate outputs
From utils.sh (_tests/framework/utils.sh):
- validate_action_yml(file) - Validate action YAML
- get_action_inputs(file) - Extract action inputs
- get_action_outputs(file) - Extract action outputs
- get_action_name(file) - Get action name
- test_input_validation(dir, name, value, expected) - Test input validation
- test_action_outputs(dir) - Test action outputs
- test_external_usage(dir) - Test external usage
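test_input_validation folds the expected result into a single call, which keeps simple specs compact. An illustrative use follows; the accepted values for the expected argument (success/failure) are an assumption here, so check utils.sh:
# Illustrative only - expected-result values are an assumption
It "accepts the repository root as working directory"
  When call test_input_validation "eslint-fix" "working-directory" "." "success"
  The status should be success
End

It "flags path traversal"
  When call test_input_validation "eslint-fix" "working-directory" "../etc" "failure"
  The status should be success
End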
Last Updated: October 15, 2025