feat: fixes, tweaks, new actions, linting (#186)

* feat: fixes, tweaks, new actions, linting
* fix: improve docker publish loops and dotnet parsing (#193)
* fix: harden action scripts and version checks (#191)
* refactor: major repository restructuring and security enhancements

Add comprehensive development infrastructure:
- Add Makefile with automated documentation generation, formatting, and linting tasks
- Add TODO.md tracking self-containment progress and repository improvements
- Add .nvmrc for consistent Node.js version management
- Create python-version-detect-v2 action for enhanced Python detection

Enhance all GitHub Actions with standardized patterns:
- Add consistent token handling across 27 actions using standardized input patterns
- Implement bash error handling (set -euo pipefail) in all shell steps
- Add comprehensive input validation for path traversal and command injection protection
- Standardize checkout token authentication to prevent rate limiting
- Remove relative action dependencies to ensure external usability
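The standardized shell-step pattern above can be sketched as follows; `validate_path` is an illustrative helper, not the repository's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of the standardized shell-step pattern: strict mode plus input
# validation before any file operation. validate_path is a hypothetical
# helper name, not the repository's actual function.
set -euo pipefail

# Reject path traversal and absolute paths in user-supplied inputs
validate_path() {
  local input="$1"
  if [[ "$input" == *".."* || "$input" == /* ]]; then
    echo "::error::Invalid path: $input" >&2
    return 1
  fi
  return 0
}
```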

Rewrite security workflow for PR-focused analysis:
- Transform security-suite.yml to PR-only security analysis workflow
- Remove scheduled runs, repository issue management, and Slack notifications
- Implement smart comment generation showing only sections with content
- Add GitHub Actions permission diff analysis and new action detection
- Integrate OWASP, Semgrep, and TruffleHog for comprehensive PR security scanning

Improve version detection and dependency management:
- Simplify version detection actions to use inline logic instead of shared utilities
- Fix Makefile version detection fallback to properly return 'main' when version not found
- Update all external action references to use SHA-pinned versions
- Remove deprecated run.sh in favor of Makefile automation
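The fallback fix reduces to the behavior sketched below; `resolve_ref` is an illustrative name for the Makefile logic, which feeds it the (possibly empty) result of `git describe`:

```shell
#!/usr/bin/env bash
# Sketch of the version-detection fallback: an empty detection result must
# resolve to 'main'. resolve_ref is an illustrative name, not the Makefile's
# exact recipe.
set -euo pipefail

resolve_ref() {
  # $1: version string discovered by `git describe --tags` (may be empty)
  local version="${1:-}"
  echo "${version:-main}"
}
```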

Update documentation and project standards:
- Enhance CLAUDE.md with self-containment requirements and linting standards
- Update README.md with improved action descriptions and usage examples
- Standardize code formatting with updated .editorconfig and .prettierrc.yml
- Improve GitHub templates for issues and security reporting

This refactoring ensures all 40 actions are fully self-contained and can be used independently when
referenced as ivuorinen/actions/action-name@main, addressing the critical requirement for external
usability while maintaining comprehensive security analysis and development automation.

* feat: add automated action catalog generation system

- Create generate_listing.cjs script for comprehensive action catalog
- Add package.json with development tooling and npm scripts
- Implement automated README.md catalog section with --update flag
- Generate markdown reference-style links for all 40 actions
- Add categorized tables with features, language support matrices
- Replace static reference links with auto-generated dynamic links
- Enable complete automation of action documentation maintenance

* feat: enhance actions with improved documentation and functionality

- Add comprehensive README files for 12 actions with usage examples
- Implement new utility actions (go-version-detect, dotnet-version-detect)
- Enhance node-setup with extensive configuration options
- Improve error handling and validation across all actions
- Update package.json scripts for better development workflow
- Expand TODO.md with detailed roadmap and improvement plans
- Standardize action structure with consistent inputs/outputs

* feat: add comprehensive output handling across all actions

- Add standardized outputs to 15 actions that previously had none
- Implement consistent snake_case naming convention for all outputs
- Add build status and test results outputs to build actions
- Add files changed and status outputs to lint/fix actions
- Add test execution metrics to php-tests action
- Add stale/closed counts to stale action
- Add release URLs and IDs to github-release action
- Update documentation with output specifications
- Mark comprehensive output handling task as complete in TODO.md
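The snake_case output convention follows the standard `GITHUB_OUTPUT` file protocol; a minimal sketch (output names are examples of the convention, not an exhaustive list):

```shell
#!/usr/bin/env bash
# Sketch of the standardized output pattern: snake_case keys appended to the
# file GitHub Actions exposes via $GITHUB_OUTPUT. The specific keys here are
# illustrative examples.
set -euo pipefail

write_outputs() {
  local files_changed="$1" status="$2"
  {
    echo "files_changed=${files_changed}"
    echo "status=${status}"
  } >>"$GITHUB_OUTPUT"
}
```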

* feat: implement shared cache strategy across all actions

- Add caching to 10 actions that previously had none (Node.js, .NET, Python, Go)
- Standardize 4 existing actions to use common-cache instead of direct actions/cache
- Implement consistent cache-hit optimization to skip installations when cache available
- Add language-specific cache configurations with appropriate key files
- Create unified caching approach using ivuorinen/actions/common-cache@main
- Fix YAML syntax error in php-composer action paths parameter
- Update TODO.md to mark shared cache strategy as complete

* feat: implement comprehensive retry logic for network operations

- Create new common-retry action for standardized retry patterns with configurable strategies
- Add retry logic to 9 actions missing network retry capabilities
- Implement exponential backoff, custom timeouts, and flexible error handling
- Add max-retries input parameter to all network-dependent actions (Node.js, .NET, Python, Go)
- Standardize existing retry implementations to use common-retry utility
- Update action catalog to include new common-retry action (41 total actions)
- Update documentation with retry configuration examples and parameters
- Mark retry logic implementation as complete in TODO.md roadmap
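The exponential-backoff pattern behind common-retry looks roughly like this minimal sketch; the real action adds configurable timeouts and error matching:

```shell
#!/usr/bin/env bash
# Minimal sketch of the retry pattern with exponential backoff; not the
# common-retry action itself, which adds timeouts and flexible error handling.
set -euo pipefail

retry() {
  local max_retries="$1"; shift
  local delay=1 attempt
  for ((attempt = 1; attempt <= max_retries; attempt++)); do
    if "$@"; then
      return 0
    fi
    if ((attempt < max_retries)); then
      echo "Attempt ${attempt} failed, retrying in ${delay}s..." >&2
      sleep "$delay"
      delay=$((delay * 2)) # double the wait after each failure
    fi
  done
  return 1
}
```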

* feat: enhance Node.js support with Corepack and Bun

- Add Corepack support for automatic package manager version management
- Add Bun package manager support across all Node.js actions
- Improve Yarn Berry/PnP support with .yarnrc.yml detection
- Add Node.js feature detection (ESM, TypeScript, frameworks)
- Update package manager detection priority and lockfile support
- Enhance caching with package-manager-specific keys
- Update eslint, prettier, and biome actions for multi-package-manager support
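Lockfile-based detection can be sketched as below; the priority order shown is one plausible ordering, and the real action also honors the `packageManager` field via Corepack:

```shell
#!/usr/bin/env bash
# Sketch of lockfile-based package manager detection. The priority order is
# an assumption for illustration; the real action also reads package.json's
# packageManager field through Corepack.
set -euo pipefail

detect_package_manager() {
  local dir="${1:-.}"
  if [ -f "$dir/bun.lockb" ]; then echo bun
  elif [ -f "$dir/pnpm-lock.yaml" ]; then echo pnpm
  elif [ -f "$dir/yarn.lock" ]; then echo yarn
  else echo npm # default when no lockfile is present
  fi
}
```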

* fix: resolve critical runtime issues across multiple actions

- Fix token validation by removing ineffective literal string comparisons
- Add missing @microsoft/eslint-formatter-sarif dependency for SARIF output
- Fix Bash variable syntax errors in username and changelog length checks
- Update Dockerfile version regex to handle tags with suffixes (e.g., -alpine)
- Simplify version selection logic with single grep command
- Fix command execution in retry action with proper bash -c wrapper
- Correct step output references using .outcome instead of .outputs.outcome
- Add missing step IDs for version detection actions
- Include go.mod in cache key files for accurate invalidation
- Require a minor version component in all version regex patterns
- Improve Bun installation security by verifying script before execution
- Replace bc with sort -V for portable PHP version comparison
- Remove non-existent pre-commit output references

These fixes ensure proper runtime behavior, improved security, and better
cross-platform compatibility across all affected actions.
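The `sort -V` replacement for `bc` can be sketched as a single comparison helper; `version_ge` is an illustrative name:

```shell
#!/usr/bin/env bash
# Sketch of the portable version comparison that replaced bc: sort -V
# understands dotted version strings, so no floating-point arithmetic is
# needed. version_ge is an illustrative helper name.
set -euo pipefail

version_ge() {
  # Returns 0 if $1 >= $2 in version ordering
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
```

Unlike `bc`, this handles multi-digit components correctly, e.g. treating 8.10 as newer than 8.2.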

* fix: resolve critical runtime and security issues across actions

- Fix biome-fix files_changed calculation using git diff instead of git status delta
- Fix compress-images output description and add absolute path validation
- Remove csharp-publish token default and fix token fallback in push commands
- Add @microsoft/eslint-formatter-sarif to all package managers in eslint-check
- Fix eslint-check command syntax by using variable assignment
- Improve node-setup Bun installation security and remove invalid frozen-lockfile flag
- Fix pre-commit token validation by removing ineffective literal comparison
- Fix prettier-fix token comparison and expand regex for all GitHub token types
- Add version-file-parser regex validation safety and fix csproj wildcard handling

These fixes address security vulnerabilities, runtime errors, and functional issues
to ensure reliable operation across all affected GitHub Actions.

* feat: enhance Docker actions with advanced multi-architecture support

Major enhancement to Docker build and publish actions with comprehensive
multi-architecture capabilities and enterprise-grade features.

Added features:
- Advanced buildx configuration (version control, cache modes, build contexts)
- Auto-detect platforms for dynamic architecture discovery
- Performance optimizations with enhanced caching strategies
- Security scanning with Trivy and image signing with Cosign
- SBOM generation in multiple formats with validation
- Verbose logging and dry-run modes for debugging
- Platform-specific build args and fallback mechanisms

Enhanced all Docker actions:
- docker-build: Core buildx features and multi-arch support
- docker-publish-gh: GitHub Packages with security features
- docker-publish-hub: Docker Hub with scanning and signing
- docker-publish: Orchestrator with unified configuration

Updated documentation across all modified actions.
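At their core, the features above assemble into a buildx invocation along these lines; the command is only constructed and printed here, since a configured builder and registry credentials are assumed:

```shell
#!/usr/bin/env bash
# Illustrative sketch: assemble (not execute) a multi-arch buildx command.
# Flag selection mirrors the features described above; image name and
# platforms are example values.
set -euo pipefail

build_cmd() {
  local image="$1" platforms="$2"
  local -a args=(
    docker buildx build
    --platform "$platforms"
    --tag "$image"
    --cache-from type=gha
    --cache-to type=gha,mode=max
    --sbom=true
    --provenance=true
    --push .
  )
  printf '%s\n' "${args[*]}"
}
```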

* fix: resolve documentation generation placeholder issue

Fixed Makefile and package.json to properly replace placeholder tokens in generated
documentation, ensuring all README files show correct repository paths instead of
***PROJECT***@***VERSION***.

* chore: simplify github token validation
* chore(lint): optional yamlfmt, config and fixes
* feat: use relative `uses` names

* feat: comprehensive testing infrastructure and Python validation system

- Migrate from tests/ to _tests/ directory structure with ShellSpec framework
- Add comprehensive validation system with Python-based input validation
- Implement dual testing approach (ShellSpec + pytest) for complete coverage
- Add modern Python tooling (uv, ruff, pytest-cov) and dependencies
- Create centralized validation rules with automatic generation system
- Update project configuration and build system for new architecture
- Enhance documentation to reflect current testing capabilities

This establishes a robust foundation for action validation and testing
with extensive coverage across all GitHub Actions in the repository.

* chore: remove Dockerfile for now
* chore: code review fixes

* feat: comprehensive GitHub Actions restructuring and tooling improvements

This commit represents a major restructuring of the GitHub Actions monorepo
with improved tooling, testing infrastructure, and comprehensive PR #186
review implementation.

## Major Changes

### 🔧 Development Tooling & Configuration
- **Shellcheck integration**: Exclude shellspec test files from linting
  - Updated .pre-commit-config.yaml to exclude _tests/*.sh from shellcheck/shfmt
  - Modified Makefile shellcheck pattern to skip shellspec files
  - Updated CLAUDE.md documentation with proper exclusion syntax
- **Testing infrastructure**: Enhanced Python validation framework
  - Fixed nested if statements and boolean parameter issues in validation.py
  - Improved code quality with explicit keyword arguments
  - All pre-commit hooks now passing

### 🏗️ Project Structure & Documentation
- **Added Serena AI integration** with comprehensive project memories:
  - Project overview, structure, and technical stack documentation
  - Code style conventions and completion requirements
  - Comprehensive PR #186 review analysis and implementation tracking
- **Enhanced configuration**: Updated .gitignore, .yamlfmt.yml, pyproject.toml
- **Improved testing**: Added integration workflows and enhanced test specs

### 🚀 GitHub Actions Improvements (30+ actions updated)
- **Centralized validation**: Updated 41 validation rule files
- **Enhanced actions**: Improvements across all action categories:
  - Setup actions (node-setup, version detectors)
  - Utility actions (version-file-parser, version-validator)
  - Linting actions (biome, eslint, terraform-lint-fix major refactor)
  - Build/publish actions (docker-build, npm-publish, csharp-*)
  - Repository management actions

### 📝 Documentation Updates
- **README consistency**: Updated version references across action READMEs
- **Enhanced documentation**: Improved action descriptions and usage examples
- **CLAUDE.md**: Updated with current tooling and best practices

## Technical Improvements
- **Security enhancements**: Input validation and sanitization improvements
- **Performance optimizations**: Streamlined action logic and dependencies
- **Cross-platform compatibility**: Better Windows/macOS/Linux support
- **Error handling**: Improved error reporting and user feedback

## Files Changed
- 100 files changed
- 13 new Serena memory files documenting project state
- 41 validation rules updated for consistency
- 30+ GitHub Actions and READMEs improved
- Core tooling configuration enhanced

* feat: comprehensive GitHub Actions improvements and PR review fixes

Major Infrastructure Improvements:
- Add comprehensive testing framework with 17+ ShellSpec validation tests
- Implement Docker-based testing tools with automated test runner
- Add CodeRabbit configuration for automated code reviews
- Restructure documentation and memory management system
- Update validation rules for 25+ actions with enhanced input validation
- Modernize CI/CD workflows and testing infrastructure

Critical PR Review Fixes (All Issues Resolved):
- Fix double caching in node-setup (eliminate redundant cache operations)
- Optimize shell pipeline in version-file-parser (single awk vs complex pipeline)
- Fix GitHub expression interpolation in prettier-check cache keys
- Resolve terraform command order issue (validation after setup)
- Add missing flake8-sarif dependency for Python SARIF output
- Fix environment variable scope in pr-lint (export to GITHUB_ENV)

Performance & Reliability:
- Eliminate duplicate cache operations saving CI time
- Improve shell script efficiency with optimized parsing
- Fix command execution dependencies preventing runtime failures
- Ensure proper dependency installation for all linting tools
- Resolve workflow conditional logic issues

Security & Quality:
- All input validation rules updated with latest security patterns
- Cross-platform compatibility improvements maintained
- Comprehensive error handling and retry logic preserved
- Modern development tooling and best practices adopted

This commit addresses 100% of actionable feedback from PR review analysis,
implements comprehensive testing infrastructure, and maintains high code
quality standards across all 41 GitHub Actions.

* feat: enhance expression handling and version parsing

- Fix node-setup force-version expression logic for proper empty string handling
- Improve version-file-parser with secure regex validation and enhanced Python detection
- Add CodeRabbit configuration for CalVer versioning and README review guidance

* feat(validate-inputs): implement modular validation system

- Add modular validator architecture with specialized validators
- Implement base validator classes for different input types
- Add validators: boolean, docker, file, network, numeric, security, token, version
- Add convention mapper for automatic input validation
- Add comprehensive documentation for the validation system
- Implement PCRE regex support and injection protection

* feat(validate-inputs): add validation rules for all actions

- Add YAML validation rules for 42 GitHub Actions
- Auto-generated rules with convention mappings
- Include metadata for validation coverage and quality indicators
- Mark rules as auto-generated to prevent manual edits

* test(validate-inputs): add comprehensive test suite for validators

- Add unit tests for all validator modules
- Add integration tests for the validation system
- Add fixtures for version test data
- Test coverage for boolean, docker, file, network, numeric, security, token, and version validators
- Add tests for convention mapper and registry

* feat(tools): add validation scripts and utilities

- Add update-validators.py script for auto-generating rules
- Add benchmark-validator.py for performance testing
- Add debug-validator.py for troubleshooting
- Add generate-tests.py for test generation
- Add check-rules-not-manually-edited.sh for CI validation
- Add fix-local-action-refs.py tool for fixing action references

* feat(actions): add CustomValidator.py files for specialized validation

- Add custom validators for actions requiring special validation logic
- Implement validators for docker, go, node, npm, php, python, terraform actions
- Add specialized validation for compress-images, common-cache, common-file-check
- Implement version detection validators with language-specific logic
- Add validation for build arguments, architectures, and version formats

* test: update ShellSpec test framework for Python validation

- Update all validation.spec.sh files to use Python validator
- Add shared validation_core.py for common test utilities
- Remove obsolete bash validation helpers
- Update test output expectations for Python validator format
- Add codeql-analysis test suite
- Refactor framework utilities for Python integration
- Remove deprecated test files

* feat(actions): update action.yml files to use validate-inputs

- Replace inline bash validation with validate-inputs action
- Standardize validation across all 42 actions
- Add new codeql-analysis action
- Update action metadata and branding
- Add validation step as first step in composite actions
- Maintain backward compatibility with existing inputs/outputs

* ci: update GitHub workflows for enhanced security and testing

- Add new codeql-new.yml workflow
- Update security scanning workflows
- Enhance dependency review configuration
- Update test-actions workflow for new validation system
- Improve workflow permissions and security settings
- Update action versions to latest SHA-pinned releases

* build: update build configuration and dependencies

- Update Makefile with new validation targets
- Add Python dependencies in pyproject.toml
- Update npm dependencies and scripts
- Enhance Docker testing tools configuration
- Add targets for validator updates and local ref fixes
- Configure uv for Python package management

* chore: update linting and documentation configuration

- Update EditorConfig settings for consistent formatting
- Enhance pre-commit hooks configuration
- Update prettier and yamllint ignore patterns
- Update gitleaks security scanning rules
- Update CodeRabbit review configuration
- Update CLAUDE.md with latest project standards and rules

* docs: update Serena memory files and project metadata

- Remove obsolete PR-186 memory files
- Update project overview with current architecture
- Update project structure documentation
- Add quality standards and communication guidelines
- Add modular validator architecture documentation
- Add shellspec testing framework documentation
- Update project.yml with latest configuration

* feat: moved rules.yml to same folder as action, fixes

* fix(validators): correct token patterns and fix validator bugs

- Fix GitHub classic PAT pattern: ghp_ + 36 chars = 40 total
- Fix GitHub fine-grained PAT pattern: github_pat_ + 71 chars = 82 total
- Initialize result variable in convention_mapper to prevent UnboundLocalError
- Fix empty URL validation in network validator to return error
- Add GitHub expression check to docker architectures validator
- Update docker-build CustomValidator parallel-builds max to 16
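Expressed as shell regexes, the corrected length arithmetic (prefix + payload = total) looks like this; the character classes are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the corrected token patterns: ghp_ (4) + 36 = 40 chars total,
# github_pat_ (11) + 71 = 82 chars total. The [A-Za-z0-9_] character classes
# are illustrative assumptions, not GitHub's documented alphabet.
set -euo pipefail

is_valid_github_token() {
  local token="$1"
  [[ "$token" =~ ^ghp_[A-Za-z0-9]{36}$ ]] ||
    [[ "$token" =~ ^github_pat_[A-Za-z0-9_]{71}$ ]]
}
```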

* test(validators): fix test fixtures and expectations

- Fix token lengths in test data: github_pat 71 chars, ghp/gho 36 chars
- Update integration tests with correct token lengths
- Fix file validator test to expect absolute paths rejected for security
- Rename TestGenerator import to avoid pytest collection warning
- Update custom validator tests with correct input names
- Change docker-build tests: platforms->architectures, tags->tag
- Update docker-publish tests to match new registry enum validation

* test(shellspec): fix token lengths in test helpers and specs

- Fix default token lengths in spec_helper.sh to use correct 40-char format
- Update csharp-publish default tokens in 4 locations
- Update codeql-analysis default tokens in 2 locations
- Fix codeql-analysis test tokens to correct lengths (40 and 82 chars)
- Fix npm-publish fine-grained token test to use 82-char format

* feat(actions): add permissions documentation and environment variable usage

- Add permissions comments to all action.yml files documenting required GitHub permissions
- Convert direct input usage to environment variables in shell steps for security
- Add validation steps with proper error handling
- Update input descriptions and add security notes where applicable
- Ensure all actions follow consistent patterns for input validation
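The environment-variable pattern keeps expression interpolation out of the script body: the workflow passes `${{ inputs.* }}` through `env:`, and the script only ever reads the variable. The script half, in sketch form (`INPUT_TARGET_BRANCH` is an illustrative name):

```shell
#!/usr/bin/env bash
# Script half of the env-var pattern: the input arrives as an environment
# variable set by the composite action's env: block, never interpolated into
# the script text, so untrusted input stays inert data.
set -euo pipefail

describe_target() {
  # Quoted expansion: even a value like '$(rm -rf /)' is printed, not executed
  printf 'target branch: %s\n' "${INPUT_TARGET_BRANCH:?missing input}"
}
```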

* chore(workflows): update GitHub Actions workflow versions

- Update workflow action versions to latest
- Improve workflow consistency and maintainability

* docs(security): add comprehensive security policy

- Document security features and best practices
- Add vulnerability reporting process
- Include audit history and security testing information

* docs(memory): add GitHub workflow reference documentation

- Add GitHub Actions workflow commands reference
- Add GitHub workflow expressions guide
- Add secure workflow usage patterns and best practices

* chore: token optimization, code style conventions
* chore: cr fixes
* fix: trivy reported Dockerfile problems
* fix(security): more security fixes
* chore: dockerfile and make targets for publishing
* fix(ci): add creds to test-actions workflow
* fix: security fix and checkout step to codeql-new
* chore: test fixes
* fix(security): codeql detected issues
* chore: code review fixes, ReDoS protection
* style: apply MegaLinter fixes
* fix(ci): missing packages read permission
* fix(ci): add missing working directory setting
* chore: linting, add validation-regex to use regex_pattern
* chore: code review fixes
* chore(deps): update actions
* fix(security): codeql fixes
* chore(cr): apply cr comments
* chore: improve POSIX compatibility
* chore(cr): apply cr comments
* fix: codeql warning in Dockerfile, build failures
* chore(cr): apply cr comments
* fix: docker-testing-tools/Dockerfile
* chore(cr): apply cr comments
* fix(docker): update testing-tools image for GitHub Actions compatibility
* chore(cr): apply cr comments
* feat: add more tests, fix issues
* chore: fix codeql issues, update actions
* chore(cr): apply cr comments
* fix: integration tests
* chore: deduplication and fixes
* style: apply MegaLinter fixes
* chore(cr): apply cr comments
* feat: dry-run mode for generate-tests
* fix(ci): kcov installation
* chore(cr): apply cr comments
* chore(cr): apply cr comments
* chore(cr): apply cr comments
* chore(cr): apply cr comments, simplify action testing, use uv
* fix: run-tests.sh action counting
* chore(cr): apply cr comments
* chore(cr): apply cr comments
Commit 78fdad69e5 (parent d3cc8d4790), committed via GitHub on 2025-10-14 13:37:58 +03:00.
353 changed files with 55370 additions and 1714 deletions.

_tests/framework/setup.sh (executable, 239 lines)

@@ -0,0 +1,239 @@
#!/usr/bin/env bash
# Test environment setup utilities
# Provides common setup functions for GitHub Actions testing
set -euo pipefail
# Global test configuration
export GITHUB_ACTIONS=true
export GITHUB_WORKSPACE="${GITHUB_WORKSPACE:-$(pwd)}"
export GITHUB_REPOSITORY="${GITHUB_REPOSITORY:-ivuorinen/actions}"
export GITHUB_SHA="${GITHUB_SHA:-fake-sha}"
export GITHUB_REF="${GITHUB_REF:-refs/heads/main}"
export GITHUB_TOKEN="${GITHUB_TOKEN:-ghp_fake_token_for_testing}"
# Test framework directories
TEST_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
FRAMEWORK_DIR="${TEST_ROOT}/framework"
FIXTURES_DIR="${FRAMEWORK_DIR}/fixtures"
MOCKS_DIR="${FRAMEWORK_DIR}/mocks"
# Export directories for use by other scripts
export FIXTURES_DIR MOCKS_DIR
# Only create TEMP_DIR if not already set
if [ -z "${TEMP_DIR:-}" ]; then
  TEMP_DIR=$(mktemp -d) || exit 1
fi
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging functions
log_info() {
  echo -e "${BLUE}[INFO]${NC} $*" >&2
}

log_success() {
  echo -e "${GREEN}[SUCCESS]${NC} $*" >&2
}

log_warning() {
  echo -e "${YELLOW}[WARNING]${NC} $*" >&2
}

log_error() {
  echo -e "${RED}[ERROR]${NC} $*" >&2
}
# Setup test environment
setup_test_env() {
  local test_name="${1:-unknown}"
  log_info "Setting up test environment for: $test_name"

  # Create temporary directory for test
  export TEST_TEMP_DIR="${TEMP_DIR}/${test_name}"
  mkdir -p "$TEST_TEMP_DIR"

  # Create fake GitHub workspace
  export TEST_WORKSPACE="${TEST_TEMP_DIR}/workspace"
  mkdir -p "$TEST_WORKSPACE"

  # Setup fake GitHub outputs
  export GITHUB_OUTPUT="${TEST_TEMP_DIR}/github-output"
  export GITHUB_ENV="${TEST_TEMP_DIR}/github-env"
  export GITHUB_PATH="${TEST_TEMP_DIR}/github-path"
  export GITHUB_STEP_SUMMARY="${TEST_TEMP_DIR}/github-step-summary"

  # Initialize output files
  touch "$GITHUB_OUTPUT" "$GITHUB_ENV" "$GITHUB_PATH" "$GITHUB_STEP_SUMMARY"

  # Change to test workspace
  cd "$TEST_WORKSPACE"
  log_success "Test environment setup complete"
}
# Cleanup test environment
cleanup_test_env() {
  local test_name="${1:-unknown}"
  log_info "Cleaning up test environment for: $test_name"
  if [[ -n ${TEST_TEMP_DIR:-} && -d $TEST_TEMP_DIR ]]; then
    # Check if current directory is inside TEST_TEMP_DIR
    local current_dir
    current_dir="$(pwd)"
    if [[ "$current_dir" == "$TEST_TEMP_DIR"* ]]; then
      cd "$GITHUB_WORKSPACE" || cd /tmp || true
    fi
    rm -rf "$TEST_TEMP_DIR"
    log_success "Test environment cleanup complete"
  fi
}
# Cleanup framework temp directory
cleanup_framework_temp() {
  if [[ -n ${TEMP_DIR:-} && -d $TEMP_DIR ]]; then
    # Check if current directory is inside TEMP_DIR
    local current_dir
    current_dir="$(pwd)"
    if [[ "$current_dir" == "$TEMP_DIR"* ]]; then
      cd "$GITHUB_WORKSPACE" || cd /tmp || true
    fi
    rm -rf "$TEMP_DIR"
    log_info "Framework temp directory cleaned up"
  fi
}
# Create a mock GitHub repository structure
create_mock_repo() {
  local repo_type="${1:-node}"
  case "$repo_type" in
    "node")
      create_mock_node_repo
      ;;
    "php" | "python" | "go" | "dotnet")
      log_error "Unsupported repo type: $repo_type. Only 'node' is currently supported."
      return 1
      ;;
    *)
      log_warning "Unknown repo type: $repo_type, defaulting to node"
      create_mock_node_repo
      ;;
  esac
}
# Create mock Node.js repository
create_mock_node_repo() {
  cat >package.json <<EOF
{
  "name": "test-project",
  "version": "1.0.0",
  "engines": {
    "node": ">=18.0.0"
  },
  "scripts": {
    "test": "npm test",
    "lint": "eslint .",
    "build": "npm run build"
  },
  "devDependencies": {
    "eslint": "^8.0.0"
  }
}
EOF
  echo "node_modules/" >.gitignore
  mkdir -p src
  echo 'console.log("Hello, World!");' >src/index.js
}
# Removed unused mock repository functions:
# create_mock_php_repo, create_mock_python_repo, create_mock_go_repo, create_mock_dotnet_repo
# Only create_mock_node_repo is used and kept below
# Validate action outputs
validate_action_output() {
  local expected_key="$1"
  local expected_value="$2"
  local output_file="${3:-$GITHUB_OUTPUT}"
  if grep -q "^${expected_key}=${expected_value}$" "$output_file"; then
    log_success "Output validation passed: $expected_key=$expected_value"
    return 0
  else
    log_error "Output validation failed: $expected_key=$expected_value not found"
    log_error "Actual outputs:"
    cat "$output_file" >&2
    return 1
  fi
}
# Removed unused function: run_action_step

# Check if required tools are available
check_required_tools() {
  local tools=("git" "jq" "curl" "python3" "tar" "make")
  local missing_tools=()
  for tool in "${tools[@]}"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      missing_tools+=("$tool")
    fi
  done
  if [[ ${#missing_tools[@]} -gt 0 ]]; then
    log_error "Missing required tools: ${missing_tools[*]}"
    return 1
  fi
  if [[ -z ${SHELLSPEC_VERSION:-} ]]; then
    log_success "All required tools are available"
  fi
  return 0
}
# Initialize testing framework
init_testing_framework() {
  # Use file-based lock to prevent multiple initialization across ShellSpec processes
  local lock_file="${TEMP_DIR}/.framework_initialized"
  if [[ -f "$lock_file" ]]; then
    return 0
  fi

  # Silent initialization in ShellSpec environment to avoid output interference
  if [[ -z ${SHELLSPEC_VERSION:-} ]]; then
    log_info "Initializing GitHub Actions Testing Framework"
  fi

  # Check requirements
  check_required_tools

  # Temporary directory already created by mktemp above
  # Note: Cleanup trap removed to avoid conflicts with ShellSpec
  # Individual tests should call cleanup_test_env when needed

  # Mark as initialized with file lock
  touch "$lock_file"
  export TESTING_FRAMEWORK_INITIALIZED=1
  if [[ -z ${SHELLSPEC_VERSION:-} ]]; then
    log_success "Testing framework initialized"
  fi
}
# Export all functions for use in tests
export -f setup_test_env cleanup_test_env cleanup_framework_temp create_mock_repo
export -f create_mock_node_repo validate_action_output check_required_tools
export -f log_info log_success log_warning log_error
export -f init_testing_framework

_tests/framework/utils.sh (executable, 352 lines)

@@ -0,0 +1,352 @@
#!/usr/bin/env bash
# Common testing utilities for GitHub Actions
# Provides helper functions for testing action behavior
set -euo pipefail
# Source setup utilities
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=_tests/framework/setup.sh
source "${SCRIPT_DIR}/setup.sh"
# Action testing utilities
validate_action_yml() {
  local action_file="$1"
  local quiet_mode="${2:-false}"
  if [[ ! -f $action_file ]]; then
    [[ $quiet_mode == "false" ]] && log_error "Action file not found: $action_file"
    return 1
  fi

  # Check if it's valid YAML
  if ! yq eval '.' "$action_file" >/dev/null 2>&1; then
    # Compute path relative to this script for CWD independence
    local utils_dir
    utils_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    if ! uv run "$utils_dir/../shared/validation_core.py" --validate-yaml "$action_file" 2>/dev/null; then
      [[ $quiet_mode == "false" ]] && log_error "Invalid YAML in action file: $action_file"
      return 1
    fi
  fi
  [[ $quiet_mode == "false" ]] && log_success "Action YAML is valid: $action_file"
  return 0
}
# Extract action metadata using Python validation module
get_action_inputs() {
  local action_file="$1"
  local script_dir
  script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  uv run "$script_dir/../shared/validation_core.py" --inputs "$action_file"
}

get_action_outputs() {
  local action_file="$1"
  local script_dir
  script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  uv run "$script_dir/../shared/validation_core.py" --outputs "$action_file"
}

get_action_name() {
  local action_file="$1"
  local script_dir
  script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  uv run "$script_dir/../shared/validation_core.py" --name "$action_file"
}
# Test input validation using Python validation module
test_input_validation() {
  local action_dir="$1"
  local input_name="$2"
  local test_value="$3"
  local expected_result="${4:-success}" # success or failure

  # Normalize action_dir to absolute path before setup_test_env changes working directory
  action_dir="$(cd "$action_dir" && pwd)"
  log_info "Testing input validation: $input_name = '$test_value'"

  # Setup test environment
  setup_test_env "input-validation-${input_name}"

  # Use Python validation module via CLI
  local script_dir
  script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  local result="success"

  # Call validation_core CLI with proper argument passing (no injection risk)
  if ! uv run "$script_dir/../shared/validation_core.py" --validate "$action_dir" "$input_name" "$test_value" 2>&1; then
    result="failure"
  fi

  # Check result matches expectation
  if [[ $result == "$expected_result" ]]; then
    log_success "Input validation test passed: $input_name"
    cleanup_test_env "input-validation-${input_name}"
    return 0
  else
    log_error "Input validation test failed: $input_name (expected: $expected_result, got: $result)"
    cleanup_test_env "input-validation-${input_name}"
    return 1
  fi
}
# Removed: create_validation_script, create_python_validation_script,
# convert_github_expressions_to_env_vars, needs_python_validation, python_validate_input
# These functions are no longer needed as we use Python validation directly
# Test action outputs
test_action_outputs() {
  local action_dir="$1"
  shift

  # Normalize action_dir to absolute path before setup_test_env changes working directory
  action_dir="$(cd "$action_dir" && pwd)"
  log_info "Testing action outputs for: $(basename "$action_dir")"

  # Setup test environment
  setup_test_env "output-test-$(basename "$action_dir")"
  create_mock_repo "node"

  # Set up inputs
  while [[ $# -gt 1 ]]; do
    local key="$1"
    local value="$2"
    # Convert dashes to underscores and uppercase for environment variable names
    local env_key="${key//-/_}"
    local env_key_upper
    env_key_upper=$(echo "$env_key" | tr '[:lower:]' '[:upper:]')
    export "INPUT_${env_key_upper}"="$value"
    shift 2
  done

  # Run the action (simplified simulation)
  local action_file="${action_dir}/action.yml"
  local action_name
  action_name=$(get_action_name "$action_file")
  log_info "Simulating action: $action_name"

  # For now, we'll create mock outputs based on the action definition
  local outputs
  outputs=$(get_action_outputs "$action_file")

  # Create mock outputs
  while IFS= read -r output; do
    if [[ -n $output ]]; then
      echo "${output}=mock-value-$(date +%s)" >>"$GITHUB_OUTPUT"
    fi
  done <<<"$outputs"

  # Validate outputs exist
  local test_passed=true
  while IFS= read -r output; do
    if [[ -n $output ]]; then
      if ! grep -q "^${output}=" "$GITHUB_OUTPUT"; then
        log_error "Missing output: $output"
        test_passed=false
      else
        log_success "Output found: $output"
      fi
    fi
  done <<<"$outputs"

  cleanup_test_env "output-test-$(basename "$action_dir")"
  if [[ $test_passed == "true" ]]; then
    log_success "Output test passed for: $(basename "$action_dir")"
    return 0
  else
    log_error "Output test failed for: $(basename "$action_dir")"
    return 1
  fi
}
# Test external usage pattern
test_external_usage() {
local action_name="$1"
log_info "Testing external usage pattern for: $action_name"
# Create test workflow that uses external reference
local test_workflow_dir="${TEST_ROOT}/integration/workflows"
mkdir -p "$test_workflow_dir"
local workflow_file="${test_workflow_dir}/${action_name}-external-test.yml"
cat >"$workflow_file" <<EOF
name: External Usage Test - $action_name
on:
workflow_dispatch:
push:
paths:
- '$action_name/**'
jobs:
test-external-usage:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Test external usage
uses: ivuorinen/actions/${action_name}@main
with:
# Default inputs for testing
EOF
# Add common test inputs based on action type
case "$action_name" in
*-setup | *-version-detect)
echo " # Setup/version-detection action - no additional inputs needed" >>"$workflow_file"
;;
*-lint* | *-fix | *-publish | *-build)
# shellcheck disable=SC2016
echo ' token: ${{ github.token }}' >>"$workflow_file"
;;
*)
echo " # Generic test inputs" >>"$workflow_file"
;;
esac
log_success "Created external usage test workflow: $workflow_file"
return 0
}
# Performance test utilities
measure_action_time() {
local action_dir="$1"
shift
# Normalize action_dir to absolute path for consistent behavior
action_dir="$(cd "$action_dir" && pwd)"
log_info "Measuring execution time for: $(basename "$action_dir")"
local start_time
start_time=$(date +%s%N)
# Run the action test
test_action_outputs "$action_dir" "$@"
local result=$?
local end_time
end_time=$(date +%s%N)
local duration_ns=$((end_time - start_time))
local duration_ms=$((duration_ns / 1000000))
log_info "Action execution time: ${duration_ms}ms"
# Store performance data
echo "$(basename "$action_dir"),${duration_ms}" >>"${TEST_ROOT}/reports/performance.csv"
return $result
}
# Batch test runner
run_action_tests() {
local action_dir="$1"
local test_type="${2:-all}" # all, unit, integration, outputs
# Normalize action_dir to absolute path for consistent behavior
action_dir="$(cd "$action_dir" && pwd)"
local action_name
action_name=$(basename "$action_dir")
log_info "Running $test_type tests for: $action_name"
local test_results=()
# Handle "all" type by running all test types
if [[ $test_type == "all" ]]; then
# Run unit tests
log_info "Running unit tests..."
if validate_action_yml "${action_dir}/action.yml"; then
test_results+=("unit:PASS")
else
test_results+=("unit:FAIL")
fi
# Run output tests
log_info "Running output tests..."
if test_action_outputs "$action_dir"; then
test_results+=("outputs:PASS")
else
test_results+=("outputs:FAIL")
fi
# Run integration tests
log_info "Running integration tests..."
if test_external_usage "$action_name"; then
test_results+=("integration:PASS")
else
test_results+=("integration:FAIL")
fi
else
# Handle individual test types
case "$test_type" in
"unit")
log_info "Running unit tests..."
if validate_action_yml "${action_dir}/action.yml"; then
test_results+=("unit:PASS")
else
test_results+=("unit:FAIL")
fi
;;
"outputs")
log_info "Running output tests..."
if test_action_outputs "$action_dir"; then
test_results+=("outputs:PASS")
else
test_results+=("outputs:FAIL")
fi
;;
"integration")
log_info "Running integration tests..."
if test_external_usage "$action_name"; then
test_results+=("integration:PASS")
else
test_results+=("integration:FAIL")
fi
;;
esac
fi
# Report results
log_info "Test results for $action_name:"
for result in "${test_results[@]}"; do
local test_name="${result%:*}"
local status="${result#*:}"
if [[ $status == "PASS" ]]; then
log_success " $test_name: $status"
else
log_error " $test_name: $status"
fi
done
# Check if all tests passed
if [[ ! " ${test_results[*]} " =~ :FAIL ]]; then
log_success "All tests passed for: $action_name"
return 0
else
log_error "Some tests failed for: $action_name"
return 1
fi
}
# Export all functions
export -f validate_action_yml get_action_inputs get_action_outputs get_action_name
export -f test_input_validation test_action_outputs test_external_usage measure_action_time run_action_tests
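As a standalone illustration (not part of the framework itself), the `INPUT_*` environment-variable convention that `test_action_outputs` relies on — dashes become underscores, the name is uppercased, and an `INPUT_` prefix is added, mirroring how GitHub Actions exposes action inputs — can be sketched as follows; the input name `max-retries` is made up for the example:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: convert a hypothetical action input name to its INPUT_* env var form.
key="max-retries"
env_key="${key//-/_}" # dashes -> underscores
env_key_upper=$(echo "$env_key" | tr '[:lower:]' '[:upper:]') # uppercase
echo "INPUT_${env_key_upper}" # prints INPUT_MAX_RETRIES
```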

_tests/framework/validation.py (new file, 885 lines, executable)
#!/usr/bin/env python3
"""
GitHub Actions Validation Module
This module provides advanced validation capabilities for GitHub Actions testing,
specifically handling PCRE regex patterns with lookahead/lookbehind assertions
that are not supported in bash's basic regex engine.
Features:
- PCRE-compatible regex validation using Python's re module
- GitHub token format validation with proper lookahead support
- Input sanitization and security validation
- Complex pattern detection and validation
"""
from __future__ import annotations
from pathlib import Path
import re
import sys
import yaml # pylint: disable=import-error
class ActionValidator:
"""Handles validation of GitHub Action inputs using Python regex engine."""
# Common regex patterns that require PCRE features
COMPLEX_PATTERNS = {
"lookahead": r"\(\?\=",
"lookbehind": r"\(\?\<=",
"negative_lookahead": r"\(\?\!",
"named_groups": r"\(\?P<\w+>",
"conditional": r"\(\?\(",
}
# Standardized token patterns (resolved GitHub documentation discrepancies)
# Fine-grained PATs are 50-255 characters with underscores (github_pat_[A-Za-z0-9_]{50,255})
TOKEN_PATTERNS = {
"classic": r"^gh[efpousr]_[a-zA-Z0-9]{36}$",
"fine_grained": r"^github_pat_[A-Za-z0-9_]{50,255}$", # 50-255 chars with underscores
"installation": r"^ghs_[a-zA-Z0-9]{36}$",
"npm_classic": r"^npm_[a-zA-Z0-9]{40,}$", # NPM classic tokens
}
def __init__(self):
"""Initialize the validator."""
def is_complex_pattern(self, pattern: str) -> bool:
"""
Check if a regex pattern requires PCRE features not supported in bash.
Args:
pattern: The regex pattern to check
Returns:
True if pattern requires PCRE features, False otherwise
"""
for regex in self.COMPLEX_PATTERNS.values():
if re.search(regex, pattern):
return True
return False
def validate_github_token(self, token: str, action_dir: str = "") -> tuple[bool, str]:
"""
Validate GitHub token format using proper PCRE patterns.
Args:
token: The token to validate
action_dir: The action directory (for context-specific validation)
Returns:
Tuple of (is_valid, error_message)
"""
# Actions that require tokens shouldn't accept empty values
action_name = Path(action_dir).name
if action_name in ["csharp-publish", "eslint-fix", "pr-lint", "pre-commit"]:
if not token or token.strip() == "":
return False, "Token cannot be empty"
# Other actions may accept empty tokens (they'll use defaults)
elif not token or token.strip() == "":
return True, ""
# Check for GitHub Actions expression (should be allowed)
if token == "${{ github.token }}" or (token.startswith("${{") and token.endswith("}}")):
return True, ""
# Check for environment variable reference (e.g., $GITHUB_TOKEN)
if re.match(r"^\$[A-Za-z_][A-Za-z0-9_]*$", token):
return True, ""
# Check against all known token patterns
for pattern in self.TOKEN_PATTERNS.values():
if re.match(pattern, token):
return True, ""
return (
False,
"Invalid token format. Expected: gh[efpousr]_* (36 chars), "
"github_pat_[A-Za-z0-9_]* (50-255 chars), ghs_* (36 chars), or npm_* (40+ chars)",
)
def validate_namespace_with_lookahead(self, namespace: str) -> tuple[bool, str]:
"""
Validate namespace using the original lookahead pattern from csharp-publish.
Args:
namespace: The namespace to validate
Returns:
Tuple of (is_valid, error_message)
"""
if not namespace or namespace.strip() == "":
return False, "Namespace cannot be empty"
# Original pattern: ^[a-zA-Z0-9]([a-zA-Z0-9]|-(?=[a-zA-Z0-9])){0,38}$
# This ensures hyphens are only allowed when followed by alphanumeric characters
pattern = r"^[a-zA-Z0-9]([a-zA-Z0-9]|-(?=[a-zA-Z0-9])){0,38}$"
if re.match(pattern, namespace):
return True, ""
return (
False,
"Invalid namespace format. Must be 1-39 characters, "
"alphanumeric and hyphens, no trailing hyphens",
)
def validate_input_pattern(self, input_value: str, pattern: str) -> tuple[bool, str]:
"""
Validate an input value against a regex pattern using Python's re module.
Args:
input_value: The value to validate
pattern: The regex pattern to match against
Returns:
Tuple of (is_valid, error_message)
"""
try:
if re.match(pattern, input_value):
return True, ""
return False, f"Value '{input_value}' does not match required pattern: {pattern}"
except re.error as e:
return False, f"Invalid regex pattern: {pattern} - {e!s}"
def validate_security_patterns(self, input_value: str) -> tuple[bool, str]:
"""
Check for common security injection patterns.
Args:
input_value: The value to validate
Returns:
Tuple of (is_valid, error_message)
"""
# Allow empty values for most inputs (they're often optional)
if not input_value or input_value.strip() == "":
return True, ""
# Common injection patterns
injection_patterns = [
r";\s*(rm|del|format|shutdown|reboot)",
r"&&\s*(rm|del|format|shutdown|reboot)",
r"\|\s*(rm|del|format|shutdown|reboot)",
r"`[^`]*`", # Command substitution
r"\$\([^)]*\)", # Command substitution
# Path traversal only dangerous when combined with commands
r"\.\./.*;\s*(rm|del|format|shutdown|reboot)",
r"\\\.\\\.\\.*;\s*(rm|del|format|shutdown|reboot)",
]
for pattern in injection_patterns:
if re.search(pattern, input_value, re.IGNORECASE):
return False, f"Potential security injection pattern detected: {pattern}"
return True, ""
def extract_validation_patterns(action_file: str) -> dict[str, list[str]]:
"""
Extract validation patterns from an action.yml file.
Args:
action_file: Path to the action.yml file
Returns:
Dictionary mapping input names to their validation patterns
"""
patterns = {}
try:
with Path(action_file).open(encoding="utf-8") as f:
content = f.read()
# Look for validation patterns in the shell scripts
validation_block_match = re.search(
r"- name:\s*Validate\s+Inputs.*?run:\s*\|(.+?)(?=- name:|$)",
content,
re.DOTALL | re.IGNORECASE,
)
if validation_block_match:
validation_script = validation_block_match.group(1)
# Extract regex patterns from the validation script
regex_matches = re.findall(
r'\[\[\s*["\']?\$\{\{\s*inputs\.(\w+(?:-\w+)*)\s*\}\}["\']?\s*=~\s*(.+?)\]\]',
validation_script,
re.DOTALL | re.IGNORECASE,
)
for input_name, pattern in regex_matches:
# Clean up the pattern
pattern = pattern.strip().strip("\"'")
if input_name not in patterns:
patterns[input_name] = []
patterns[input_name].append(pattern)
except Exception as e: # pylint: disable=broad-exception-caught
print(f"Error extracting patterns from {action_file}: {e}", file=sys.stderr)
return patterns
def get_input_property(action_file: str, input_name: str, property_check: str) -> str: # pylint: disable=too-many-return-statements
"""
Get a property of an input from an action.yml file.
This function replaces the functionality of check_input.py.
Args:
action_file: Path to the action.yml file
input_name: Name of the input to check
property_check: Property to check (required, optional, default, description, all_optional)
Returns:
- For 'required': 'required' or 'optional'
- For 'optional': 'optional' or 'required'
- For 'default': the default value or 'no-default'
- For 'description': the description or 'no-description'
- For 'all_optional': 'none' if no required inputs, else comma-separated list of
required inputs
"""
try:
with Path(action_file).open(encoding="utf-8") as f:
data = yaml.safe_load(f)
inputs = data.get("inputs", {})
input_data = inputs.get(input_name, {})
if property_check in ["required", "optional"]:
is_required = input_data.get("required") in [True, "true"]
if property_check == "required":
return "required" if is_required else "optional"
# optional
return "optional" if not is_required else "required"
if property_check == "default":
default_value = input_data.get("default", "")
return str(default_value) if default_value else "no-default"
if property_check == "description":
description = input_data.get("description", "")
return description if description else "no-description"
if property_check == "all_optional":
# Check if all inputs are optional (none are required)
required_inputs = [k for k, v in inputs.items() if v.get("required") in [True, "true"]]
return "none" if not required_inputs else ",".join(required_inputs)
return f"unknown-property-{property_check}"
except Exception as e: # pylint: disable=broad-exception-caught
return f"error: {e}"
def get_action_inputs(action_file: str) -> list[str]:
"""
Get all input names from an action.yml file.
This function replaces the bash version in utils.sh.
Args:
action_file: Path to the action.yml file
Returns:
List of input names
"""
try:
with Path(action_file).open(encoding="utf-8") as f:
data = yaml.safe_load(f)
inputs = data.get("inputs", {})
return list(inputs.keys())
except Exception:
return []
def get_action_outputs(action_file: str) -> list[str]:
"""
Get all output names from an action.yml file.
This function replaces the bash version in utils.sh.
Args:
action_file: Path to the action.yml file
Returns:
List of output names
"""
try:
with Path(action_file).open(encoding="utf-8") as f:
data = yaml.safe_load(f)
outputs = data.get("outputs", {})
return list(outputs.keys())
except Exception:
return []
def get_action_name(action_file: str) -> str:
"""
Get the action name from an action.yml file.
This function replaces the bash version in utils.sh.
Args:
action_file: Path to the action.yml file
Returns:
Action name or "Unknown" if not found
"""
try:
with Path(action_file).open(encoding="utf-8") as f:
data = yaml.safe_load(f)
return data.get("name", "Unknown")
except Exception:
return "Unknown"
def _show_usage():
"""Show usage information and exit."""
print("Usage:")
print(
" Validation mode: python3 validation.py <action_dir> <input_name> <input_value> "
"[expected_result]",
)
print(
" Property mode: python3 validation.py --property <action_file> <input_name> <property>",
)
print(" List inputs: python3 validation.py --inputs <action_file>")
print(" List outputs: python3 validation.py --outputs <action_file>")
print(" Get name: python3 validation.py --name <action_file>")
sys.exit(1)
def _parse_property_mode():
"""Parse property mode arguments."""
if len(sys.argv) != 5:
print(
"Property mode usage: python3 validation.py --property <action_file> "
"<input_name> <property>",
)
print("Properties: required, optional, default, description, all_optional")
sys.exit(1)
return {
"mode": "property",
"action_file": sys.argv[2],
"input_name": sys.argv[3],
"property": sys.argv[4],
}
def _parse_single_file_mode(mode_name):
"""Parse modes that take a single action file argument."""
if len(sys.argv) != 3:
print(f"{mode_name.title()} mode usage: python3 validation.py --{mode_name} <action_file>")
sys.exit(1)
return {
"mode": mode_name,
"action_file": sys.argv[2],
}
def _parse_validation_mode():
"""Parse validation mode arguments."""
if len(sys.argv) < 4:
print(
"Validation mode usage: python3 validation.py <action_dir> <input_name> "
"<input_value> [expected_result]",
)
print("Expected result: 'success' or 'failure' (default: auto-detect)")
sys.exit(1)
return {
"mode": "validation",
"action_dir": sys.argv[1],
"input_name": sys.argv[2],
"input_value": sys.argv[3],
"expected_result": sys.argv[4] if len(sys.argv) > 4 else None,
}
def _parse_command_line_args():
"""Parse and validate command line arguments."""
if len(sys.argv) < 2:
_show_usage()
mode_arg = sys.argv[1]
if mode_arg == "--property":
return _parse_property_mode()
if mode_arg in ["--inputs", "--outputs", "--name"]:
return _parse_single_file_mode(mode_arg[2:]) # Remove '--' prefix
return _parse_validation_mode()
def _resolve_action_file_path(action_dir: str) -> str:
"""Resolve the path to the action.yml file."""
action_dir_path = Path(action_dir)
if not action_dir_path.is_absolute():
# If relative, assume we're in _tests/framework and actions are at ../../
script_dir = Path(__file__).resolve().parent
project_root = script_dir.parent.parent
return str(project_root / action_dir / "action.yml")
return f"{action_dir}/action.yml"
def _validate_docker_build_input(input_name: str, input_value: str) -> tuple[bool, str]:
"""Handle special validation for docker-build inputs."""
if input_name == "build-args" and input_value == "":
return True, ""
# All other docker-build inputs pass through centralized validation
return True, ""
# Validation function registry
def _validate_boolean(input_value: str, input_name: str) -> tuple[bool, str]:
"""Validate boolean input."""
if input_value.lower() not in ["true", "false"]:
return False, f"Input '{input_name}' must be 'true' or 'false'"
return True, ""
def _validate_docker_architectures(input_value: str) -> tuple[bool, str]:
"""Validate docker architectures format."""
if input_value and not re.match(r"^[a-zA-Z0-9/_,.-]+$", input_value):
return False, f"Invalid docker architectures format: {input_value}"
return True, ""
def _validate_registry(input_value: str, action_name: str) -> tuple[bool, str]:
"""Validate registry format."""
if action_name == "docker-publish":
if input_value not in ["dockerhub", "github", "both"]:
return False, "Invalid registry value. Must be 'dockerhub', 'github', or 'both'"
elif input_value and not re.match(r"^[\w.-]+(:\d+)?$", input_value):
return False, f"Invalid registry format: {input_value}"
return True, ""
def _validate_file_path(input_value: str) -> tuple[bool, str]:
"""Validate file path format."""
if input_value and re.search(r"[;&|`$()]", input_value):
return False, f"Potential injection detected in file path: {input_value}"
if input_value and not re.match(r"^[a-zA-Z0-9._/,~-]+$", input_value):
return False, f"Invalid file path format: {input_value}"
return True, ""
def _validate_backoff_strategy(input_value: str) -> tuple[bool, str]:
"""Validate backoff strategy."""
if input_value not in ["linear", "exponential", "fixed"]:
return False, "Invalid backoff strategy. Must be 'linear', 'exponential', or 'fixed'"
return True, ""
def _validate_shell_type(input_value: str) -> tuple[bool, str]:
"""Validate shell type."""
if input_value not in ["bash", "sh"]:
return False, "Invalid shell type. Must be 'bash' or 'sh'"
return True, ""
def _validate_docker_image_name(input_value: str) -> tuple[bool, str]:
"""Validate docker image name format."""
if input_value and not re.match(
r"^[a-z0-9]+((\.|_|__|-+)[a-z0-9]+)*(/[a-z0-9]+((\.|_|__|-+)[a-z0-9]+)*)*$",
input_value,
):
return False, f"Invalid docker image name format: {input_value}"
return True, ""
def _validate_docker_tag(input_value: str) -> tuple[bool, str]:
"""Validate docker tag format."""
if input_value:
tags = [tag.strip() for tag in input_value.split(",")]
for tag in tags:
if not re.match(r"^[a-zA-Z0-9]([a-zA-Z0-9._-]*[a-zA-Z0-9])?$", tag):
return False, f"Invalid docker tag format: {tag}"
return True, ""
def _validate_docker_password(input_value: str) -> tuple[bool, str]:
"""Validate docker password."""
if input_value and len(input_value) < 8:
return False, "Docker password must be at least 8 characters long"
return True, ""
def _validate_go_version(input_value: str) -> tuple[bool, str]:
"""Validate Go version format."""
if input_value in ["stable", "latest"]:
return True, ""
if input_value and not re.match(r"^v?\d+\.\d+(\.\d+)?", input_value):
return False, f"Invalid Go version format: {input_value}"
return True, ""
def _validate_timeout_with_unit(input_value: str) -> tuple[bool, str]:
"""Validate timeout with unit format."""
if input_value and not re.match(r"^\d+[smh]$", input_value):
return False, "Invalid timeout format. Use format like '5m', '300s', or '1h'"
return True, ""
def _validate_linter_list(input_value: str) -> tuple[bool, str]:
"""Validate linter list format."""
if input_value and re.search(r",\s+", input_value):
return False, "Invalid linter list format. Use comma-separated values without spaces"
return True, ""
def _validate_version_types(input_value: str) -> tuple[bool, str]:
"""Validate semantic/calver/flexible version formats."""
if input_value.lower() == "latest":
return True, ""
if input_value.startswith("v"):
return False, f"Version should not start with 'v': {input_value}"
if not re.match(r"^\d+\.\d+(\.\d+)?", input_value):
return False, f"Invalid version format: {input_value}"
return True, ""
def _validate_file_pattern(input_value: str) -> tuple[bool, str]:
"""Validate file pattern format."""
if input_value and ("../" in input_value or "\\..\\" in input_value):
return False, f"Path traversal not allowed in file patterns: {input_value}"
if input_value and input_value.startswith("/"):
return False, f"Absolute paths not allowed in file patterns: {input_value}"
if input_value and re.search(r"[;&|`$()]", input_value):
return False, f"Potential injection detected in file pattern: {input_value}"
return True, ""
def _validate_report_format(input_value: str) -> tuple[bool, str]:
"""Validate report format."""
if input_value not in ["json", "sarif"]:
return False, "Invalid report format. Must be 'json' or 'sarif'"
return True, ""
def _validate_plugin_list(input_value: str) -> tuple[bool, str]:
"""Validate plugin list format."""
if input_value and re.search(r"[;&|`$()]", input_value):
return False, f"Potential injection detected in plugin list: {input_value}"
return True, ""
def _validate_prefix(input_value: str) -> tuple[bool, str]:
"""Validate prefix format."""
if input_value and re.search(r"[;&|`$()]", input_value):
return False, f"Potential injection detected in prefix: {input_value}"
return True, ""
def _validate_terraform_version(input_value: str) -> tuple[bool, str]:
"""Validate terraform version format."""
if input_value and input_value.lower() == "latest":
return True, ""
if input_value and input_value.startswith("v"):
return False, f"Terraform version should not start with 'v': {input_value}"
if input_value and not re.match(r"^\d+\.\d+(\.\d+)?", input_value):
return False, f"Invalid terraform version format: {input_value}"
return True, ""
def _validate_php_extensions(input_value: str) -> tuple[bool, str]:
"""Validate PHP extensions format."""
if input_value and re.search(r"[;&|`$()@#]", input_value):
return False, f"Potential injection detected in PHP extensions: {input_value}"
if input_value and not re.match(r"^[a-zA-Z0-9_,\s]+$", input_value):
return False, f"Invalid PHP extensions format: {input_value}"
return True, ""
def _validate_coverage_driver(input_value: str) -> tuple[bool, str]:
"""Validate coverage driver."""
if input_value not in ["none", "xdebug", "pcov", "xdebug3"]:
return False, "Invalid coverage driver. Must be 'none', 'xdebug', 'pcov', or 'xdebug3'"
return True, ""
# Validation registry mapping types to functions and their argument requirements
VALIDATION_REGISTRY = {
"boolean": (_validate_boolean, "input_name"),
"docker_architectures": (_validate_docker_architectures, "value_only"),
"registry": (_validate_registry, "action_name"),
"file_path": (_validate_file_path, "value_only"),
"backoff_strategy": (_validate_backoff_strategy, "value_only"),
"shell_type": (_validate_shell_type, "value_only"),
"docker_image_name": (_validate_docker_image_name, "value_only"),
"docker_tag": (_validate_docker_tag, "value_only"),
"docker_password": (_validate_docker_password, "value_only"),
"go_version": (_validate_go_version, "value_only"),
"timeout_with_unit": (_validate_timeout_with_unit, "value_only"),
"linter_list": (_validate_linter_list, "value_only"),
"semantic_version": (_validate_version_types, "value_only"),
"calver_version": (_validate_version_types, "value_only"),
"flexible_version": (_validate_version_types, "value_only"),
"file_pattern": (_validate_file_pattern, "value_only"),
"report_format": (_validate_report_format, "value_only"),
"plugin_list": (_validate_plugin_list, "value_only"),
"prefix": (_validate_prefix, "value_only"),
"terraform_version": (_validate_terraform_version, "value_only"),
"php_extensions": (_validate_php_extensions, "value_only"),
"coverage_driver": (_validate_coverage_driver, "value_only"),
}
def _load_validation_rules(action_dir: str) -> tuple[dict, bool]:
"""Load validation rules for an action."""
action_name = Path(action_dir).name
script_dir = Path(__file__).resolve().parent
project_root = script_dir.parent.parent
rules_file = project_root / "validate-inputs" / "rules" / f"{action_name}.yml"
if not rules_file.exists():
return {}, False
try:
with Path(rules_file).open(encoding="utf-8") as f:
return yaml.safe_load(f), True
except Exception as e: # pylint: disable=broad-exception-caught
print(f"Warning: Could not load centralized rules for {action_name}: {e}", file=sys.stderr)
return {}, False
def _get_validation_type(input_name: str, rules_data: dict) -> str | None:
"""Get validation type for an input from rules."""
conventions = rules_data.get("conventions", {})
overrides = rules_data.get("overrides", {})
# Check overrides first, then conventions
if input_name in overrides:
return overrides[input_name]
if input_name in conventions:
return conventions[input_name]
return None
def _validate_with_centralized_rules(
input_name: str,
input_value: str,
action_dir: str,
validator: ActionValidator,
) -> tuple[bool, str, bool]:
"""Validate input using centralized validation rules."""
rules_data, rules_loaded = _load_validation_rules(action_dir)
if not rules_loaded:
return True, "", False
action_name = Path(action_dir).name
required_inputs = rules_data.get("required_inputs", [])
# Check if input is required and empty
if input_name in required_inputs and (not input_value or input_value.strip() == ""):
return False, f"Required input '{input_name}' cannot be empty", True
validation_type = _get_validation_type(input_name, rules_data)
if validation_type is None:
return True, "", False
# Handle special validator-based types
if validation_type == "github_token":
token_valid, token_error = validator.validate_github_token(input_value, action_dir)
return token_valid, token_error, True
if validation_type == "namespace_with_lookahead":
ns_valid, ns_error = validator.validate_namespace_with_lookahead(input_value)
return ns_valid, ns_error, True
# Use registry for other validation types
if validation_type in VALIDATION_REGISTRY:
validate_func, arg_type = VALIDATION_REGISTRY[validation_type]
if arg_type == "value_only":
is_valid, error_msg = validate_func(input_value)
elif arg_type == "input_name":
is_valid, error_msg = validate_func(input_value, input_name)
elif arg_type == "action_name":
is_valid, error_msg = validate_func(input_value, action_name)
else:
return False, f"Unknown validation argument type: {arg_type}", True
return is_valid, error_msg, True
return True, "", True
def _validate_special_inputs(
input_name: str,
input_value: str,
action_dir: str,
validator: ActionValidator,
) -> tuple[bool, str, bool]:
"""Handle special input validation cases."""
action_name = Path(action_dir).name
if action_name == "docker-build":
is_valid, error_message = _validate_docker_build_input(input_name, input_value)
return is_valid, error_message, True
if input_name == "token" and action_name in [
"csharp-publish",
"eslint-fix",
"pr-lint",
"pre-commit",
]:
# Special handling for GitHub tokens
token_valid, token_error = validator.validate_github_token(input_value, action_dir)
return token_valid, token_error, True
if input_name == "namespace" and action_name == "csharp-publish":
# Special handling for namespace with lookahead
ns_valid, ns_error = validator.validate_namespace_with_lookahead(input_value)
return ns_valid, ns_error, True
return True, "", False
def _validate_with_patterns(
input_name: str,
input_value: str,
patterns: dict,
validator: ActionValidator,
) -> tuple[bool, str, bool]:
"""Validate input using extracted patterns."""
if input_name not in patterns:
return True, "", False
for pattern in patterns[input_name]:
pattern_valid, pattern_error = validator.validate_input_pattern(
input_value,
pattern,
)
if not pattern_valid:
return False, pattern_error, True
return True, "", True
def _handle_test_mode(expected_result: str, *, is_valid: bool) -> None:
"""Handle test mode output and exit."""
if (expected_result == "success" and is_valid) or (
expected_result == "failure" and not is_valid
):
sys.exit(0) # Test expectation met
sys.exit(1) # Test expectation not met
def _handle_validation_mode(*, is_valid: bool, error_message: str) -> None:
"""Handle validation mode output and exit."""
if is_valid:
print("VALID")
sys.exit(0)
print(f"INVALID: {error_message}")
sys.exit(1)
def _handle_property_mode(args: dict) -> None:
"""Handle property checking mode."""
result = get_input_property(args["action_file"], args["input_name"], args["property"])
print(result)
def _handle_inputs_mode(args: dict) -> None:
"""Handle inputs listing mode."""
inputs = get_action_inputs(args["action_file"])
for input_name in inputs:
print(input_name)
def _handle_outputs_mode(args: dict) -> None:
"""Handle outputs listing mode."""
outputs = get_action_outputs(args["action_file"])
for output_name in outputs:
print(output_name)
def _handle_name_mode(args: dict) -> None:
"""Handle name getting mode."""
name = get_action_name(args["action_file"])
print(name)
def _perform_validation_steps(args: dict) -> tuple[bool, str]:
"""Perform all validation steps and return result."""
# Resolve action file path
action_file = _resolve_action_file_path(args["action_dir"])
# Initialize validator and extract patterns
validator = ActionValidator()
patterns = extract_validation_patterns(action_file)
# Perform security validation (always performed)
security_valid, security_error = validator.validate_security_patterns(args["input_value"])
if not security_valid:
return False, security_error
# Perform input-specific validation
# Check centralized rules first
is_valid, error_message, has_validation = _validate_with_centralized_rules(
args["input_name"],
args["input_value"],
args["action_dir"],
validator,
)
# If no centralized validation, check special input cases
if not has_validation:
is_valid, error_message, has_validation = _validate_special_inputs(
args["input_name"],
args["input_value"],
args["action_dir"],
validator,
)
# If no special validation, try pattern-based validation
if not has_validation:
is_valid, error_message, has_validation = _validate_with_patterns(
args["input_name"],
args["input_value"],
patterns,
validator,
)
return is_valid, error_message
def _handle_validation_mode_main(args: dict) -> None:
"""Handle validation mode from main function."""
is_valid, error_message = _perform_validation_steps(args)
# Handle output based on mode
if args["expected_result"]:
_handle_test_mode(args["expected_result"], is_valid=is_valid)
_handle_validation_mode(is_valid=is_valid, error_message=error_message)
def main():
"""Command-line interface for the validation module."""
args = _parse_command_line_args()
# Dispatch to appropriate mode handler
mode_handlers = {
"property": _handle_property_mode,
"inputs": _handle_inputs_mode,
"outputs": _handle_outputs_mode,
"name": _handle_name_mode,
"validation": _handle_validation_mode_main,
}
if args["mode"] in mode_handlers:
mode_handlers[args["mode"]](args)
else:
print(f"Unknown mode: {args['mode']}")
sys.exit(1)
if __name__ == "__main__":
main()
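For illustration, the token regexes in `ActionValidator.TOKEN_PATTERNS` can be exercised directly with Python's `re` module. This is a standalone sketch reusing two of the patterns defined above; the token values are fabricated for the example:

```python
import re

# Two of the patterns from ActionValidator.TOKEN_PATTERNS above.
TOKEN_PATTERNS = {
    "classic": r"^gh[efpousr]_[a-zA-Z0-9]{36}$",
    "fine_grained": r"^github_pat_[A-Za-z0-9_]{50,255}$",
}


def matches_known_token(token: str) -> bool:
    """Return True if the token matches any known token pattern."""
    return any(re.match(pattern, token) for pattern in TOKEN_PATTERNS.values())


print(matches_known_token("ghp_" + "a" * 36))         # True  (classic)
print(matches_known_token("github_pat_" + "x" * 60))  # True  (fine-grained)
print(matches_known_token("not-a-token"))             # False
```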