actions/_tests/run-tests.sh
Commit ab371bdebf by Ismo Vuorinen: feat: simplify actions (#353)
* feat: first pass simplification

* refactor: simplify actions repository structure

Major simplification reducing actions from 44 to 30:

Consolidations:
- Merge biome-check + biome-fix → biome-lint (mode: check/fix)
- Merge eslint-check + eslint-fix → eslint-lint (mode: check/fix)
- Merge prettier-check + prettier-fix → prettier-lint (mode: check/fix)
- Merge 5 version-detect actions → language-version-detect (language param)

Removals:
- common-file-check, common-retry (better served by external tools)
- docker-publish-gh, docker-publish-hub (consolidated into docker-publish)
- github-release (redundant with existing tooling)
- set-git-config (no longer needed)
- version-validator (functionality moved to language-version-detect)

Fixes:
- Rewrite docker-publish to use official Docker actions directly
- Update validate-inputs example (eslint-fix → eslint-lint)
- Update tests and documentation for new structure

Result: ~6,000 lines removed, cleaner action catalog, maintained functionality.

* refactor: complete action simplification and cleanup

Remove deprecated actions and update remaining actions:

Removed:
- common-file-check, common-retry: utility actions
- docker-publish-gh, docker-publish-hub: replaced by docker-publish wrapper
- github-release, version-validator, set-git-config: no longer needed
- Various version-detect actions: replaced by language-version-detect

Updated:
- docker-publish: rewrite as simple wrapper using official Docker actions
- validate-inputs: update example (eslint-fix → eslint-lint)
- Multiple actions: update configurations and remove deprecated dependencies
- Tests: update integration/unit tests for new structure
- Documentation: update README, remove test for deleted actions

Configuration updates:
- Linter configs, ignore files for new structure
- Makefile, pyproject.toml updates

* fix: enforce POSIX compliance in GitHub workflows

Convert all workflow shell scripts to POSIX-compliant sh:

Critical fixes:
- Replace bash with sh in all shell declarations
- Replace [[ with [ for test conditions
- Replace == with = for string comparisons
- Replace set -euo pipefail with set -eu
- Split compound AND conditions into separate [ ] tests
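The substitutions above can be sketched with a minimal before/after pair (variable names are illustrative, not from the workflows):

```shell
#!/bin/sh
# POSIX-compliant version of a typical bash-only conditional.
# bash:  [[ "$MODE" == "fix" && "$STRICT" == "true" ]]
# sh:    separate [ ] tests joined by &&, with = instead of ==
set -eu # POSIX replacement for bash's 'set -euo pipefail'

MODE="fix"
STRICT="true"

if [ "$MODE" = "fix" ] && [ "$STRICT" = "true" ]; then
  echo "fix mode enabled"
fi
```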

Files updated:
- .github/workflows/test-actions.yml (7 shell declarations, 10 test operators)
- .github/workflows/security-suite.yml (set -eu)
- .github/workflows/action-security.yml (2 shell declarations)
- .github/workflows/pr-lint.yml (3 shell declarations)
- .github/workflows/issue-stats.yml (1 shell declaration)

Ensures compatibility with minimal sh implementations and aligns with
CLAUDE.md standards requiring POSIX shell compliance across all scripts.

All tests pass: 764 pytest tests, 100% coverage.

* fix: add missing permissions for private repository support

Add critical permissions to pr-lint workflow for private repositories:

Workflow-level permissions:
+ packages: read - Access private npm/PyPI/Composer packages

Job-level permissions:
+ packages: read - Access private packages during dependency installation
+ checks: write - Create and update check runs

Fixes failures when:
- Installing private npm packages from GitHub Packages
- Installing private Composer dependencies
- Installing private Python packages
- Creating status checks with github-script

Valid permission scopes per actionlint:
actions, attestations, checks, contents, deployments, discussions,
id-token, issues, models, packages, pages, pull-requests,
repository-projects, security-events, statuses

Note: "workflows" and "metadata" are NOT valid permission scopes
(they are PAT-only scopes or auto-granted respectively).

* docs: update readmes

* fix: replace bash-specific 'source' with POSIX '.' command

Replace all occurrences of 'source' with '.' (dot) for POSIX compliance:

Changes in python-lint-fix/action.yml:
- Line 165: source .venv/bin/activate → . .venv/bin/activate
- Line 179: source .venv/bin/activate → . .venv/bin/activate
- Line 211: source .venv/bin/activate → . .venv/bin/activate

Also fixed bash-specific test operator:
- Line 192: [[ "$FAIL_ON_ERROR" == "true" ]] → [ "$FAIL_ON_ERROR" = "true" ]

The 'source' command is bash-specific. POSIX sh uses '.' (dot) to source files.
Both commands have identical functionality but '.' is portable across all
POSIX-compliant shells.
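A self-contained illustration of the substitution (the temp file and variable here are illustrative, not from python-lint-fix):

```shell
#!/bin/sh
# '.' is the POSIX builtin for reading and executing another file in
# the current shell; 'source' is the bash-specific spelling of it.
set -eu

# Create a throwaway file to source (illustrative only).
tmpfile=$(mktemp)
echo 'GREETING="hello from sourced file"' >"$tmpfile"

# bash-only:  source "$tmpfile"
# POSIX sh:
. "$tmpfile"

echo "$GREETING"
rm -f "$tmpfile"
```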

* security: fix code injection vulnerability in docker-publish

Fix CodeQL code injection warning (CWE-094, CWE-095, CWE-116):

Issue: inputs.context was used directly in GitHub Actions expression
without sanitization at line 194, allowing potential code injection
by external users.

Fix: Use environment variable indirection to prevent expression injection:
- Added env.BUILD_CONTEXT to capture inputs.context
- Changed context parameter to use ${{ env.BUILD_CONTEXT }}

Environment variables are expanded at runtime as plain data rather than
interpolated into the workflow expression, so malicious input cannot be
executed while the workflow is parsed.

Security Impact: Medium severity (CVSS 5.0)
Identified by: GitHub Advanced Security (CodeQL)
Reference: https://github.com/ivuorinen/actions/pull/353#pullrequestreview-3481935924

* security: prevent credential persistence in pr-lint checkout

Add persist-credentials: false to checkout step to mitigate untrusted
checkout vulnerability. This prevents GITHUB_TOKEN from being accessible
to potentially malicious PR code.

Fixes: CodeQL finding CWE-829 (untrusted checkout on privileged workflow)

* fix: prevent security bot from overwriting unrelated comments

Replace broad string matching with unique HTML comment marker for
identifying bot-generated comments. Previously, any comment containing
'Security Analysis' or '🔐 GitHub Actions Permissions' would be
overwritten, causing data loss.

Changes:
- Add unique marker: <!-- security-analysis-bot-comment -->
- Prepend marker to generated comment body
- Update comment identification to use marker only
- Add defensive null check for comment.body

This fixes a critical data-loss bug where user comments could be
permanently overwritten by the security analysis bot.

Follows same proven pattern as test-actions.yml coverage comments.

* improve: show concise permissions diff instead of full blocks

Replace verbose full-block permissions diff with line-by-line changes.
Now shows only added/removed permissions, making output much more
readable.

Changes:
- Parse permissions into individual lines
- Compare old vs new to identify actual changes
- Show only removed (-) and added (+) lines in diff
- Collapse unchanged permissions into details section (≤3 items)
- Show count summary for many unchanged permissions (>3 items)

Example output:
  Before: 30+ lines showing entire permissions block
  After: 3-5 lines showing only what changed

This addresses user feedback that permissions changes were too verbose.
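A rough sketch of the line-by-line comparison, assuming sorted permission lines and `comm`; the permission values are illustrative and the workflow's actual script may differ:

```shell
#!/bin/sh
# Show only removed (-) and added (+) permission lines, not full blocks.
set -eu

tmp_old=$(mktemp)
tmp_new=$(mktemp)
printf '%s\n' "contents: read" "checks: write" | sort >"$tmp_old"
printf '%s\n' "contents: read" "pull-requests: write" | sort >"$tmp_new"

# comm -23: lines unique to the old block (removed)
# comm -13: lines unique to the new block (added)
removed=$(comm -23 "$tmp_old" "$tmp_new" | sed 's/^/- /')
added=$(comm -13 "$tmp_old" "$tmp_new" | sed 's/^/+ /')

printf '%s\n%s\n' "$removed" "$added"
rm -f "$tmp_old" "$tmp_new"
```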

* security: add input validation and trust model documentation

Add comprehensive security validation for docker-publish action to prevent
code injection attacks (CWE-094, CWE-116).

Changes:
- Add validation for context input (reject absolute paths, warn on URLs)
- Add validation for dockerfile input (reject absolute/URL paths)
- Document security trust model in README
- Add best practices for secure usage
- Explain validation rules and threat model

Prevents malicious actors from:
- Building from arbitrary file system locations
- Fetching Dockerfiles from untrusted remote sources
- Executing code injection through build context manipulation
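The validation rules described above can be sketched roughly as follows; the function name and exact patterns are assumptions for illustration, not the action's actual code:

```shell
#!/bin/sh
# Reject absolute build-context paths; warn on URL-like contexts.
set -eu

validate_context() {
  ctx="$1"
  case "$ctx" in
    /*)
      echo "error: absolute context paths are not allowed: $ctx" >&2
      return 1
      ;;
    http://* | https://* | git://*)
      echo "warning: remote context URL: $ctx" >&2
      ;;
  esac
  return 0
}

validate_context "." && echo "ok: ."
validate_context "/etc" || echo "rejected: /etc"
```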

Addresses: CodeRabbit review comments #2541434325, #2541549615
Fixes: GitHub Advanced Security code injection findings

* security: replace unmaintained nick-fields/retry with step-security/retry

Replace nick-fields/retry with step-security/retry across all 4 actions:
- csharp-build/action.yml
- php-composer/action.yml
- go-build/action.yml
- ansible-lint-fix/action.yml

The nick-fields/retry action has known security vulnerabilities and is
minimally maintained. step-security/retry is a drop-in replacement with
full API compatibility.

All inputs (timeout_minutes, max_attempts, command, retry_wait_seconds) are
compatible. Using SHA-pinned version for security.

Addresses CodeRabbit review comment #2541549598

* test: add is_input_required() helper function

Add helper function to check if an action input is required, reducing
duplication across test suites.

The function:
- Takes action_file and input_name as parameters
- Uses validation_core.py to query the 'required' property
- Returns 0 (success) if input is required
- Returns 1 (failure) if input is optional

This DRY improvement addresses CodeRabbit review comment #2541549572

* feat: add mode validation convention mapping

Add "mode" to the validation conventions mapping for lint actions
(eslint-lint, biome-lint, prettier-lint).

Note: The update-validators script doesn't currently recognize "string"
as a validator type, so mode validation coverage remains at 93%. The
actions already have inline validation for mode (check|fix), so this is
primarily for improving coverage metrics.

Addresses part of CodeRabbit review comment #2541549570
(validation coverage improvement)
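The inline mode validation the lint actions already carry can be sketched as a POSIX `case` check (names are illustrative):

```shell
#!/bin/sh
# Accept only 'check' or 'fix' as the mode input; fail otherwise.
set -eu

validate_mode() {
  case "$1" in
    check | fix) return 0 ;;
    *)
      echo "error: mode must be 'check' or 'fix', got '$1'" >&2
      return 1
      ;;
  esac
}

validate_mode "check" && echo "check accepted"
validate_mode "fix" && echo "fix accepted"
validate_mode "lint" || echo "lint rejected"
```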

* docs: fix CLAUDE.md action counts and add missing action

- Update action count from 31 to 29 (line 42)
- Add missing 'action-versioning' to Utilities category (line 74)

Addresses CodeRabbit review comments #2541553130 and #2541553110

* docs: add security considerations to docker-publish

Add security documentation to both action.yml header and README.md:
- Trust model explanation
- Input validation details for context and dockerfile
- Attack prevention information
- Best practices for secure usage

The documentation was previously removed when README was autogenerated.
Now documented in both places to ensure it persists.

* fix: correct step ID reference in docker-build

Fix incorrect step ID reference in platforms output:
- Changed steps.platforms.outputs.built to steps.detect-platforms.outputs.platforms
- The step is actually named 'detect-platforms' not 'platforms'
- Ensures output correctly references the detect-platforms step defined at line 188

* fix: ensure docker-build platforms output is always available

Make detect-platforms step unconditional to fix broken output contract.

The platforms output (line 123) references steps.detect-platforms.outputs.platforms,
but the step only ran when auto-detect-platforms was true (default: false).
This caused undefined output in most cases.

Changes:
- Remove 'if' condition from detect-platforms step
- Step now always runs and always produces platforms output
- When auto-detect is false: outputs configured architectures
- When auto-detect is true: outputs detected platforms or falls back to architectures
- Add '|| true' to grep to prevent errors when no platforms detected
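The fallback behaviour can be sketched as follows (variable names and values are illustrative):

```shell
#!/bin/sh
# Always emit a platforms value: detected platforms when available,
# otherwise the configured architectures.
set -eu

configured="linux/amd64,linux/arm64"
detected="" # empty when auto-detect is off or found nothing

# ':-' keeps the output contract intact even when detection is empty,
# mirroring the '|| true' guard on the grep in the action.
platforms="${detected:-$configured}"
echo "platforms=$platforms"
```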

Fixes CodeRabbit review comment #2541824904

* security: remove env var indirection in docker-publish BUILD_CONTEXT

Remove BUILD_CONTEXT env var indirection to address GitHub Advanced Security alert.

The inputs.context is validated at lines 137-147 (rejects absolute paths, warns on URLs)
before being used, so the env var indirection is unnecessary and triggers false positive
code injection warnings.

Changes:
- Remove BUILD_CONTEXT env var (line 254)
- Use inputs.context directly (line 256 → 254)
- Input validation remains in place (lines 137-147)

Fixes GitHub Advanced Security code injection alerts (comments #2541405269, #2541522320)

* feat: implement mode_enum validator for lint actions

Add mode_enum validator to validate mode inputs in linting actions.

Changes to conventions.py:
- Add 'mode_enum' to exact_matches mapping (line 215)
- Add 'mode_enum' to PHP-specific validators list (line 560)
- Implement _validate_mode_enum() method (lines 642-660)
  - Validates mode values against ['check', 'fix']
  - Returns clear error messages for invalid values

Updated rules.yml files:
- biome-lint: Add mode: mode_enum convention
- eslint-lint: Add mode: mode_enum convention
- prettier-lint: Add mode: mode_enum convention
- All rules.yml: Fix YAML formatting with yamlfmt

This addresses PR #353 comment #2541522326 which reported that mode validation
was being skipped due to unrecognized 'string' type, reducing coverage to 93%.

Tested with biome-lint action - correctly rejects invalid values and accepts
valid 'check' and 'fix' values.

* docs: update action count from 29 to 30 in CLAUDE.md

Update two references to action count in CLAUDE.md:
- Line 42: repository_overview memory description
- Line 74: Repository Structure section header

The repository has 30 actions total (29 listed + validate-inputs).

Addresses PR #353 comment #2541549588.

* docs: use pinned version ref in language-version-detect README

Change usage example from @main to @v2025 for security best practices.

Using pinned version refs (instead of @main) ensures:
- Predictable behavior across workflow runs
- Protection against breaking changes
- Better security through immutable references

Follows repository convention documented in main README and CLAUDE.md.

Addresses PR #353 comment #2541549588.

* refactor: remove deprecated add-snippets input from codeql-analysis

Remove add-snippets input which has been deprecated by GitHub's CodeQL action
and no longer has any effect.

Changes:
- Remove add-snippets input definition (lines 93-96)
- Remove reference in init step (line 129)
- Remove reference in analyze step (line 211)
- Regenerate README and rules.yml

This is a non-breaking change since:
- Default was 'false' (minimal usage expected)
- GitHub's action already ignores this parameter
- Aligns with recent repository simplification efforts

* feat: add mode_enum validator and update rules

Add mode_enum validator support for lint actions and regenerate all validation rules:

Validator Changes:
- Add mode_enum to action_overrides for biome-lint, eslint-lint, prettier-lint
- Remove deprecated add-snippets from codeql-analysis overrides

Rules Updates:
- All 29 action rules.yml files regenerated with consistent YAML formatting
- biome-lint, eslint-lint, prettier-lint now validate mode input (check/fix)
- Improved coverage for lint actions (79% → 83% for biome, 93% for eslint, 79% for prettier)

Documentation:
- Fix language-version-detect README to use @v2025 (not @main)
- Remove outdated docker-publish security docs (now handled by official actions)

This completes PR #353 review feedback implementation.

* fix: replace bash-specific $'\n' with POSIX-compliant printf

Replace non-POSIX $'\n' syntax in tag building loop with printf-based
approach that works in any POSIX shell.

Changed:
- Line 216: tags="${tags}"$'\n'"${image}:${tag}"
+ Line 216: tags="$(printf '%s\n%s' "$tags" "${image}:${tag}")"

This ensures docker-publish/action.yml runs correctly on systems using
/bin/sh instead of bash.
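The portable join can be demonstrated in isolation (the image name is a placeholder):

```shell
#!/bin/sh
# POSIX-safe newline join: $'\n' is a bash-ism, printf is portable.
set -eu

image="ghcr.io/example/app"
tags="${image}:latest"

for tag in v1 v2; do
  # bash-only:  tags="${tags}"$'\n'"${image}:${tag}"
  # POSIX sh:
  tags="$(printf '%s\n%s' "$tags" "${image}:${tag}")"
done

printf '%s\n' "$tags"
```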
Committed 2025-11-19 15:42:06 +02:00


#!/usr/bin/env bash
# GitHub Actions Testing Framework - Main Test Runner
# Executes tests across all levels: unit, integration, and e2e
set -euo pipefail

# Script directory and test root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TEST_ROOT="$SCRIPT_DIR"

# Source framework utilities
# shellcheck source=_tests/framework/setup.sh
source "${TEST_ROOT}/framework/setup.sh"

# Configuration
DEFAULT_TEST_TYPE="all"
DEFAULT_ACTION_FILTER=""
PARALLEL_JOBS=4
COVERAGE_ENABLED=true
REPORT_FORMAT="console"
# Usage information
usage() {
  cat <<EOF
GitHub Actions Testing Framework

Usage: $0 [OPTIONS] [ACTION_NAME...]

OPTIONS:
  -t, --type TYPE       Test type: unit, integration, e2e, all (default: all)
  -a, --action ACTION   Filter by specific action name
  -j, --jobs JOBS       Number of parallel jobs (default: 4)
  -c, --coverage        Enable coverage reporting (default: true)
  --no-coverage         Disable coverage reporting
  -f, --format FORMAT   Report format: console, json, junit, sarif (default: console)
  -v, --verbose         Enable verbose output
  -h, --help            Show this help message

EXAMPLES:
  $0                              # Run all tests for all actions
  $0 -t unit                      # Run only unit tests
  $0 -a node-setup                # Test only node-setup action
  $0 -t integration docker-build  # Integration tests for docker-build
  $0 --format json --coverage     # Full tests with JSON output and coverage
  $0 --format sarif               # Generate SARIF report for security scanning

TEST TYPES:
  unit        - Fast unit tests for action validation and logic
  integration - Integration tests using nektos/act or workflows
  e2e         - End-to-end tests with complete workflows
  all         - All test types (default)
EOF
}
# Parse command line arguments
parse_args() {
  local test_type="$DEFAULT_TEST_TYPE"
  local action_filter="$DEFAULT_ACTION_FILTER"
  local actions=()

  while [[ $# -gt 0 ]]; do
    case $1 in
      -t | --type)
        if [[ $# -lt 2 ]]; then
          echo "Error: $1 requires an argument" >&2
          usage
          exit 1
        fi
        test_type="$2"
        shift 2
        ;;
      -a | --action)
        if [[ $# -lt 2 ]]; then
          echo "Error: $1 requires an argument" >&2
          usage
          exit 1
        fi
        action_filter="$2"
        shift 2
        ;;
      -j | --jobs)
        if [[ $# -lt 2 ]]; then
          echo "Error: $1 requires an argument" >&2
          usage
          exit 1
        fi
        PARALLEL_JOBS="$2"
        shift 2
        ;;
      -c | --coverage)
        COVERAGE_ENABLED=true
        shift
        ;;
      --no-coverage)
        COVERAGE_ENABLED=false
        shift
        ;;
      -f | --format)
        if [[ $# -lt 2 ]]; then
          echo "Error: $1 requires an argument" >&2
          usage
          exit 1
        fi
        REPORT_FORMAT="$2"
        shift 2
        ;;
      -v | --verbose)
        set -x
        shift
        ;;
      -h | --help)
        usage
        exit 0
        ;;
      --)
        shift
        actions+=("$@")
        break
        ;;
      -*)
        log_error "Unknown option: $1"
        usage
        exit 1
        ;;
      *)
        actions+=("$1")
        shift
        ;;
    esac
  done

  # Export for use in other functions
  export TEST_TYPE="$test_type"
  export ACTION_FILTER="$action_filter"
  TARGET_ACTIONS=("${actions[@]+"${actions[@]}"}")
}
# Discover available actions
discover_actions() {
  local actions=()

  if [[ ${#TARGET_ACTIONS[@]} -gt 0 ]]; then
    # Use provided actions
    actions=("${TARGET_ACTIONS[@]}")
  elif [[ -n $ACTION_FILTER ]]; then
    # Filter by pattern
    while IFS= read -r action_dir; do
      local action_name
      action_name=$(basename "$action_dir")
      if [[ $action_name == *"$ACTION_FILTER"* ]]; then
        actions+=("$action_name")
      fi
    done < <(find "${TEST_ROOT}/.." -mindepth 2 -maxdepth 2 -type f -name "action.yml" -exec dirname {} \; | sort)
  else
    # All actions
    while IFS= read -r action_dir; do
      local action_name
      action_name=$(basename "$action_dir")
      actions+=("$action_name")
    done < <(find "${TEST_ROOT}/.." -mindepth 2 -maxdepth 2 -type f -name "action.yml" -exec dirname {} \; | sort)
  fi

  log_info "Discovered ${#actions[@]} actions to test: ${actions[*]}"
  printf '%s\n' "${actions[@]}"
}
# Check if required tools are available
check_dependencies() {
  # Check for ShellSpec
  if ! command -v shellspec >/dev/null 2>&1; then
    log_warning "ShellSpec not found, attempting to install..."
    install_shellspec
  fi

  # Check for act (if running integration tests)
  if [[ $TEST_TYPE == "integration" || $TEST_TYPE == "all" ]]; then
    if ! command -v act >/dev/null 2>&1; then
      log_warning "nektos/act not found, integration tests will be limited"
    fi
  fi

  # Check for coverage tools (if enabled)
  if [[ $COVERAGE_ENABLED == "true" ]]; then
    if ! command -v kcov >/dev/null 2>&1; then
      log_warning "kcov not found - coverage will use alternative methods"
    fi
  fi

  log_success "Dependency check completed"
}
# Install ShellSpec if not available
install_shellspec() {
  log_info "Installing ShellSpec testing framework..."

  local shellspec_version="0.28.1"
  local install_dir="${HOME}/.local"

  # Download and install ShellSpec (download -> verify SHA256 -> extract -> install)
  local tarball
  tarball="$(mktemp /tmp/shellspec-XXXXXX.tar.gz)"

  # Pinned SHA256 checksum for ShellSpec 0.28.1
  # Source: https://github.com/shellspec/shellspec/archive/refs/tags/0.28.1.tar.gz
  local checksum="400d835466429a5fe6c77a62775a9173729d61dd43e05dfa893e8cf6cb511783"

  # Ensure cleanup of the downloaded file
  # Use ${tarball:-} to handle unbound variable when trap fires after function returns
  cleanup() {
    rm -f "${tarball:-}"
  }
  trap cleanup EXIT

  log_info "Downloading ShellSpec ${shellspec_version} to ${tarball}..."
  if ! curl -fsSL -o "$tarball" "https://github.com/shellspec/shellspec/archive/refs/tags/${shellspec_version}.tar.gz"; then
    log_error "Failed to download ShellSpec ${shellspec_version}"
    exit 1
  fi

  # Compute SHA256 in a portable way
  local actual_sha
  if command -v sha256sum >/dev/null 2>&1; then
    actual_sha="$(sha256sum "$tarball" | awk '{print $1}')"
  elif command -v shasum >/dev/null 2>&1; then
    actual_sha="$(shasum -a 256 "$tarball" | awk '{print $1}')"
  else
    log_error "No SHA256 utility available (sha256sum or shasum required) to verify download"
    exit 1
  fi

  if [[ "$actual_sha" != "$checksum" ]]; then
    log_error "Checksum mismatch for ShellSpec ${shellspec_version} (expected ${checksum}, got ${actual_sha})"
    exit 1
  fi

  log_info "Checksum verified for ShellSpec ${shellspec_version}, extracting..."
  if ! tar -xzf "$tarball" -C /tmp/; then
    log_error "Failed to extract ShellSpec archive"
    exit 1
  fi

  if ! (cd "/tmp/shellspec-${shellspec_version}" && make install PREFIX="$install_dir"); then
    log_error "ShellSpec make install failed"
    exit 1
  fi

  # Add to PATH if not already there
  if [[ ":$PATH:" != *":${install_dir}/bin:"* ]]; then
    export PATH="${install_dir}/bin:$PATH"
    # Append to shell rc only in non-CI environments
    if [[ -z "${CI:-}" ]]; then
      if ! grep -qxF "export PATH=\"${install_dir}/bin:\$PATH\"" ~/.bashrc 2>/dev/null; then
        echo "export PATH=\"${install_dir}/bin:\$PATH\"" >>~/.bashrc
      fi
    fi
  fi

  if command -v shellspec >/dev/null 2>&1; then
    log_success "ShellSpec installed successfully"
    # Clear the trap now that we've succeeded to prevent unbound variable error on script exit
    trap - EXIT
    rm -f "$tarball"
  else
    log_error "Failed to install ShellSpec"
    exit 1
  fi
}
# Run unit tests
run_unit_tests() {
  local actions=("$@")
  local failed_tests=()
  local passed_tests=()

  log_info "Running unit tests for ${#actions[@]} actions..."

  # Create test results directory
  mkdir -p "${TEST_ROOT}/reports/unit"

  for action in "${actions[@]}"; do
    local unit_test_dir="${TEST_ROOT}/unit/${action}"
    if [[ -d $unit_test_dir ]]; then
      log_info "Running unit tests for: $action"

      # Run ShellSpec tests
      local test_result=0
      local output_file="${TEST_ROOT}/reports/unit/${action}.txt"

      # Run shellspec and capture both exit code and output
      # Note: ShellSpec returns non-zero exit codes for warnings (101) and other conditions
      # We need to check the actual output to determine if tests failed
      # Pass action name relative to --default-path (_tests/unit) for proper spec_helper loading
      (cd "$TEST_ROOT/.." && shellspec \
        --format documentation \
        "$action") >"$output_file" 2>&1 || true

      # Parse the output to determine if tests actually failed
      # Look for the summary line which shows "X examples, Y failures"
      if grep -qE "[0-9]+ examples?, 0 failures?" "$output_file" && ! grep -q "Fatal error occurred" "$output_file"; then
        log_success "Unit tests passed: $action"
        passed_tests+=("$action")
      else
        # Check if there were actual failures (not just warnings)
        if grep -qE "[0-9]+ examples?, [1-9][0-9]* failures?" "$output_file"; then
          log_error "Unit tests failed: $action"
          failed_tests+=("$action")
          test_result=1
        else
          # No summary line found, treat as passed if no fatal errors
          if ! grep -q "Fatal error occurred" "$output_file"; then
            log_success "Unit tests passed: $action"
            passed_tests+=("$action")
          else
            log_error "Unit tests failed: $action"
            failed_tests+=("$action")
            test_result=1
          fi
        fi
      fi

      # Show summary if verbose or on failure
      if [[ $test_result -ne 0 || ${BASHOPTS:-} == *"xtrace"* || $- == *x* ]]; then
        echo "--- Test output for $action ---"
        cat "$output_file"
        echo "--- End test output ---"
      fi
    else
      log_warning "No unit tests found for: $action"
    fi
  done

  # Report results
  log_info "Unit test results:"
  log_success "  Passed: ${#passed_tests[@]} actions"
  if [[ ${#failed_tests[@]} -gt 0 ]]; then
    log_error "  Failed: ${#failed_tests[@]} actions (${failed_tests[*]})"
    return 1
  fi
  return 0
}
# Run integration tests using nektos/act
run_integration_tests() {
  local actions=("$@")
  local failed_tests=()
  local passed_tests=()

  log_info "Running integration tests for ${#actions[@]} actions..."

  # Create test results directory
  mkdir -p "${TEST_ROOT}/reports/integration"

  for action in "${actions[@]}"; do
    local workflow_file="${TEST_ROOT}/integration/workflows/${action}-test.yml"
    if [[ -f $workflow_file ]]; then
      log_info "Running integration test workflow for: $action"

      # Run with act if available, otherwise skip
      if command -v act >/dev/null 2>&1; then
        local output_file="${TEST_ROOT}/reports/integration/${action}.txt"

        # Create temp directory for artifacts
        local artifacts_dir
        artifacts_dir=$(mktemp -d) || exit 1

        if act workflow_dispatch \
          -W "$workflow_file" \
          --container-architecture linux/amd64 \
          --artifact-server-path "$artifacts_dir" \
          -P ubuntu-latest=catthehacker/ubuntu:act-latest \
          >"$output_file" 2>&1; then
          log_success "Integration tests passed: $action"
          passed_tests+=("$action")
        else
          log_error "Integration tests failed: $action"
          failed_tests+=("$action")
          # Show output on failure
          echo "--- Integration test output for $action ---"
          cat "$output_file"
          echo "--- End integration test output ---"
        fi

        # Clean up artifacts directory
        rm -rf "$artifacts_dir"
      else
        log_warning "Skipping integration test for $action (act not available)"
      fi
    else
      log_warning "No integration test workflow found for: $action"
    fi
  done

  # Report results
  log_info "Integration test results:"
  log_success "  Passed: ${#passed_tests[@]} actions"
  if [[ ${#failed_tests[@]} -gt 0 ]]; then
    log_error "  Failed: ${#failed_tests[@]} actions (${failed_tests[*]})"
    return 1
  fi
  return 0
}
# Generate test coverage report
generate_coverage_report() {
  if [[ $COVERAGE_ENABLED != "true" ]]; then
    return 0
  fi

  log_info "Generating coverage report..."

  local coverage_dir="${TEST_ROOT}/coverage"
  mkdir -p "$coverage_dir"

  # This is a simplified coverage implementation
  # In practice, you'd integrate with kcov or similar tools

  # Count tested vs total actions (directories with an action.yml file at the repository's top level)
  local project_root
  project_root="$(cd "${TEST_ROOT}/.." && pwd)"
  local total_actions
  total_actions=$(find "$project_root" -mindepth 2 -maxdepth 2 -type f -name "action.yml" 2>/dev/null | wc -l | tr -d ' ')

  # Count actions that have unit tests (by checking if validation.spec.sh exists)
  local tested_actions
  tested_actions=$(find "${TEST_ROOT}/unit" -mindepth 2 -maxdepth 2 -type f -name "validation.spec.sh" 2>/dev/null | wc -l | tr -d ' ')

  local coverage_percent
  if [[ $total_actions -gt 0 ]]; then
    coverage_percent=$(((tested_actions * 100) / total_actions))
  else
    coverage_percent=0
  fi

  cat >"${coverage_dir}/summary.json" <<EOF
{
  "total_actions": $total_actions,
  "tested_actions": $tested_actions,
  "coverage_percent": $coverage_percent,
  "generated_at": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
}
EOF

  log_success "Coverage report generated: ${coverage_percent}% ($tested_actions/$total_actions actions)"
}
# Generate test report
generate_test_report() {
  log_info "Generating test report in format: $REPORT_FORMAT"

  local report_dir="${TEST_ROOT}/reports"
  mkdir -p "$report_dir"

  case "$REPORT_FORMAT" in
    "json")
      generate_json_report
      ;;
    "junit")
      log_warning "JUnit report format not yet implemented, using JSON instead"
      generate_json_report
      ;;
    "sarif")
      generate_sarif_report
      ;;
    "console" | *)
      generate_console_report
      ;;
  esac
}
# Generate JSON test report
generate_json_report() {
  local report_file="${TEST_ROOT}/reports/test-results.json"

  cat >"$report_file" <<EOF
{
  "test_run": {
    "timestamp": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")",
    "type": "$TEST_TYPE",
    "action_filter": "$ACTION_FILTER",
    "parallel_jobs": $PARALLEL_JOBS,
    "coverage_enabled": $COVERAGE_ENABLED
  },
  "results": {
    "unit_tests": $(find "${TEST_ROOT}/reports/unit" -name "*.txt" 2>/dev/null | wc -l | tr -d ' '),
    "integration_tests": $(find "${TEST_ROOT}/reports/integration" -name "*.txt" 2>/dev/null | wc -l | tr -d ' ')
  }
}
EOF

  log_success "JSON report generated: $report_file"
}
# Generate SARIF test report
generate_sarif_report() {
  # Check for jq availability
  if ! command -v jq >/dev/null 2>&1; then
    log_warning "jq not found, skipping SARIF report generation"
    return 0
  fi

  local report_file="${TEST_ROOT}/reports/test-results.sarif"
  local run_id
  run_id="github-actions-test-$(date +%s)"
  local timestamp
  timestamp="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"

  # Initialize SARIF structure using jq to ensure proper escaping
  jq -n \
    --arg run_id "$run_id" \
    --arg timestamp "$timestamp" \
    --arg test_type "$TEST_TYPE" \
    '{
      "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
      "version": "2.1.0",
      "runs": [
        {
          "automationDetails": {
            "id": $run_id
          },
          "tool": {
            "driver": {
              "name": "GitHub Actions Testing Framework",
              "version": "1.0.0",
              "informationUri": "https://github.com/ivuorinen/actions",
              "rules": []
            }
          },
          "results": [],
          "invocations": [
            {
              "executionSuccessful": true,
              "startTimeUtc": $timestamp,
              "arguments": ["--type", $test_type, "--format", "sarif"]
            }
          ]
        }
      ]
    }' >"$report_file"

  # Parse test results and add SARIF findings
  local results_array="[]"
  local rules_array="[]"

  # Process unit test failures
  if [[ -d "${TEST_ROOT}/reports/unit" ]]; then
    for test_file in "${TEST_ROOT}/reports/unit"/*.txt; do
      if [[ -f "$test_file" ]]; then
        local action_name
        action_name=$(basename "$test_file" .txt)

        # Check if test failed by looking for actual failures in the summary line
        if grep -qE "[0-9]+ examples?, [1-9][0-9]* failures?" "$test_file" || grep -q "Fatal error occurred" "$test_file"; then
          # Extract failure details
          local failure_message
          failure_message=$(grep -E "(Fatal error|failure|FAILED)" "$test_file" | head -1 || echo "Test failed")

          # Add rule if not exists
          if ! echo "$rules_array" | jq -e '.[] | select(.id == "test-failure")' >/dev/null 2>&1; then
            rules_array=$(echo "$rules_array" | jq '. + [{
              "id": "test-failure",
              "name": "TestFailure",
              "shortDescription": {"text": "Test execution failed"},
              "fullDescription": {"text": "A unit or integration test failed during execution"},
              "defaultConfiguration": {"level": "error"}
            }]')
          fi

          # Add result using jq --arg to safely escape dynamic strings
          results_array=$(echo "$results_array" | jq \
            --arg failure_msg "$failure_message" \
            --arg action_name "$action_name" \
            '. + [{
              "ruleId": "test-failure",
              "level": "error",
              "message": {"text": $failure_msg},
              "locations": [{
                "physicalLocation": {
                  "artifactLocation": {"uri": ($action_name + "/action.yml")},
                  "region": {"startLine": 1, "startColumn": 1}
                }
              }]
            }]')
        fi
      fi
    done
  fi

  # Process integration test failures similarly
  if [[ -d "${TEST_ROOT}/reports/integration" ]]; then
    for test_file in "${TEST_ROOT}/reports/integration"/*.txt; do
      if [[ -f "$test_file" ]]; then
        local action_name
        action_name=$(basename "$test_file" .txt)

        if grep -qE "FAILED|ERROR|error:" "$test_file"; then
          local failure_message
          failure_message=$(grep -E "(FAILED|ERROR|error:)" "$test_file" | head -1 || echo "Integration test failed")

          # Add integration rule if not exists
          if ! echo "$rules_array" | jq -e '.[] | select(.id == "integration-failure")' >/dev/null 2>&1; then
            rules_array=$(echo "$rules_array" | jq '. + [{
              "id": "integration-failure",
              "name": "IntegrationFailure",
              "shortDescription": {"text": "Integration test failed"},
              "fullDescription": {"text": "An integration test failed during workflow execution"},
              "defaultConfiguration": {"level": "warning"}
            }]')
          fi

          # Add result using jq --arg to safely escape dynamic strings
          results_array=$(echo "$results_array" | jq \
            --arg failure_msg "$failure_message" \
            --arg action_name "$action_name" \
            '. + [{
              "ruleId": "integration-failure",
              "level": "warning",
              "message": {"text": $failure_msg},
              "locations": [{
                "physicalLocation": {
                  "artifactLocation": {"uri": ($action_name + "/action.yml")},
                  "region": {"startLine": 1, "startColumn": 1}
                }
              }]
            }]')
        fi
      fi
    done
  fi

  # Update SARIF file with results and rules
  local temp_file
  temp_file=$(mktemp)
  jq --argjson rules "$rules_array" --argjson results "$results_array" \
    '.runs[0].tool.driver.rules = $rules | .runs[0].results = $results' \
    "$report_file" >"$temp_file" && mv "$temp_file" "$report_file"

  log_success "SARIF report generated: $report_file"
}
# Generate console test report
generate_console_report() {
  echo ""
  echo "========================================"
  echo " GitHub Actions Test Framework Report"
  echo "========================================"
  echo "Test Type: $TEST_TYPE"
  echo "Timestamp: $(date)"
  echo "Coverage Enabled: $COVERAGE_ENABLED"
  echo ""

  if [[ -d "${TEST_ROOT}/reports/unit" ]]; then
    local unit_tests
    unit_tests=$(find "${TEST_ROOT}/reports/unit" -name "*.txt" 2>/dev/null | wc -l | tr -d ' ')
    printf "%-25s %4s\n" "Unit Tests Run:" "$unit_tests"
  fi

  if [[ -d "${TEST_ROOT}/reports/integration" ]]; then
    local integration_tests
    integration_tests=$(find "${TEST_ROOT}/reports/integration" -name "*.txt" 2>/dev/null | wc -l | tr -d ' ')
    printf "%-25s %4s\n" "Integration Tests Run:" "$integration_tests"
  fi

  if [[ -f "${TEST_ROOT}/coverage/summary.json" ]]; then
    local coverage
    coverage=$(jq -r '.coverage_percent' "${TEST_ROOT}/coverage/summary.json" 2>/dev/null || echo "N/A")
    if [[ "$coverage" =~ ^[0-9]+$ ]]; then
      printf "%-25s %4s%%\n" "Test Coverage:" "$coverage"
    else
      printf "%-25s %s\n" "Test Coverage:" "$coverage"
    fi
  fi

  echo "========================================"
}
# Main test execution function
main() {
  log_info "Starting GitHub Actions Testing Framework"

  # Parse arguments
  parse_args "$@"

  # Initialize framework
  init_testing_framework

  # Check dependencies
  check_dependencies

  # Discover actions to test
  local actions=()
  while IFS= read -r action; do
    actions+=("$action")
  done < <(discover_actions)

  if [[ ${#actions[@]} -eq 0 ]]; then
    log_error "No actions found to test"
    exit 1
  fi

  # Run tests based on type
  local test_failed=false
  case "$TEST_TYPE" in
    "unit")
      if ! run_unit_tests "${actions[@]}"; then
        test_failed=true
      fi
      ;;
    "integration")
      if ! run_integration_tests "${actions[@]}"; then
        test_failed=true
      fi
      ;;
    "e2e")
      log_warning "E2E tests not yet implemented"
      ;;
    "all")
      if ! run_unit_tests "${actions[@]}"; then
        test_failed=true
      fi
      if ! run_integration_tests "${actions[@]}"; then
        test_failed=true
      fi
      ;;
    *)
      log_error "Unknown test type: $TEST_TYPE"
      exit 1
      ;;
  esac

  # Generate coverage report
  generate_coverage_report

  # Generate test report
  generate_test_report

  # Final status
  if [[ $test_failed == "true" ]]; then
    log_error "Some tests failed"
    exit 1
  else
    log_success "All tests passed!"
    exit 0
  fi
}

# Run main function if script is executed directly
if [[ ${BASH_SOURCE[0]} == "${0}" ]]; then
  main "$@"
fi