f2b/cmd/cmd_parallel_operations_test.go
Ismo Vuorinen 605f2b9580 refactor: linting, simplification and fixes (#119)
* refactor: consolidate test helpers and reduce code duplication

- Fix prealloc lint issue in cmd_logswatch_test.go
- Add validateIPAndJails helper to consolidate IP/jail validation
- Add WithTestRunner/WithTestSudoChecker helpers for cleaner test setup
- Replace setupBasicMockResponses duplicates with StandardMockSetup
- Add SetupStandardResponses/SetupJailResponses to MockRunner
- Delegate cmd context helpers to fail2ban implementations
- Document context wrapper pattern in context_helpers.go

* refactor: consolidate duplicate code patterns across cmd and fail2ban packages

Add helper functions to reduce code duplication found by dupl (the first two helpers are sketched after this list):

- safeCloseFile/safeCloseReader: centralize file close error logging
- createTimeoutContext: consolidate timeout context creation pattern
- withContextCheck: wrap context cancellation checks
- recordOperationMetrics: unify metrics recording for commands/clients

Also includes Phase 1 consolidations:
- copyBuckets helper for metrics snapshots
- Table-driven context extraction in logging
- processWithValidation helper for IP processors
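
For illustration, a minimal sketch of the safeCloseReader and createTimeoutContext shapes; the signatures and the use of the standard library log package are assumptions, not the project's actual API.

package fail2ban

import (
	"context"
	"io"
	"log"
	"time"
)

// safeCloseReader closes r and logs any close error instead of returning it,
// so call sites can simply write: defer safeCloseReader(r, "banlist").
func safeCloseReader(r io.Closer, name string) {
	if err := r.Close(); err != nil {
		log.Printf("failed to close %s: %v", name, err)
	}
}

// createTimeoutContext centralizes the timeout-context creation pattern so
// every call site goes through one place.
func createTimeoutContext(parent context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
	return context.WithTimeout(parent, timeout)
}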

* refactor: consolidate LoggerInterface by embedding LoggerEntry

Both interfaces had identical method signatures. LoggerInterface now
embeds LoggerEntry to eliminate code duplication.
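
Interface embedding makes that consolidation a one-line change; a sketch with an illustrative (not actual) method set:

package logging

// LoggerEntry is the shared method set (illustrative subset only).
type LoggerEntry interface {
	Debug(args ...interface{})
	Debugf(format string, args ...interface{})
	Info(args ...interface{})
	Infof(format string, args ...interface{})
}

// LoggerInterface used to redeclare the same methods; embedding LoggerEntry
// gives it an identical method set without the duplication.
type LoggerInterface interface {
	LoggerEntry
}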

* refactor: consolidate test framework helpers and fix test patterns

- Add checkJSONFieldValue and failMissingJSONField helpers to reduce
  duplication in JSON assertion methods
- Add ParallelTimeout to default test config
- Fix test to use WithTestRunner inside test loop for proper mock scoping

* refactor: unify ban/unban operations with OperationType pattern

Introduce OperationType struct to consolidate duplicate ban/unban logic:
- Add ProcessOperation and ProcessOperationWithContext generic functions
- Add ProcessOperationParallel and ProcessOperationParallelWithContext
- Existing ProcessBan*/ProcessUnban* functions now delegate to generic versions
- Reduces ~120 lines of duplicate code between ban and unban operations
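
A rough sketch of the OperationType idea; the field names and exact signatures are assumptions based on the description above.

package cmd

import "context"

// OperationType captures everything that differs between ban and unban so a
// single code path can serve both. Fields shown here are illustrative.
type OperationType struct {
	Name    string // "ban" or "unban"
	Message string // e.g. shared.MsgBanResult or shared.MsgUnbanResult
	Action  func(ctx context.Context, ip, jail string) error
}

// ProcessOperationWithContext is the shared path that the ProcessBan* and
// ProcessUnban* wrappers would delegate to.
func ProcessOperationWithContext(ctx context.Context, op OperationType, ip, jail string) error {
	if err := ctx.Err(); err != nil {
		return err
	}
	return op.Action(ctx, ip, jail)
}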

* refactor: consolidate time parsing cache pattern

Add ParseWithLayout method to BoundedTimeCache that consolidates the
cache-lookup-parse-store pattern. FastTimeCache and TimeParsingCache
now delegate to this method instead of duplicating the logic.
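
The cache-lookup-parse-store pattern it consolidates reads roughly as follows; the map-based cache here is a simplified stand-in for the real BoundedTimeCache.

package fail2ban

import (
	"sync"
	"time"
)

// boundedTimeCache is a simplified stand-in for BoundedTimeCache.
type boundedTimeCache struct {
	mu      sync.Mutex
	entries map[string]time.Time
	max     int
}

// ParseWithLayout returns a cached result when one exists, otherwise parses
// the value, stores it (respecting the size bound), and returns it.
func (c *boundedTimeCache) ParseWithLayout(layout, value string) (time.Time, error) {
	key := layout + "\x00" + value

	c.mu.Lock()
	if t, ok := c.entries[key]; ok {
		c.mu.Unlock()
		return t, nil
	}
	c.mu.Unlock()

	t, err := time.Parse(layout, value)
	if err != nil {
		return time.Time{}, err
	}

	c.mu.Lock()
	if c.entries == nil {
		c.entries = make(map[string]time.Time)
	}
	if len(c.entries) < c.max {
		c.entries[key] = t
	}
	c.mu.Unlock()
	return t, nil
}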

* refactor: consolidate command execution patterns in fail2ban

- Add validateCommandExecution helper for command/argument validation
- Add runWithTimerContext helper for timed runner operations
- Add executeIPActionWithContext to unify BanIP/UnbanIP implementations
- Reduces duplicate validation and execution boilerplate
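
As an example of the validation boilerplate being consolidated, a sketch of validateCommandExecution; the exact rules it enforces are an assumption.

package fail2ban

import (
	"errors"
	"fmt"
	"strings"
)

// validateCommandExecution rejects empty commands and arguments before any
// runner is invoked, so every execution path shares the same checks.
func validateCommandExecution(command string, args []string) error {
	if strings.TrimSpace(command) == "" {
		return errors.New("command must not be empty")
	}
	for i, arg := range args {
		if strings.TrimSpace(arg) == "" {
			return fmt.Errorf("argument %d must not be empty", i)
		}
	}
	return nil
}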

* refactor: consolidate logrus adapter with embedded loggerCore

Introduce loggerCore type that provides the 8 standard logging methods
(Debug, Info, Warn, Error, Debugf, Infof, Warnf, Errorf). Both
logrusAdapter and logrusEntryAdapter now embed this type, eliminating
16 duplicate method implementations.
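
Struct embedding carries the consolidation: logrus.FieldLogger is satisfied by both *logrus.Logger and *logrus.Entry, so one core type can serve both adapters. A compressed sketch (two of the eight methods shown; the field name is an assumption):

package logging

import "github.com/sirupsen/logrus"

// loggerCore wraps a logrus.FieldLogger, which both *logrus.Logger and
// *logrus.Entry satisfy, and provides the shared logging methods once.
type loggerCore struct {
	l logrus.FieldLogger
}

func (c loggerCore) Info(args ...interface{})                 { c.l.Info(args...) }
func (c loggerCore) Infof(format string, args ...interface{}) { c.l.Infof(format, args...) }

// Both adapters embed loggerCore and add only what actually differs.
type logrusAdapter struct{ loggerCore }

type logrusEntryAdapter struct{ loggerCore }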

* refactor: consolidate path validation patterns

- Add validateConfigPathWithFallback helper in cmd/config_utils.go
  for the validate-or-fallback-with-logging pattern
- Add validateClientPath helper in fail2ban/helpers.go for client
  path validation delegation
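
A sketch of the validate-or-fallback-with-logging shape; the standard library log package stands in for the project's logger, and the validation itself is simplified.

package cmd

import (
	"log"
	"os"
)

// validateConfigPathWithFallback returns path when it points at an existing
// file, otherwise logs the problem and returns the fallback.
func validateConfigPathWithFallback(path, fallback string) string {
	if path == "" {
		return fallback
	}
	if _, err := os.Stat(path); err != nil {
		log.Printf("config path %q invalid, falling back to %q: %v", path, fallback, err)
		return fallback
	}
	return path
}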

* fix: add context cancellation checks to wrapper functions

- wrapWithContext0/1/2 now check ctx.Err() before invoking the wrapped function
- WithCommand now validates and trims empty command strings
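
The fix is a short-circuit on an already cancelled or expired context before calling through. A sketch of what the single-argument wrapper might look like (the generic signature is an assumption):

package fail2ban

import "context"

// wrapWithContext1 adapts a one-argument function into a context-aware one.
// The ctx.Err() guard is the behaviour added by this fix: a cancelled or
// timed-out context is reported instead of silently running the function.
func wrapWithContext1[T any](fn func(T) error) func(context.Context, T) error {
	return func(ctx context.Context, arg T) error {
		if err := ctx.Err(); err != nil {
			return err
		}
		return fn(arg)
	}
}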

* refactor: extract formatLatencyBuckets for deterministic metrics output

Add formatLatencyBuckets helper that writes latency bucket distribution
with sorted keys for deterministic output, eliminating duplicate
formatting code for command and client latency buckets.
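
Map iteration order in Go is randomized, so deterministic output means sorting the keys first. A sketch (the bucket key and count types are assumptions):

package metrics

import (
	"fmt"
	"sort"
	"strings"
)

// formatLatencyBuckets writes "bucket: count" lines in sorted key order so
// the same snapshot always renders the same way.
func formatLatencyBuckets(b *strings.Builder, buckets map[string]int64) {
	keys := make([]string, 0, len(buckets))
	for k := range buckets {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(b, "  %s: %d\n", k, buckets[k])
	}
}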

* refactor: add generic setNestedMapValue helper for mock configuration

Add setNestedMapValue[T] generic helper that consolidates the repeated
pattern of mutex-protected nested map initialization and value setting
used by SetBanError, SetBanResult, SetUnbanError, and SetUnbanResult.
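
A sketch of the generic helper, assuming the nested maps are keyed by two strings (for example jail and IP) and guarded by a caller-owned mutex:

package cmd

import "sync"

// setNestedMapValue initialises the outer and inner maps on demand and sets
// (*m)[outer][inner] = value under the supplied mutex. Each of SetBanError,
// SetBanResult, SetUnbanError and SetUnbanResult then becomes a one-liner.
func setNestedMapValue[T any](mu *sync.Mutex, m *map[string]map[string]T, outer, inner string, value T) {
	mu.Lock()
	defer mu.Unlock()
	if *m == nil {
		*m = make(map[string]map[string]T)
	}
	if (*m)[outer] == nil {
		(*m)[outer] = make(map[string]T)
	}
	(*m)[outer][inner] = value
}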

* fix: use cmd.Context() for signal propagation and correct mock status

- ExecuteIPCommand now uses cmd.Context() instead of context.Background()
  to inherit Cobra's signal cancellation
- MockRunner.SetupJailResponses uses shared.Fail2BanStatusSuccess ("0")
  instead of literal "1" for proper success path simulation
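
With Cobra, cmd.Context() returns whatever context was passed to ExecuteContext, typically one wired to SIGINT/SIGTERM, so the fix is to thread that through instead of starting from context.Background(). A hedged sketch; the command wiring here is illustrative, not the project's actual code.

package cmd

import (
	"context"

	"github.com/spf13/cobra"
)

// newBanCommand is illustrative; the relevant line is the use of
// cmd.Context(), which inherits cancellation from cobra's ExecuteContext.
func newBanCommand(run func(ctx context.Context, ip string, jails []string) error) *cobra.Command {
	return &cobra.Command{
		Use:  "ban <ip> [jail...]",
		Args: cobra.MinimumNArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			return run(cmd.Context(), args[0], args[1:])
		},
	}
}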

* fix: restore operation-specific log messages in ProcessOperationWithContext

Add back the Logger.WithFields().Info(opType.Message) call that was lost
during refactoring. This restores the distinction between ban and unban
operation messages (shared.MsgBanResult vs shared.MsgUnbanResult).

* fix: return aggregated errors from parallel operations

Previously, errors from individual parallel operations were silently
swallowed: they were converted to status strings but never returned to callers.

Now processOperations collects all errors and returns them aggregated
via errors.Join, allowing callers to distinguish partial failures from
complete success while still receiving all results.
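
errors.Join (Go 1.20+) makes this kind of aggregation simple; a sketch of the collect-then-join shape, with the result plumbing stripped out:

package cmd

import "errors"

// joinOperationErrors illustrates the aggregation: every jail is still
// processed so all results get populated, but any failures come back to the
// caller joined into a single error (nil when everything succeeded).
func joinOperationErrors(jails []string, run func(jail string) error) error {
	var errs []error
	for _, jail := range jails {
		if err := run(jail); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}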

* fix: add input validation to processOperations before parallel execution

Validate IP and jail inputs at the start of processOperations() using
fail2ban.CachedValidateIP and CachedValidateJail. This prevents invalid
or malicious inputs (empty values, path traversal attempts, malformed
IPs) from reaching the operation functions. All validation errors are
aggregated and returned before any operations execute.
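
A sketch of the pre-flight validation described above; validateIP and validateJail stand in for fail2ban.CachedValidateIP and fail2ban.CachedValidateJail, whose real signatures are not shown here.

package cmd

import (
	"errors"
	"fmt"
)

// validateOperationInputs checks the IP and every jail up front and returns
// all problems at once, so nothing invalid reaches the parallel workers.
func validateOperationInputs(ip string, jails []string, validateIP, validateJail func(string) error) error {
	var errs []error
	if err := validateIP(ip); err != nil {
		errs = append(errs, fmt.Errorf("invalid IP %q: %w", ip, err))
	}
	for _, jail := range jails {
		if err := validateJail(jail); err != nil {
			errs = append(errs, fmt.Errorf("invalid jail %q: %w", jail, err))
		}
	}
	return errors.Join(errs...)
}
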
2026-01-25 19:07:45 +02:00

191 lines
5.3 KiB
Go

package cmd

import (
	"errors"
	"testing"
)

func TestParallelOperationProcessor_IndexValidation(t *testing.T) {
	// Test to ensure negative indices don't cause panics
	processor := NewParallelOperationProcessor(2)

	// Mock client for testing
	mockClient := NewMockClient()
	jails := []string{"sshd", "apache"} // Use default jails from mock

	// This should not panic even if there were negative indices
	results, err := processor.ProcessBanOperationParallel(mockClient, "192.168.1.100", jails)
	if err != nil {
		t.Fatalf("ProcessBanOperationParallel failed: %v", err)
	}
	if len(results) != 2 {
		t.Errorf("Expected 2 results, got %d", len(results))
	}

	// Verify all results are valid
	for i, result := range results {
		if result.Jail == "" {
			t.Errorf("Result %d has empty jail", i)
		}
		if result.Status == "" {
			t.Errorf("Result %d has empty status", i)
		}
	}
}

func TestParallelOperationProcessor_UnbanIndexValidation(t *testing.T) {
	// Test unban operations for index validation
	processor := NewParallelOperationProcessor(2)

	// Mock client for testing - need to ban first
	mockClient := NewMockClient()

	// Ban the IP first so we can unban it, using the framework for consistency
	NewCommandTest(t, "ban").WithArgs("192.168.1.100", "sshd").WithMockClient(mockClient).ExpectSuccess().Run()
	NewCommandTest(t, "ban").WithArgs("192.168.1.100", "apache").WithMockClient(mockClient).ExpectSuccess().Run()

	jails := []string{"sshd", "apache"}

	// This should not panic even if there were negative indices
	results, err := processor.ProcessUnbanOperationParallel(mockClient, "192.168.1.100", jails)
	if err != nil {
		t.Fatalf("ProcessUnbanOperationParallel failed: %v", err)
	}
	if len(results) != 2 {
		t.Errorf("Expected 2 results, got %d", len(results))
	}

	// Verify all results are valid
	for i, result := range results {
		if result.Jail == "" {
			t.Errorf("Result %d has empty jail", i)
		}
		if result.Status == "" {
			t.Errorf("Result %d has empty status", i)
		}
	}
}

func TestParallelOperationProcessor_EmptyJailsList(t *testing.T) {
	// Test edge case with empty jails list
	processor := NewParallelOperationProcessor(2)
	mockClient := NewMockClient()

	results, err := processor.ProcessBanOperationParallel(mockClient, "192.168.1.100", []string{})
	if err != nil {
		t.Fatalf("ProcessBanOperationParallel with empty jails failed: %v", err)
	}
	if len(results) != 0 {
		t.Errorf("Expected 0 results for empty jails, got %d", len(results))
	}
}

func TestParallelOperationProcessor_ErrorHandling(t *testing.T) {
	// Test that error handling returns aggregated errors while still populating results
	processor := NewParallelOperationProcessor(2)

	// Mock client for testing
	mockClient := NewMockClient()

	// Use non-existent jails to trigger errors
	jails := []string{"nonexistent1", "nonexistent2"}

	results, err := processor.ProcessBanOperationParallel(mockClient, "192.168.1.100", jails)

	// Errors should now be returned (aggregated)
	if err == nil {
		t.Error("Expected error for non-existent jails, got nil")
	}
	if len(results) != 2 {
		t.Errorf("Expected 2 results, got %d", len(results))
	}

	// All results should still be populated with an error status
	for i, result := range results {
		if result.Jail == "" {
			t.Errorf("Result %d has empty jail", i)
		}
		// Status should indicate the error (e.g., "jail 'nonexistent1' not found")
		if result.Status == "" {
			t.Errorf("Result %d has empty status", i)
		}
	}
}

func TestParallelOperationProcessor_ConcurrentSafety(t *testing.T) {
	// Test that concurrent access doesn't cause race conditions or index issues
	processor := NewParallelOperationProcessor(4)
	mockClient := NewMockClient()

	// Set up multiple IPs and jails
	ips := []string{"192.168.1.100", "192.168.1.101", "192.168.1.102"}
	jails := []string{"sshd", "apache"} // Use existing jails in mock

	// Run multiple operations concurrently
	errChan := make(chan error, len(ips))
	for _, ip := range ips {
		go func(testIP string) {
			results, err := processor.ProcessBanOperationParallel(mockClient, testIP, jails)
			if err != nil {
				errChan <- err
				return
			}
			if len(results) != len(jails) {
				errChan <- errors.New("incorrect number of results")
				return
			}
			errChan <- nil
		}(ip)
	}

	// Check that all operations completed successfully
	for i := 0; i < len(ips); i++ {
		if err := <-errChan; err != nil {
			t.Errorf("Concurrent operation %d failed: %v", i, err)
		}
	}
}

func TestNewParallelOperationProcessor(t *testing.T) {
	// Test processor creation with various worker counts
	tests := []struct {
		name        string
		workerCount int
		expectCPU   bool
	}{
		{"positive worker count", 4, false},
		{"zero worker count uses CPU count", 0, true},
		{"negative worker count uses CPU count", -1, true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			processor := NewParallelOperationProcessor(tt.workerCount)
			if processor == nil {
				t.Fatal("NewParallelOperationProcessor returned nil")
			}
			if tt.expectCPU {
				// Should use CPU count when an invalid worker count is provided
				if processor.workerCount <= 0 {
					t.Error("Worker count should be positive when using CPU count")
				}
			} else {
				if processor.workerCount != tt.workerCount {
					t.Errorf("Expected worker count %d, got %d", tt.workerCount, processor.workerCount)
				}
			}
		})
	}
}