f2b/cmd/metrics_cmd.go
Ismo Vuorinen 605f2b9580 refactor: linting, simplification and fixes (#119)
* refactor: consolidate test helpers and reduce code duplication

- Fix prealloc lint issue in cmd_logswatch_test.go
- Add validateIPAndJails helper to consolidate IP/jail validation
- Add WithTestRunner/WithTestSudoChecker helpers for cleaner test setup
- Replace setupBasicMockResponses duplicates with StandardMockSetup
- Add SetupStandardResponses/SetupJailResponses to MockRunner
- Delegate cmd context helpers to fail2ban implementations
- Document context wrapper pattern in context_helpers.go

* refactor: consolidate duplicate code patterns across cmd and fail2ban packages

Add helper functions to reduce code duplication found by dupl:

- safeCloseFile/safeCloseReader: centralize file close error logging
- createTimeoutContext: consolidate timeout context creation pattern
- withContextCheck: wrap context cancellation checks
- recordOperationMetrics: unify metrics recording for commands/clients

Also includes Phase 1 consolidations:
- copyBuckets helper for metrics snapshots
- Table-driven context extraction in logging
- processWithValidation helper for IP processors
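
To illustrate the kind of helper being introduced, here is a minimal sketch of safeCloseFile and createTimeoutContext; the logger interface, signatures, and package placement are assumptions rather than the repository's actual code:

import (
	"context"
	"os"
	"time"
)

// errorLogger is a stand-in for the project's logger; only Errorf is needed here.
type errorLogger interface {
	Errorf(format string, args ...interface{})
}

// safeCloseFile centralizes the "close and log the error" pattern so call sites can defer it.
func safeCloseFile(f *os.File, log errorLogger) {
	if err := f.Close(); err != nil {
		log.Errorf("failed to close %s: %v", f.Name(), err)
	}
}

// createTimeoutContext wraps the repeated context.WithTimeout call behind one helper.
func createTimeoutContext(parent context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
	return context.WithTimeout(parent, timeout)
}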

* refactor: consolidate LoggerInterface by embedding LoggerEntry

Both interfaces had identical method signatures. LoggerInterface now
embeds LoggerEntry to eliminate code duplication.
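
The shape of that change, sketched with the eight method names listed later in this message (any methods unique to LoggerInterface, such as WithFields, are omitted here):

// LoggerEntry defines the shared logging surface.
type LoggerEntry interface {
	Debug(args ...interface{})
	Info(args ...interface{})
	Warn(args ...interface{})
	Error(args ...interface{})
	Debugf(format string, args ...interface{})
	Infof(format string, args ...interface{})
	Warnf(format string, args ...interface{})
	Errorf(format string, args ...interface{})
}

// LoggerInterface used to repeat the same eight signatures; it now only embeds LoggerEntry.
type LoggerInterface interface {
	LoggerEntry
	// Methods specific to LoggerInterface (if any) stay here.
}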

* refactor: consolidate test framework helpers and fix test patterns

- Add checkJSONFieldValue and failMissingJSONField helpers to reduce
  duplication in JSON assertion methods
- Add ParallelTimeout to default test config
- Fix test to use WithTestRunner inside test loop for proper mock scoping

* refactor: unify ban/unban operations with OperationType pattern

Introduce OperationType struct to consolidate duplicate ban/unban logic:
- Add ProcessOperation and ProcessOperationWithContext generic functions
- Add ProcessOperationParallel and ProcessOperationParallelWithContext
- Existing ProcessBan*/ProcessUnban* functions now delegate to generic versions
- Reduces ~120 lines of duplicate code between ban and unban operations
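
A rough sketch of the pattern; the field names, the Execute signature, and the non-generic form shown here are assumptions (the commit's functions are generic), and the logging call is reduced to a comment:

import "context"

// OperationType captures the parts that differ between ban and unban.
type OperationType struct {
	Name    string // "ban" or "unban"
	Message string // shared.MsgBanResult or shared.MsgUnbanResult
	Execute func(ctx context.Context, jail, ip string) error
}

// ProcessOperationWithContext is the shared path both ban and unban delegate to.
func ProcessOperationWithContext(ctx context.Context, opType OperationType, jail, ip string) error {
	if err := ctx.Err(); err != nil {
		return err
	}
	err := opType.Execute(ctx, jail, ip)
	// Logger.WithFields(...).Info(opType.Message) is emitted here so ban and unban
	// keep their distinct result messages (restored by a later fix in this PR).
	return err
}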

* refactor: consolidate time parsing cache pattern

Add ParseWithLayout method to BoundedTimeCache that consolidates the
cache-lookup-parse-store pattern. FastTimeCache and TimeParsingCache
now delegate to this method instead of duplicating the logic.
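
The consolidated lookup-parse-store flow, sketched against an assumed internal shape for BoundedTimeCache (the real fields and eviction policy may differ):

import (
	"sync"
	"time"
)

// Assumed shape: a bounded, mutex-guarded map keyed by layout and value.
type BoundedTimeCache struct {
	mu         sync.RWMutex
	entries    map[string]time.Time
	maxEntries int
}

// ParseWithLayout returns a cached result when present; otherwise it parses the value,
// stores it while the cache has room, and returns it.
func (c *BoundedTimeCache) ParseWithLayout(layout, value string) (time.Time, error) {
	key := layout + "\x00" + value
	c.mu.RLock()
	cached, ok := c.entries[key]
	c.mu.RUnlock()
	if ok {
		return cached, nil
	}
	parsed, err := time.Parse(layout, value)
	if err != nil {
		return time.Time{}, err
	}
	c.mu.Lock()
	if len(c.entries) < c.maxEntries {
		c.entries[key] = parsed
	}
	c.mu.Unlock()
	return parsed, nil
}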

* refactor: consolidate command execution patterns in fail2ban

- Add validateCommandExecution helper for command/argument validation
- Add runWithTimerContext helper for timed runner operations
- Add executeIPActionWithContext to unify BanIP/UnbanIP implementations
- Reduces duplicate validation and execution boilerplate

* refactor: consolidate logrus adapter with embedded loggerCore

Introduce loggerCore type that provides the 8 standard logging methods
(Debug, Info, Warn, Error, Debugf, Infof, Warnf, Errorf). Both
logrusAdapter and logrusEntryAdapter now embed this type, eliminating
16 duplicate method implementations.
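
Roughly, the consolidation looks like this; logrus.FieldLogger is satisfied by both *logrus.Logger and *logrus.Entry, and the adapters' other fields are omitted:

import "github.com/sirupsen/logrus"

// loggerCore implements the eight shared methods exactly once.
type loggerCore struct {
	l logrus.FieldLogger
}

func (c loggerCore) Debug(args ...interface{})                { c.l.Debug(args...) }
func (c loggerCore) Infof(format string, args ...interface{}) { c.l.Infof(format, args...) }

// ...Info, Warn, Error, Debugf, Warnf and Errorf follow the same one-line pattern.

// Both adapters embed loggerCore instead of duplicating sixteen method bodies.
type logrusAdapter struct{ loggerCore }
type logrusEntryAdapter struct{ loggerCore }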

* refactor: consolidate path validation patterns

- Add validateConfigPathWithFallback helper in cmd/config_utils.go
  for the validate-or-fallback-with-logging pattern
- Add validateClientPath helper in fail2ban/helpers.go for client
  path validation delegation

* fix: add context cancellation checks to wrapper functions

- wrapWithContext0/1/2 now check ctx.Err() before invoking the wrapped function
- WithCommand now validates and trims empty command strings
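
For the one-argument wrapper the added check looks roughly like this (the wrappers' real signatures are assumptions):

import "context"

// wrapWithContext1 adapts a one-argument function to a context-aware signature and
// now refuses to invoke it once the context is cancelled or past its deadline.
func wrapWithContext1[T any](fn func(T) error) func(context.Context, T) error {
	return func(ctx context.Context, arg T) error {
		if err := ctx.Err(); err != nil {
			return err
		}
		return fn(arg)
	}
}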

* refactor: extract formatLatencyBuckets for deterministic metrics output

Add formatLatencyBuckets helper that writes latency bucket distribution
with sorted keys for deterministic output, eliminating duplicate
formatting code for command and client latency buckets.

* refactor: add generic setNestedMapValue helper for mock configuration

Add setNestedMapValue[T] generic helper that consolidates the repeated
pattern of mutex-protected nested map initialization and value setting
used by SetBanError, SetBanResult, SetUnbanError, and SetUnbanResult.
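
A plausible shape for the helper; the jail/IP keying and the caller-owned mutex are inferred from the four setters it replaces:

import "sync"

// setNestedMapValue sets m[outer][inner] = value, allocating the inner map on first use.
// It assumes the outer map is already allocated, and the caller's mutex is passed in so
// each Set* method keeps its existing locking.
func setNestedMapValue[T any](mu *sync.Mutex, m map[string]map[string]T, outer, inner string, value T) {
	mu.Lock()
	defer mu.Unlock()
	if m[outer] == nil {
		m[outer] = make(map[string]T)
	}
	m[outer][inner] = value
}

// e.g. SetBanError(jail, ip, err) becomes setNestedMapValue(&r.mu, r.banErrors, jail, ip, err)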

* fix: use cmd.Context() for signal propagation and correct mock status

- ExecuteIPCommand now uses cmd.Context() instead of context.Background()
  to inherit Cobra's signal cancellation
- MockRunner.SetupJailResponses uses shared.Fail2BanStatusSuccess ("0")
  instead of literal "1" for proper success path simulation
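
The ExecuteIPCommand change boils down to deriving the timeout context from cmd.Context() instead of context.Background(); sketched inside a Cobra RunE, with the timeout value and the BanIP call purely illustrative:

RunE: func(cmd *cobra.Command, args []string) error {
	// cmd.Context() is cancelled along with Cobra's root context (e.g. on SIGINT),
	// so the derived timeout context inherits signal cancellation too.
	ctx, cancel := context.WithTimeout(cmd.Context(), 30*time.Second)
	defer cancel()
	return client.BanIP(ctx, jail, ip)
},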

* fix: restore operation-specific log messages in ProcessOperationWithContext

Add back the Logger.WithFields().Info(opType.Message) call that was lost
during refactoring. This restores the distinction between ban and unban
operation messages (shared.MsgBanResult vs shared.MsgUnbanResult).

* fix: return aggregated errors from parallel operations

Previously, errors from individual parallel operations were silently
swallowed: they were converted to status strings but never returned to callers.

Now processOperations collects all errors and returns them aggregated
via errors.Join, allowing callers to distinguish partial failures from
complete success while still receiving all results.
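
The collection pattern, sketched with assumed operation and result types (the real processOperations signature differs):

import (
	"context"
	"errors"
	"sync"
)

// processOperations runs every operation, keeps every result, and joins all errors
// so callers can tell partial failure from complete success.
func processOperations(ctx context.Context, ops []func(context.Context) (string, error)) ([]string, error) {
	results := make([]string, len(ops))
	errs := make([]error, len(ops))
	var wg sync.WaitGroup
	for i, op := range ops {
		wg.Add(1)
		go func(i int, op func(context.Context) (string, error)) {
			defer wg.Done()
			results[i], errs[i] = op(ctx)
		}(i, op)
	}
	wg.Wait()
	// errors.Join ignores nil entries and returns nil when every operation succeeded.
	return results, errors.Join(errs...)
}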

* fix: add input validation to processOperations before parallel execution

Validate IP and jail inputs at the start of processOperations() using
fail2ban.CachedValidateIP and CachedValidateJail. This prevents invalid
or malicious inputs (empty values, path traversal attempts, malformed
IPs) from reaching the operation functions. All validation errors are
aggregated and returned before any operations execute.
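
The gate runs before any goroutine is launched; roughly, with the validators' signatures and the error wrapping assumed:

// Reject invalid input up front and report every problem at once.
var validationErrs []error
for _, ip := range ips {
	if err := fail2ban.CachedValidateIP(ip); err != nil {
		validationErrs = append(validationErrs, fmt.Errorf("invalid IP %q: %w", ip, err))
	}
}
if err := fail2ban.CachedValidateJail(jail); err != nil {
	validationErrs = append(validationErrs, fmt.Errorf("invalid jail %q: %w", jail, err))
}
if len(validationErrs) > 0 {
	return nil, errors.Join(validationErrs...) // nothing executes when validation fails
}
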
2026-01-25 19:07:45 +02:00


package cmd

import (
	"encoding/json"
	"fmt"
	"io"
	"sort"
	"strings"

	"github.com/spf13/cobra"

	"github.com/ivuorinen/f2b/fail2ban"
	"github.com/ivuorinen/f2b/shared"
)

// MetricsCmd returns the metrics command with injected client and config
func MetricsCmd(_ fail2ban.Client, config *Config) *cobra.Command {
	return NewCommand(
		"metrics",
		"Show performance metrics",
		[]string{"stats"},
		func(cmd *cobra.Command, _ []string) error {
			// Get the global metrics instance
			metrics := GetGlobalMetrics()
			snapshot := metrics.GetSnapshot()

			// Output metrics based on format
			if config != nil && config.Format == JSONFormat {
				encoder := json.NewEncoder(GetCmdOutput(cmd))
				encoder.SetIndent("", " ")
				if err := encoder.Encode(snapshot); err != nil {
					return fmt.Errorf("failed to encode metrics: %w", err)
				}
			} else {
				// Plain text output - use a helper to simplify error handling
				if err := printMetricsPlain(GetCmdOutput(cmd), snapshot); err != nil {
					return fmt.Errorf("failed to print metrics: %w", err)
				}
			}
			return nil
		})
}

// printMetricsPlain prints metrics in plain text format
func printMetricsPlain(output io.Writer, snapshot MetricsSnapshot) error {
	// Use a string builder to build the output
	var sb strings.Builder

	sb.WriteString("F2B Performance Metrics\n")
	sb.WriteString("======================\n\n")

	// System metrics
	sb.WriteString("System:\n")
	sb.WriteString(fmt.Sprintf(" Uptime: %ds\n", snapshot.UptimeSeconds))
	sb.WriteString(fmt.Sprintf(" Max Memory: %.2f MB\n", float64(snapshot.MaxMemoryUsage)/(1024*1024)))
	sb.WriteString(fmt.Sprintf(" Goroutines: %d\n\n", snapshot.GoroutineCount))

	// Command metrics
	sb.WriteString("Commands:\n")
	sb.WriteString(fmt.Sprintf(shared.MetricsFmtTotalExecutions, snapshot.CommandExecutions))
	sb.WriteString(fmt.Sprintf(shared.MetricsFmtTotalFailures, snapshot.CommandFailures))
	if snapshot.CommandExecutions > 0 {
		avgLatency := float64(snapshot.CommandTotalDuration) / float64(snapshot.CommandExecutions)
		sb.WriteString(fmt.Sprintf(shared.MetricsFmtAverageLatencyTop, avgLatency))
	}
	sb.WriteString("\n")

	// Ban/Unban metrics
	sb.WriteString("Ban Operations:\n")
	sb.WriteString(fmt.Sprintf(" Ban Operations: %d (failures: %d)\n", snapshot.BanOperations, snapshot.BanFailures))
	sb.WriteString(
		fmt.Sprintf(" Unban Operations: %d (failures: %d)\n", snapshot.UnbanOperations, snapshot.UnbanFailures),
	)
	sb.WriteString("\n")

	// Client metrics
	sb.WriteString("Client Operations:\n")
	sb.WriteString(fmt.Sprintf(shared.MetricsFmtTotalOperations, snapshot.ClientOperations))
	sb.WriteString(fmt.Sprintf(shared.MetricsFmtTotalFailures, snapshot.ClientFailures))
	if snapshot.ClientOperations > 0 {
		avgLatency := float64(snapshot.ClientTotalDuration) / float64(snapshot.ClientOperations)
		sb.WriteString(fmt.Sprintf(shared.MetricsFmtAverageLatencyTop, avgLatency))
	}
	sb.WriteString("\n")

	// Validation metrics
	sb.WriteString("Validation:\n")
	sb.WriteString(fmt.Sprintf(" Cache Hits: %d\n", snapshot.ValidationCacheHits))
	sb.WriteString(fmt.Sprintf(" Cache Misses: %d\n", snapshot.ValidationCacheMiss))
	sb.WriteString(fmt.Sprintf(" Failures: %d\n", snapshot.ValidationFailures))
	if total := snapshot.ValidationCacheHits + snapshot.ValidationCacheMiss; total > 0 {
		hitRate := float64(snapshot.ValidationCacheHits) / float64(total) * 100
		sb.WriteString(fmt.Sprintf(" Cache Hit Rate: %.2f%%\n", hitRate))
	}
	sb.WriteString("\n")

	// Command latency distribution
	if len(snapshot.CommandLatencyBuckets) > 0 {
		sb.WriteString("Command Latency Distribution:\n")
		formatLatencyBuckets(&sb, snapshot.CommandLatencyBuckets)
		sb.WriteString("\n")
	}

	// Client latency distribution
	if len(snapshot.ClientLatencyBuckets) > 0 {
		sb.WriteString("Client Operation Latency Distribution:\n")
		formatLatencyBuckets(&sb, snapshot.ClientLatencyBuckets)
	}

	// Write the entire string at once
	_, err := output.Write([]byte(sb.String()))
	return err
}

// formatLatencyBuckets writes latency bucket distribution to the builder.
// Keys are sorted for deterministic output.
func formatLatencyBuckets(sb *strings.Builder, buckets map[string]LatencyBucketSnapshot) {
	// Sort keys for deterministic output
	keys := make([]string, 0, len(buckets))
	for name := range buckets {
		keys = append(keys, name)
	}
	sort.Strings(keys)

	for _, name := range keys {
		bucket := buckets[name]
		fmt.Fprintf(sb, shared.MetricsFmtOperationHeader, name)
		fmt.Fprintf(sb, shared.MetricsFmtLatencyUnder1ms, bucket.Under1ms)
		fmt.Fprintf(sb, shared.MetricsFmtLatencyUnder10ms, bucket.Under10ms)
		fmt.Fprintf(sb, shared.MetricsFmtLatencyUnder100ms, bucket.Under100ms)
		fmt.Fprintf(sb, shared.MetricsFmtLatencyUnder1s, bucket.Under1s)
		fmt.Fprintf(sb, shared.MetricsFmtLatencyUnder10s, bucket.Under10s)
		fmt.Fprintf(sb, shared.MetricsFmtLatencyOver10s, bucket.Over10s)
		fmt.Fprintf(sb, shared.MetricsFmtAverageLatency, bucket.GetAverageLatency())
	}
}
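
A minimal way to exercise printMetricsPlain from a test in the same package; the field values are arbitrary and the assertion is only a smoke check, not the repository's actual test:

import (
	"bytes"
	"strings"
	"testing"
)

func TestPrintMetricsPlainSmoke(t *testing.T) {
	var buf bytes.Buffer
	snap := MetricsSnapshot{
		CommandExecutions: 3,
		CommandFailures:   1,
		BanOperations:     2,
	}
	if err := printMetricsPlain(&buf, snap); err != nil {
		t.Fatalf("printMetricsPlain returned error: %v", err)
	}
	if !strings.Contains(buf.String(), "F2B Performance Metrics") {
		t.Fatalf("unexpected output:\n%s", buf.String())
	}
}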