Mirror of https://github.com/ivuorinen/f2b.git, synced 2026-01-26 03:13:58 +00:00
* refactor: consolidate test helpers and reduce code duplication
- Fix prealloc lint issue in cmd_logswatch_test.go
- Add validateIPAndJails helper to consolidate IP/jail validation
- Add WithTestRunner/WithTestSudoChecker helpers for cleaner test setup
- Replace setupBasicMockResponses duplicates with StandardMockSetup
- Add SetupStandardResponses/SetupJailResponses to MockRunner
- Delegate cmd context helpers to fail2ban implementations
- Document context wrapper pattern in context_helpers.go
* refactor: consolidate duplicate code patterns across cmd and fail2ban packages
Add helper functions to reduce code duplication found by dupl:
- safeCloseFile/safeCloseReader: centralize file close error logging
- createTimeoutContext: consolidate timeout context creation pattern
- withContextCheck: wrap context cancellation checks
- recordOperationMetrics: unify metrics recording for commands/clients
Also includes Phase 1 consolidations:
- copyBuckets helper for metrics snapshots
- Table-driven context extraction in logging
- processWithValidation helper for IP processors
* refactor: consolidate LoggerInterface by embedding LoggerEntry
Both interfaces had identical method signatures. LoggerInterface now
embeds LoggerEntry to eliminate code duplication.
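A minimal sketch of this kind of interface embedding, with illustrative method sets (the repository's actual interfaces may carry more methods):

package logging // illustrative package and method names, not the repository's exact API

// LoggerEntry declares the shared leveled-logging methods.
type LoggerEntry interface {
	Debug(args ...any)
	Info(args ...any)
	Warn(args ...any)
	Error(args ...any)
}

// LoggerInterface embeds LoggerEntry instead of repeating identical signatures.
type LoggerInterface interface {
	LoggerEntry
}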
* refactor: consolidate test framework helpers and fix test patterns
- Add checkJSONFieldValue and failMissingJSONField helpers to reduce
duplication in JSON assertion methods
- Add ParallelTimeout to default test config
- Fix test to use WithTestRunner inside test loop for proper mock scoping
* refactor: unify ban/unban operations with OperationType pattern
Introduce OperationType struct to consolidate duplicate ban/unban logic:
- Add ProcessOperation and ProcessOperationWithContext generic functions
- Add ProcessOperationParallel and ProcessOperationParallelWithContext
- Existing ProcessBan*/ProcessUnban* functions now delegate to generic versions
- Reduces ~120 lines of duplicate code between ban and unban operations
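A rough sketch of this consolidation, assuming hypothetical field and function names (the repository's OperationType may differ):

package fail2ban // sketch only; field and function names are illustrative

import (
	"context"
	"fmt"
)

// OperationType captures everything that differs between ban and unban.
type OperationType struct {
	Name    string                                           // "ban" or "unban"
	Message string                                           // result log message for this operation
	Action  func(ctx context.Context, jail, ip string) error // underlying fail2ban call
}

// ProcessOperationWithContext is the single shared code path; the ban and unban
// wrappers only differ in the OperationType value they pass in.
func ProcessOperationWithContext(ctx context.Context, op OperationType, jail, ip string) error {
	if err := op.Action(ctx, jail, ip); err != nil {
		return fmt.Errorf("%s %s in jail %s: %w", op.Name, ip, jail, err)
	}
	return nil
}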
* refactor: consolidate time parsing cache pattern
Add ParseWithLayout method to BoundedTimeCache that consolidates the
cache-lookup-parse-store pattern. FastTimeCache and TimeParsingCache
now delegate to this method instead of duplicating the logic.
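A minimal sketch of the cache-lookup-parse-store pattern this describes; the internals below are illustrative, not the repository's implementation:

package timecache // sketch; assumes one layout per cache instance, keyed by the raw value

import (
	"sync"
	"time"
)

// BoundedTimeCache memoizes parsed timestamps up to a fixed size.
type BoundedTimeCache struct {
	mu      sync.Mutex
	entries map[string]time.Time
	max     int // maximum number of cached entries
}

// ParseWithLayout returns a cached value when present, otherwise parses the
// value with the given layout and stores the result if there is room.
func (c *BoundedTimeCache) ParseWithLayout(layout, value string) (time.Time, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if t, ok := c.entries[value]; ok {
		return t, nil
	}
	t, err := time.Parse(layout, value)
	if err != nil {
		return time.Time{}, err
	}
	if c.entries == nil {
		c.entries = make(map[string]time.Time)
	}
	if len(c.entries) < c.max {
		c.entries[value] = t
	}
	return t, nil
}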
* refactor: consolidate command execution patterns in fail2ban
- Add validateCommandExecution helper for command/argument validation
- Add runWithTimerContext helper for timed runner operations
- Add executeIPActionWithContext to unify BanIP/UnbanIP implementations
- Reduces duplicate validation and execution boilerplate
* refactor: consolidate logrus adapter with embedded loggerCore
Introduce loggerCore type that provides the 8 standard logging methods
(Debug, Info, Warn, Error, Debugf, Infof, Warnf, Errorf). Both
logrusAdapter and logrusEntryAdapter now embed this type, eliminating
16 duplicate method implementations.
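A condensed sketch of the embedding (four of the eight methods shown; adapter internals are illustrative):

package logging // sketch; the repository's adapter fields may differ

import "github.com/sirupsen/logrus"

// loggerCore implements the standard logging methods exactly once.
type loggerCore struct {
	log logrus.FieldLogger // satisfied by both *logrus.Logger and *logrus.Entry
}

func (c loggerCore) Debug(args ...any)                 { c.log.Debug(args...) }
func (c loggerCore) Info(args ...any)                  { c.log.Info(args...) }
func (c loggerCore) Debugf(format string, args ...any) { c.log.Debugf(format, args...) }
func (c loggerCore) Infof(format string, args ...any)  { c.log.Infof(format, args...) }

// Both adapters gain the methods by embedding rather than re-implementing them.
type logrusAdapter struct{ loggerCore }
type logrusEntryAdapter struct{ loggerCore }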
* refactor: consolidate path validation patterns
- Add validateConfigPathWithFallback helper in cmd/config_utils.go
for the validate-or-fallback-with-logging pattern
- Add validateClientPath helper in fail2ban/helpers.go for client
path validation delegation
* fix: add context cancellation checks to wrapper functions
- wrapWithContext0/1/2 now check ctx.Err() before invoking wrapped function
- WithCommand now validates and trims empty command strings
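A minimal sketch of the cancellation guard added to the wrappers (a generic form is shown; the actual wrappers may not be generic):

package cmd // sketch; the real wrappers' signatures may differ

import "context"

// wrapWithContext1 refuses to call fn once ctx is canceled or past its deadline.
func wrapWithContext1[T any](ctx context.Context, fn func(T) error) func(T) error {
	return func(arg T) error {
		if err := ctx.Err(); err != nil {
			return err // context already done: skip the wrapped call
		}
		return fn(arg)
	}
}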
* refactor: extract formatLatencyBuckets for deterministic metrics output
Add formatLatencyBuckets helper that writes latency bucket distribution
with sorted keys for deterministic output, eliminating duplicate
formatting code for command and client latency buckets.
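A sketch of sorted-key formatting that yields deterministic output (the bucket key type is assumed here to be a string label):

package metrics // sketch; field names and bucket key type are illustrative

import (
	"fmt"
	"sort"
	"strings"
)

// formatLatencyBuckets writes buckets in sorted key order so output is stable
// across runs regardless of Go's randomized map iteration.
func formatLatencyBuckets(sb *strings.Builder, buckets map[string]int64) {
	keys := make([]string, 0, len(buckets))
	for k := range buckets {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(sb, "  %s: %d\n", k, buckets[k])
	}
}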
* refactor: add generic setNestedMapValue helper for mock configuration
Add setNestedMapValue[T] generic helper that consolidates the repeated
pattern of mutex-protected nested map initialization and value setting
used by SetBanError, SetBanResult, SetUnbanError, and SetUnbanResult.
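A sketch of such a generic helper, under the stated assumption that the outer map is already allocated by the mock's constructor:

package testutil // sketch; names are illustrative, not the repository's exact API

import "sync"

// setNestedMapValue initializes the inner map if needed and stores the value,
// all under the mock's lock. Setters like SetBanError become one-liners.
// Assumes the outer map m is non-nil.
func setNestedMapValue[T any](mu *sync.Mutex, m map[string]map[string]T, jail, ip string, value T) {
	mu.Lock()
	defer mu.Unlock()
	if m[jail] == nil {
		m[jail] = make(map[string]T)
	}
	m[jail][ip] = value
}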
* fix: use cmd.Context() for signal propagation and correct mock status
- ExecuteIPCommand now uses cmd.Context() instead of context.Background()
to inherit Cobra's signal cancellation
- MockRunner.SetupJailResponses uses shared.Fail2BanStatusSuccess ("0")
instead of literal "1" for proper success path simulation
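A minimal Cobra sketch of the context change (the command wiring and the executeIPCommand helper are hypothetical):

package cmd // sketch; executeIPCommand is a hypothetical stand-in

import (
	"context"

	"github.com/spf13/cobra"
)

func newBanCmd() *cobra.Command {
	return &cobra.Command{
		Use:  "ban <jail> <ip>",
		Args: cobra.ExactArgs(2),
		RunE: func(cmd *cobra.Command, args []string) error {
			// cmd.Context() inherits whatever was passed to ExecuteContext
			// (typically a signal-aware context), unlike context.Background().
			return executeIPCommand(cmd.Context(), args[0], args[1])
		},
	}
}

func executeIPCommand(ctx context.Context, jail, ip string) error {
	_ = ctx // placeholder; the real implementation runs the fail2ban action
	_, _ = jail, ip
	return nil
}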
* fix: restore operation-specific log messages in ProcessOperationWithContext
Add back Logger.WithFields().Info(opType.Message) call that was lost
during refactoring. This restores the distinction between ban and unban
operation messages (shared.MsgBanResult vs shared.MsgUnbanResult).
* fix: return aggregated errors from parallel operations
Previously, errors from individual parallel operations were silently
swallowed: converted to status strings but never returned to callers.
Now processOperations collects all errors and returns them aggregated
via errors.Join, allowing callers to distinguish partial failures from
complete success while still receiving all results.
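A sequential sketch of the aggregation behavior (the real code runs the operations in parallel; only the errors.Join part is shown faithfully):

package fail2ban // sketch; names are illustrative

import "errors"

// processOperations keeps every result and returns the individual failures
// joined, so callers can distinguish partial failure from complete success.
func processOperations(ops []func() (string, error)) ([]string, error) {
	results := make([]string, 0, len(ops))
	var errs []error
	for _, op := range ops {
		res, err := op()
		if err != nil {
			errs = append(errs, err)
		}
		results = append(results, res)
	}
	return results, errors.Join(errs...) // nil when every operation succeeded
}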
* fix: add input validation to processOperations before parallel execution
Validate IP and jail inputs at the start of processOperations() using
fail2ban.CachedValidateIP and CachedValidateJail. This prevents invalid
or malicious inputs (empty values, path traversal attempts, malformed
IPs) from reaching the operation functions. All validation errors are
aggregated and returned before any operations execute.
441 lines · 12 KiB · Go
package fail2ban

import (
	"bufio"
	"context"
	"errors"
	"fmt"
	"io"
	"net"
	"os"
	"path/filepath"
	"sort"
	"strconv"
	"strings"

	"github.com/ivuorinen/f2b/shared"
)

/*
Package fail2ban provides log reading and filtering utilities for Fail2Ban logs.

This file contains logic for reading, parsing, and filtering Fail2Ban log files,
including support for rotated and compressed logs.
*/

// GetLogLines reads Fail2Ban log files (current and rotated) and filters lines by jail and/or IP.
//
// jailFilter: jail name to filter by (empty or "all" for all jails)
// ipFilter: IP address to filter by (empty or "all" for all IPs)
//
// Returns a slice of matching log lines, or an error.
// This function uses streaming to limit memory usage.
// Context parameter supports timeout and cancellation of file I/O operations.
func GetLogLines(ctx context.Context, jailFilter string, ipFilter string) ([]string, error) {
	return GetLogLinesWithLimit(ctx, jailFilter, ipFilter, shared.DefaultLogLinesLimit) // Default limit for safety
}

// GetLogLinesWithLimit returns log lines with configurable limits for memory management.
// Context parameter supports timeout and cancellation of file I/O operations.
func GetLogLinesWithLimit(ctx context.Context, jailFilter string, ipFilter string, maxLines int) ([]string, error) {
	// Validate maxLines parameter
	if maxLines < 0 {
		return nil, fmt.Errorf(shared.ErrMaxLinesNegative, maxLines)
	}

	if maxLines > shared.MaxLogLinesLimit {
		return nil, fmt.Errorf(shared.ErrMaxLinesExceedsLimit, shared.MaxLogLinesLimit)
	}

	if maxLines == 0 {
		return []string{}, nil
	}

	// Sanitize filter parameters
	jailFilter = strings.TrimSpace(jailFilter)
	ipFilter = strings.TrimSpace(ipFilter)

	// Validate jail filter
	if jailFilter != "" {
		if err := ValidateJail(jailFilter); err != nil {
			return nil, fmt.Errorf("invalid jail filter: %w", err)
		}
	}

	// Validate IP filter
	if ipFilter != "" && ipFilter != shared.AllFilter {
		if net.ParseIP(ipFilter) == nil {
			return nil, fmt.Errorf(shared.ErrInvalidIPAddress, ipFilter)
		}
	}

	config := LogReadConfig{
		MaxLines:    maxLines,
		MaxFileSize: shared.DefaultMaxFileSize,
		JailFilter:  jailFilter,
		IPFilter:    ipFilter,
		BaseDir:     GetLogDir(),
	}

	return collectLogLines(ctx, GetLogDir(), config)
}

// collectLogLines reads log files under the provided directory using the supplied configuration.
func collectLogLines(ctx context.Context, logDir string, baseConfig LogReadConfig) ([]string, error) {
	if baseConfig.MaxLines == 0 {
		return []string{}, nil
	}

	pattern := filepath.Join(logDir, "fail2ban.log*")
	files, err := filepath.Glob(pattern)
	if err != nil {
		return nil, fmt.Errorf("error listing log files: %w", err)
	}

	if len(files) == 0 {
		return []string{}, nil
	}

	currentLog, rotated := parseLogFiles(files)

	var allLines []string

	appendAndTrim := func(lines []string) {
		if len(lines) == 0 {
			return
		}
		allLines = append(allLines, lines...)
		if baseConfig.MaxLines > 0 && len(allLines) > baseConfig.MaxLines {
			allLines = allLines[len(allLines)-baseConfig.MaxLines:]
		}
	}

	for _, rotatedFile := range rotated {
		fileLines, err := readLogLinesFromFile(ctx, rotatedFile.path, baseConfig)
		if err != nil {
			if ctx != nil && errors.Is(err, ctx.Err()) {
				return nil, err
			}
			getLogger().WithError(err).
				WithField(shared.LogFieldFile, rotatedFile.path).
				Error("Failed to read rotated log file")
			continue
		}
		appendAndTrim(fileLines)
	}

	if currentLog != "" {
		fileLines, err := readLogLinesFromFile(ctx, currentLog, baseConfig)
		if err != nil {
			if ctx != nil && errors.Is(err, ctx.Err()) {
				return nil, err
			}
			getLogger().WithError(err).
				WithField(shared.LogFieldFile, currentLog).
				Error("Failed to read current log file")
		} else {
			appendAndTrim(fileLines)
		}
	}

	return allLines, nil
}

func readLogLinesFromFile(ctx context.Context, path string, baseConfig LogReadConfig) ([]string, error) {
	fileConfig := baseConfig
	fileConfig.MaxLines = 0

	if ctx != nil {
		return streamLogFileWithContext(ctx, path, fileConfig)
	}
	return streamLogFile(path, fileConfig)
}

// parseLogFiles parses log file names and returns the current log and a slice of rotated logs
// (sorted oldest to newest).
func parseLogFiles(files []string) (string, []rotatedLog) {
	var currentLog string
	var rotated []rotatedLog

	for _, path := range files {
		base := filepath.Base(path)
		if base == shared.LogFileName {
			currentLog = path
		} else if strings.HasPrefix(base, shared.LogFilePrefix) {
			if num := extractLogNumber(base); num >= 0 {
				rotated = append(rotated, rotatedLog{num: num, path: path})
			}
		}
	}

	// Sort rotated logs by number descending (highest number = oldest log)
	sort.Slice(rotated, func(i, j int) bool {
		return rotated[i].num > rotated[j].num
	})

	return currentLog, rotated
}

// extractLogNumber extracts the rotation number from a log file name (e.g., "fail2ban.log.2.gz" -> 2).
func extractLogNumber(base string) int {
	numPart := strings.TrimPrefix(base, "fail2ban.log.")
	numPart = strings.TrimSuffix(numPart, shared.GzipExtension)
	if n, err := strconv.Atoi(numPart); err == nil {
		return n
	}
	return -1
}

// rotatedLog represents a rotated log file with its rotation number.
type rotatedLog struct {
	num  int
	path string
}

// LogReadConfig holds configuration for streaming log reading
type LogReadConfig struct {
	MaxLines    int    // Maximum number of lines to read (0 = unlimited)
	MaxFileSize int64  // Maximum file size to process in bytes (0 = unlimited)
	JailFilter  string // Filter by jail name (empty = no filter)
	IPFilter    string // Filter by IP address (empty = no filter)
	BaseDir     string // Base directory for log validation
}

// resolveBaseDir returns the base directory from config or falls back to GetLogDir()
func resolveBaseDir(config LogReadConfig) string {
	if config.BaseDir != "" {
		return config.BaseDir
	}
	return GetLogDir()
}

// streamLogFile reads a log file line by line with memory limits and filtering
func streamLogFile(path string, config LogReadConfig) ([]string, error) {
	return streamLogFileWithContext(context.Background(), path, config)
}

// streamLogFileWithContext reads a log file line by line with memory limits,
// filtering, and context support for timeouts
func streamLogFileWithContext(ctx context.Context, path string, config LogReadConfig) ([]string, error) {
	// Check context before starting
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	default:
	}

	baseDir := resolveBaseDir(config)
	cleanPath, err := validateLogPathForDir(ctx, path, baseDir)
	if err != nil {
		return nil, err
	}

	if shouldSkipFile(cleanPath, config.MaxFileSize) {
		return []string{}, nil
	}

	scanner, cleanup, err := createLogScanner(cleanPath)
	if err != nil {
		return nil, err
	}
	defer cleanup()

	return scanLogLinesWithContext(ctx, scanner, config)
}

// validateLogPath validates and sanitizes the log file path with comprehensive security checks
func validateLogPath(path string) (string, error) {
	return validateLogPathForDir(context.Background(), path, GetLogDir())
}

func validateLogPathForDir(ctx context.Context, path string, baseDir string) (string, error) {
	return ValidateLogPath(ctx, path, baseDir)
}

// shouldSkipFile checks if a file should be skipped due to size limits
func shouldSkipFile(path string, maxFileSize int64) bool {
	if maxFileSize <= 0 {
		return false
	}

	if info, err := os.Stat(path); err == nil {
		if info.Size() > maxFileSize {
			getLogger().WithField(shared.LogFieldFile, path).WithField("size", info.Size()).
				Warn("Skipping large log file due to size limit")
			return true
		}
	}
	return false
}

// createLogScanner creates a scanner for the log file, handling gzip compression
func createLogScanner(path string) (*bufio.Scanner, func(), error) {
	// #nosec G304 - Path is validated and sanitized above
	const maxLineSize = 64 * 1024 // 64KB per line
	return CreateGzipAwareScannerWithBuffer(path, maxLineSize)
}

// scanLogLines scans lines from the scanner with filtering and limits
func scanLogLines(scanner *bufio.Scanner, config LogReadConfig) ([]string, error) {
	var lines []string
	lineCount := 0

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}

		if !passesFilters(line, config) {
			continue
		}

		lines = append(lines, line)
		lineCount++

		if config.MaxLines > 0 && lineCount >= config.MaxLines {
			break
		}
	}

	if err := scanner.Err(); err != nil {
		return nil, fmt.Errorf(shared.ErrScanLogFile, err)
	}

	return lines, nil
}

// scanLogLinesWithContext scans log lines with context support for timeout handling
func scanLogLinesWithContext(ctx context.Context, scanner *bufio.Scanner, config LogReadConfig) ([]string, error) {
	var lines []string
	lineCount := 0
	linesProcessed := 0

	for scanner.Scan() {
		// Check context periodically (every 100 lines to avoid excessive overhead)
		if linesProcessed%100 == 0 {
			select {
			case <-ctx.Done():
				return nil, ctx.Err()
			default:
			}
		}
		linesProcessed++

		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}

		if !passesFilters(line, config) {
			continue
		}

		lines = append(lines, line)
		lineCount++

		if config.MaxLines > 0 && lineCount >= config.MaxLines {
			break
		}
	}

	if err := scanner.Err(); err != nil {
		return nil, fmt.Errorf(shared.ErrScanLogFile, err)
	}

	return lines, nil
}

// passesFilters checks if a log line passes the configured filters
func passesFilters(line string, config LogReadConfig) bool {
	if config.JailFilter != "" && config.JailFilter != shared.AllFilter {
		jailPattern := fmt.Sprintf("[%s]", config.JailFilter)
		if !strings.Contains(line, jailPattern) {
			return false
		}
	}

	if config.IPFilter != "" && config.IPFilter != shared.AllFilter {
		if !strings.Contains(line, config.IPFilter) {
			return false
		}
	}

	return true
}

// readLogFile reads the contents of a log file, handling gzip compression if necessary.
// DEPRECATED: Use streamLogFile instead for better memory efficiency.
func readLogFile(path string) ([]byte, error) {
	// Validate path for security using comprehensive validation
	cleanPath, err := validateLogPath(path)
	if err != nil {
		return nil, err
	}

	// Use consolidated gzip detection utility
	reader, err := OpenGzipAwareReader(cleanPath)
	if err != nil {
		return nil, err
	}
	defer safeCloseReader(reader, cleanPath)

	return io.ReadAll(reader)
}

// OptimizedLogProcessor is a thin wrapper maintained for backwards compatibility
// with existing benchmarks and tests. Internally it delegates to the shared log collection
// helpers so we have a single codepath to maintain.
type OptimizedLogProcessor struct{}

// NewOptimizedLogProcessor creates a new optimized processor wrapper.
func NewOptimizedLogProcessor() *OptimizedLogProcessor {
	return &OptimizedLogProcessor{}
}

// GetLogLinesOptimized proxies to the shared collector to keep behavior identical
// while allowing benchmarks to exercise this entrypoint.
func (olp *OptimizedLogProcessor) GetLogLinesOptimized(jailFilter, ipFilter string, maxLines int) ([]string, error) {
	// Validate maxLines parameter
	if maxLines < 0 {
		return nil, fmt.Errorf(shared.ErrMaxLinesNegative, maxLines)
	}

	if maxLines > shared.MaxLogLinesLimit {
		return nil, fmt.Errorf(shared.ErrMaxLinesExceedsLimit, shared.MaxLogLinesLimit)
	}

	// Sanitize filter parameters
	jailFilter = strings.TrimSpace(jailFilter)
	ipFilter = strings.TrimSpace(ipFilter)

	config := LogReadConfig{
		MaxLines:    maxLines,
		MaxFileSize: shared.DefaultMaxFileSize,
		JailFilter:  jailFilter,
		IPFilter:    ipFilter,
		BaseDir:     GetLogDir(),
	}

	return collectLogLines(context.Background(), GetLogDir(), config)
}

// GetCacheStats is a no-op maintained for test compatibility.
// No caching is actually performed by this processor.
func (olp *OptimizedLogProcessor) GetCacheStats() (hits, misses int64) {
	return 0, 0
}

// ClearCaches is a no-op maintained for test compatibility.
// No caching is actually performed by this processor.
func (olp *OptimizedLogProcessor) ClearCaches() {
	// No-op: no cache state to clear
}

var optimizedLogProcessor = NewOptimizedLogProcessor()

// GetLogLinesUltraOptimized retains the legacy API that benchmarks expect while now
// sharing the simplified implementation.
func GetLogLinesUltraOptimized(jailFilter, ipFilter string, maxLines int) ([]string, error) {
	return optimizedLogProcessor.GetLogLinesOptimized(jailFilter, ipFilter, maxLines)
}
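A minimal usage sketch for the readers above (not part of the file), assuming the package is imported as github.com/ivuorinen/f2b/fail2ban:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ivuorinen/f2b/fail2ban"
)

func main() {
	// Bound the file I/O with a timeout; GetLogLinesWithLimit checks the
	// context between files and every 100 scanned lines.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Filter the sshd jail for a single (documentation) IP, keeping at most 200 lines.
	lines, err := fail2ban.GetLogLinesWithLimit(ctx, "sshd", "203.0.113.7", 200)
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range lines {
		fmt.Println(line)
	}
}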