ACK - ADVANCED TECHNIQUES
Now that you know the basics, let's get fancy.
The .ackrc File
Put common options in ~/.ackrc:
# Ignore certain directories
--ignore-dir=node_modules
--ignore-dir=vendor
--ignore-dir=.cache
# Ignore certain files
--ignore-file=match:/\.min\.js$/
# Always use color
--color
# Add custom file types
--type-set=mojo:ext:ep,pm,pl
Now every ack search automatically uses these settings.
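Two flags help when debugging these settings (both present in ack 2 and later): --dump shows which options were loaded and from where, and --noenv skips every config file for a single run:

```shell
# Show exactly which options ack picked up, and which file set them
ack --dump

# Ignore ~/.ackrc (and all other config) for one search
ack --noenv "pattern"
```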
Searching Multiple Patterns
You can use alternation, but sometimes it's cleaner to run separate searches chained with the shell's && operator (the second search runs only if the first found a match; use ; to run both unconditionally):
ack "TODO" && ack "FIXME"
Or use alternation in the pattern:
ack "TODO|FIXME|HACK|XXX"
Excluding Directories
Besides .ackrc, you can exclude on the command line:
ack --ignore-dir=logs "error"
ack --ignore-dir=backup --ignore-dir=tmp "pattern"
Only Match Certain Parts
Show only the matching part, not the whole line:
ack -o "\d+"
Given a file with "There are 42 items and 17 more":
Output: 42
17
This extracts just the matches. Incredibly useful for data extraction.
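The extracted stream feeds straight into other tools. A sketch that totals the extracted numbers (sizes.log is a hypothetical file; the awk stage works on any one-number-per-line stream):

```shell
# Sum every number ack pulls out of the file
ack -oh '\d+' sizes.log | awk '{ total += $1 } END { print total }'
```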
Print Only Unique Matches
Combine -o with sort and uniq (-h keeps file and line prefixes out of the stream):
ack -oh "\d+" | sort | uniq
Or with Perl, which preserves the original first-seen order:
ack -oh "\d+" | perl -ne 'print unless $seen{$_}++'
Searching Specific Files Only
Combine with find for ultimate control:
find . -name "*.log" -mtime -1 | xargs ack "ERROR"
This searches only .log files modified in the last day.
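Two hardening tweaks for that pipeline: find's -print0 with xargs -0 keeps filenames containing spaces intact, and ack 2+ can read the file list from stdin itself via -x, so xargs isn't needed at all:

```shell
# Null-separated handoff survives spaces in filenames
find . -name "*.log" -mtime -1 -print0 | xargs -0 ack "ERROR"

# ack 2+ only: -x reads the list of files to search from stdin
find . -name "*.log" -mtime -1 | ack -x "ERROR"
```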
Multi-line Patterns
By default, ack searches line by line. For multi-line patterns, you need different tools. But for most work, line-by-line is fine.
A workaround - search for patterns that might span lines:
ack -A 1 "function\s*$" # function at end of line, show next
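Outside ack, GNU grep built with PCRE support can do a true multi-line match by treating the whole input as one string; the pattern below is only an illustration:

```shell
# -z: NUL record separator, so the whole file is one "line";
# -P: Perl-compatible regex; (?s) lets . cross newlines; -o prints the match
grep -Pzo '(?s)function\s*\([^)]*\)\s*\{' file.js
```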
Using ack with xargs
Process ack results with other commands:
ack -l "deprecated" | xargs wc -l # Line counts of matching files
ack -l "TODO" | xargs grep -c "TODO" # TODO counts per file
ack -l "pattern" | xargs rm # DELETE matching files (careful!)
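Before running the destructive variant, note that ack 2+ supports --print0 together with -f, -g, -l, and -L; paired with xargs -0 it keeps odd filenames from being mangled, and an echo in front previews the command:

```shell
# Preview the deletion; drop "echo" only once the list looks right
ack -l --print0 "pattern" | xargs -0 echo rm
```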
Searching Binary Files
ack skips binary files, and there is no flag to include them. (ack 1.x had -a/--all to widen the search to unrecognized file types, but ack 2 made searching all text files the default and dropped the flag; binary files are always skipped.) When you genuinely need to search binary data, fall back to grep:
grep -a "pattern" file.bin
The -a (--text) flag tells grep to treat binary files as if they were text.
Outputting JSON/Structured Data
ack doesn't have built-in JSON output, but you can fake it (-l alongside -c limits the counts to files that actually match):
ack -cl "error" | perl -pe 's/^(.+):(\d+)$/{"file":"$1","count":$2}/'
For serious structured output, use ripgrep (rg) which has --json.
ack vs grep vs ripgrep
When to use each:
grep: Comes with your system. Basic but universal.
ack: Great for code. Smart defaults. Readable output.
ripgrep: Fastest. Good for huge codebases. Has JSON output.
For most people, ack is the sweet spot. It's fast enough and the smart defaults save tons of typing.
Real World Scenario: Auditing a Codebase
You just inherited a codebase. Let's audit it:
# Find hardcoded passwords or API keys
ack -i "password\s*=|api_key\s*=|secret\s*="
# Find debug statements
ack "console\.log|print\s*\(|var_dump"
# Find SQL that might be injectable (request input concatenated into a query)
ack "(\$_GET|\$_POST).*(SELECT|INSERT|UPDATE|DELETE)"
# Find TODO/FIXME comments
ack "TODO|FIXME|HACK|XXX|BUG"
# Find files without proper headers
ack -L "^#!|Copyright|License"
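The checks above can be wrapped in a loop that reports how many files each pattern hits (the pattern list is abbreviated here):

```shell
for pat in 'password\s*=|api_key\s*=|secret\s*=' \
           'console\.log|print\s*\(|var_dump' \
           'TODO|FIXME|HACK|XXX|BUG'; do
  printf '%-40s %s file(s)\n' "$pat" "$(ack -il "$pat" | wc -l | tr -d ' ')"
done
```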
Real World Scenario: Parsing AI Output
You ran a batch vision analysis. Results are scattered across logs:
# Find all pass/fail decisions
ack -o "(PASS|REJECT):\s*\w+" logs/
# Extract confidence scores
ack -o "confidence:\s*\d+\.\d+" logs/
# Find files flagged for review
ack -l "NEEDS_REVIEW" logs/
# Count rejections by type (-h drops file:line prefixes so uniq counts the types)
ack -oh "REJECT:\s*\w+" logs/ | sort | uniq -c | sort -rn
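One more pass over the same logs: averaging the extracted confidence scores. -h strips ack's file prefixes, and awk splits on the colon to reach the number (this assumes scores always look like confidence: N.NN):

```shell
ack -oh 'confidence:\s*\d+\.\d+' logs/ | awk -F': *' '{ sum += $NF; n++ } END { if (n) printf "%.3f\n", sum / n }'
```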
Combining ack with Other Unix Tools
ack plays well with the Unix philosophy:
ack "pattern" | head -20 # First 20 matches
ack "pattern" | tail -10 # Last 10 matches
ack -c "pattern" | sort -t: -k2 -rn # Sort by count
ack -l "pattern" | wc -l # Count matching files
ack -oh "pattern" | sort -u # Unique matches sorted (-h drops file prefixes)
Pro Tips
1. Start broad, then narrow down. Better to see too much than miss
something.
2. Use -l first to see which files match, then investigate.
3. The -C 2 (context) flag helps you understand matches without
opening files.
4. Create aliases for common searches in your shell config:
alias todos="ack 'TODO|FIXME|HACK'"
alias hardcoded="ack -i 'password|secret|api.?key'"
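When the search needs an argument in the middle of the command, an alias falls short; a small shell function works instead (the name todos and the default path are illustrative):

```shell
# Search a given directory for TODO-style markers (defaults to .)
todos() {
  ack 'TODO|FIXME|HACK' "${1:-.}"
}
# usage: todos src/
```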