Automated Recon Pipeline: Set It and Forget It
most people run recon once and forget to check back.
targets evolve. new subdomains appear. services change. vulnerabilities surface. you miss it all because you moved on to the next thing.
this pipeline inverts that model.
you configure it once. it runs continuously. it watches your targets while you sleep, cataloging every shift in the attack surface. when something interesting surfaces, it sends you a message. no more manual enumeration. no more forgetting to rescan. no more missing the subdomain that appeared last tuesday.
WHAT YOU GET:
THE COMPLETE SYSTEM (working docker container)
→ amass + subfinder for subdomain discovery
→ httpx for service detection and fingerprinting
→ aquatone for visual reconnaissance and screenshots
→ nmap for port scanning and service enumeration
→ nuclei for vulnerability detection (CVEs, misconfigs, exposed panels)
→ automated change detection comparing each scan to previous runs
→ notification system (Discord, Slack, Telegram webhooks)
→ cron scheduling so it runs automatically on your timeline
THE ORCHESTRATION LAYER (4 core scripts)
→ main reconnaissance workflow that chains all tools
→ diff detection that identifies what changed since last scan (sketched below)
→ cleanup system for managing archive rotation
→ notification dispatcher for alerts
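the diff step can be as simple as a comm(1) over sorted subdomain lists; a minimal bash sketch (process substitution needs bash), assuming each archive holds a plain-text subdomains.txt — filenames and paths are illustrative, not the shipped script:

    # compare the latest subdomain list against the previous run
    prev="archives/previous/subdomains.txt"
    curr="archives/current/subdomains.txt"

    # comm -13 prints lines present only in the second (sorted) input,
    # i.e. subdomains that appeared since the last scan
    new_subs=$(comm -13 <(sort "$prev") <(sort "$curr"))

    if [ -n "$new_subs" ]; then
        echo "new subdomains since last scan:"
        echo "$new_subs"
    fi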
THE CONFIGURATION SYSTEM
→ target list (add domains, pipeline watches them all; example below)
→ full config file controlling every aspect of scanning
→ notification webhooks for real-time alerts
→ cron schedule you define (every 6 hours, daily, weekly, whatever)
→ custom nuclei template support for target-specific checks
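the target list itself is deliberately boring: one root domain per line. a sketch of what it looks like (whether comment lines are supported is an assumption, not a documented feature):

    # config/targets.txt -- one root domain per line
    example.com
    example.org
    client-a.example.net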
THE DOCUMENTATION (70+ pages)
→ complete setup guide from zero to running pipeline
→ advanced customization for extending with additional tools
→ troubleshooting guide covering every common issue
→ quick reference cheat sheet for daily operations
→ integration examples (APIs, external databases, custom scripts)
WHAT IT DOES AUTOMATICALLY:
every scan cycle:
- discovers all subdomains using multiple passive/active sources
- probes every subdomain to identify live HTTP services
- captures screenshots of every live endpoint
- runs targeted port scans on discovered hosts
- executes vulnerability templates against all targets
- compares results to previous scan
- identifies new subdomains, new services, new vulnerabilities
- generates organized reports with timestamps
- sends notifications when thresholds are met
- archives everything for historical analysis (condensed cycle sketched below)
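condensed to its skeleton, one cycle looks roughly like this (the tool flags are real; the script structure, paths, and filenames are illustrative, not the shipped workflow):

    #!/bin/bash
    # one scan cycle, condensed -- illustrative sketch
    run="archives/$(date +%Y%m%d-%H%M%S)"; mkdir -p "$run"

    subfinder -dL config/targets.txt -silent | sort -u > "$run/subdomains.txt"   # discover
    httpx -l "$run/subdomains.txt" -silent -tech-detect > "$run/live.txt"        # probe + fingerprint
    nuclei -l "$run/live.txt" -severity medium,high,critical -o "$run/vulns.txt" # vuln templates

    # screenshots (aquatone), port scans (nmap), diff, notify, and the
    # current/ rotation follow the same pattern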
data it collects:
→ complete subdomain inventory (updated continuously)
→ live service catalog with tech stack detection
→ visual screenshots of every web interface
→ open port inventory across all discovered hosts
→ vulnerability findings by severity
→ change logs showing evolution over time
THIS ISN’T FOR:
→ people who want a GUI dashboard
→ people who need their hand held through Docker basics
→ people who won’t read documentation
→ people scanning targets they don’t own
THIS IS FOR:
→ people tired of running the same recon commands manually
→ people who understand the value of continuous monitoring
→ people comfortable with command line and containers
→ people authorized to test their targets
→ people who want intelligence gathering that never sleeps
REALISTIC EXPECTATIONS:
setup time:
- 15 minutes if you know Docker
- 30-60 minutes if you need to learn Docker basics
- add 10 minutes per notification channel you configure
first scan timeline:
- small target (1 domain): 10-30 minutes
- medium target (5-10 domains): 30-90 minutes
- large target (20+ domains): 2-4 hours
resource requirements:
- 2-4GB RAM during active scanning
- 500MB disk per scan archive
- modest CPU (any modern VPS works)
- reliable network connection
what you maintain:
- add new targets to config file
- review notifications when they arrive
- check archived results when you need them
- update tools occasionally (docker pull)
what runs itself:
- everything else
TECHNICAL DETAILS:
containerization:
→ complete Docker image with all tools pre-installed
→ no dependency conflicts or version hell
→ runs on Linux and Mac natively, Windows via Docker Desktop
→ isolated from your host system
→ reproducible builds
data organization:
→ timestamped archives for every scan (layout sketched below)
→ current/ directory always points to latest results
→ diffs/ directory tracks changes between runs
→ all output in standard formats (JSON, XML, TXT)
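as a rough picture (directory names are illustrative; the shipped layout may differ in detail):

    data/
        archives/
            20250101-060000/   # one timestamped scan: subdomains, live hosts, screenshots, vulns
            20250101-120000/
        current/               # always the latest results
        diffs/                 # what changed between consecutive runs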
tool versions included:
→ Amass 4.2.0 (OWASP subdomain enumeration)
→ Subfinder (latest, ProjectDiscovery)
→ httpx (latest, ProjectDiscovery)
→ Nuclei v3 (latest vulnerability scanner)
→ Aquatone 1.7.0 (visual reconnaissance)
→ Nmap (latest port scanner)
→ notify (ProjectDiscovery notification system)
notification options:
→ Discord webhooks (smoke test below)
→ Slack webhooks
→ Telegram bots
→ custom webhooks for any service
→ configurable triggers and thresholds
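a Discord webhook, for instance, is just an HTTP POST to a URL that carries its own auth. a quick smoke test before wiring it into the config (the webhook URL is a placeholder):

    # send a test message to a Discord webhook
    curl -s -X POST "https://discord.com/api/webhooks/<id>/<token>" \
         -H "Content-Type: application/json" \
         -d '{"content": "recon pipeline test: 3 new subdomains on example.com"}'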
WHY THIS EXISTS:
reconnaissance is not a one-time activity.
attack surfaces evolve. infrastructure changes. new assets appear. old vulnerabilities get fixed, new ones surface.
most people scan once and move on. they miss the subdomain that appeared three weeks later. they miss the admin panel that went live on a forgotten staging server. they miss the S3 bucket someone misconfigured last month.
this system exists because i got tired of:
- running the same commands repeatedly
- forgetting which targets i’d already scanned
- missing new assets because i didn’t rescan often enough
- losing track of what changed between scans
so i automated it.
now it runs continuously. it remembers everything. it alerts me when the surface shifts.
you can have the same system.
USE CASES:
bug bounty hunters:
→ monitor your targets 24/7 for new assets
→ get alerted when new subdomains appear
→ never miss a newly exposed service
→ track vulnerability lifecycles
penetration testers:
→ continuous reconnaissance during engagements
→ identify changes to target infrastructure
→ comprehensive asset inventory without manual work
→ historical data for reporting
security teams:
→ monitor your own infrastructure continuously
→ detect shadow IT and forgotten assets
→ track external attack surface over time
→ compliance and audit trail generation
red teamers:
→ passive monitoring of target organizations
→ identify new entry points as they appear
→ build comprehensive target intelligence
→ maintain operational security with rate limiting
WHAT HAPPENS AFTER YOU BUY:
- you download a 26KB compressed archive
- you extract it (contains complete working system)
- you add your target domains to config/targets.txt
- you configure notification webhooks (optional but recommended)
- you run ./install.sh or docker-compose up -d (condensed below)
- the system starts watching your targets
- you check back when notifications arrive
- results accumulate in organized archives
- you have continuous intelligence with zero ongoing effort
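in practice those steps condense to a handful of commands (the archive filename is a placeholder; the rest matches the steps above):

    tar xzf recon-pipeline.tar.gz && cd recon-pipeline
    echo "example.com" >> config/targets.txt   # add your first target
    docker-compose up -d                       # start the pipeline
    docker-compose logs -f                     # watch the first scan run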
no subscriptions. no cloud dependencies. no vendor lock-in.
you own the code. you run it wherever you want. you control the data.
BONUS: EXTENSIBILITY
the system is modular by design.
want to add new tools? drop them in the Dockerfile (one-liner below).
want custom checks? write nuclei templates.
want different notifications? modify the webhook script.
want API access to results? documentation includes examples.
want distributed scanning? guide covers that too.
this is not a black box. every script is documented. every decision is explained. you can modify anything.
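adding a tool really is a one-line change. for example, bundling katana (another ProjectDiscovery crawler, not part of this package) would look like this, assuming the base image ships a Go toolchain:

    # Dockerfile addition -- illustrative; katana is an example, not included
    RUN go install github.com/projectdiscovery/katana/cmd/katana@latest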
SUPPORT & UPDATES:
no formal support. documentation covers everything.
if you follow the system and hit a genuine issue, the troubleshooting guide walks through every common problem.
updates: tools update themselves via Docker rebuilds. documentation is versioned. you get all future versions of this package.
no recurring fees. no expiring access. buy once, use forever.
LEGAL DISCLAIMER:
this tool generates network traffic and probes systems.
only use it on:
→ targets you own
→ targets you have written authorization to test
→ bug bounty programs where you’re in scope
scanning systems without permission is illegal in most jurisdictions.
i provide the tools. you’re responsible for how you use them.
FAQ
Q: Do I need coding skills?
A: Basic command line literacy. If you can use Docker and edit text files, you’re fine. If you’ve never touched a terminal, expect a learning curve.
Q: What if I don’t know Docker?
A: Documentation includes Docker basics. Worst case, spend 30 minutes learning Docker fundamentals elsewhere, then come back. It’s worth it.
Q: Does this work on Windows?
A: Yes, via Docker Desktop. Linux and Mac work natively. Windows users need WSL2 or Docker Desktop.
Q: How much does it cost to run?
A: Negligible. Any $5-10/month VPS works. Or run it on a home server. Or your laptop. No recurring service fees.
Q: Will this get me blocked?
A: Rate limiting is built in. Scans are respectful by default. You can tune aggression in config. Being blocked is usually user error (scanning too aggressively).
Q: Can I scan multiple targets?
A: Yes. Add as many domains as you want to targets.txt. The pipeline handles them all in each scan cycle.
Q: How often does it scan?
A: You decide. Default is every 6 hours. Edit the cron schedule to daily, weekly, hourly, whatever fits your needs.
Q: What if a scan fails?
A: Next scheduled scan runs normally. Check logs to diagnose issues. Troubleshooting guide covers common failure modes.
Q: Can I run multiple instances?
A: Yes. Spin up multiple containers targeting different domains. Or run distributed scanners feeding a central results collector.
Q: Do you offer refunds?
A: No. You’re buying information and working code. You can’t un-learn it. Price is low enough that the risk is minimal.
Q: Is this legal?
A: Scanning systems without permission is illegal. Only use on authorized targets. You’re responsible for compliance with local laws.
You’ll get a file that, when opened, reveals everything you need to build a totally autonomous reconnaissance pipeline.