Watching the Wire
For the first decade of Sherlock Forensics, the work was almost entirely defensive. We sat on the other side of the wire. We deployed network intrusion detection systems. We wrote Snort rules. We tuned Suricata signatures. We stared at packet captures in Wireshark until the hex made sense.
We watched networks for attackers. Thousands of hours. Millions of packets. Every port scan, every brute force attempt, every malware callback, every lateral movement hop. We saw what attackers did from the defender's perspective. We knew what their traffic looked like on the wire. We knew which signatures caught them and which did not.
Over the years, something became obvious. We were not just learning how to detect attackers. We were learning how to be one.
The Realization
The shift happened gradually. Every time we wrote a signature to detect a specific attack technique, we had to understand that technique in detail. What does Kerberoasting look like on the wire? What distinguishes a malicious DNS query from a legitimate one? What packet characteristics differentiate a port scan from normal TCP connection behavior?
To write effective detection, you must understand attack mechanics at the packet level. You cannot detect what you do not understand.
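To make the port-scan question concrete, here is a minimal sketch of the kind of heuristic a detection signature encodes. This is illustrative only, not one of the firm's actual rules; the thresholds (20 distinct ports, under 20% completed handshakes) are assumptions chosen for the example.

```python
from collections import defaultdict

def flag_scanners(events, port_threshold=20, completion_ratio=0.2):
    """events: iterable of (src_ip, dst_port, handshake_completed).

    A scanner touches many distinct ports but rarely finishes the
    TCP three-way handshake; normal clients do the opposite.
    Thresholds here are illustrative assumptions, not tuned values.
    """
    ports = defaultdict(set)
    completed = defaultdict(int)
    total = defaultdict(int)
    for src, port, done in events:
        ports[src].add(port)
        total[src] += 1
        if done:
            completed[src] += 1
    flagged = []
    for src, seen in ports.items():
        if len(seen) >= port_threshold and completed[src] / total[src] < completion_ratio:
            flagged.append(src)
    return flagged
```

Even a toy rule like this makes the underlying point: you cannot write it without first understanding, at the packet level, how a scan differs from legitimate traffic.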
After years of this work, we had accumulated something valuable: a deep understanding of the gap between what detection tools catch and what they miss. We knew the signatures. All 12,000 of them. And we knew exactly which ones had blind spots, which ones could be bypassed with minor technique modifications, and which ones had false positive rates so high that security teams had disabled them.
12,000+ signatures did not just tell us what to detect. They told us 12,000+ things we knew how to test for.
The Gap Between Detection and Validation
In every client engagement, we saw the same pattern. Organizations deployed detection tools, configured them according to vendor best practices and then assumed they were protected. Nobody tested whether the tools actually worked against real techniques.
When we raised this concern, the response was always the same: "Our vendor runs detection tests." But vendor tests are demonstrations, not validation. The vendor tests their own rules against their own test cases. That proves the tool can detect the attacks it was designed to detect. It does not prove the tool can detect attacks it was not designed for.
We realized there was a gap in the market. Organizations needed someone who understood detection deeply enough to test it properly. Someone who had written the rules, tuned the sensors and watched the alerts for years. Someone who knew not just how attacks work, but how detection works, and more importantly, where detection fails.
That gap is what ShadowTap was built to fill.
Building the Device
The first ShadowTap prototype was simple. A small single-board computer preconfigured with an encrypted VPN tunnel, a collection of offensive tools and a custom testing framework that mapped techniques to detection signatures.
The concept was straightforward. Ship the device to the client. They plug it into their network. We connect through the encrypted tunnel and perform full internal network testing without traveling to their site. For a firm based in Metro Vancouver serving clients across Canada, this eliminated geography as a constraint.
But the testing methodology is what makes ShadowTap different from a standard penetration test. Because we come from a detection background, every technique we execute is mapped to the signatures and behavioral models that should detect it. We do not just try to compromise hosts. We test whether the client's detection tools see our activity at every step.
A standard pentest asks: "Can I get in?" ShadowTap asks: "Can I get in, and did anyone notice?"
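The "did anyone notice?" question can be sketched as a data structure: every technique we execute gets a record of its detection outcome. The field names and the quality labels below are illustrative assumptions, not the actual ShadowTap reporting schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TechniqueResult:
    """One executed technique and whether the client's tools saw it.

    Illustrative sketch only; field names are assumptions.
    """
    technique: str                          # e.g. an ATT&CK-style identifier
    alerted: bool                           # did any detection tool fire?
    alert_latency_s: Optional[float] = None # seconds until the first alert
    alert_quality: str = "none"             # "actionable", "vague", or "none"

def coverage_ratio(results):
    """Fraction of executed techniques that produced any alert."""
    if not results:
        return 0.0
    return sum(r.alerted for r in results) / len(results)
```

Tracking outcomes per technique, rather than per compromised host, is what turns a pentest narrative into a measurable answer to the second question.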
The Detection-Offensive Feedback Loop
The defensive background gives ShadowTap a unique advantage. Most offensive security firms employ pure red teamers. They know how to attack, but they may not have deep experience with the detection tools they are trying to evade, so they test those tools from the outside.
We test from the inside out. We know how Darktrace builds behavioral baselines because we have configured similar systems. We know how Snort processes packets because we have written the rules. We know how SIEM correlation engines work because we have tuned them.
This creates a feedback loop. Every defensive engagement teaches us new attack surface. Every offensive engagement teaches us new detection gaps. The red and blue perspectives reinforce each other, producing better testing and better detection recommendations.
What ShadowTap Tests Today
ShadowTap has evolved significantly from that first prototype. Today it supports:
- Internal penetration testing: Full Active Directory testing, lateral movement, privilege escalation, credential harvesting and segmentation validation
- NDR validation: Controlled adversary simulation that measures Darktrace, Vectra, ExtraHop and other NDR platforms against real attack techniques
- Detection coverage mapping: Quantified reporting that maps every executed technique to whether detection tools alerted, how quickly they alerted and the quality of the alert
- Evasion technique testing: Encrypted tunnels, DNS exfiltration, ICMP tunnels, identity rotation and other techniques that specifically target detection blind spots
- Compensating control recommendations: Configuration changes, architectural improvements and complementary tools that close identified detection gaps
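To illustrate the evasion-testing bullet, here is a minimal sketch of the detection logic DNS exfiltration tries to slip past: tunnelled data is usually encoded into long, high-entropy subdomain labels. The 3.5 bits-per-character threshold and 20-character minimum are illustrative assumptions, not tuned production values.

```python
import math
from collections import Counter

def label_entropy(label):
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_exfil(qname, entropy_threshold=3.5, min_len=20):
    """Flag queries whose leftmost label looks like encoded payload data.

    Illustrative heuristic only; thresholds are assumptions.
    """
    first = qname.split(".", 1)[0]
    return len(first) >= min_len and label_entropy(first) >= entropy_threshold
```

A rule like this is exactly the kind of control ShadowTap probes: an exfiltration technique that chunks data into short, dictionary-word labels would sail under both thresholds, and the engagement report would record that blind spot.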
The Name
ShadowTap is named for what it does. In network engineering, a tap is a passive device that copies network traffic for monitoring. ShadowTap operates in the shadow of your existing monitoring infrastructure. It sits on your network, generates adversary traffic and measures whether your monitoring tools see it.
It is a tap on your defenses. A validation device that tells you whether your security investment is delivering real detection capability or just green dashboards.
Where We Are Now
Twenty years of watching networks for attackers taught us exactly how attackers operate. That knowledge powers every ShadowTap engagement. We do not approach testing as outsiders trying to break in. We approach it as people who have watched thousands of attacks from the defender's console, who know what gets caught and what slips through.
If you want to know whether your security tools are doing their job, we can answer that question definitively. Not with vendor marketing. Not with a dashboard screenshot. With controlled adversary simulation in your production environment.