The Firewall You Trust Has Never Been Questioned
Every organization has a firewall. Most have several. They sit at the perimeter, between network segments, and in front of critical applications. They are the most fundamental security control in any enterprise architecture. And in the vast majority of organizations we assess, they have never been independently tested.
Not reviewed. Not audited against a checklist. Tested. As in, someone stood on the other side and tried to push traffic through every rule to see what actually happens.
That distinction matters more than most security teams realize.
How Firewall Configurations Decay
Firewalls do not start broken. On day one, the configuration is usually clean. The network team builds a rule set that reflects the current architecture, documents the purpose of each rule and deploys it with reasonable segmentation.
Then time passes.
A developer needs port 8443 opened for a staging environment. The rule gets created as a temporary exception. Six months later the staging environment is decommissioned but the rule remains. A vendor requires a VPN tunnel with specific routing. The tunnel is configured, the vendor relationship ends two years later and the tunnel stays active. An executive requests access to a resource from a personal device. The rule is broadened to accommodate the request. The executive leaves the company. The rule stays.
This process repeats hundreds of times across the life of a firewall deployment. After five years, a firewall that started with 50 clean rules now has 500. After ten years, it might have over a thousand. The original architect has moved on. The documentation is incomplete or nonexistent. Nobody in the current team can explain what every rule does or why it exists.
500 rules. Can you tell me what each one does?
In over 20 years of testing, I have never met a team that could answer that question honestly.
Configuration Review Is Not Testing
Some organizations address this by conducting periodic configuration reviews. A consultant exports the rule set, compares it against best practices and delivers a report identifying rules that look problematic.
This is useful but incomplete. A configuration review tells you what the rules say. It does not tell you what the rules do.
Firewall behavior depends on rule ordering, object group membership, NAT translations, routing decisions and the interactions between multiple rule sets on different devices. A rule that looks correct in isolation may be shadowed by a broader rule above it. A rule that appears restrictive may be bypassed by a NAT translation that routes traffic around it. Two rules that each look reasonable may interact in ways that create an unintended pathway.
You cannot discover these issues by reading the configuration. You discover them by testing the configuration. By sending traffic and observing what gets through.
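The shadowing problem above can be made concrete. Under first-match semantics, a rule never fires if some earlier rule matches everything it matches. The sketch below illustrates the idea with invented rule names and addresses; real rule sets also involve object groups, NAT and multiple devices, which is exactly why static analysis alone is not enough.

```python
# Minimal sketch of shadow detection in a first-match rule set.
# Rule names, fields and the two example rules are illustrative only.
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class Rule:
    name: str
    src: str     # source CIDR
    dst: str     # destination CIDR
    port: int    # destination port (0 = any)
    action: str  # "allow" or "deny"

def covers(broad: Rule, narrow: Rule) -> bool:
    """True if `broad` matches every packet that `narrow` would match."""
    return (
        ip_network(narrow.src).subnet_of(ip_network(broad.src))
        and ip_network(narrow.dst).subnet_of(ip_network(broad.dst))
        and broad.port in (0, narrow.port)
    )

def shadowed(rules: list[Rule]) -> list[tuple[str, str]]:
    """Return (shadowed_rule, shadowing_rule) pairs under first-match order."""
    hits = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if covers(earlier, later):
                hits.append((later.name, earlier.name))
                break
    return hits

rules = [
    Rule("allow-any-web", "0.0.0.0/0", "10.0.0.0/8", 443, "allow"),
    Rule("restrict-finance", "10.1.0.0/16", "10.2.3.4/32", 443, "deny"),
]
print(shadowed(rules))  # the deny rule can never match: the broad allow wins
```

Each rule looks defensible on its own; only evaluating them in order reveals that the restrictive rule is dead.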
What We Find When We Actually Test
When we conduct firewall validation as part of a penetration test, we consistently find the same categories of problems:
Rules that should have been temporary. Emergency access rules created during incidents, vendor access rules for projects that ended years ago and developer exceptions for environments that no longer exist. These rules create pathways that nobody monitors because nobody remembers they are there.
Overly broad source or destination definitions. Rules that specify "any" as the source when they should specify a single subnet. Rules that allow access to an entire server farm when only one server needs to be reachable. The broader the rule, the larger the attack surface it exposes.
Missing egress filtering. Most organizations focus their firewall rules on inbound traffic. Outbound traffic often flows with minimal restriction. This means that once an attacker has a foothold inside the network, they can establish outbound command-and-control channels with little resistance. Proper egress filtering is one of the most effective controls against data exfiltration and it is one of the most commonly absent.
Segmentation that exists on paper but not in practice. The network diagram shows clean segmentation between production and development, between departments and between sensitivity levels. The firewall rules tell a different story. Cross-segment rules accumulated over time erode the boundaries until the segmentation is theoretical rather than actual.
Logging gaps. Rules that should generate alerts when triggered are configured to log only or not log at all. If your firewall allows traffic through a rule that should never match in normal operations but that rule does not generate an alert, you have a detection blind spot.
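The egress gap in particular is trivial to probe from inside the network: attempt outbound connections and record which ports the firewall actually lets through. A minimal sketch follows; the target hostname and port list are placeholders, not a recommendation, and a real engagement would test far more protocols and destinations.

```python
# Hedged sketch of an egress check run from an internal host.
# The target host below is a hypothetical external test endpoint.
import socket

def egress_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "egress-test.example.com"  # placeholder destination
    for port in (80, 443, 53, 8443, 4444):
        state = "OPEN" if egress_open(target, port) else "blocked"
        print(f"outbound tcp/{port}: {state}")
```

If ports like 4444 come back open, an attacker's command-and-control channel would flow just as freely.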
The Vendor Will Not Test It for You
Firewall vendors provide excellent platforms. Palo Alto, Fortinet, Check Point, Cisco and others build capable products with sophisticated rule engines, threat intelligence integration and management interfaces. What they do not provide is independent validation that your specific configuration actually works as intended.
The vendor's job is to sell you the platform and provide technical support. Your configuration decisions, your rule sets, your architecture choices are your responsibility. The vendor has no incentive to tell you that your deployment is misconfigured. They may not even know, because they do not have visibility into your specific rule set in the context of your specific network topology.
This is not a criticism of firewall vendors. It is a statement about the division of responsibility. The vendor provides the tool. You are responsible for how you use it. And the only way to verify that you are using it correctly is to test it independently.
What Independent Firewall Testing Looks Like
When Sherlock Forensics validates a firewall configuration, we do not start by asking for the rule export. We start by testing from the perspective of an attacker who does not know your rules.
We probe from external positions to identify what the perimeter allows. We test from internal positions to validate segmentation. We attempt to establish outbound channels to test egress controls. We test rule behavior under different conditions, including fragmented packets, protocol manipulation and application-layer evasion techniques that exploit differences between how the firewall parses traffic and how the destination server parses it.
After testing, we correlate our findings with the actual rule set to identify exactly which rules are responsible for each finding. The result is a report that maps every gap to a specific configuration decision, giving your team actionable remediation steps rather than generic recommendations.
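Conceptually, the correlation step asks: for a flow that testing proved was allowed, which rule fired first? The sketch below shows the idea with invented rules; production firewalls add NAT, zones and object groups on top of this first-match core.

```python
# Hedged sketch of mapping an observed-allowed flow to the rule
# responsible for it. The rules here are invented for illustration.
from ipaddress import ip_address, ip_network

def first_match(rules, src, dst, port):
    """Return (name, action) of the first rule matching the flow, or None."""
    for name, rsrc, rdst, rport, action in rules:
        if (ip_address(src) in ip_network(rsrc)
                and ip_address(dst) in ip_network(rdst)
                and rport in (0, port)):
            return name, action
    return None

rules = [
    # stale vendor tunnel rule that was never cleaned up
    ("vendor-vpn-2019", "203.0.113.0/24", "10.2.0.0/16", 0, "allow"),
    ("deny-all", "0.0.0.0/0", "0.0.0.0/0", 0, "deny"),
]
print(first_match(rules, "203.0.113.7", "10.2.8.9", 3389))
# the stale vendor rule, not deny-all, explains the open pathway
```

The output names a specific rule, which is what turns a test finding into an actionable remediation step.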
This is what we built our security stack validation methodology to accomplish. Not theoretical review. Practical proof.
The Question You Need to Answer
Your firewall is the first line of defense. It has been running for years, accumulating rules and exceptions. Your team has changed. Your network has changed. Your threat landscape has changed.
Has your firewall configuration kept up?
If the answer is "I think so" rather than "I know so because we tested it," then you have a gap that needs to be closed. Not with another configuration review. With a test.