The Executive Summary Is Written for You
The executive summary is the first two to three pages of the report. It is written specifically for people who do not live inside a terminal. If your pentest firm delivered a report where the executive summary reads like a server log, that is a red flag. A good executive summary tells you three things: the overall security posture of what was tested, the highest-risk findings in plain language, and what needs to happen next.
You should be able to read the executive summary in under five minutes and walk into a board meeting with a clear understanding of where the organization stands. If you cannot do that, the report failed its primary job. The executive summary is not a formality. It is the single most important section for leadership because it translates technical risk into business risk.
Look for language that quantifies impact. "An attacker could access all customer records" is useful. "A SQL injection vulnerability was identified" is not useful to an executive without context. The summary should bridge that gap. If it does not, ask the tester to rewrite it.
Severity Ratings Explained
Every finding in a pentest report carries a severity rating. Most firms use a four-tier or five-tier scale based on the Common Vulnerability Scoring System (CVSS) maintained by FIRST.org. Here is what each level means in business terms.
| Severity | CVSS Range | What It Means for the Business | Expected Response Time |
|---|---|---|---|
| Critical | 9.0 to 10.0 | An attacker can take full control of the affected system with minimal effort. Data breach is likely if exploited. Regulatory notification may be required. | Immediate. Within 24 to 48 hours. |
| High | 7.0 to 8.9 | Significant unauthorized access is possible. May require some additional steps to exploit but the path is clear and repeatable. | Within 1 to 2 weeks. |
| Medium | 4.0 to 6.9 | Exploitable under specific conditions. May require internal network access or a combination of other weaknesses to leverage. | Next scheduled maintenance cycle. Within 30 to 60 days. |
| Low | 0.1 to 3.9 | Minor issues that provide limited information to an attacker. Configuration hardening recommendations. Best practice deviations. | Backlog. Address when resources permit. |
Do not treat all findings equally. A report with twelve findings sounds alarming until you realize nine of them are low severity. The two criticals and one high are what demand your attention and your budget. Ask your IT team to sort remediation by severity and work from the top down.
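The severity tiers and the sort-by-severity advice above can be sketched as a small triage helper. The CVSS cut-offs follow the table; the finding data below is illustrative, not from a real report.

```python
# Minimal triage sketch: map CVSS base scores to the severity tiers in
# the table above, then sort findings so remediation starts at the top.

def severity(cvss: float) -> str:
    """Return the severity tier for a CVSS base score."""
    if cvss >= 9.0:
        return "Critical"
    if cvss >= 7.0:
        return "High"
    if cvss >= 4.0:
        return "Medium"
    return "Low"

# Hypothetical findings for illustration only.
findings = [
    {"title": "Missing security headers", "cvss": 2.1},
    {"title": "SQL injection in login", "cvss": 9.8},
    {"title": "Weak TLS configuration", "cvss": 5.3},
]

# Work from the highest score down.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f"{severity(f['cvss']):8} {f['cvss']:>4}  {f['title']}")
```

A spreadsheet does the same job; the point is simply that remediation order should be driven by score, not by the order findings appear in the report.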
Finding vs Vulnerability: Know the Difference
These two words appear throughout every pentest report and they do not mean the same thing. A finding is any observation the tester documented. It could be a vulnerability, a misconfiguration, an informational note or a deviation from best practice. A vulnerability is a specific exploitable weakness that an attacker could use to compromise a system.
- Finding
- Broad term. Includes everything the tester deemed worth documenting. An open port that exposes version information is a finding. It may not be directly exploitable but it gives an attacker useful reconnaissance data.
- Vulnerability
- A subset of findings. A vulnerability has a clear exploitation path. A SQL injection that returns database contents is a vulnerability. A missing HTTP security header is a finding but typically not a vulnerability on its own.
- Informational
- Observations that do not represent direct risk but are worth noting. Examples include software version disclosures, missing best-practice headers or deprecated TLS ciphers that are not yet exploitable.
When reviewing the report, pay attention to how findings are categorized. A firm that labels everything as "high" or "critical" to justify their fee is doing you a disservice. Accurate severity ratings require honest assessment. The best reports clearly distinguish between what can be exploited today and what represents a theoretical concern.
What "Risk Accepted" Actually Means
You will encounter findings marked as "risk accepted" in follow-up reports or remediation tracking documents. This does not mean the issue was fixed. It means someone in your organization reviewed the finding and made a deliberate decision not to fix it.
Risk acceptance is a legitimate business decision. Sometimes the cost of remediation exceeds the potential impact. Sometimes a compensating control reduces the risk enough that full remediation is unnecessary. A legacy system scheduled for decommission in six months may not warrant a costly patch.
But risk acceptance must be documented properly. The person accepting the risk should be authorized to make that decision. A system administrator should not be accepting risk on behalf of the organization. That decision belongs to a director, VP or CISO who understands the business implications. The acceptance should include the rationale, any compensating controls in place and a review date. Risk acceptance is not permanent. It is a decision that requires periodic reassessment.
If you see a pile of "risk accepted" findings with no documentation and no signatures, your organization is not accepting risk. It is ignoring risk and calling it a strategy.
A Real Finding Explained for a CFO
Here is what a SQL injection finding looks like when stripped of jargon and translated into business terms.
| Report Field | What the Report Says | What It Means for You |
|---|---|---|
| Title | SQL Injection in Customer Portal Login | The login page on your customer-facing website has a flaw that lets an attacker talk directly to your database. |
| Severity | Critical (CVSS 9.8) | This is the highest possible risk. An attacker needs no special access or credentials to exploit it. |
| Impact | Full read access to the backend database including customer PII, payment records and session tokens. | An attacker could download your entire customer database. Names, emails, billing addresses, payment history. This triggers mandatory breach notification under PIPEDA and potentially GDPR if you have EU customers. |
| Evidence | Screenshot showing database table names returned via crafted input in the username field. | The tester proved this works. This is not theoretical. They demonstrated access to real data. |
| Remediation | Implement parameterized queries on all database calls in the login module. Apply input validation on all user-supplied fields. | Your development team needs to rewrite how the login page talks to the database. This is a code change, not a configuration change. Budget developer hours accordingly. |
That single finding could cost the organization millions in breach response, regulatory fines and customer trust if left unpatched. When you see a critical finding, the question is not whether to fix it. The question is how fast your team can deploy the fix.
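The remediation row in that table, replacing string-built SQL with parameterized queries, is easier to grasp side by side. Here is a minimal sketch using Python's built-in sqlite3 module; the table, column names, and payload are illustrative, not taken from the report above.

```python
import sqlite3

# Toy in-memory database standing in for the customer portal backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # classic injection payload typed into the login form

# VULNERABLE: user input is concatenated into the SQL string, so the
# payload rewrites the query logic and matches every row in the table.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE username = '" + user_input + "'"
).fetchall()

# FIXED: a parameterized query passes the input as data, never as SQL,
# so the payload is just an oddly spelled username that matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # the payload matched rows it should not have
print(len(safe))        # no rows: the input was treated as plain data
```

This is why the remediation is a code change rather than a configuration change: every place the application builds SQL from user input has to be rewritten to use the parameterized form.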
Red Flags in Pentest Reports
Not all pentest reports are created equal. Here is what separates a legitimate report from one that should concern you.
- Scanner output disguised as manual testing
- If the report is mostly Nessus, Qualys or OpenVAS output pasted into a Word template with a cover page, you paid for a vulnerability scan at pentest prices. Manual findings with custom exploitation steps are what you are paying for. If you do not see them, push back.
- No evidence on findings
- Every finding should include a screenshot, HTTP request/response pair or tool output that proves the issue exists. A finding without evidence is an opinion. Your IT team cannot remediate opinions.
- Generic remediation advice
- "Apply vendor patches" or "follow security best practices" is not remediation guidance. Useful remediation tells your team exactly which patch, which configuration change or which line of code needs attention.
- Everything is critical
- If every finding is rated critical or high, the tester is either inflating severity to look impressive or lacks the experience to assess risk accurately. A well-written report has a realistic distribution of severity ratings.
- No methodology section
- The report should explain which testing methodology was followed. OWASP Testing Guide, NIST SP 800-115 or PTES are common frameworks. If there is no methodology section, you have no way to evaluate the completeness of the testing.
- Missing scope confirmation
- The report should confirm exactly what was tested: IP ranges, URLs, application endpoints and test type. If the scope section is vague or absent, there is no way to know what was actually assessed versus what was skipped.
What to Do After Reading the Report
Reading the report is step one. Here is the sequence that follows.
1. Schedule a Walkthrough with the Tester
Most quality firms include a report walkthrough call in the engagement. Take it. Have your CISO or IT director on the call along with the developers or system administrators who will handle remediation. Let the tester explain each critical and high finding directly. Questions are easier to answer while the engagement is fresh.
2. Build a Remediation Plan
Assign each finding to a specific person with a specific deadline. Critical findings get 24 to 48 hours. High findings get one to two weeks. Medium findings go into the next sprint or maintenance window. Track progress in whatever project management tool your team already uses. Do not let the report sit in someone's inbox.
3. Fix and Verify
After remediation, request a retest. The tester returns to verify that each fix actually resolved the issue. A closed ticket is not the same as a verified fix. The retest proves it. Most engagement contracts include a retest window of 30 to 90 days after the initial report delivery.
4. Brief Leadership
Prepare a one-page summary for the board or executive team. Include the total number of findings by severity, the remediation timeline and any risk acceptance decisions that require executive sign-off. Keep it concise. Leadership wants to know: how exposed are we, what are we doing about it and how much will it cost.
5. Update Your Security Roadmap
Pentest findings should feed directly into your security investment strategy. If the report revealed gaps in web application security, that informs your budget for developer training or a web application firewall. If internal network segmentation was weak, that goes on the infrastructure roadmap. The report is not a one-time event. It is an input to continuous improvement.
For organizations looking to validate their incident response process alongside pentest findings, a tabletop exercise tests whether your team can act on these findings under real-world pressure.
FAQ
What is the difference between a finding and a vulnerability in a pentest report?
A finding is any observation the tester documented, including vulnerabilities, misconfigurations and informational notes. A vulnerability is a specific exploitable weakness with a clear attack path. All vulnerabilities are findings but not all findings are vulnerabilities. When reviewing your report, focus remediation effort on the items explicitly classified as vulnerabilities with high or critical severity ratings.
What does "risk accepted" mean in a pentest report?
Risk accepted means the organization reviewed a finding and made a documented business decision not to remediate it. This is legitimate when compensating controls exist or when the cost of fixing exceeds the potential impact. However, risk acceptance requires authorization from an appropriate decision-maker, written rationale and a scheduled review date. It is not a permanent status.
How should an executive prioritize pentest findings?
Start with critical findings and allocate resources for immediate remediation within 24 to 48 hours. High findings should be resolved within one to two weeks. Medium findings fit into the next scheduled maintenance cycle. Low and informational items go on the backlog. If you must choose between findings of equal severity, prioritize those affecting systems that store sensitive data or face the public internet.