A security company that does not test its own site has no business testing yours.
We tell clients to test their applications before launch. We tell them to run regular audits. We tell them that no website is too small to have vulnerabilities. So we pointed our own tools at sherlockforensics.com and documented everything we found.
This is not a marketing exercise designed to show how perfect we are. We found real issues. Some were embarrassing. All of them are fixed now. Here is the full accounting.
The Methodology
We ran the same assessment we would run for a client website engagement. This included:
- HTTP security header analysis using our own scanning tool and securityheaders.com
- Port scanning with nmap
- Directory enumeration to find exposed files and paths
- SSL/TLS configuration testing
- Content-Security-Policy validation
- File permission review across the web root
- Sitemap and robots.txt review
- Manual review of JavaScript, forms and client-side code
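To make the checklist repeatable between engagements, the external checks can be wrapped in a small driver script. This is a sketch, not our production tooling: the tool choices mirror the methodology above, and the gobuster wordlist path is an assumption that varies by system.

```shell
#!/bin/sh
# One function per external check so the same checklist can be re-run
# per engagement. Output files are named after the target host.
recon() {
  target="$1"
  # HTTP security header analysis: capture response headers for review.
  curl -sI "https://$target" | tr -d '\r' > "$target-headers.txt"
  # Port scan: all TCP ports, no ping probe.
  nmap -Pn -p- "$target" > "$target-ports.txt"
  # SSL/TLS configuration: enumerate negotiated protocols and ciphers.
  nmap --script ssl-enum-ciphers -p 443 "$target" > "$target-tls.txt"
  # Directory enumeration (wordlist path is an assumption).
  gobuster dir -u "https://$target" \
    -w /usr/share/wordlists/dirb/common.txt -q > "$target-dirs.txt"
}

# Run only against hosts you are authorized to test, e.g.:
# recon sherlockforensics.com
```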
We did not pull any punches. If something was wrong, we documented it.
Finding 1: Missing Security Headers
Severity: Medium
Status: Fixed
Our initial deployment was missing several recommended HTTP security headers. The site had X-Frame-Options and X-Content-Type-Options set correctly, but was missing Content-Security-Policy, Permissions-Policy and Referrer-Policy headers.
This is exactly the kind of thing we flag in client audits. Headers are easy to configure and provide meaningful defense-in-depth against XSS, clickjacking and data leakage. There was no good excuse for not having them on our own site from day one.
Fix: We added a comprehensive set of security headers to our server configuration. The Content-Security-Policy was the most complex to configure because of its interactions with third-party scripts, which brings us to Finding 4.
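Checking for the headers in question is easy to automate. The sketch below reads raw response headers and prints any recommended header that is absent; the header list mirrors this finding, and the `missing_headers` name is ours, not a standard tool.

```shell
# Given raw HTTP response headers on stdin, print each recommended
# security header that is missing.
missing_headers() {
  headers=$(cat | tr 'A-Z' 'a-z')   # header names are case-insensitive
  for h in content-security-policy permissions-policy referrer-policy \
           x-frame-options x-content-type-options strict-transport-security; do
    case "$headers" in
      *"$h:"*) ;;          # present: say nothing
      *) echo "$h" ;;      # missing: report it
    esac
  done
}

# Usage against a live site:
# curl -sI https://example.com | missing_headers
```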
Finding 2: Demo Files Left in Production
Severity: Low
Status: Fixed
During development, we created several demo and test HTML files to validate styling and functionality. Some of these files were still accessible in production. They did not contain sensitive data, but they revealed internal naming conventions, template structures and development patterns.
An attacker conducting reconnaissance would have found these files through directory brute-forcing. While they were not exploitable on their own, they provided information that could be useful in a more targeted attack.
Fix: We removed all demo and test files from the production web root. We added a pre-deployment check to our build process that scans for files matching common demo and test naming patterns and blocks deployment if any are found.
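A pre-deployment gate like this can be a few lines of shell. This is a minimal sketch: the filename patterns shown are examples of the "common demo and test naming patterns" mentioned above, not our exact list.

```shell
# Fail the build if files matching demo/test naming patterns exist
# under the web root. Patterns are illustrative.
check_demo_files() {
  webroot="$1"
  found=$(find "$webroot" -type f \( -name 'demo*' -o -name 'test*' \
          -o -name '*-sample.*' -o -name '*.bak' \) 2>/dev/null)
  if [ -n "$found" ]; then
    echo "Deployment blocked: demo/test artifacts in web root:" >&2
    echo "$found" >&2
    return 1
  fi
}

# In a build pipeline: check_demo_files ./public || exit 1
```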
Finding 3: Permission Issues Across Builds
Severity: Medium
Status: Fixed
This one was subtle and took longer to identify. File permissions across our web root were inconsistent. Some files had been deployed with overly permissive permissions (644 for files that should have been 640, directories at 755 that should have been 750). Across different deployments, permissions were not applied uniformly because the build process did not explicitly set them.
In a shared hosting environment, this could allow other users on the same server to read files they should not have access to. On our infrastructure, the risk was lower because we control the server, but the principle matters. Permissions should be set deliberately, not left to defaults.
Fix: We added explicit permission setting to our deployment script. Every deployment now sets file permissions to 640 and directory permissions to 750 across the entire web root. We run a permission audit after each deployment to verify consistency.
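The deployment step looks roughly like this. It is a sketch of the approach rather than our actual script, but the 640/750 modes match the fix described above.

```shell
# Set permissions deliberately instead of trusting the deploy-time umask:
# 640 for every file, 750 for every directory under the web root.
normalize_permissions() {
  webroot="$1"
  find "$webroot" -type f -exec chmod 640 {} +
  find "$webroot" -type d -exec chmod 750 {} +
  # Post-deploy audit: print anything that still deviates.
  find "$webroot" -type f ! -perm 640 -o -type d ! -perm 750
}
```

Running the audit as a separate `find` after the chmod pass means a partially failed chmod (for example, on a file owned by another user) is surfaced rather than silently skipped.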
Finding 4: CSP Blocking Google Analytics
Severity: Low
Status: Fixed
When we implemented our Content-Security-Policy header, we initially configured it too restrictively. The policy blocked Google Analytics 4 (GA4) from loading, which meant we were not collecting any analytics data. The browser console showed CSP violations for connections to googletagmanager.com and google-analytics.com.
This is a common issue when implementing CSP for the first time. The policy needs to allow every legitimate external resource your site loads. Miss one and it gets blocked silently. Most site operators never check their browser console, so they never know something is broken.
Fix: We updated the Content-Security-Policy to include the necessary script-src and connect-src directives for Google Analytics. We verified that GA4 was loading and reporting correctly. We also set up CSP violation reporting so we are notified if future changes break the policy.
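The relevant slice of the policy, with a small helper to assert that a directive allows a given origin, looks like this. The policy string is illustrative and deliberately incomplete (a production CSP also needs default-src, img-src and so on, and Google's guidance may call for additional or wildcard hosts); `csp_allows` is our own helper name, not a standard tool.

```shell
# The GA4-related slice of the policy, per the fix above (illustrative).
CSP="script-src 'self' https://www.googletagmanager.com; \
connect-src 'self' https://www.google-analytics.com"

# Check that a given CSP directive lists a given origin.
csp_allows() {  # csp_allows <directive> <origin>
  echo "$CSP" | tr ';' '\n' | grep -q "^ *$1 .*$2"
}
```

Codifying the check means a future policy edit that drops a required origin fails a test instead of silently breaking analytics.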
Finding 5: Sitemap Permission Issues
Severity: Low
Status: Fixed
Our sitemap.xml file had a permission issue that intermittently prevented search engine crawlers from accessing it. The sitemap was generated by a PHP script, and the output file was occasionally written with permissions that did not allow the web server to read it. This meant Google and Bing were getting sporadic 403 errors when trying to fetch the sitemap.
This is not a security vulnerability in the traditional sense, but it is the kind of operational issue that a thorough audit catches. If your sitemap is not accessible, search engines cannot index your pages efficiently.
Fix: We fixed the PHP script to explicitly set 644 permissions on the generated sitemap file. We added monitoring to alert us if the sitemap returns a non-200 response code.
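Both halves of the fix are small. In the PHP generator the permission fix is a chmod() call after the file is written; the shell equivalents below sketch the same idea, and the monitoring function and URL are illustrative.

```shell
# Explicitly make the generated sitemap world-readable so the web
# server can always serve it, regardless of the generator's umask.
fix_sitemap_perms() {
  chmod 644 "$1"
}

# Monitoring sketch: succeed only if the sitemap answers with HTTP 200.
check_sitemap() {
  code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
  [ "$code" = "200" ]
}

# Cron usage (alerting command is hypothetical):
# check_sitemap https://example.com/sitemap.xml || notify-ops "sitemap down"
```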
What We Did Not Find
For the sake of completeness, here is what the audit confirmed was working correctly:
- SSL/TLS configuration scored an A+ on Qualys SSL Labs
- No open ports other than 80 (which redirects to 443) and 443
- No SQL injection or XSS vectors in forms or URL parameters
- No exposed .git directory, .env files or server configuration files
- robots.txt properly configured without revealing sensitive paths
- No mixed content issues (all resources loaded over HTTPS)
- HSTS header properly configured with a long max-age
Lessons Learned
Every one of the issues we found falls into a predictable category. They are the same kinds of issues we find in client audits every week. That is the point. Nobody is immune to security configuration issues. Not even security companies.
The difference between a secure organization and an insecure one is not whether vulnerabilities exist. It is whether you look for them, find them and fix them. We practice that discipline on our own infrastructure because it is the right thing to do and because it makes us better at doing it for clients.
If you have not audited your own website recently, start with the free tools: securityheaders.com for headers, Qualys SSL Labs for TLS configuration and our own scanning tool for a quick vulnerability check. If you want a thorough assessment, we can help with that too.
We will run this same audit again in 90 days and publish the results. Accountability is not a one-time exercise.
People Also Ask
Should security companies audit their own websites?
Absolutely. A security company that does not test its own infrastructure has a credibility problem. Regular self-audits demonstrate that the company practices what it preaches. They also catch configuration drift, build artifacts and permission issues that accumulate over time.
What tools can I use to audit my own website?
Start with free tools: securityheaders.com for HTTP header analysis, Mozilla Observatory for overall security scoring, nmap for port scanning and OWASP ZAP for web application testing. For deeper testing, use Burp Suite, Nuclei or commercial scanners. Automated tools catch configuration issues but miss business logic flaws, which is why professional penetration testing is still essential.
What are the most common website security issues?
The most common issues we find are missing or misconfigured security headers, exposed development files and directories, outdated software versions, misconfigured file permissions, overly permissive CORS policies and missing rate limiting on forms and API endpoints.