6 April 2026 · Security Tips · Patrowl team

Manual vs automated penetration testing: A technical guide for security teams

OFFENSIVE SECURITY · ATTACK SURFACE MANAGEMENT · PATROWL RESEARCH · 2026 · 11 MIN READ

In 2025, more than 48,000 CVEs were published, and 68% of the breaches documented by Verizon exploited vulnerabilities that were known and unpatched at the time of the incident. Not zero-days. Not sophisticated attacks. Known vulnerabilities with an existing fix, on assets nobody had tested in months.

The problem isn't the pentest itself. It's the frequency, and the limited scope of the perimeter it covers.

This guide compares manual pentesting, automated pentesting and the hybrid approach across eight operational criteria, and explains why the most mature CISOs have stopped choosing between the two.

The structural problem with point-in-time pentesting

Among organizations onboarded on Patrowl, the first scan detects an average of 30 to 40% of exposed assets missing from the declared IT inventory — and in extreme cases up to 60%: forgotten subdomains, non-decommissioned pre-production environments, unregistered cloud instances, legacy APIs never disabled. These assets have never been included in any pentest scope.

"We traditionally conducted penetration tests, but their main drawback is being obsolete the day after the report is delivered. Continuous pentesting is an innovative solution perfectly aligned with the extreme scalability of today's enterprise IT."

CISO, MGEN (4 million members)

A pentest produces a static snapshot of an environment that changes constantly.

The question a traditional pentest answers: "What is our security posture at a given point in time?"

The question CISOs ask today: "Am I vulnerable, and if so, where?"

An annual pentest is not a security program. It's a compliance exercise.

What each approach actually does

Manual pentesting is an audit conducted by human experts on a defined scope, within a bounded time window (typically 5 to 15 days). The goal: identify vulnerabilities that require contextual reasoning, business logic understanding and creative attack chaining. Pentesters must also be recalled to perform re-tests after remediation. Deliverable: a static PDF report.

Automated pentesting is the continuous execution of security tests via orchestrated tools (combining specific internal tools, AI and daily updates) capable of identifying common vulnerabilities, detecting misconfigurations and validating exploitability at scale. Deliverable: a continuous stream of prioritized, automatically re-tested vulnerabilities.

The hybrid approach combines both. Automated monitoring continuously covers the entire exposed surface. Human experts intervene in a targeted way on critical assets, new features and complex attack scenarios. This is not a compromise — it is the elimination of the blind spots of both approaches.

What human experts do that automation cannot

Business logic flaws, authorization bypasses, multi-step chains that require understanding how a system is actually used — automated tools miss all of this.

Here is an example on a fictional e-commerce application. Four moderate vulnerabilities in isolation. Chained by an expert, they lead to full server compromise:

Step 1: IDOR on user profile API               CVSS 5.3 (medium)
Step 2: Internal token exposed in response     CVSS 4.9 (medium)
Step 3: Token grants access to admin panel     CVSS 6.8 (medium)
Step 4: Admin panel exposes an RCE             CVSS 8.1 (high)
Combined scenario: full server compromise
Real impact: CRITICAL
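The gap between per-finding scores and chained impact can be made concrete. This is a minimal, illustrative sketch (the `enables` link between steps is a hypothetical field, not a CVSS concept): a scanner reports the highest single score, while an expert who links each step to the next sees one critical path.

```python
# Illustrative sketch, not a scoring standard: four findings that are only
# "medium"/"high" in isolation but compose into one critical attack path.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    name: str
    cvss: float
    enables: Optional[str]  # hypothetical link: what this step hands to the next

chain = [
    Finding("IDOR on user profile API", 5.3, "internal token"),
    Finding("Internal token exposed in response", 4.9, "admin panel access"),
    Finding("Token grants access to admin panel", 6.8, "RCE surface"),
    Finding("Admin panel exposes an RCE", 8.1, None),  # final step
]

max_individual = max(f.cvss for f in chain)         # what a scanner would report
is_full_chain = all(f.enables for f in chain[:-1])  # expert links every step

print(f"Highest single score: {max_individual}")    # 8.1, still only "high"
print("Chained impact: CRITICAL" if is_full_chain else "No complete path")
```

Scoring each finding alone tops out at 8.1; only the linkage between steps, which no score field captures, reveals the full compromise.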

This type of reasoning is the domain of human experts. Automated tools also cannot cover social engineering, targeted phishing or red team exercises.

What automation does better than a pentester alone

Most of a pentester's work is not high-wire hacking — it is systematically enumerating assets and their technologies, performing fingerprinting, and correlating identified software versions with known potential vulnerabilities (CVEs). These repetitive tasks are tedious, and automation handles them better, faster and without interruption.
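The enumerate-fingerprint-correlate loop described above can be sketched in a few lines. This is a hedged illustration: the CVE table and affected-version ranges below are simplified sample data, not a real vulnerability feed, and a production system would consume a maintained source such as the NVD.

```python
# Simplified correlation of fingerprinted software versions against known CVEs.
# Version ranges here are illustrative sample data, not authoritative.
KNOWN_CVES = {
    "nginx": [("CVE-2021-23017", "1.20.0")],   # resolver bug; range simplified
    "openssh": [("CVE-2023-38408", "9.3")],    # ssh-agent bug; range simplified
}

def parse(version: str) -> tuple:
    """Turn '1.18.0' into (1, 18, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def correlate(fingerprints: dict) -> list:
    """Flag every asset whose fingerprinted version falls in a vulnerable range."""
    hits = []
    for product, version in fingerprints.items():
        for cve, last_vulnerable in KNOWN_CVES.get(product, []):
            if parse(version) <= parse(last_vulnerable):
                hits.append(f"{product} {version}: {cve}")
    return hits

print(correlate({"nginx": "1.18.0", "openssh": "9.4"}))
```

Running this flags the outdated nginx and clears the up-to-date OpenSSH: exactly the tedious, repetitive matching that automation performs continuously and a pentester performs once per engagement.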

Comparison across 8 criteria

Criterion         Manual pentest                   Automated pentest           Hybrid approach
Frequency         1 to 2 times per year            Continuous, real-time       Continuous + targeted point-in-time
Coverage          Declared scope only              Full, Shadow IT included    Full + deep on critical assets
Depth             High: business logic, chaining   Medium: known patterns      High where it matters, automated elsewhere
CVE reactivity    At next engagement               Within hours                Within hours, expert-validated if critical
Marginal cost     High (day rate × days)           Cost-effective at scale     Optimized: broad automation, targeted manual
False positives   Very low: human validation       Low: automated validation   Very low: dual validation
CI/CD pipelines   Difficult                        Native integration          Native + human review on critical releases
Logic vulns       Yes: human experts               No: predefined patterns     Yes: experts on priority assets

The conclusion is straightforward: the hybrid approach sacrifices no criterion. It takes the continuity and coverage of automation, and the depth and reasoning of manual testing, concentrating human effort where it has the most impact.

A manual pentest on an average application scope costs between $5,000 and $20,000 depending on the provider. The hybrid approach redirects that budget: continuous automation across the entire surface, targeted human intervention on critical assets and key moments. Fewer billed days on what can be automated, more depth on what cannot.

CTEM: the framework that reframes the question

Continuous Threat Exposure Management (CTEM) is a five-phase program that enables organizations to continuously validate their real exposure, rather than relying on a snapshot taken once a year.

  1. Scoping: define the perimeter and business priorities

  2. Discovery: map the entire exposed attack surface

  3. Prioritization: rank exposures by contextualized real risk

  4. Validation: confirm exploitability in the target environment

  5. Mobilization: coordinate remediation across teams
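The five phases above behave less like a yearly project and more like a loop. The sketch below makes that explicit; every phase function is a stand-in (a real program would plug in EASM discovery, scanners and a ticketing system), so treat this as the shape of the program, not an implementation.

```python
# Minimal sketch of the CTEM phases as a continuous loop.
# All phase functions are placeholders, not a real API.
def discover(scope):                     # 2. Discovery: map the exposed surface
    return [f"{scope}-asset-{i}" for i in range(3)]

def prioritize(assets):                  # 3. Prioritization: contextualized risk
    return sorted(assets)                # placeholder ordering

def validate(exposure):                  # 4. Validation: confirm exploitability
    return exposure.endswith("0")        # placeholder exploitability check

def mobilize(confirmed):                 # 5. Mobilization: route remediation
    return [f"ticket:{c}" for c in confirmed]

def ctem_cycle(scope):                   # 1. Scoping is the input
    confirmed = [e for e in prioritize(discover(scope)) if validate(e)]
    return mobilize(confirmed)

print(ctem_cycle("prod"))                # run continuously, not once a year
```

The point of the structure: validation sits between prioritization and mobilization, so only confirmed-exploitable exposures generate remediation tickets.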

Gartner predicts that by 2026, organizations applying CTEM will reduce breaches by two thirds compared to those still doing point-in-time testing.

Most organizations skip Phase 4 (Validation) or do it halfway. Detecting a vulnerability is not the same as confirming it is exploitable in your environment. In the hybrid approach, automation validates continuously. Human experts validate the complex scenarios that tools cannot replay.

Prioritizing beyond CVSS and EPSS scores

The CVSS score measures the theoretical severity of a vulnerability in a generic context — not the real risk contextualized to your environment.

A CVSS score of 9.8/10 on an internal monitoring service not exposed to the internet may seem urgent to fix. It is actually a low priority (P3). A CVSS score of 6.1/10 on your public authentication API, listed in the CISA KEV catalog, handling 500,000 connections per day, is a critical priority (P0).

The EPSS score is meant to compensate for this lack of context by estimating exploitation likelihood, but it measures a global probability: it says nothing about whether the vulnerability is reachable, exploitable or impactful in your specific environment.

Four criteria actually determine urgency:

  1. Is the vulnerability exploitable in the target environment?

  2. Is the asset accessible from the internet?

  3. Is this vulnerability being exploited in the wild?

  4. What is the real business impact if the asset is compromised?
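These four criteria can be combined into a simple decision function. The sketch below is illustrative — the exact thresholds and the P0-P3 mapping are assumptions for this example, not Patrowl's scoring logic — but it reproduces both examples from the text, which CVSS alone gets wrong.

```python
# Hedged sketch: contextual priority from the four criteria above.
# The P0-P3 mapping is illustrative, not a real product's logic.
def priority(exploitable: bool, internet_facing: bool,
             exploited_in_wild: bool, business_critical: bool) -> str:
    if not internet_facing:
        return "P3"   # e.g. CVSS 9.8 on an unexposed internal service
    if not exploitable:
        return "P2"   # exposed but unconfirmed: validate before escalating
    if exploited_in_wild or business_critical:
        return "P0"   # e.g. CVSS 6.1 public auth API listed in CISA KEV
    return "P1"

# The two examples from the text:
print(priority(True, False, False, False))  # internal 9.8/10 service
print(priority(True, True, True, True))     # public 6.1/10 auth API in KEV
```

The internal 9.8/10 service comes out P3 and the public 6.1/10 authentication API comes out P0: the ordering CVSS alone inverts.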

This is precisely where the hybrid approach pulls ahead: automation ranks in real time, human experts resolve ambiguous cases and validate high-impact scenarios.

The 3 prioritization mistakes that cost the most

Mistake 1: Sort by CVSS. A vulnerability with a CVSS score of 9.8/10 on an unexposed internal service is less urgent than one with a CVSS score of 6.5/10 on a public API handling customer authentication.

KEY TAKEAWAY: Cross-reference the CVSS score with internet exposure, exploitation-in-the-wild status and business impact, then prioritize.

Mistake 2: Treat every scan result as confirmed. A typical scanner produces hundreds to thousands of results on a mid-sized IT environment. Without proof of exploitability, P0 tickets flood the remediation queue and erode trust between security teams and engineering. No PoC, no P0.

"Instead of dedicating our team's precious time to ad hoc penetration testing, we now continuously receive verified, high-quality vulnerability results that our engineers can immediately act on."

Xplor Technologies

Mistake 3: Close tickets on declaration. More than 80% of CISOs have discovered that a fix they believed deployed had not been rolled out across all assets, or only partially remediated the vulnerability. An automated re-test at the close of every ticket is the only way to know.
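The "close on re-test, not on declaration" rule is easy to encode. In this sketch, `detect` stands in for re-running the exact test that originally found the vulnerability, and the vulnerability id is a placeholder; the point is that a ticket only reaches "closed" when the detection no longer fires.

```python
# Sketch: a ticket closes only when the original detection stops firing.
# `detect` is a stand-in for re-running the exact original test.
def try_close(ticket: dict, detect) -> dict:
    """Re-run the detection; reopen the ticket if the vuln still fires."""
    if detect(ticket["asset"], ticket["vuln_id"]):
        ticket["status"] = "reopened"    # fix missing, partial, or not rolled out
    else:
        ticket["status"] = "closed"      # remediation verified by re-test
    return ticket

# Usage with a fake detector: the fix reached web-02 but not web-01.
still_vulnerable = {("web-01", "VULN-123")}   # placeholder id
detect = lambda asset, vuln: (asset, vuln) in still_vulnerable

print(try_close({"asset": "web-01", "vuln_id": "VULN-123"}, detect)["status"])
print(try_close({"asset": "web-02", "vuln_id": "VULN-123"}, detect)["status"])
```

The first ticket reopens because the asset still fires the detection; the second closes. A declaration from the remediating team never changes the status on its own.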

Decision guide: who needs what

Small organization (fewer than 50 exposed assets)

Start with EASM and automated pentesting. This is already a solid hybrid posture.

Established security team

Continuous automation across the entire surface, targeted manual pentests before certifications and critical launches.

Regulatory audit (NIS2 · ISO 27001 · SOC 2)

Manual pentesting is non-negotiable: automation prepares the scope and reduces the number of vulnerabilities left to address on audit day. The combination accelerates certification and lowers its cost.

MANUAL PENTEST FOR

  • Business logic vulnerabilities

  • Complex attack chains

  • Major product launches

  • Formal audits, red team

AUTOMATED PENTEST FOR

  • Continuous surface monitoring

  • Real-time CVE correlation

  • Re-test after every deployment

  • CI/CD integration, NIS2/DORA evidence

BOTH COMBINED FOR

  • Everything else

What Patrowl does in practice

39 min between the publication of a critical CVE and the operational alert, with no manual triage.

In July 2025, following the publication of CVE-2025-53770 (critical SharePoint vulnerability), Patrowl automatically triggered a detection test across all of Brest Métropole's exposed assets. Notification at 3:57 PM, alert transmitted to the Blue Team at 4:36 PM.

"The implementation of your test for the SharePoint vulnerability proved fully operational: our Blue Team was immediately alerted. We thank you for the quality of your active monitoring and the reliability of your support."

Brest Métropole

Patrowl is a cybersecurity SaaS platform, listed in the Gartner Market Guide 2026 for Preemptive Exposure Management. It combines continuous automated monitoring with integration with your testing teams for a hybrid approach operational from the first scan.

FAQ

What is the difference between manual and automated pentesting?
Manual pentesting is conducted by human experts to identify flaws requiring contextual reasoning — business logic vulnerabilities, complex scenarios that automated tools cannot chain together. Automated pentesting continuously executes discovery and exploitability validation at scale. Automation covers breadth, human experts provide depth.

Why is an annual pentest no longer enough?
68% of breaches exploit known, unpatched vulnerabilities (Verizon DBIR 2024). More than 48,000 CVEs were published in 2025. Between two annual pentests, organizations deploy new assets, change configurations and accumulate vulnerable versions that nobody tests until the next audit.

What is PTaaS?
PTaaS combines continuous automated testing with access to human experts. Unlike point-in-time pentest engagements, PTaaS delivers a continuous stream of vulnerabilities, automated re-testing and DevSecOps integration — the operational implementation of Gartner's CTEM program.

Does continuous pentesting help with NIS2 and DORA compliance?
Yes. Continuous pentesting generates the ongoing evidence that NIS2 Article 21 and DORA ICT risk management require — not a snapshot, a history. Results are documented, tracked through remediation and automatically re-tested, giving auditors a verifiable compliance trail.

Sources