Security Operations Center (SOC): How many SOC analysts do you really need?
- Ashraf Aboukass
- Jun 10
- 11 min read
Determining the ideal number of analysts needed to operate a SOC is a complex task, but it can be approached by breaking it down into three key steps: first, understanding your budget constraints; second, defining the required operating hours; and third, establishing your target SOC maturity level. From there, the final team size can be further influenced by additional factors such as the scope of monitoring, the size and behavior of the user base, infrastructure complexity, the number and types of security tools in place, desired attack coverage, alert response time, organizational culture, and the regional threat landscape.
In this blog, we'll first focus on the primary foundations that determine your SOC constraints: budget, operating hours, and maturity targets. Then, we'll explore the secondary factors that shape and influence these foundational elements. Taking this layered approach will help you better assess your SOC staffing needs and make informed decisions when planning or scaling your security operations team.
Primary Factors
Budget
It's important to acknowledge a hard truth: there is nearly always a gap between what you want and what you can afford. At the end of the day, your staffing levels will be shaped less by ideal models and what you need, and more by the budget you have been given. This magic number is pretty much set in stone, so everything you do will have to work around it.
Knowing this, the first task is to determine whether your budget is enough to meet your requirements. If not, you'll need to make some executive decisions on how best to utilize your limited human resources. And like most things in security (and life), compromises are inevitable.
Here are some challenging trade-offs you might encounter:
Speed of analysis vs. quality of analysis
Real-time monitoring vs. retrospective analysis
Wide coverage across systems vs. deep focus on critical assets
In-house expertise vs. outsourced (offshore) expertise with shared access to your data
Each of these trade-offs has its own pros and cons, and it's crucial that they are well understood and that any decisions are based on risk appetite, then discussed and endorsed by the senior leadership team.
To put this into perspective, the table below illustrates how many SOC analysts you are likely to afford given various annual budgets. For the purpose of this example, we have set a hypothetical fixed salary of $60,000 USD a year for a SOC analyst. This should be adjusted based on the average salary for your region and the level of seniority you are looking to hire (L1/L2/L3).
Total Annual Budget | Roles & Team Size | SOC Analysts |
$280,000 | A small, foundational SOC, likely handling basic monitoring and initial triage. Limited coverage (maybe 24/7 on-call). | 3 |
$560,000 | Moving towards more structured operations. Might allow for some extended hour coverage, but 24/7 is still challenging. | 6 |
$1,200,000 | Respectable team size capable of 24/7 coverage (multiple shifts). | 12 |
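To make the arithmetic behind this table explicit, below is a minimal Python sketch that estimates how many analysts a budget can fund. The 55% overhead loading (benefits, tooling, training, management) is an assumption chosen so the output roughly lines up with the hypothetical figures above; substitute your own salary and overhead numbers.

```python
import math

def affordable_analysts(annual_budget: float,
                        base_salary: float = 60_000,
                        overhead_rate: float = 0.55) -> int:
    """Estimate how many analysts a budget can fund at a fully loaded cost."""
    # Hypothetical fully loaded cost: base salary plus benefits, tooling,
    # training and management overhead.
    fully_loaded_cost = base_salary * (1 + overhead_rate)
    return math.floor(annual_budget / fully_loaded_cost)

for budget in (280_000, 560_000, 1_200_000):
    print(f"${budget:,}: ~{affordable_analysts(budget)} analysts")
# -> 3, 6 and 12 analysts, in line with the table above
```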
To compare, the following table illustrates the variation in pay for SOC analysts across different regions.
Country / Region | Tier 1 (Entry-Level) | Tier 2 (Mid-Level) | Tier 3 (Senior-Level) |
United States | $60,000 - $96,154 | $75,000 - $110,000 | $100,000 - $140,000+ |
Canada | $45,431 | $56,855 | $62,346 |
Germany | $30,240 - $45,360 | $54,461 - $59,631 | $60,480 - $67,393 |
Netherlands | $57,810 - $77,080 | $77,080 - $115,620 | $115,620 - $173,430 |
France | $52,635 - $70,180 | $70,180 - $105,270 | $105,270 - $157,905 |
United Kingdom | $37,579 | $49,218 | $48,119* |
Australia | $60,780 - $81,040 | $81,040 - $121,560 | $121,560 - $182,340 |
India | $3,600 - $8,400 | $5,400 (Average) | N/A (Data Not Tiered) |
UAE | $41,769 - $76,000 | $48,600 - $118,750 | $81,000 - $140,000+ |
Saudi Arabia | $32,106 - $55,625 | $60,000 - $82,500 | $82,500 - $130,000+ |
At this point, you may be weighing the high cost of hiring dedicated staff against the seemingly more affordable option of subscribing to a managed service. However, it’s important to keep in mind that unless you’re paying a similar amount or more, you’re likely only getting access to shared resources and a basic level of service. In the end, the quality you receive will reflect what you invest.
Operating Hours
The amount of coverage required typically aligns with the organization's threat profile and risk appetite. A business that only operates 9–5, with no public internet-facing services, may feel the threat of attack is minimal and does not require 24/7 real-time monitoring.
An e-commerce platform, on the other hand, with global customers and a large digital footprint might consider 24/7 real-time monitoring a must.
Typical operating models can be grouped as follows:
Hours | Coverage | Estimated Staffing |
8 hours (09:00-17:00) 5/7 | Business Hours: Primarily adopted due to budget constraints, as it requires the fewest staff to cover a single daytime shift during standard business hours. However, this strategy accepts the significant risk of undetected and unmitigated security incidents occurring outside of these operating hours. | 3 |
12 hours (09:00-21:00) 12/7 | Extended Hours: Attempts to balance budget constraints with the need to reduce risk by extending monitoring beyond standard business hours, covering periods when automated attacks are common or critical operations may still be active. Requires more staff than a 9-5 model but significantly fewer than 24/7, while acknowledging the remaining risk associated with uncovered late-night and early-morning periods. | 6 |
24 hours (24/7) | 24-Hour Coverage: When risk tolerance is very low due to critical assets or sensitive data, and there is enough budget to cover a large team working around the clock, 365 days a year. Unless you run critical national infrastructure or have a strong justification, I would not expect this model to be operated entirely using internal staff, as it's costly and comes with a huge administrative overhead. | 12 |
As well as the above, it is also entirely possible to have an operating model in place that auto-escalates alerts to on-call staff after business hours. This model also motivates the SOC team to improve the maturity of the use cases so that they are not called unnecessarily.
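For a rough feel of how operating hours translate into headcount, here is a small sketch under some simple assumptions: 8-hour shifts, a fixed number of seats to fill per shift, and 40 days of leave and sickness per analyst per year. The per-shift team sizes are illustrative, and the results land in the same ballpark as the table above rather than matching it exactly.

```python
import math

def coverage_headcount(daily_hours: int,
                       analysts_per_shift: int,
                       shift_hours: int = 8,
                       leave_days: int = 40) -> int:
    """Rough headcount needed to keep `analysts_per_shift` seats filled
    for `daily_hours` a day, allowing for annual leave and sickness."""
    shifts_per_day = math.ceil(daily_hours / shift_hours)
    seats = shifts_per_day * analysts_per_shift
    leave_factor = 365 / (365 - leave_days)   # scale up to cover days off
    return math.ceil(seats * leave_factor)

print(coverage_headcount(8, analysts_per_shift=2))    # business hours -> 3
print(coverage_headcount(12, analysts_per_shift=2))   # extended hours -> 5
print(coverage_headcount(24, analysts_per_shift=3))   # around the clock -> 11
```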
Maturity Targets
When establishing SOC staffing requirements, it's essential to reach an agreement with the senior leadership team on the desired target maturity level beforehand. This ensures that goals are realistic and aligned with the business's appetite and budget constraints. Most organizations sit between the Developed and Defined maturity levels, since anything above that requires huge investment.
Below is an adaptation of the SOC-CMM framework to estimate required SOC staff for each maturity level:
SOC-CMM Maturity Level | Staffing Concept | Estimated Staffing |
Level 1: Initial | Initial dedication, often 1-2 people managing basic alerts, reactive. | 2-4 (e.g., 2 dedicated analysts, maybe 1 shared IT lead) |
Level 2: Developed | Emerging formalization, some shift coverage, focus on basic incident response. | 5-8 (e.g., 4-6 analysts, 1-2 leads/engineers) |
Level 3: Defined | Formalized processes, 24/7 coverage, specialization begins, proactive elements. | 10-18 (e.g., 6-10 L1/L2 analysts, 2-4 L3/IR, 2-3 Threat Intel/Hunters, 1-2 Managers/Leads, 1-2 Engineers) |
Level 4: Quantitatively Managed | Data-driven, metrics-focused, mature 24/7 ops, advanced proactive measures. | 18-30+ (e.g., 8-12 L1/L2, 4-6 L3/IR, 3-5 Threat Hunters/Forensics, 2-3 Security Engineers, 2-3 Managers, 1-2 Automation/DevSecOps) |
Level 5: Optimizing | Cutting-edge, innovation-focused, highly automated, deeply integrated. | 30-50+ (e.g., larger teams across all L4 roles, plus dedicated R&D, AI/ML Specialists, Red Team, advanced GRC) |
Secondary Factors
Threat detection use cases
If you already have a SOC and you scrutinize your use cases, you might realize that not all use cases are actually threat detection use cases. Some use cases might have been designed to monitor the SIEM health and log sources, while others might have been developed to identify newly discovered system vulnerabilities.
It is important to note that while all use cases contribute to the overall maturity of the SOC, some use case categories might distract from responding to threat alerts.
We can group the various use cases under five distinct categories:
Threat Detection (e.g., uncleaned virus alerts, brute force attacks)
Vulnerability Detection (e.g., missing security patches)
Compliance (e.g., AV/EDR Coverage)
System Monitoring (e.g., high EPS)
Log Source Monitoring (e.g., log source delays)
Threat detection use cases can be further divided into two categories: behavior-based and signature-based. Signature-based use cases depend on known malicious signatures, such as file names, hashes, and process names. Behavior-based use cases look for unusual activities or a sequence of events often associated with an attack. The reason for highlighting this division is that behavior-based threat detection use cases are prone to generating more false positives if not properly tuned, potentially increasing resource demand.
In a nutshell, if your SOC covers all categories of use cases, you are likely to trigger multiple alerts at the same time, which may cause you to miss real threats if you have limited resources in one shift. This will become clearer in the next section, where we will discuss some strategies to overcome this challenge.
Volume of Security Alerts
There is a direct correlation between the average number of alerts generated in a given hour and the potential amount of resources you need in a given day. Therefore, it is important to calculate (guesstimate) how many alerts you are likely to generate in a given hour.
It is important to avoid simply taking the total number of alerts created in a day and dividing it by 24, as this will not give you a true reflection of the number per hour: there are typically more alerts generated during business hours, and there will be peak times within the business day, as shown in the table below:
Time/Day | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday |
00:00 - 04:00 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
04:00 - 08:00 | 15 | 15 | 15 | 15 | 15 | 10 | 10 |
08:00 - 12:00 | 20 | 30 | 30 | 30 | 30 | 10 | 10 |
12:00 - 16:00 | 30 | 30 | 30 | 30 | 30 | 10 | 10 |
16:00 - 20:00 | 15 | 15 | 15 | 15 | 15 | 10 | 10 |
20:00 - 24:00 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
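The sketch below illustrates the point using the profile from the table above: the naive flat average hides the fact that peak hours run at roughly double that rate.

```python
# Alerts per 4-hour block (00-04, 04-08, 08-12, 12-16, 16-20, 20-24),
# transcribed from the table above.
profile = {
    "Mon": [10, 15, 20, 30, 15, 10],
    "Tue": [10, 15, 30, 30, 15, 10],
    "Wed": [10, 15, 30, 30, 15, 10],
    "Thu": [10, 15, 30, 30, 15, 10],
    "Fri": [10, 15, 30, 30, 15, 10],
    "Sat": [10, 10, 10, 10, 10, 10],
    "Sun": [10, 10, 10, 10, 10, 10],
}

weekly_total = sum(sum(blocks) for blocks in profile.values())
naive_hourly = weekly_total / (7 * 24)                             # flat average
peak_hourly = max(max(blocks) for blocks in profile.values()) / 4  # busiest block

print(f"naive average: {naive_hourly:.1f} alerts/hour")   # ~3.9
print(f"peak rate:     {peak_hourly:.1f} alerts/hour")    # 7.5
```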
Once you’ve estimated the number of alerts your SOC will receive in a given hour, the next step is to assess how many tickets you are likely to get within a short period of time (let's say within 5 minutes of one another) and then how long it takes your analyst to close a ticket (a common benchmark is around 15 minutes per alert).
This is where it gets interesting. Let's say, for example, an analyst can handle 32 security alerts per shift. This means that if you have two analysts per shift, they can potentially handle 64 security alerts.
If you have 50 alerts a day, you might be led to believe that you have more than enough resources to handle the load. However, this is far from the real world. Some security alerts are likely to come in bunches and at roughly the same time, maybe only minutes apart. Now, if you have a target SLA to pick up alerts within 15 minutes, you will find that your two analysts might be busy on one ticket each for 15 minutes, and the rest of the tickets will start to pile up.
The following is an example of how many analysts you will need to handle different numbers of simultaneous tickets (assuming a maximum of 32 tickets handled by each analyst in any given shift, at a rate of 1 ticket per 15 minutes).
Simultaneous Alerts Handled | Notes on Analyst Staffing Logic | Total Analysts |
1 Simultaneous Alert | To handle 1 simultaneous alert, 1 analyst must be active. Scheduling 2 analysts per shift (to ensure 1 active even with sick leave) across 3 shifts, plus factoring in 40 days off, requires 7 analysts. | 7 |
2 Simultaneous Alerts | To handle 2 simultaneous alerts, 2 analysts must be active. Scheduling 3 analysts per shift (to ensure 2 active even with sick leave) across 3 shifts, plus factoring in 40 days off, requires 11 analysts. | 11 |
4 Simultaneous Alerts | To handle 4 simultaneous alerts, 4 analysts must be active. Scheduling 5 analysts per shift (to ensure 4 active even with sick leave) across 3 shifts, plus factoring in 40 days off, requires 17 analysts. | 17 |
6 Simultaneous Alerts | To handle 6 simultaneous alerts, 6 analysts must be active. Scheduling 7 analysts per shift (to ensure 6 active even with sick leave) across 3 shifts, plus factoring in 40 days off, requires 24 analysts. | 24 |
As you can see, these numbers are quite high, and although six simultaneous alerts is a slight exaggeration of what could happen, it does highlight the impact on resources.
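For anyone who wants to adapt these numbers, here is a short sketch that reproduces the staffing logic described in the table: one active analyst per simultaneous alert, one extra per shift as a sickness buffer, three shifts a day, then scaled up for 40 days of leave per analyst.

```python
import math

def analysts_required(simultaneous_alerts: int,
                      shifts_per_day: int = 3,
                      sickness_buffer: int = 1,
                      leave_days: int = 40) -> int:
    """Total analysts needed to keep enough people active on every shift."""
    per_shift = simultaneous_alerts + sickness_buffer   # active + sick cover
    seats = per_shift * shifts_per_day                  # seats across the day
    leave_factor = 365 / (365 - leave_days)             # scale up for days off
    return math.ceil(seats * leave_factor)

for n in (1, 2, 4, 6):
    print(f"{n} simultaneous alert(s): {analysts_required(n)} analysts")
# -> 7, 11, 17 and 24, matching the table above
```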
If you find that there are simply way too many alerts in a given hour for your resources to handle, the following are some ways in which you can sensibly reduce the number of alerts generated without ditching use cases:
Use alert grouping capabilities to reduce the number of alerts for a given period of time or the same asset (see the sketch below)
Generate reports rather than alerts for informational events
Automate response actions for as many use cases as possible; if you are unsure of the response action, you should really revisit the use case
Make improvements to baselines to reduce noise
Note that adding new detection technologies in the future will also increase alert volume, so be sure to consider the impact on staff when making significant technology changes.
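As an illustration of the first strategy, here is a minimal sketch of time-window alert grouping. It assumes a simplified alert format (a list of dicts with 'rule', 'asset' and 'time' keys) standing in for whatever your SIEM or SOAR actually exposes.

```python
from collections import defaultdict
from datetime import timedelta

def group_alerts(alerts, window_minutes=30):
    """Collapse alerts with the same rule and asset that fire within
    `window_minutes` of each other into a single grouped alert."""
    window = timedelta(minutes=window_minutes)
    buckets = defaultdict(list)   # (rule, asset) -> list of grouped alerts
    for alert in sorted(alerts, key=lambda a: a["time"]):
        group = buckets[(alert["rule"], alert["asset"])]
        if group and alert["time"] - group[-1]["last_seen"] <= window:
            # Same rule and asset within the window: fold into the last group.
            group[-1]["count"] += 1
            group[-1]["last_seen"] = alert["time"]
        else:
            group.append({**alert, "count": 1, "last_seen": alert["time"]})
    return [g for groups in buckets.values() for g in groups]
```

Grouping like this turns a burst of identical alerts into a single ticket with a count, which is usually enough to stop analysts from being swamped during noisy periods.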
Pickup Time
As discussed earlier, while alert volume per hour is important, the target pickup time, also known as the mean time to acknowledge (MTTA), is even more critical. If the target SLA is very low, you are going to need an army of analysts to meet it.
Another important aspect that we haven't discussed yet is the need to strictly limit the number of alerts classified as critical. This approach helps reduce the chances of facing multiple critical use cases triggering every hour. Ideally, you should aim for only a few critical alerts per week, with the number of triggered alerts increasing as the severity decreases.
Below is an example of average SLAs for Mean Time to Acknowledge:
Severity Level | Average Pickup Time (Mean Time to Acknowledge) |
Critical | Within 15 minutes (often aiming for under 5 minutes) |
High | Within 30 minutes to 2 hours |
Medium | Within 2 to 6 hours |
Low | Within 4 to 24 hours |
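If you want to track these targets programmatically, a simple mapping like the sketch below (using the upper bounds from the table as example values) is enough to flag pickup-time breaches per severity.

```python
from datetime import datetime, timedelta

# Example MTTA targets, mirroring the upper bounds in the table above.
MTTA_SLA = {
    "critical": timedelta(minutes=15),
    "high": timedelta(hours=2),
    "medium": timedelta(hours=6),
    "low": timedelta(hours=24),
}

def sla_breached(severity: str, created: datetime, acknowledged: datetime) -> bool:
    """True if the pickup time exceeded the MTTA target for this severity."""
    return (acknowledged - created) > MTTA_SLA[severity]

# Example: a critical alert picked up after 22 minutes breaches the target.
t0 = datetime(2025, 6, 1, 9, 0)
print(sla_breached("critical", t0, t0 + timedelta(minutes=22)))   # True
```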
Infrastructure
While it's clear that an increase in devices leads to more security events, it may not be immediately apparent that if your organization has a heterogeneous environment consisting of on-premise infrastructure, private cloud, and public cloud, each with different log source types, you'll need to address a wide array of security events in a complex setting. This complexity makes log source management, use case creation, automation, and investigation significantly more difficult, thereby further intensifying the demand on human resources.
Userbase
Each organization utilizes computer systems uniquely, and user behavior varies across different departments. IT users, particularly IT administrators, often trigger numerous security alerts due to their privileged administrative tasks, while business users generally trigger fewer alerts as they work mainly on business applications with limited standing privileges.
We can hypothesize that an increase in IT staff correlates with a higher likelihood of security alerts. The table below aims to quantify this concept:
Company Size | IT Staff:Employee Ratio | SOC Analysts:IT Staff Ratio | SOC Analysts:Employee Ratio | Estimated Staffing |
0 to 4,999 Employees | 1:20 | 3:100 | 1:667 | ~7.5 at 5,000 employees |
5,000 to 9,999 Employees | 1:25 | 5:100 | 1:500 | 10-20 |
10,000+ Employees | 1:35 | 6:100 | 1:583 | ~17 at 10,000 employees |
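To turn these ratios into a quick estimate for your own headcount, here is a small sketch using the bands from the table above; treat the output as a starting point, not a target.

```python
# Ratio bands transcribed from the table above:
# (upper bound on employees, IT staff per employee, SOC analysts per IT staff)
RATIO_BANDS = [
    (4_999, 1 / 20, 3 / 100),
    (9_999, 1 / 25, 5 / 100),
    (float("inf"), 1 / 35, 6 / 100),
]

def estimate_soc_analysts(employees: int) -> float:
    """Rough SOC analyst estimate from total headcount via the ratios above."""
    for max_employees, it_ratio, soc_ratio in RATIO_BANDS:
        if employees <= max_employees:
            return employees * it_ratio * soc_ratio
    return 0.0  # not reached: the last band is open-ended

for size in (4_000, 7_500, 10_000):
    print(f"{size:,} employees -> ~{estimate_soc_analysts(size):.1f} analysts")
# -> ~6.0, ~15.0 and ~17.1 analysts respectively
```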
Attrition
Working in a SOC is stressful, repetitive, and involves a high workload; therefore, no matter how comfortable you make the environment, turnover is inevitable. Expect that most analysts will want to leave after about 3 years, so to avoid putting operations at risk, you should always have enough staff to cover someone resigning and leaving before a replacement is hired.
And as the demand for cybersecurity professionals grows, finding experienced resources will become increasingly challenging. Therefore, it will pay dividends to maintain a talent pipeline that can support the hiring of at least one person every 18-24 months. To support your talent pipeline, consider internships, apprenticeships, or internal rotations.
Artificial Intelligence (AI)
With the integration of AI into cybersecurity, we can achieve more tasks in less time. This is especially beneficial for helping analysts understand alerts and events more effectively, while also providing guidance for response activities. Additionally, there is now a chance for less experienced individuals to be hired and be as effective as seasoned analysts.
However, I have not yet seen any groundbreaking advancements in threat detection and response technology that would remove the need for SOC analysts.
For now, it will be hard to eliminate or even drastically reduce the resources required, given the current maturity of AI in the SOC, the nature of operational work, and the constant requirement for human oversight. Nonetheless, I envision a future where additional demand for SOC specialists is augmented by agentic AI.
Conclusion
Successfully staffing a Security Operations Center involves more than simply ensuring there is at least one person at all times monitoring alerts; it requires evaluating numerous factors and making data-driven decisions that align with your risk tolerance. By proactively planning your staffing requirements based on the previously mentioned points, you can ensure your SOC stays agile, resilient, and capable of delivering protection while maintaining analyst well-being.
In future posts, we'll explore how other areas such as use case development, log source management, and even outsourcing can help further mature your SOC operations.