What Is The Optimal Security Scan Time For My Applications?


With the trend of shifting left, which means performing security tests earlier in the software development life cycle, last-minute deployment issues are becoming a thing of the past.

However, the variety of security tools used in the process still creates complexity when deciding which project to scan and at what frequency.

Considering the limited resources of security departments and the extra costs associated with the concurrent scan features of scanners, we need to find new ways of optimizing scan times.

The most common approach we observe in the field is running security tests on a calendar basis.

It surely helps to know you are scanning your projects on a regular basis; however, this may not be the best approach, considering that development load varies from one time frame to another.

One week a project might see twice as many pull requests or merge events as the next.

So, you may end up with scans taking much longer than usual and producing twice as many findings, which might hamper your production pipeline.

This also makes it harder for security engineers to analyze the data, as each scan may cover an uneven amount of input.

When trying to analyze a project's risk score trend, it is not meaningful to compare results across a two-week period in which one week saw only a handful of merges and the next saw hundreds.

A better approach would be to use webhooks to trigger your source code scans, so that each scan reflects a more consistent amount of input.

Imagine starting a scan every time there is a pull request or a merge in your project.

This would give you assurance that anything reaching your master branch is tested before being accepted, and any critical vulnerability introduced is caught red-handed.
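In its simplest form, this event-per-scan approach is just a webhook handler that kicks off a scan whenever a pull request or merge event arrives. The sketch below assumes hypothetical names throughout: `start_scan` stands in for whatever API your scanner exposes, and the event names mirror typical Git-hosting webhook payloads.

```python
# Sketch of per-event scan triggering; start_scan() is a placeholder
# for your scanner's real API, not any specific product's call.

SCAN_EVENTS = {"pull_request", "merge"}  # events worth scanning on


def start_scan(project):
    # Placeholder: in practice this would call out to the scanner.
    return f"scan:{project}"


def handle_webhook(event_type, project):
    """Start a scan immediately for every interesting event."""
    if event_type in SCAN_EVENTS:
        return start_scan(project)
    return None  # ignore comments, pushes to feature branches, etc.


# A merge event triggers a scan; an unrelated event does not.
handle_webhook("merge", "billing-service")
handle_webhook("issue_comment", "billing-service")
```

The downside, as discussed next, is that nothing here limits how many scans can pile up when events arrive in bursts.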

However, this approach is also shortsighted in that it may overload your scanners if there are too many pull requests or merges at a given time.

Considering the long duration of scans in complex applications, this might turn into a nightmare quite easily.

To avoid the problem of overloading scanners unnecessarily, what we believe to be the best alternative is running scans on a scheduled webhook basis.

That means instead of initiating a scan each time a webhook is triggered, it makes more sense to wait until a certain time of day and check whether the events we have been listening for have fired at least once.

To be more specific, if we run a source code scan on merge events, best practice would be waiting until, say, lunchtime to see whether there have been one or more merges before deciding whether to initiate a scan.

If there have been one or multiple merges, probably by the time we are back from lunch, the scan will be complete and findings will be ready to work on.

This approach allows us to make use of idle times such as lunch breaks or midnight to check whether there is any action worth scanning and then start the scan if necessary.
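The scheduled-webhook pattern above can be sketched as a small accumulator: webhook deliveries only record activity per project, and a job that runs at idle times (say, lunchtime and midnight) starts at most one scan per project that saw events since the last check. All names here are illustrative; you would wire `on_webhook` to your webhook endpoint and `run_scheduled_check` to your cron or CI scheduler.

```python
# Sketch of the scheduled-webhook approach: buffer merge/PR events
# and scan at most once per project per check window.

class ScanScheduler:
    def __init__(self, start_scan):
        self._start_scan = start_scan  # injected scanner call (placeholder)
        self._pending = set()          # projects with unscanned activity

    def on_webhook(self, project):
        """Called for each merge/PR webhook; just records activity."""
        self._pending.add(project)

    def run_scheduled_check(self):
        """Called at idle times (e.g. by cron at 12:00 and 00:00).

        Starts one scan per project with pending events, no matter how
        many webhooks fired since the last check, then resets.
        """
        started = [self._start_scan(p) for p in sorted(self._pending)]
        self._pending.clear()
        return started


scheduler = ScanScheduler(start_scan=lambda p: f"scan:{p}")

# Three merges in one window still produce a single scan:
scheduler.on_webhook("billing-service")
scheduler.on_webhook("billing-service")
scheduler.on_webhook("billing-service")
first = scheduler.run_scheduled_check()   # one scan started
second = scheduler.run_scheduled_check()  # no new events, no scans
```

The key design choice is that the webhook handler does almost no work; the expensive decision of whether to scan is deferred to the idle-time check, which bounds scanner load regardless of how bursty development activity is.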

Regarding dynamic scans, using webhooks is far trickier.

To be able to run a dynamic scan, we need access to the test or production environment.


“However, when a merge request is sent to the server and the merge event fires a webhook to start a DAST scan, it is very likely that the server will not process the request before we start scanning the project.”


If we are lucky, we will merely be scanning the previous version one more time; worst-case scenario, we will end up with a failed scan.

To be on the safe side, if webhooks are going to be used, we always recommend starting DAST scans with a delay of 1-2 hours after the event fires, so that we can be sure we are smoothly scanning the latest version of the project.
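A minimal way to apply that delay is to compute the scan's start time from the event time, then hand it to whatever scheduler you already use. The 2-hour figure follows the recommendation above; the function name is a hypothetical sketch, not a real scanner API.

```python
from datetime import datetime, timedelta

# Sketch: delay the DAST scan so the deployed environment has time to
# catch up with the merge. DAST_DELAY follows the 1-2 hour guideline.
DAST_DELAY = timedelta(hours=2)


def dast_scan_start_time(event_time, delay=DAST_DELAY):
    """Return the time at which the DAST scan should actually start."""
    return event_time + delay


merge_fired = datetime(2023, 5, 1, 10, 30)
scan_at = dast_scan_start_time(merge_fired)  # 2023-05-01 12:30
```

In practice you would pass `scan_at` to a job queue or cron-like scheduler rather than blocking, so a burst of merges cannot tie up the webhook handler.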

As DAST is potentially more destructive than SAST, selecting quiet hours for scans is crucial whether you schedule them on a calendar basis or with webhooks.

Otherwise, by starting DAST scans during the server's busy hours, you may cause a denial of service.

Long story short, timing is a critical decision for SAST and DAST scans.

Unlike IAST or RASP tools, which continuously monitor the test or production environments, SAST and DAST scans require a sound timing decision to make sure no buggy piece of code slips through the cracks.

With the right combination of scans, it is possible to use all scanners efficiently and boost the ROI on the scanners.