DevSecOps
[TBD]
Resources
Container Scanning
A container scanner is an essential tool for maintaining the security of containerized applications. Containers package an application along with its dependencies, making them consistent and portable. However, this convenience also comes with potential risks. Developers, either due to time constraints or lack of awareness, may not always adhere to security best practices when building container images. This can inadvertently introduce vulnerabilities, such as misconfigurations, inclusion of unnecessary binaries, or outdated dependencies.
Even if a container is initially built following best practices, the components within it (libraries, binaries, or system tools) can become vulnerable over time as new security issues are discovered. Without continuous scanning, these vulnerabilities might go unnoticed, exposing the organization to risks such as exploitation, data breaches, or service interruptions.
A container scanner mitigates these risks by continuously analyzing container images for known vulnerabilities, insecure configurations, and outdated dependencies. It enables teams to identify and address issues early in the SDLC, reducing the likelihood of deploying insecure containers to production. By integrating container scanning into the CI/CD pipeline, organizations can enforce security standards, maintain compliance, and ensure that the foundation of their applications remains robust and resilient against emerging threats.
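A pipeline gate built on scanner output can be sketched as follows. This is a minimal illustration, assuming the JSON report shape that Trivy emits (`Results` containing `Vulnerabilities` entries with a `Severity` field); other scanners use different keys, so treat the field names as an assumption to adapt.

```python
# Filter a container scan report down to the vulnerabilities that should
# block a build. The report structure assumed here follows Trivy's JSON
# output ("Results" -> "Vulnerabilities"); adjust keys for other scanners.
SEVERITY_RANK = {"UNKNOWN": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def blocking_vulns(report: dict, threshold: str = "HIGH") -> list:
    """Return vulnerabilities at or above the given severity threshold."""
    floor = SEVERITY_RANK[threshold]
    found = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if SEVERITY_RANK.get(vuln.get("Severity", "UNKNOWN"), 0) >= floor:
                found.append(vuln)
    return found
```

In a CI job, a non-empty return value would fail the build and push the findings into the vulnerability management program described below.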
Outcome
- Perform container scans for every image build to identify potential vulnerabilities early.
- Regularly scan deployed containers to ensure ongoing security and detect newly discovered issues.
- Integrate findings into the vulnerability management program for proper tracking and resolution.
- Configure container scanning tools to minimize false positives:
  - Tailor rules to align with specific business needs.
  - Disable rules that are not relevant to the project or organization.
- Provide timely feedback to developers on identified findings to enable prompt remediation.
Metrics
Metrics for this topic are included in Vulnerability Management
Tools & Resources
Further Reading
DAST Scans
DAST is a controversial topic that divides security engineers. On one side, some argue that DAST provides little to no real value due to the overwhelming noise it generates and its low rate of confirmed findings, making it not worth the effort. On the other side, many engineers rely on DAST to assess the effectiveness of security controls and uncover real, exploitable vulnerabilities.
One key issue with DAST is that many people still recommend outdated tools, some over a decade old, with little to no maintenance. These tools are unlikely to provide good value. However, even modern DAST solutions require significant fine-tuning and configuration to deliver meaningful results. When properly optimized, though, DAST can be a valuable asset.
A few years ago, DAST was more effective against monolithic applications, where it could easily crawl websites, detect endpoints, and test parameters. With the shift to microservices and single-page applications, DAST has become less effective, as it struggles to identify all relevant endpoints and parameters.
Choosing the right DAST tool is crucial, but just as important is ensuring it has context of the necessary endpoints and parameters for scanning. This can significantly impact its effectiveness. Using standards like OpenAPI and tools like Swagger simplifies this process since they provide structured information about APIs. If such tools or standards aren’t in place, DAST can still be useful but will require more manual effort to maximize its value.
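Seeding a scanner from an OpenAPI document can be as simple as enumerating every operation the spec declares, instead of hoping the crawler finds them. The sketch below assumes an already-parsed OpenAPI 3.x document (loaded from JSON or YAML); the path and method keys used are standard OpenAPI fields.

```python
# Enumerate (METHOD, path) pairs from a parsed OpenAPI 3.x document so a
# DAST scanner can be pointed at every known endpoint rather than relying
# on crawling alone.
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def list_endpoints(spec: dict) -> list:
    """Return sorted (METHOD, path) tuples for every declared operation."""
    endpoints = []
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            # Skip non-operation keys such as "parameters" or "summary"
            if method.lower() in HTTP_METHODS:
                endpoints.append((method.upper(), path))
    return sorted(endpoints)
```

The resulting list can be fed to whichever scanner is in use; most modern tools accept either an endpoint list or the OpenAPI file directly.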
In web applications, DAST can provide immediate value by identifying low-hanging fruit, such as misconfigured security headers. While these may not be critical vulnerabilities, properly configured headers enhance overall security.
Additionally, script kiddies often report these issues without understanding their context. Addressing them proactively helps reduce unnecessary noise in reports and prevents distractions from more serious security concerns.
Outcome
- DAST is performed regularly against the applications
- The scanning tools are properly customized to reduce false positives and, in the case of APIs, to recognize endpoints and authenticate against them
- Findings are pushed to the vulnerability management program
- Severity/Priority is recalculated based on the asset's criticality/risk
- The DAST tool is integrated into the SDLC
Metrics
Metrics for this topic are included in Vulnerability Management
Tools & Resources
- Nuclei + Nuclei DAST Templates (Free)
- Dastardly (Free)
- ZAP Proxy (Free)
- RESTler (Free)
- Pentest Ground Lists of vulnerable apps for tests (Free)
Further Reading
- What is wrong with the current state of DAST? Feedback from my conversations with AppSec engineers
- Scaling Dynamic Application Security Testing (DAST)
SAST Scans
Static Application Security Testing (SAST) analyzes source code to detect vulnerabilities early in the Software Development Lifecycle (SDLC). It provides fast feedback, allowing teams to address security issues when they are easier and cheaper to fix.
SAST is often integrated into Source Code Management (SCM) systems, like GitHub, enabling automatic scans during pull requests or commits. This ensures security checks become a seamless part of the development workflow.
To maximize value, chosen SAST tools should support SARIF for better integration with CI/CD pipelines and be fine-tuned to reduce false positives. Configuring tools to recognize the frameworks and libraries developers use (if needed) ensures more accurate results and actionable insights.
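Because SARIF gives every tool's findings the same shape, aggregating results becomes a small parsing exercise. The sketch below reads only standard SARIF 2.1.0 fields (`runs`, `tool.driver.name`, `results`, `locations`), so it should work on output from Semgrep, CodeQL, or any other SARIF-emitting scanner; error handling is omitted for brevity.

```python
# Flatten a SARIF 2.1.0 report into simple finding records, ready to be
# pushed into a vulnerability management program or posted as PR comments.
def sarif_findings(sarif: dict) -> list:
    """Return a list of dicts: tool, rule, level, file, line."""
    findings = []
    for run in sarif.get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            loc = result["locations"][0]["physicalLocation"]
            findings.append({
                "tool": tool,
                "rule": result.get("ruleId"),
                "level": result.get("level", "warning"),
                "file": loc["artifactLocation"]["uri"],
                "line": loc.get("region", {}).get("startLine"),
            })
    return findings
```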
SAST tools vary in their ability to support different programming languages and frameworks. While some comprehensive tools can analyze multiple languages effectively, others specialize in specific languages or ecosystems, offering deeper insights and better accuracy. Depending on the organization’s technology stack, a single SAST tool may suffice, or it might be beneficial to use multiple tools, selecting the best one for each language. This tailored approach ensures more accurate results and a stronger overall security posture.
Outcome
- Ensure SAST scans are executed for every contribution to the Source Code Management (SCM) system.
- Integrate SAST findings into the Vulnerability Management Program.
- Customize SAST tools to minimize false positives:
  - Adapt rules to align with specific business requirements.
  - Disable rules that are irrelevant to the project or organization.
- Provide developers with prompt feedback on findings, ideally during pull requests in the SCM.
Metrics
Metrics for this topic are included in Vulnerability Management
Tools & Resources
- Semgrep (Free/Paid)
- Mobsfscan (Free)
- Brakeman (Free)
- Bandit (Free)
- FindSecBugs (Free)
- KICS (Free)
- Tfsec (Free)
- Checkov (Free)
- GitHub Code Scanning (Free/Paid)
- Snyk Code (Paid)
- Pentest Ground Lists of vulnerable apps for tests (Free)
- Gixy-Next (Free)
Further Reading
- A Guide On Implementing An Effective SAST Workflow
- Building a SAST program at Razorpay’s scale
- How to introduce Semgrep to your organization
SBOMs
A Software Bill of Materials (SBOM) is a detailed inventory of all the components, libraries, and dependencies that make up a software application. It functions as a comprehensive list of the "ingredients" used in software development, including open-source and third-party components. This transparency is crucial for organizations to understand what technologies are in use within their software and infrastructure.
The importance of an SBOM lies in its ability to help organizations effectively manage and secure their software assets. By tracking the components within an application, an SBOM enables teams to quickly identify whether any part of their software is affected by specific scenarios, such as newly discovered vulnerabilities, license restrictions, or compliance requirements. This proactive approach reduces the risk of exploitation, ensures legal compliance, and facilitates faster remediation when issues arise.
Having SBOMs is not only essential for internally developed software but also for external tools and third-party applications that an organization relies on, although these may not be easy or even possible to obtain. External tools may include software-as-a-service (SaaS) platforms, open-source libraries, or vendor-provided solutions. An SBOM for these tools provides visibility into potential risks and helps organizations maintain a secure and compliant software ecosystem.
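The "which of our applications ships component X?" question an SBOM exists to answer can be automated directly against the SBOM file. The sketch below assumes the CycloneDX JSON shape, where components carry standard `name` and `version` fields; real deployments would run this across every stored SBOM, as tools like Dependency-Track do.

```python
from typing import Optional

# Look up a component by name (and optionally version) in a CycloneDX
# JSON SBOM. This is the lookup performed, at scale, when a new
# vulnerability like Log4Shell lands and teams need to know exposure.
def find_component(sbom: dict, name: str, version: Optional[str] = None) -> list:
    """Return all matching component entries from the SBOM."""
    matches = []
    for comp in sbom.get("components", []):
        if comp.get("name") == name and (version is None or comp.get("version") == version):
            matches.append(comp)
    return matches
```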
Outcome
- Generate SBOMs for all in-house developed applications, including:
  - Application dependencies
  - Binaries within containers or servers
  - Tools used during testing and deployment
- Utilize a dedicated tool to manage and track SBOMs effectively.
- Regularly request and review SBOMs for external applications where possible.
Metrics
- Percentage of projects covered with SBOMs
Tools & Resources
- CycloneDX (Free)
- GitHub Dependency Graph (Free)
- DependencyTrack (Free)
- Neo4Cyclone (Free)
- Syft (Free) - Container Image SBOMs
Further Reading
SCA Scans
Software Composition Analysis (SCA) is a critical part of application security focused on identifying vulnerabilities in third-party libraries and dependencies used throughout a codebase. Modern software development heavily relies on open-source components, which can introduce risks if not properly monitored. SCA tools help teams detect known vulnerabilities, license issues, and outdated components, ensuring that software remains secure and compliant.
When selecting an SCA tool, it's recommended to choose one that can output results using the OpenSSF OSV format. This standardized schema simplifies integration with other tools and platforms, improving automation and reporting across the vulnerability management lifecycle.
A common mistake in SCA practices is excluding vulnerabilities found in development dependencies under the assumption that they don't affect production environments. However, this approach can expose developer machines to serious threats. For example, the Wrangler vulnerability demonstrated how even non-production dependencies can be exploited to compromise local environments.
To ensure comprehensive security coverage, it's important to assess and address vulnerabilities across all dependencies, production and development alike.
Outcome
- SCA scans are executed on every contribution to the source code management (SCM) system and on a regular schedule.
- All findings are integrated into the vulnerability management program for centralized tracking and remediation.
- Vulnerability severity and priority are recalculated based on the project's criticality and risk profile.
- Developers receive immediate feedback on findings, ideally during pull request (PR) reviews within the SCM platform.
- Aim to use EPSS scores to help prioritize vulnerabilities based on their likelihood of exploitation in the wild.
- Implement reachability analysis to determine whether the vulnerable code is actually used in the application, helping reduce false positives and focus on actionable issues.
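The EPSS- and reachability-based prioritization described above can be sketched as a small scoring function. The weighting here is illustrative only, not a standard formula: it multiplies a base severity weight by the EPSS probability and heavily down-ranks findings that reachability analysis marks as unused, and should be tuned to the organization's own risk model.

```python
# Rank SCA findings by combining base severity with the EPSS probability
# of exploitation, down-ranking code paths that are never reached.
# The weights and the 0.1 reachability factor are illustrative, not standard.
SEVERITY_WEIGHT = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def priority_score(severity: str, epss: float, reachable: bool = True) -> float:
    score = SEVERITY_WEIGHT.get(severity.upper(), 0) * epss
    return score if reachable else score * 0.1

def rank(findings: list) -> list:
    """Sort findings, highest remediation priority first."""
    return sorted(
        findings,
        key=lambda f: priority_score(f["severity"], f["epss"], f.get("reachable", True)),
        reverse=True,
    )
```

Note how a MEDIUM finding with a high EPSS score can outrank a CRITICAL one that is unlikely to be exploited: that inversion is the point of the exercise.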
Metrics
Metrics for this topic are included in Vulnerability Management
Tools & Resources
- Google OSV-Scanner Client tool for OSV (Free)
- OWASP DepScan Project to find vulnerabilities in dependencies (Free)
- Renovate Tool for dependency updates (Free)
- deps.dev Insights about open source projects, includes OpenSSF Scorecards (Free)
- OSV Database - Open Source Vulnerabilities Database (Free)
- Dependabot Open source tool to find vulnerabilities in dependencies (Free)
- OpenSSF Scorecards Database with scorecards for open source projects (Free)
- OWASP Dependency Track Tool to manage dependencies (Free)
- Datadog's malicious-software-packages-dataset (Free)
- Snyk SCA (Free/Paid)
Further Reading
Secrets Scans
[TBD]
Outcome
- Secrets scanning is executed on every contribution to the source code management (SCM) system.
- All findings are integrated into the vulnerability management program.
- Scanning tools are properly customized to reduce false positives:
  - Detection rules are tailored to align with specific business and technical requirements.
  - Irrelevant or non-applicable rules are disabled based on the project's context.
- Developers receive immediate feedback on findings, ideally during pull request (PR) reviews within the SCM platform.
- Secrets following defined patterns are detected and blocked before they can be committed to the SCM.
- A clear and documented process is in place for properly removing secrets from source control and invalidating exposed credentials.
- A standardized format is defined for organization-generated secrets, where possible, and custom detection rules are implemented to identify them in code.
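The custom detection rules mentioned in the last point can be illustrated with plain regular expressions. In practice these patterns would be added to a tool like Gitleaks or TruffleHog rather than hand-rolled; the `acme_` prefix below is a hypothetical organization-specific token format (compare GitHub's `ghp_` tokens), while the AWS access key ID pattern is a widely used real-world rule.

```python
import re

# Match secrets that follow defined patterns. "acme-api-key" is a
# hypothetical org-standard format; the AWS pattern matches access key IDs.
SECRET_PATTERNS = {
    "acme-api-key": re.compile(r"\bacme_[A-Za-z0-9]{32}\b"),
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text: str) -> list:
    """Return (rule_name, line_number) for every match in the text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

Giving every organization-generated secret a recognizable prefix is what makes this kind of high-confidence, low-noise rule possible in the first place.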
Metrics
Metrics for this topic are included in Vulnerability Management
Tools & Resources
- Kingfisher (Free)
- TruffleHog (Free)
- GitLeaks (Free)
- Github's Push Protection (Paid)
Further Reading
- Removing sensitive information from a repository
- Phantom Secrets: Undetected Secrets Expose Major Corporations
Secure Deployments
[TBD]
Outcome
- Infrastructure deployments are reproducible and automated
- IaC is used for infrastructure provisioning
- State files are stored securely and encrypted
- No secrets or sensitive data are hardcoded in deployment scripts or IaC files
- OIDC or short-lived credentials are used for authentication in deployment pipelines
- Deployment pipelines have least privilege access to resources they manage
- Approval processes are in place for deployments to sensitive environments (e.g., production)
- Deployment pipelines are monitored and logged for auditing purposes
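One of the checks above, preferring OIDC over long-lived credentials, can be enforced with a simple lint over a parsed pipeline definition. The variable names below cover common static cloud credentials and are an assumption to extend; a pipeline authenticating via OIDC should not need any of them stored as values.

```python
# Flag environment variable names in a parsed pipeline definition that
# look like long-lived cloud credentials. A pipeline using OIDC or other
# short-lived credentials should not define any of these.
LONG_LIVED_KEYS = {
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "GOOGLE_APPLICATION_CREDENTIALS",
    "AZURE_CLIENT_SECRET",
}

def static_credentials(env: dict) -> list:
    """Return configured variable names that suggest long-lived credentials."""
    return sorted(k for k in env if k.upper() in LONG_LIVED_KEYS)
```

A policy engine such as OpenPolicyAgent (listed below) is the more robust home for this kind of rule once pipelines multiply.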
Metrics
- [TBD]
Tools & Resources
- OpenPolicyAgent (Free)
Further Reading
- FacetController: How we made infrastructure changes at Lyft simple
- CI/CD SECRETS EXTRACTION, TIPS AND TRICKS
- Poisoned Pipeline Execution Attacks
- Continuous Deployment at Lyft
- Monocle: How Chime creates a proactive security & engineering culture (Part 1)
- Shifting left at enterprise scale: how we manage Cloudflare with Infrastructure as Code
Secure the SCM Platform
[TBD]
Outcome
- Repository access is restricted using a least privilege approach, granting only the necessary permissions to each user or team.
- Pull request merges require approval from designated code owners to ensure proper oversight and code quality.
- Monitoring is in place to detect and alert on any attempts to bypass access restrictions or review requirements.
- CI/CD pipelines (e.g., GitHub Workflows, GitLab CI Pipelines) are reviewed and hardened to prevent malicious users from impacting deployed environments. Controls are implemented to minimize potential damage.
- Signed commits are enforced, and a verification process is in place to ensure signatures match a list of trusted users and keys.
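The signed-commit verification in the last point can be sketched as an allowlist check. The commit dicts below are a simplified stand-in for what `git log --format="%H %G? %GK"` or a platform API would return (signature status plus signing-key fingerprint); the field names are assumptions for illustration.

```python
# Check commit signature metadata against an allowlist of trusted signing
# keys. The dict fields mirror, in simplified form, git's %G? (status)
# and %GK (key) log placeholders.
def unverified_commits(commits: list, trusted_keys: set) -> list:
    """Return SHAs of commits that are unsigned or signed by an untrusted key."""
    bad = []
    for c in commits:
        if c.get("signature_status") != "good" or c.get("key_fingerprint") not in trusted_keys:
            bad.append(c["sha"])
    return bad
```

Running such a check in CI, or on a schedule against protected branches, turns the "signed commits are enforced" outcome into something continuously verified rather than assumed.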
Metrics
- Number of detected bypasses to the controls applied
- Number of exceptions to the controls applied
- Percentage of repositories covered
Tools & Resources
- Semgrep (Free)