Neil Roebert, Senior Information Security Consultant
With DevSecOps comes a wholly integrated approach to security within DevOps. It has established itself firmly as a way of thinking in organizations large and small, where software engineers, operations managers, and security specialists collaborate to build products quickly and securely.
As the name suggests, DevSecOps exists to make security a fundamental part of the DevOps process. Done correctly, security is built into every aspect rather than layered over development after the fact. For example, where observability is architected into a system, it automatically serves as a detection tool – its very purpose is visibility into the data flowing through the environment.
Without DevSecOps, organizations must manually align development and operations with security. This leads to bottlenecks in the development pipeline, or indeed increased risk when a security protocol is bypassed or oversimplified.
Our consultants have seen an encouraging level of DevSecOps adoption. However, whilst many of these approaches provide a strong foundation, they frequently lack processes for (securely) implementing new tech or managing the risk attributes of existing tech. The popularity of DevSecOps being embraced as a culture has come at the cost of its technical facets.
This culture-centric mindset, formed largely at DevSecOps' inception, positions it purely as an approach to modern development practices and processes. Client engagements have shown us the potential detriment of such a mindset – one that overlooks securely implemented tools and technology. In short, a wholly cultural approach leaves environments critical to business continuity vulnerable to attack.
Over time – through handovers, development teams owning processes without formal decision tracking, or simply because no one ever knew what existed on day one – teams lose oversight of the tech underpinning their development. The very streamlining that made them quicker starts to obscure the elements crucial to every action and product. Understanding how systems work, individually and together, and how each is secured, is the first step to a secure ecosystem. Without a clear view of your technical framework, security becomes harder to achieve. What, then, should be the approach?
Much can go wrong if a technological component of the DevSecOps ecosystem is vulnerable or has been compromised. To avoid this, teams benefit from uncovering the existing and potential vulnerabilities of the environment they develop in.
To reduce the likelihood of security issues, a robust DevSecOps model employs the Continuous Integration/Continuous Deployment (CI/CD) delivery practices at its core to produce software reliably and safely at the required rate. These practices can then be integrated into a baseline model (Fig. 1), as below:
Using this model as the basis for assessments will aid discovery within your own DevSecOps environment; specifically: the discovery of risks associated with digital tools and systems, and their potential impact. As an abstraction of a DevSecOps environment, the structure may not be consistent with your own. Yet it can still be adopted by working to the principles below.
The model is split into integration blocks, each containing its own set of assets. Every block is a major component within, and forming, the DevSecOps environment – for example, your code repository and staging environment. Dividing the model into blocks that integrate with one another provides a navigable, visual representation of your environment. As DevSecOps is not a one-size-fits-all solution, assets are not limited to a single block and may span multiple blocks.
Just as a cultural mindset suffers by neglecting technology, your model should not neglect people and process. Team responsibilities within the environment should be mapped out just like the digital assets. Teams, or team members, may be solely responsible for a single block in the environment, or they may be responsible for multiple blocks. This information is critical to establishing effective trust boundaries.
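This block/asset/owner mapping can be sketched in code. A minimal illustration – the block names, tools, and team names below are hypothetical, not from any client engagement – in which a trust boundary is assumed wherever two blocks are owned by different teams:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    assets: set   # digital assets (tools, systems) within the block
    owners: set   # teams responsible for the block

def trust_boundaries(blocks):
    """Flag a boundary between any pair of blocks owned by different teams."""
    found = []
    for i, a in enumerate(blocks):
        for b in blocks[i + 1:]:
            if a.owners != b.owners:
                found.append((a.name, b.name))
    return found

# Hypothetical environment: three blocks, two teams.
blocks = [
    Block("Code repository", {"GitLab"}, {"dev-team"}),
    Block("CI", {"Jenkins"}, {"ops-team"}),
    Block("Staging", {"AWS account"}, {"ops-team"}),
]
print(trust_boundaries(blocks))
# → [('Code repository', 'CI'), ('Code repository', 'Staging')]
```

Even this crude mapping makes the point: ownership, not just tooling, determines where trust boundaries fall.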
The CI block used in the example below illustrates the abstract concept of the modelling process, with tools that may form part of the environment.
Once you have identified the tools and systems that exist in your organization’s DevSecOps environment, you can then process map each block by asking the following questions:
DevSecOps is bespoke and multifaceted by design, so it can be difficult to understand the sequence of actions required to configure a baseline model. It's within this confusion that security gaps and misconfigurations often occur. Understanding what your development environment consists of – and, crucially, having a system to guide the secure implementation of tools and systems – disentangles the process and speeds your team's understanding.
Engineering pipelines, systems, and code are pathways to an organization's critical assets and, as such, become critical assets themselves. For any organization that develops a digital asset, a breach within the DevSecOps environment could lead to full compromise of the asset, or provide a pivot point into the internal environment.
The examples below demonstrate how it is possible for attackers to pivot through CI/CD and DevSecOps systems. They reflect our own red teaming and Attack Path Mapping (APM) activities with clients.
The client’s DevSecOps environment was hosted on AWS, with Gitlab used as the repository, and Jenkins as CI tooling. This pipeline was used to develop a platform containing a custom cryptographic transport layer, which encrypted messages to and from a client.
The encryption key for this transport layer was stored within the secrets management system used by the pipeline: AWS Key Management Service (KMS).
The client’s platform used AWS Lambdas to interact with AWS KMS for message encryption and decryption.
The objective: extract the keys stored in AWS KMS, or backdoor the systems using them. In a real-world situation, this would allow an attacker to decrypt messages at will.
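To illustrate why this objective mattered, consider a toy model – emphatically not the client's code, and not a secure design (the "cipher" below is an illustrative SHA-256 keystream, and the secrets store is a plain dict). The point is simply that whoever can read the store holding the transport key can decrypt every message:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher for illustration only -- NOT a real KMS, NOT secure.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# The pipeline's secrets store holds the transport key.
secrets_store = {"transport-key": b"hypothetical-32-byte-key-material"}

msg = b"wire transfer: 10,000 EUR"
nonce = b"msg-0001"
ciphertext = xor(msg, keystream(secrets_store["transport-key"], nonce, len(msg)))

# An attacker who pivots into the secrets store recovers every message at will:
stolen_key = secrets_store["transport-key"]
recovered = xor(ciphertext, keystream(stolen_key, nonce, len(ciphertext)))
print(recovered)  # → b'wire transfer: 10,000 EUR'
```

The key itself never has to leave KMS for the design to fail: access to the component that can use the key is equivalent to access to the plaintext.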
The organization, a large financial services provider, hosted its build environment on-premise. Due to its large size, teams working on the environment were distinctly segregated, with a dedicated Ops team managing it wholly.
Separate development teams existed to build different application types, and various build environments existed, each comprising a unique set of tooling. However, some teams still shared a build environment consisting of the Atlassian tool stack: the code repository was a BitBucket instance, and CI tooling was delivered through the Bamboo CI server. The BitBucket instance had anonymous read-only access enabled on its repositories.
Due to the shared build environment, different teams had access to the build queue in Bamboo, and some would cancel other teams' build jobs to prioritize their own. In response, one team had employed their own local build agent. This gave them full control, but consequently also made them a target. They did not perform the necessary maintenance, and the outdated build agent machine sat on a desk next to the developers' workspace – accessible to anyone in the office.
The same team did not require Multi-Factor Authentication (MFA) before a release was pushed from the staging environment to the production environment.
The remainder of the environment did not play a role in the attack scenario and will thus remain abstracted.
The main objective was to alter the code of the financial application to embed a keylogger within the codebase. In real terms, this would allow an attacker to harvest credentials from the entire userbase. Although this scenario was limited to a keylogger, any other malicious code (e.g. ransomware) could be used to achieve the same goal. The attack scenario was modelled around the NotPetya ransomware attacks.
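Scenarios like this can be framed as path-finding over a graph of assets and the integrations between them – the essence of Attack Path Mapping. A minimal sketch, with hypothetical node names drawn loosely from the second scenario:

```python
from collections import deque

# Hypothetical environment graph: each edge is an integration an attacker
# could traverse (e.g. the unmaintained build agent on a desk, or the
# staging-to-production release with no MFA gate).
edges = {
    "office-network": ["local-build-agent"],
    "local-build-agent": ["bamboo-ci"],
    "bamboo-ci": ["bitbucket-repo", "staging"],
    "staging": ["production"],
}

def attack_paths(start, target):
    """Breadth-first enumeration of simple attacker pivots from start to target."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:          # avoid cycles
                queue.append(path + [nxt])
    return paths

print(attack_paths("office-network", "production"))
# → [['office-network', 'local-build-agent', 'bamboo-ci', 'staging', 'production']]
```

Enumerating paths this way makes the discussion concrete: each edge on a discovered path is an integration that either needs a control or is accepted as a risk.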
Before an attacker has even begun their assault on the DevSecOps ecosystem, they will assess the environment for its weaknesses. The same assessment can be carried out by your team at the project design phase, making security an intentional step in response to the risk of a real attack and the techniques likely to be used.
As established earlier in this article, most organizations with large development environments have little oversight of their constituent parts. Furthermore, multiple unique development environments complicate the implementation of security measures.
When an organization understands the makeup of its DevSecOps environment, it can more effectively manage the technology in use and secure that environment. This discovery process is where the baseline model outlined previously proves its worth. By mapping each component to its relevant block, it becomes easier to visualize how they integrate. These integrations are represented by the connecting arrows, each of which is unique: two toolsets may follow the same concept, but it is rare that they fulfil the same purpose or share the same functionality.
Assessing these integrations is just as important as assessing the tools themselves, as the previous attack scenarios demonstrate; the tools were not vulnerable by design, rather risks were created by their integration with the environment. This is increasingly the case as organizations move to DevSecOps in the cloud.
To make the process of mapping the environment more effective – and to shift this security process even further to the left – it needs to be accompanied by threat modelling exercises. (To shift security "to the left" is to implement security measures at the earliest possible point in the development lifecycle.) This is vital to the security of any digital asset, environment, and architectural design process. Threat modelling can also be used post-design, but the cost of fixing risks discovered at that stage is likely to be higher.
Targeted Threat Modelling (TTM) follows the same principles as that of a traditional threat modelling exercise, but focuses on how people, process, and technologies interface with one another. It is designed specifically for the development environment.
To understand how TTM works, and why it is an effective security solution in this context, one need only visualize the vast number of digital assets that exist in such an environment. Traditional security assessments are unsuitable for exactly this reason: deployed within a DevSecOps environment, they reduce to a list of hostnames or IP addresses to be scanned. Normally, once a tester sees that non-default passwords have been used, and that the operating system and installed packages (including DevOps tooling) are up to date, the "all clear" is given. Traditional testing lacks the necessary holistic oversight, and by assessing components in isolation, the security implications of their integrations are lost.
A targeted approach to threat modelling can increase the effectiveness of security in a dedicated and bespoke environment such as DevSecOps. By assessing integrations between people, process, and technology, misalignments and misconfigurations can be highlighted. Coupling this approach with a visual representation of the environment gives stakeholders holistic oversight, where associated risk can be identified and attributed to attack paths. Prioritizing by risk also makes it easier to spot where a security control would only hamper the process with superfluous layers of security. Employing techniques such as TTM thus aids the process of shifting security to the left.
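One simple way to prioritize by risk is a likelihood-times-impact score per integration. A hedged sketch – the findings and scores below are entirely made up for illustration:

```python
# Hypothetical risk register: each integration scored for likelihood and impact (1-5).
findings = [
    {"integration": "repo -> CI webhook", "likelihood": 4, "impact": 5},
    {"integration": "staging -> production release", "likelihood": 2, "impact": 5},
    {"integration": "developer laptop -> repo", "likelihood": 3, "impact": 2},
]

for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Highest-risk integrations first, so controls land where they matter most.
ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
for f in ranked:
    print(f["risk"], f["integration"])
```

A low-scoring integration at the bottom of such a list is exactly where adding yet another control is likely to hamper delivery more than it reduces risk.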
DevOps, at its core, is founded on the speed-to-market conundrum facing any organization with a development faculty. When adding the Sec into DevOps, processes and technology should not be clamped down to hinder progress; they should be finely tuned to provide maximum security whilst allowing development at the required speed.
The approaches discussed in this article facilitate such an outcome, and have proven effective in engagements with clients. Due to their flexibility and scalability, the baseline model and TTM are approaches that can be adopted by any organization. Deployed properly, they deliver actionable results that a traditional security assessment simply cannot.