Software development is a complicated task that balances business and technical needs. Furthermore, the organization has to ensure the product complies with laws, regulations, and customer requirements while remaining robust against cyber-attacks, in line with the customer’s risk appetite.
Hence, given the growing risks, I find it more important than ever to develop with security in mind. The accepted approach nowadays integrates security measures throughout product development, across its different stages.
It allows us to keep our knowledge of new threats up to date and address them quickly, reducing the cost and time of finding issues later in the process and lowering the probability of significant weaknesses that may harm the product’s quality. Thus, security is integrated into each step of the software development life cycle (SDLC).
Below, I’ll outline various sources and my own experience on the subject as they relate to the development phase of the SDLC, assuming thorough work was already done to involve security in the previous steps (requirements gathering and design), eventually moving from SDLC towards Secure-SDLC.
Based on the planning steps (which include proactive measures such as threat modeling, security requirements, and security design), define quality gates that effectively identify and block various malfunctions. For example, prevent the addition of external open-source code to the product until it is approved and no weaknesses are found. Audits by third-party security experts should be considered too. Automation is a must. Some of the well-known security and control measures are:
- Vulnerability assessment to identify, quantify, and prioritize ongoing risks
- Dependency tracking (for example, OWASP Dependency-Track), which monitors component usage across all versions of every application in the portfolio to proactively identify risk across the organization
- Linting to identify coding-standard violations and enforce well-known best practices. There are plenty of tools addressing this need for each programming language (Coverity, TICS, FxCop, CodeScene, and even the various IDEs — Visual Studio, PyCharm, IntelliJ, and more)
- Software Composition Analysis (SCA) to detect open-source vulnerabilities with higher accuracy, for example, Synopsys’s Black Duck tool
- SAST (Static Application Security Testing), also referred to as static code analysis, which scans the code for security flaws. One of the well-known tools in the industry is Fortify
- DAST (Dynamic Application Security Testing), as far as I know mainly applied to web applications, which performs black-box tests against the running front-end to find potential security vulnerabilities by attacking the user interface. Note that, unlike SAST, it has no access to the source code (for example, Sentinel Dynamic by WhiteHat Security)
- IAST (Interactive Application Security Testing), which combines the strengths of both the SAST and DAST methods
- Runtime Application Self-Protection (RASP), which actively tracks (by logging) the application’s actions and detects attacks (antivirus software, for example)
- Validate user input; for example, apply an XML firewall that protects XML-based interfaces (such as REST). It scans inbound and outbound traffic to filter content, limit the number of requests, and more. As a rule of thumb, follow OWASP best practices.
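To make the input-validation point concrete, here is a minimal sketch in Python of OWASP-style allowlist validation; the field names and patterns are hypothetical, but the principle — accept only inputs that match a known-good shape, never try to sanitize bad ones — is the important part:

```python
import re

# Allowlist patterns for expected input shapes (hypothetical fields).
PATTERNS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "order_id": re.compile(r"^[0-9]{1,10}$"),
}

def validate(field: str, value: str) -> bool:
    """Accept input only if it fully matches the allowlist pattern for its field."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(validate("username", "alice_01"))        # True
print(validate("username", "alice; DROP x"))   # False: rejected, not "cleaned"
print(validate("order_id", "12345"))           # True
```

Note the design choice: unknown fields fail closed, and rejected input is discarded rather than stripped, since partial sanitization is a common source of bypasses.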
Apart from the above points, keep yourself up to date and patched (tools, malware scanners, IDEs). There is no use testing with an outdated antivirus or running on a vulnerable OS.
A common principle is to apply segmentation and segregation practices to ensure that “live” data is only available in production environments, meaning developers work on dummy data only. This creates a barrier that keeps weaknesses from reaching real customer data.
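As a minimal sketch of the dummy-data idea, the following (assuming a simple dict-based record format, which is my invention for illustration) replaces sensitive fields with stable hash-derived pseudonyms, so developers get realistic-looking but non-identifying data:

```python
import hashlib

def mask_record(record: dict, sensitive_fields=("name", "email", "ssn")) -> dict:
    """Replace sensitive values with stable pseudonyms derived from a hash,
    so the same input always maps to the same fake value."""
    masked = dict(record)
    for field in sensitive_fields:
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:8]
            masked[field] = f"{field}_{digest}"
    return masked

prod = {"id": 7, "name": "Jane Doe", "email": "jane@example.com", "balance": 120}
print(mask_record(prod))  # id and balance survive; name and email are pseudonyms
```

Stable pseudonyms (rather than random values) keep joins and duplicate detection working in the development environment. For real deployments, a vetted anonymization tool is preferable to a hand-rolled script like this.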
Furthermore, meticulously define the privileges needed for each user, using JIT (just-in-time) admin and just-enough-administration methodologies. Monitor activity in the different environments (CI/CD, developer machines, cloud storage, etc.); any change in privileges, unknown code, new user accounts, unfamiliar IDE plugins, and more are just some of the events that organizations may choose to monitor in order to lower risk.
The purpose is twofold: gain knowledge and promote awareness (to be honest, this is one of this blog post’s goals). Include learning material on security as part of each team member’s development plan, and make sure it is part of the onboarding process.
Develop according to agreed coding standards and use tools to verify compliance. Apply mutual code review and use dedicated unit tests that cover security issues. That way, each software engineer is responsible for the weaknesses in their own code.
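For illustration, a security-focused unit test might look like the sketch below (the `safe_path` helper and its upload directory are hypothetical); it checks that a path-traversal attempt is rejected rather than silently served:

```python
import os
import unittest

BASE_DIR = "/srv/app/uploads"  # hypothetical storage root

def safe_path(filename: str) -> str:
    """Resolve a user-supplied filename, rejecting path traversal."""
    candidate = os.path.normpath(os.path.join(BASE_DIR, filename))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt")
    return candidate

class TestSafePath(unittest.TestCase):
    def test_normal_file_is_allowed(self):
        self.assertEqual(safe_path("report.pdf"), "/srv/app/uploads/report.pdf")

    def test_traversal_is_rejected(self):
        with self.assertRaises(ValueError):
            safe_path("../../etc/passwd")

# Run the security tests programmatically (e.g., as a CI quality gate).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSafePath)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these encode the security requirement itself, so a later refactor that weakens the check fails the build instead of shipping.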
Finally, the team’s security trustee has to approve the code against the requirements. Internal audits should be conducted too, to identify risks, and design reviews should be coordinated with all the different disciplines.
Documenting bugs is a well-established best practice, and it applies to security issues as well. Keep records of found bugs, including investigation conclusions, screenshots, logs, and any other supportive information. It may help in the future when a similar issue is encountered.
If any uncontrolled change to configuration files happens, send the information to the security representatives automatically and shut down the affected system until it is approved. Maintain a list of all files in the system; it helps to spot unfamiliar changes. Another alternative is to apply AAA (authentication, authorization, and accounting) protocols that govern access control and application control (e.g., an allow list).
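A minimal sketch of the file-manifest idea in Python — a manifest here is just a path-to-digest dict, and comparing two of them surfaces anything unfamiliar:

```python
import hashlib
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Record a SHA-256 digest for every file under root."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def detect_changes(baseline: dict, current: dict) -> dict:
    """Compare two manifests and report added, removed, and modified files."""
    return {
        "added": sorted(current.keys() - baseline.keys()),
        "removed": sorted(baseline.keys() - current.keys()),
        "modified": sorted(
            p for p in baseline.keys() & current.keys()
            if baseline[p] != current[p]
        ),
    }

# Illustrative manifests (digests shortened for readability):
baseline = {"app.conf": "aa11", "main.py": "bb22"}
current = {"app.conf": "cc33", "main.py": "bb22", "rogue.sh": "dd44"}
print(detect_changes(baseline, current))
# → {'added': ['rogue.sh'], 'removed': [], 'modified': ['app.conf']}
```

In practice, the baseline manifest must be stored somewhere the monitored system cannot modify, otherwise an attacker simply rewrites it along with the files.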
Define a committee to review the frequent changes in the system, including bug classification. In high-rate software development methodologies such as Agile, it is hard to keep pace with modifications, and a dedicated authority can mitigate that. Furthermore, metrics (KPIs) that measure trends and anomalies should be collected to analyze the risk, leading to better root cause analysis.
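As a sketch, such a metric can be as simple as counting high-severity bugs per sprint and flagging outliers; the records and the 1.5x-of-average threshold below are made up for illustration:

```python
from collections import Counter
from statistics import mean

# Hypothetical bug records: (sprint, severity)
bugs = [
    (1, "high"), (1, "low"), (2, "high"), (2, "medium"),
    (3, "high"), (3, "high"), (3, "high"), (3, "high"),
]

per_sprint = Counter(sprint for sprint, _ in bugs)
high_per_sprint = Counter(s for s, sev in bugs if sev == "high")

# A simple anomaly signal: flag any sprint whose high-severity count
# exceeds the average across sprints by 50%.
avg_high = mean(high_per_sprint.values())
anomalies = [s for s, n in high_per_sprint.items() if n > 1.5 * avg_high]
print(anomalies)  # → [3]
```

The flagged sprint is then a candidate for the committee’s root cause analysis: what changed in sprint 3 that produced the spike?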
Copying the source code and reverse-engineering it are both common techniques for stealing your product. Hence, to defend against such abuse, the following methods can be used, provided they do no harm to functionality:
- Obfuscation is the process of transforming source code so it is difficult for humans to understand. Note that while it may take time to reverse this transformation, it is not impossible.
- Anti-disassembly takes advantage of disassembler assumptions. It uses dedicated code (e.g., jump tables) or pieces of data to cause disassembly analysis tools to produce an incorrect listing.
- Anti-debugging uses various methods to block debugging of the application’s code, for example, exploiting system APIs to detect the presence of a debugger, identifying code changes caused by a debugger’s breakpoints, and many other techniques.
- Packing is the process of compressing assemblies to make them more difficult to reverse engineer.
Finally, these techniques are used mainly for IP protection, and one should not rely on them alone to hide secrets (like encryption keys).
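As a tiny illustration of the anti-debugging idea — and of how shallow such checks are — CPython exposes the interpreter’s active trace function, which debuggers like pdb install; a determined attacker bypasses this in seconds, so treat it as a speed bump, not a defense:

```python
import sys

def debugger_attached() -> bool:
    """A classic, easily bypassed check: Python debuggers such as pdb
    work by installing a trace function in the interpreter."""
    return sys.gettrace() is not None

print(debugger_attached())  # normally False outside a debugger
```

Native applications use analogous tricks (e.g., querying OS debugging APIs), with the same caveat: layered, mutually reinforcing checks slow an attacker down, but none of them hide secrets.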
The configuration may change over time. Document the various specs and verify them against the requirements and needs. There are plenty of automated tools that can assist in that task.
Use standard protocols and libraries. DO NOT “re-invent” the wheel (especially in security) — use proven controls to protect data in transit and data at rest.
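For example, instead of inventing a scheme for storing secrets at rest, reach for a vetted primitive. This sketch uses PBKDF2 from Python’s standard library (the iteration count is illustrative; tune it to your hardware and current guidance):

```python
import hashlib
import hmac
import os

def hash_secret(secret: str, salt: bytes = b"") -> tuple:
    """Derive a storage-safe hash with a standard KDF (PBKDF2-HMAC-SHA256)
    instead of a home-grown scheme. A fresh random salt is generated
    when none is supplied."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 200_000)
    return salt, digest

def verify_secret(secret: str, salt: bytes, expected: bytes) -> bool:
    """Constant-time comparison, again via the standard library."""
    _, digest = hash_secret(secret, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_secret("s3cr3t-passw0rd")
print(verify_secret("s3cr3t-passw0rd", salt, stored))  # True
print(verify_secret("wrong-guess", salt, stored))      # False
```

Every piece here — the KDF, the salt source, the constant-time compare — is a standard, reviewed building block; the only code we wrote is the glue.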
Do backups! Moreover, do them frequently and test them.
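A minimal sketch of a backup routine that also checks its own output (paths and naming are illustrative). The real test of a backup is a full restore, but verifying the archive is readable and complete catches the worst failures:

```python
import shutil
import tarfile
import tempfile
from pathlib import Path

def backup_and_verify(src_dir: str, dest_dir: str) -> str:
    """Archive a directory, then verify the backup by re-reading it
    and checking that every source file is present in the archive."""
    archive = shutil.make_archive(str(Path(dest_dir) / "backup"), "gztar", src_dir)
    expected = {p.relative_to(src_dir).as_posix()
                for p in Path(src_dir).rglob("*") if p.is_file()}
    with tarfile.open(archive) as tar:
        members = {m.name.removeprefix("./") for m in tar.getmembers() if m.isfile()}
    missing = expected - members
    if missing:
        raise RuntimeError(f"backup verification failed, missing: {missing}")
    return archive

# Usage with throwaway directories:
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
Path(src, "notes.txt").write_text("keep me safe")
print(backup_and_verify(src, dst).endswith(".tar.gz"))  # True
```

Raising on a failed verification (rather than logging and continuing) is deliberate: a backup job that silently produces an incomplete archive is worse than one that fails loudly.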
The above recommendations consolidate a few of the known practices in the security field. Here, I decided to touch mainly on the development phase (of the SDLC), but further actions can be taken in each part of the life cycle, eventually lowering the risk of being infiltrated and building insight into how important it is to develop with security in mind.
As closing words, I highly advise automating everything you can and making it part of the process (continuous improvement). Keep in mind that manual tests are prone to human error.