First, I want to highlight a study from Cybersecurity Ventures which predicted:
Ransomware damage costs will rise to $11.5 billion in 2019, and a business will fall victim to a ransomware attack every 14 seconds by then.
Cybersecurity risks will keep growing over the next few years, and they are already widespread. That is why you should treat security as a de facto requirement for your organization and include it at the root of any feature you develop.
Keeping your code and architecture secure has a lot more to do with infusing security into every part of your organization than with looking directly at your code.
You have to teach your teams, empower them to write secure code, and then trust them to actually do it.
Nevertheless, there is a long way to go before you achieve that kind of organization, one that is self-sufficient in security.
So let's start with the big picture, then focus on improving the organization, and finally the technical stack.
Before actually trying to keep your code secure, you should ask yourself why you need to keep it secure.
Indeed, it is very important to understand that keeping your code secure matters in terms of brand, but also in terms of money.
As you can see in the following picture, a security event can cost a lot of money when it is handled too late.
So you should try to answer these questions:

| Question | Possible answers |
| --- | --- |
| Why do I want to improve security? | Because someone told me to? Because I deeply believe I have to? |
| Where should I push first? | Organization? Technical stack? |
| What is going to be the easiest? | Organization? Technical stack? |
Let's set up a plan to infuse more security into each part of your company, at both the organizational and technical levels.
1. Add the Security Team to the Tech Design Committee.
2. Teach your development teams secure development, and make them aware of offensive security so that they understand how to think like an attacker while coding a new feature.
3. Write documentation about how to code securely, handle secrets, manage source code, etc., so that newcomers can refer to it.
4. Add a SAST tool (like Checkmarx, Coverity, etc.) to your CI/CD pipeline to check code quality in terms of security.
5. Add a tool to check security vulnerability alerts (CVEs).
6. Add a tool to check whether a publicly disclosed vulnerability in one of your dependencies requires an update.
7. Instrument your automated testing to add security use cases.
8. Perform a penetration test on your application.
All of this is what we call Security by Design and DevSecOps.
If I had to classify all of these into those two categories:
| Category | Step |
| --- | --- |
| Security by Design | Security Team part of the Tech Design Committee |
| Security by Design | Teaching secure development to teams |
| Security by Design | Write a Code of ... |
| DevSecOps | Add SAST to CI/CD |
| DevSecOps | Check CVEs automatically |
| DevSecOps | Check vulnerable dependencies |
| Security by Design | Penetration testing on your app |
So now that the steps are clear, we can take a deeper look at how to do this in concrete terms.
A lot of organizations have a Tech Design Committee, or something similar, whose job is to decide what should be on the roadmap, what should be done, what technical stack should be used, and so on.
It varies from organization to organization.
Nevertheless, the point is that this committee is where decisions are made and actions can be taken.
So it is the perfect spot to start infusing security into the organization.
Do you have a Tech Design Committee (or something similar)?
If you don't, maybe it is because your structure is built so that you make decisions differently. In that case, find the decision-making process within your organization and infuse security from there.
If you do have a Tech Design Committee, let's jump into how we can infuse security within it.
At a lot of companies I have worked with, the Tech Design Committee is where the architects, the lead software engineers, and the operations engineers discuss a solution in order to decide, at a high level, how it should be implemented.
That is the perfect spot to add security to the list.
The sooner you add security to a project, the easier and cheaper it is.
That holds for security, but it also holds for many other parts of software design; just figure out which ones are missing in your organization.
To sum up, security engineers have to be embedded at the very beginning of the process, as soon as you start thinking about a new feature.
And Security should have a voice, including the power to say no to a specific implementation if it could introduce flaws in the future.
I was a developer for less than ten years. During those years I developed a lot of applications, mainly on iOS (and some others on the .Net framework 😉).
Looking back, I can say that I shipped some shitty software in terms of security. The application in itself was perfect: it worked very well, it was fast, it had a small memory footprint, it was perfectly designed, and so on.
But in terms of security, if I were to pentest it now, I would laugh at myself for sure.
Was it because I was a bad developer? Certainly not.
Was it because I wasn't trained in terms of Software Security? For sure.
> It’s more important than ever that every developer becomes a security developer.
Secure software development is definitely not something that comes instinctively. If you don't have a broad interest in software security, you already have more than enough to learn in the part of the stack you are working on (be it iOS, Android, NodeJS, databases, etc.). So why spend time on security if it is not required by your management? (We can look back at the previous parts of this post.)
That is why security engineers have to train the software development teams in software security, so that developing secure software becomes second nature.
There is some basic knowledge that every software engineer should know in terms of security.
The training then depends on your team's tech stack: if it is Android, you should train them in Android security.
Nevertheless, your teams definitely have to understand the value of it in order to really dive in. If they don't, it is going to be a total failure.
Now that your teams are trained and convinced about writing secure software from the ground up, they have to rely on best practices defined at the organization level, so that newcomers know how to do things without asking.
For that, the rules for writing code securely must be clearly defined in your documentation.
For instance, "how to store datas in your databases". Every data is different and not every data must be encrypted, you have to make it crystal clear what, when and how to store data.
Or how to use Certificate Pinning on an Android Application.
Moreover, password management for your organization is very important (hint: you should definitely make everyone use a password manager). It is not directly related to secure software development, but it is essential to store your organization's passwords securely.
And I think that, beyond digital media, information should also be displayed on physical media.
For instance, you can put up an infographic that explains in a simple manner what symmetric and asymmetric encryption are, with a QR code that redirects to more detailed documentation on the topic.
It is a very good way to explain complicated notions to everyone who walks by in the office.
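As a sketch of what such an infographic could boil down to, here is the symmetric/asymmetric difference in a few lines of Python. Both the XOR cipher and the textbook RSA numbers are toys for explanation only, never for real use:

```python
# Symmetric encryption: the SAME secret key encrypts and decrypts.
# (XOR here is a toy stand-in for real ciphers like AES.)
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = xor_cipher(b"hello", b"key")
assert xor_cipher(secret, b"key") == b"hello"

# Asymmetric encryption: a PUBLIC key encrypts, a PRIVATE key decrypts.
# (Textbook RSA with tiny primes -- illustration only.)
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent (published to the world)
d = pow(e, -1, phi)          # private exponent (kept secret; Python 3.8+)

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == message  # only the key holder can decrypt
```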
I assume that you have a Continuous Integration / Continuous Delivery (CI/CD) pipeline.
If you do, there is a very simple way to add security to it: add a Static Application Security Testing (SAST) tool to your CI/CD pipeline.
This type of software analyzes your code and tries to find security flaws in it.
You can add it at various points of your pipeline: at the very end, right before Jenkins builds your app, or every time someone pushes code to the codebase.
It depends on what you want to achieve.
Nevertheless, I do not recommend running it too often (every time someone pushes code seems too much to me).
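To give an intuition of what a SAST tool does under the hood, here is a deliberately tiny, hypothetical scanner built on Python's `ast` module. Real products like Checkmarx or Coverity go vastly deeper; this one only flags two well-known risky patterns:

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # patterns this toy scanner looks for

def scan(source: str) -> list[str]:
    # Walk the syntax tree and report calls to eval/exec, plus any call
    # passing shell=True (a common command-injection foothold).
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
        for kw in node.keywords:
            if kw.arg == "shell" and getattr(kw.value, "value", None) is True:
                findings.append(f"line {node.lineno}: shell=True in a call")
    return findings
```

A pipeline step would then fail the build whenever `scan` returns a non-empty list.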
A lot of security researchers spend time looking for vulnerabilities in software and disclose them responsibly.
These vulnerabilities are known as "Common Vulnerabilities and Exposures" (CVEs) and are gathered on dedicated websites, like cvedetails for instance.
Some tools, like XRay for instance, can automatically check whether your software has disclosed vulnerabilities.
You can set them up within your CI/CD pipeline.
We are all interconnected, in a world where we build software on top of the work of others.
That means using libraries and frameworks that other developers made to simplify their own work and then shared with the world.
It helps developers build software faster than ever, but at the same time it puts all of us at risk because of these interconnected dependencies.
XRay can automatically check whether the dependencies you use have disclosed vulnerabilities.
Dependabot from GitHub can do that automatically as well.
But I really love Snyk for checking all the known flaws in your dependencies.
This is especially useful if you are using open source libraries in your codebase.
There is a common class of attack in which malicious actors gain admin access to a widely used repository and subtly modify it, making any software that uses the library vulnerable. It has been pretty common in the npm ecosystem.
That is why it is very important to check that the code you add to your software is free of flaws, not only at the beginning but throughout its whole life.
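The principle behind these tools can be sketched in a few lines: compare the versions you depend on against an advisory feed. The package names and advisory data below are entirely made up; XRay, Dependabot, and Snyk do this against real CVE feeds:

```python
# Hypothetical advisory database: package -> versions known to be vulnerable.
ADVISORIES = {
    "left-padder": {"1.0.0", "1.0.1"},
    "fast-json": {"2.3.0"},
}

def audit(pinned: dict[str, str]) -> list[str]:
    # Report every pinned dependency that matches a known advisory.
    return [
        f"{name}=={version}: known vulnerability, update required"
        for name, version in pinned.items()
        if version in ADVISORIES.get(name, set())
    ]
```

As with the SAST step, the build fails as soon as the report is non-empty.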
At the end of the pipeline, you have a working application that can be tested.
How do you test it with a focus on security?
First, you can focus on the OWASP Top 10 for your technical stack (mobile, web, etc.).
The OWASP advocates approaching application security as a people, process, and technology problem because the most effective approaches to application security include improvements in all of these areas.
Let's say you are going to test an iOS application: you can follow the Mobile Security Testing Guide written by the OWASP Foundation.
It will help you set up your testing lab (jailbroken iPhone, rooted Android, etc.) and then focus on the most common vulnerabilities on these devices.
If you can be sure that you are not vulnerable to these flaws, you are really a notch above the competition. 😉
But the key is to make it repeatable, whether it is based on scripts, human testers, or anything else.
If the process is not precisely defined, you are doing it wrong.
You have to be sure that a test run on one version will give the exact same result on the next version.
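As an example of a small, perfectly repeatable security test, you could assert on every release that your HTTP responses carry the security headers you require. The header list and function below are illustrative, not exhaustive:

```python
# Headers we require on every HTTP response (illustrative subset).
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(headers: dict[str, str]) -> set[str]:
    # HTTP header names are case-insensitive, so normalize before comparing.
    present = {name.title() for name in headers}
    return {h for h in REQUIRED_HEADERS if h not in present}
```

Run against each version, the check either returns an empty set (pass) or the same deterministic list of gaps, which is exactly the version-to-version consistency you want.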
Eventually, you can run a penetration test on your application.
It has a cost, but it puts you in a real-life attack scenario.
If you have a Red Team in your company (which probably means your organization is pretty mature), it is their job to perform penetration testing on your application.
More commonly, you can buy a penetration test from a specialized company, whose pentesters will try to find vulnerabilities in your application.
It is a perfect approach, but you must have resources in your company to handle all the vulnerabilities the pentesters find. If you don't, you know that you are at risk; you spent a lot of money, but you didn't improve your security.
So it comes back to bringing security in at the organization level. You have to have dedicated resources (an application security engineer) to handle this properly: explain the vulnerabilities in an intelligible manner to the development team, follow up that they are properly fixed, and test the fixes. They can also add new test scenarios for these vulnerabilities (see step 7).
If you don't have anyone dedicated to security, either train someone to take that role or hire someone.
Add Security to your Tech Design Committee and listen to what they have to say. The sooner you take their remarks into account, the cheaper it is.
Teach your developers to become a team of security developers, whether through internal training or a training organization.
If you change your organization following these recommendations, you are on the right path, because every new feature or decision will go through a security process.
Now that you have added security to your processes and your organization, you can try to add security to every feature, building on the development teams' new expertise.
They are now supposed to challenge everything they do, particularly within the team, in terms of security. A good code review process can add a lot to your security, and to your velocity in developing secure new features.
Nevertheless, it takes time, and the teams must stay involved. Brown Bag Lunch sessions are a good way not to forget that security is an ongoing process, not a one-way ticket.
Going live is when the real work starts, because attackers are now able to test your application.
So you can't stop and say "It's live, it's over for me. Next feature please!!!"
In terms of good practice, instrumenting your CI/CD pipeline is the next step: add dependency checks and automated tests based on your business logic and knowledge of the app.
And I would recommend performing a penetration test for every major new version of any application you release. It is really good practice, and it gives you a fresh start.
I hope this helps you reconsider why security is so important in any organization, and convinces you to have someone whose job is to make sure it is properly taken into account, so that your business does not meet a gruesome fate because of a security event.