Gabor Szabo

Originally published at osdc.code-maven.com

Only internally vetted and approved Open Source libraries can be used

This article is based on a conversation I had with a manager at a large high-tech company.
My conversation partner preferred that I leave out the name of the company.

I asked how developers can use third-party open source libraries.

Developers can't install libraries directly from a third-party registry, from GitHub, or from anywhere else on the internet.
They can use only the libraries, and only the specific versions of those libraries, that have been vetted and approved by a company team that specializes in screening open source libraries.
The approved libraries are hosted in an in-house repository, and developers can only install from there.
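
As a rough illustration, and not the actual setup of the company in question (which I don't know), such a policy is typically enforced by pointing the package manager at the internal index only. For a Python shop that could look like the following sketch; the index URL is a made-up placeholder.

```ini
# pip.conf (e.g. ~/.config/pip/pip.conf) -- route every install through the
# company's vetted in-house package index instead of the public PyPI.
# The URL below is hypothetical.
[global]
index-url = https://pypi.internal.example.com/simple
```

With a configuration like this, `pip install somelib` succeeds only if `somelib` (in an approved version) has already been mirrored into the internal index.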

If you wish to use a library that is not yet approved, you need to submit a request to that team.
They will evaluate the library and all of its dependencies for license compliance and code security.

They will also ask why you need that particular library, what alternatives you considered, and why you want to use this specific one.

They will also take into account the level of maintenance the project receives.
After all, an unmaintained library soon becomes a liability, once the team discovers that features are missing
or runs into bugs and security vulnerabilities.

This, of course, makes it much harder to use open source libraries. It also means that the security team must have members who can read
and properly evaluate code written in every programming language used in the organization.

This creates pressure to reduce the number of programming languages used in the organization. In itself that is not a problem,
but it can limit the opportunity to explore other languages that might be a better fit for specific tasks.

In other words, they say: we can't fully trust open source software.
We need a strong SCM (Software Configuration Management) process.
We need version control for inbound open source code.
We need code review and license review for every piece of software that arrives at the company.
We are ready to allocate a separate team to verify and approve new open source packages before they are added to the stack.

Only relatively large companies can afford a dedicated team like this that does not work directly on the company's products.

Top comments (3)

Ricardo Sueiras

This is a surprisingly common pattern in more risk-averse enterprises. The good news is that with education and the right stakeholder management it's behaviour that can be changed (at least I was able to do that most of the time).

Gabor Szabo

It seems that as companies grow in size, they become more and more risk-averse.

Phil Ashby

TL;DR - I'm going to rant a bit about this anti-pattern (thanks for the opportunity Gabor :))!

Ah yes, the heavy-handed approach because not only does "the company" (often just one person who has suffered in the past!) not trust OSS, they also do not trust (the majority of) their own people to have good judgement or are not providing them with enough information to judge well. For me this is an 'organisational smell' that certainly won't scale or help them to build better systems.

From my experience, a better approach is to create a risk funnel: where choice is wide at one end, but risk is high so exposure must be limited; funnelling down to well-audited / "safe" / supported / in-production / low risk solutions at the other end that can be relied upon to operate the business. More detailed thoughts on how below:

Surprisingly(?) it's good to have an internal repository to host OSS components, as it provides two major benefits: security of supply (in all senses), and discovery & promotion of components already used within the company, particularly if components have a trust level / supported status indication and critical contact information (see below). Even better IMO is to have an internal source forge (eg: a Gitea or Sourcehut instance) to provide well-known OSS tooling and processes for internal work, also avoiding internal barriers / stovepipes.

Instead of overly constraining early discovery I would recommend having an explicit ingestion / on-boarding process for new things that starts with individuals who are suitably informed of the company security guidelines (data protection, reputation risk, etc.) and can choose whatever technologies they believe solve the challenges they face. Deployment of their solution(s) however goes via a visible audit process, such that appropriately qualified people (other devs, infosec, lawyers, etc.) can assess their choices and work with them to reduce risk (eg: ensuring they can support OSS components, applying security analysis tooling, checking licences, etc.) as their solution(s) go through integration/test/beta/release cycles. Accompanying this process, the OSS components they have added (and indeed their own code) exist within the common repository, where the audit process updates status information (as above).

In my last role we used weekly "design authority" sessions to surface what new things teams were looking at, allowing interested parties (devs, risk managers, etc.) to self-assign to these new avenues of discovery and work through the audit process. Often just the preparation work for this surfacing raised enough flags within a team that they chose a different solution; we rarely encountered high-risk OSS choices.

(can you tell I spent too long on this in my past?)