DEV Community

LINUX KERNEL: Researchers from the University of Minnesota had no bad intentions - lift the ban

manish srivastava ・5 min read

The University of Minnesota has been banned from contributing to the Linux kernel by one of its maintainers after researchers from the school apparently knowingly submitted code with security flaws.

This is my personal view after reading their open letter to the Linux kernel community. I believe the community could resolve this matter with a warning.

The matter:
Earlier this year, two researchers from the university released a paper detailing how they had submitted known security vulnerabilities to the Linux kernel in order to show how potentially malicious code could get through the approval process.
Link of paper: https://github.com/QiushiWu/QiushiWu.github.io/blob/main/papers/OpenSourceInsecurity.pdf

Now, after another student from the university submitted code that reportedly does nothing, kernel maintainer and Linux Foundation fellow Greg Kroah-Hartman has released a statement calling for all kernel maintainers to reject any code submissions from anyone using a umn.edu email address.

I have been meaning to do this for a while, but recent events have
finally forced me to do so.

Commits from @umn.edu addresses have been found to be submitted in "bad
faith" to try to test the kernel community's ability to review "known
malicious" changes.  The result of these submissions can be found in a
paper published at the 42nd IEEE Symposium on Security and Privacy
entitled, "Open Source Insecurity: Stealthily Introducing
Vulnerabilities via Hypocrite Commits" written by Qiushi Wu (University
of Minnesota) and Kangjie Lu (University of Minnesota).

[....]

but they should be aware that future submissions from anyone
with a umn.edu address should be by default-rejected unless otherwise
determined to actually be a valid fix (i.e. they provide proof and you
can verify it, but really, why waste your time doing that extra work?)

thanks,

greg k-h

You can read the full mail here:
https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh@linuxfoundation.org/

In a statement meant to clarify the study, the researchers said they intended to bring attention to issues with the submission process — mainly, the fact that bugs, including ones that were potentially maliciously crafted, could slip through. Kernel developer Laura Abbott countered this in a blog post, saying that the possibility of bugs slipping through is well known in the open-source software community. In what appears to be a private message, the person who submitted the reportedly nonfunctional code called Kroah-Hartman’s accusation that the code was known to be invalid “wild” and “bordering on slander.”

It’s unclear if that submission — which kicked off the current controversy — was actually part of a research project. The person who submitted it did so with their umn.edu email address, while the patches submitted in the study were done through random Gmail addresses, and the submitter claimed that the faulty code was created by a tool. Kroah-Hartman’s response basically said that he found it unlikely that a tool had created the code, and, given the research, he couldn’t trust that the patch was made in good faith either way.

The university's open letter to the community:

April 24, 2021
An open letter to the Linux community

Dear Community Members:

We sincerely apologize for any harm our research group did to the
Linux kernel community. Our goal was to identify issues with the
patching process and ways to address them, and we are very sorry that
the method used in the “hypocrite commits” paper was inappropriate. As
many observers have pointed out to us, we made a mistake by not
finding a way to consult with the community and obtain permission
before running this study; we did that because we knew we could not
ask the maintainers of Linux for permission, or they would be on the
lookout for the hypocrite patches. While our goal was to improve the
security of Linux, we now understand that it was hurtful to the
community to make it a subject of our research, and to waste its
effort reviewing these patches without its knowledge or permission.

We just want you to know that we would never intentionally hurt the
Linux kernel community and never introduce security vulnerabilities.
Our work was conducted with the best of intentions and is all about
finding and fixing security vulnerabilities.

The “hypocrite commits” work was carried out in August 2020; it aimed
to improve the security of the patching process in Linux. As part of
the project, we studied potential issues with the patching process of
Linux, including causes of the issues and suggestions for addressing
them.
* This work did not introduce vulnerabilities into the Linux code. The
three incorrect patches were discussed and stopped during exchanges in
a Linux message board, and never committed to the code. We reported
the findings and our conclusions (excluding the incorrect patches) of
the work to the Linux community before paper submission, collected
their feedback, and included them in the paper.
* All the other 190 patches being reverted and re-evaluated were
submitted as part of other projects and as a service to the community;
they are not related to the “hypocrite commits” paper.
* These 190 patches were in response to real bugs in the code and all
correct--as far as we can discern--when we submitted them.
* We understand the desire of the community to gain access to and
examine the three incorrect patches. Doing so would reveal the
identity of members of the community who responded to these patches on
the message board. Therefore, we are working to obtain their consent
before revealing these patches.
* Our recent patches in April 2021 are not part of the “hypocrite
commits” paper either. We had been conducting a new project that aims
to automatically identify bugs introduced by other patches (not from
us). Our patches were prepared and submitted to fix the identified
bugs to follow the rules of Responsible Disclosure, and we are happy
to share details of this newer project with the Linux community.

We are a research group whose members devote their careers to
improving the Linux kernel. We have been working on finding and
patching vulnerabilities in Linux for the past five years. The past
observations with the patching process had motivated us to also study
and address issues with the patching process itself. This current
incident has caused a great deal of anger in the Linux community
toward us, the research group, and the University of Minnesota. We
apologize unconditionally for what we now recognize was a breach of
the shared trust in the open source community and seek forgiveness for
our missteps.

We seek to rebuild the relationship with the Linux Foundation and the
Linux community from a place of humility to create a foundation from
which, we hope, we can once again contribute to our shared goal of
improving the quality and security of Linux software. We will work
with our department as they develop new training and support for
faculty and students seeking to conduct research on open source
projects, peer-production sites, and other online communities.  We are
committed to following best practices for collaborative research by
consulting with community leaders and members about the nature of our
research projects, and ensuring that our work meets not only the
requirements of the IRB but also the expectations that the community
has articulated to us in the wake of this incident.

While this issue has been painful for us as well, and we are genuinely
sorry for the extra work that the Linux kernel community has
undertaken, we have learned some important lessons about research with
the open source community from this incident. We can and will do
better, and we believe we have much to contribute in the future, and
will work hard to regain your trust.


Sincerely,


Kangjie Lu, Qiushi Wu, and Aditya Pakki
University of Minnesota

Link:
https://lore.kernel.org/lkml/CAK8KejpUVLxmqp026JY7x5GzHU2YJLPU8SzTZUNXU2OXC70ZQQ@mail.gmail.com/

Further reading:

https://www.google.com/amp/s/www.theverge.com/platform/amp/2021/4/22/22398156/university-minnesota-linux-kernal-ban-research

Discussion (8)

Chapman

I think if you are going to post a link to the apology, you should also link to the actual conversation where the accusations fly. This issue goes far deeper than a simple "Sorry guys, I didn't mean it" apology.

lore.kernel.org/linux-nfs/YH5%2Fi7...

The bottom line here is these researchers were submitting KNOWN bad patches, in bad faith, to the Linux kernel group in the name of research. There are a myriad of ethical ways this could have been achieved, but they chose not to use them. Instead, they quietly toyed with the process of known, highly ethical and trusted developers and the institutions they represent. The live Linux kernel is simply NOT the place for this kind of research. They were toying with CRITICAL INFRASTRUCTURE.

Put it this way - would you want someone toying with the code running the electrical grid, maybe a nuclear power plant? How about your local water treatment plant? Would this be OK? I should think not.

The Univ. of Minnesota IRB should have stepped in here to put an immediate stop to the research. But mystifyingly, they chose not to, due to lax oversight or pure incompetence. For that, they have paid a massive price. And deservedly so.

What Pakki achieved here is to cause real, irreparable harm to the UMN computer science department reputation. As an institution, they are now banned from contributing to the linux kernel. That is a HUGE black eye for the university. In the process, the researchers have done more than shoot themselves in the foot - I certainly would never hire them. Who could trust them?

Expect to see some lawsuits fly soon in this saga. It will be interesting to watch, to say the least.

Paulo Renato

Put it this way - would you want someone toying with the code running the electrical grid, maybe a nuclear power plant? How about your local water treatment plant? Would this be OK? I should think not.

Wouldn't you want to be sure that the code running this critical infrastructure, and the process that leads to it, is tested for security vulnerabilities as part of merging the code into the mainstream codebase?

The increasing number of attacks on the software supply chain in recent years clearly reveals a need for all processes to be scrutinized for ways they can be exploited, so that they can also be fixed and patched.

So, in my opinion, putting the Linux process of merging patches to the test is a good and needed thing, so that everyone can learn from the mistakes and the whole process gets better and more tamper-proof.

I am not defending the researchers nor accusing them, because I don't know enough, but I think this type of research needs to be done; maybe they just haven't chosen the best way of doing it.

Chapman

Without a doubt - the approval process needs scrutiny. That's something even the dev team has openly admitted. But there's a right way and a wrong way to do it. You don't go testing how to defuse a live nuke by just clipping random wires and hoping for a good outcome. The research team should have had complete approval from senior leadership who knew exactly what they were doing beforehand.

Paulo Renato • Edited

But there's a right way and a wrong way to do it. You don't go testing how to defuse a live nuke by just clipping random wires and hoping for a good outcome.

These are not exactly comparable things.

If the code gets merged into the mainstream branch, it's not released to the public immediately (or it shouldn't be); the researchers would then reveal that the review had failed, and the commit would be reverted.

The research team should have had complete approval from senior leadership who knew exactly what they were doing beforehand.

I have worked in the past in a factory where a leak could kill everyone in a radius of up to x kilometers, depending on the wind and the leak.

Emergency exercises were carried out to test the responsiveness of the local authorities and of all employees in the factory, and everyone involved was aware beforehand that they would occur, so the outcome was always excellent. They always contained the leak in around 20 minutes, with the help of the fire department of the nearest city.

They would set an hour for the exercise and then have police at all intersections from the fire station to the factory, and all employees involved in the emergency were already at their battle stations, so the exercise was always a tremendous success.

In real life, if a leak occurs, the police will not be at all intersections, or at any; the firefighters will take twice as long to leave the station and arrive at the factory; and the employees in the factory will have to stop their current task, run to the nearest protective equipment, get dressed, and then go fight the leak, that is, if they don't get killed by the leak before they have time to put on their masks.

So, yes, I agree with tests being carried out without their knowledge, because that's the only true way to test their resistance against a supply-chain attack; anything else is just theoretical and may not reveal the issues their process may have.

If their process releases the Linux kernel to the public as soon as a merge request is approved, then their process is flawed and more easily exploited. They must put it through a staging phase before it can reach the public.

Scott Simontis

I am all for exposing security flaws, but ethics are key when you do security research. There should have been some advance notification to the maintainers that an information security project was going to contribute potentially lethal code.

This also makes a key point: open-sourcing software does not make it secure. Very few people are qualified to do security reviews of a codebase, and without their expertise, one cannot say code is secure just because it has passed public scrutiny.

Daniel Ziltener

They fucked up, now they have to deal with it. ¯\_(ツ)_/¯ What kind of argument is that, "I had no bad intentions"? As they say, the road to hell is paved with good intentions. This is the real world, not some fairytale feelgood safe space.

Also, stop using Google AMP links. They're cancer.

Timur Khadimullin

It appears the authors failed to obtain consent from participants, per their own research ethics guidelines. I feel the kernel dev community would have a valid claim for disciplinary action should they wish to push it further.

Mike Bybee

I used to be an ethical hacker (key word: ethical), before I moved to dev full time. It was a lot of fun breaking (into) things and exposing what I broke (into). I ALWAYS had approval beforehand. If I didn't, I would have been fired or worse.

Another relevant concept from InfoSec: Responsible disclosure. Say you do find a vulnerability. If you legitimately care about security (and not just being 1337 on teh internets), you exploit the vuln in a way which doesn't affect others, you make a good faith effort to disclose the vuln to responsible parties - and then provide an ample window for them to patch it - before publicizing it (and only if they don't acknowledge and patch in a reasonable timeframe).

What these "researchers" did was violate, in essence, both of the above principles. They got no authorization from affected parties. They didn't disclose their efforts, and they willingly introduced known vulnerabilities to something which gets shipped (eventually) to everyone using Linux. That this flew under the radar of a department chair is even more concerning.
