
Rob Hoelz

Papers We Love - Reflections on Trusting Trust

Hello everyone, and welcome to the first paper of Papers We Love! @thejessleigh was kind enough to get the word out announcing the group and asking for feedback last week, and we heard you loud and clear - this week's paper is on ethics, and it's called "Reflections on Trusting Trust". You can read the paper here. The paper uses C for its examples (many of our favorite languages, let alone many of us, didn't exist in 1984!), but I feel the C is simple enough that you don't need to know the language to understand the paper. Give it a read and let's share our thoughts in the comments below!

Personally, I like this paper because it's short - weighing in at a paltry three pages (something I appreciate as a new dad!). It also goes over some neat programming tricks that we often forget about in contemporary programming, such as self-reproducing programs (also known as quines) and self-hosting compilers. The latter has a special place in my heart, having worked on a self-hosting compiler for a couple years.
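
If you've never seen a quine, here's one classic C version of the trick (not the exact construction from the paper, which builds the program text in a character array): a program whose only output is its own source, byte for byte. There are no comments in it, because a comment would have to reproduce itself too!

```c
#include <stdio.h>
int main(void){char*s="#include <stdio.h>%cint main(void){char*s=%c%s%c;printf(s,10,34,s,34,10);return 0;}%c";printf(s,10,34,s,34,10);return 0;}
```

The 10s and 34s are the ASCII codes for newline and double quote; printing them via %c is what lets the string describe the whole program without literally containing itself.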

This time we picked a paper on ethics - it happens to be the only paper on ethics in the Papers We Love repository. If you know of any other interesting papers on ethics that you think we should read as a group, please feel free to recommend them! Regardless, I think we'll be rotating between topics as we pick papers to discuss each week.

Top comments (12)

jess unrein

Unauthorized access to computer systems is already a serious crime in a few states and is currently being addressed in many more state legislatures as well as Congress.

The hope in this paper that the US and state legislatures will catch up to technical complexity and deal with it adequately is actually pretty sad, in retrospect. I really wish that our lawmakers had taken computing and networks seriously from the beginning. As I read this, I can't help but think back to the Zuckerberg hearings earlier this year and think how woefully unprepared our civic systems are.

The press must learn that misguided use of a computer is no more amazing than drunk driving of an automobile.

We've gone completely in the opposite direction here! When I saw that, all I could think of was someone saying "I'm in" in their best hacker voice. We totally venerate the idea of the hacker as an antihero.

So my perception of the world as it stands now is clearly pretty bleak. What do we do in order to cultivate a sense of responsibility and shared destiny for the way we treat computer-based crimes and social manipulation? I don't have any answers, but I think radically rethinking Section 230 is a good place to start.

Rob Hoelz

I can't help but think back to the Zuckerberg hearings earlier this year and think how woefully unprepared our civic systems are.

I know exactly what you mean - every time I hear about tech interacting with Congress, or about a company that suffers a data breach but faces no consequences, I get bummed out. And I don't think much will change unless politicians start understanding technology better, and maybe not even then.

I don't have any answers, but I think radically rethinking Section 230 is a good place to start.

I've never heard of Section 230 - I'll need to read up on that!

jess unrein

Section 230 is basically what protects YouTube from responsibility for inappropriate or illegal content, and what insulates Facebook from the fact that it hosts hate groups and facilitates real-world violence. It's a 90s provision from Congress (part of the Communications Decency Act) that makes websites not legally liable for the content users upload, as long as the content is dealt with when someone asks. So, YouTube complies with 230 by taking down copyright violations, for example. It legally shifts responsibility for content from the platform to the individual.

jess unrein

It's immensely complicated, because the internet as we know it couldn't exist without Section 230. If GeoCities had been responsible for the content on every site hosted through them, that would have been a disaster. There would have been no YouTube, which was basically created to upload copyrighted material for rewatching. But companies have been using this shield and, because of it, not adequately enforcing good content practices.

For a more in-depth discussion from a modern viewpoint, here's a recent interview between Kara Swisher and Ron Wyden (who helped author the Communications Decency Act, including Section 230).

Rob Hoelz

Wow, that's amazing and kind of terrifying. Thanks for the explanation!

rhymes

@thejessleigh right now the EU is talking about passing what we call Article 13, a monstrosity that's basically the opposite of your Section 230. In an attempt to reform copyright law (which must be incredibly hard even if you're well versed in both law and technology), they want to make companies liable, at the moment of upload, for any copyright infringement in what users upload.

Tim Berners-Lee and Vint Cerf sent a letter to the EU over the summer, some MEPs are furious at the state of things, and companies like YouTube are in crisis mode.

There are still a few chances to block it in the next few months, but the gist of the law is bonkers, and it passed by a vast majority (twice as many people voted yes as voted no).

Raunak Ramakrishnan

Thanks once again for starting this! This article almost disappeared from my feed. I think you should add a note in the post telling people to follow the #pwl tag so that we don't miss out :)

The first time I read the paper (almost 8 years ago), I was impressed by Ken Thompson's ingenuity and deviousness in the construction of the compiler, and how it tweaked the login program and the compiler binary itself. It was the first security-related paper I had read.
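
For anyone who hasn't gotten there yet, the core of the attack is just two pattern-matches inside the compiler. Here's a toy sketch of the idea, nothing like the real code - compile() and the emit_* stubs are made-up names:

```c
#include <stdio.h>
#include <string.h>

/* stand-ins for real code generation */
static void emit_normal(const char *f)          { printf("clean compile of %s\n", f); }
static void emit_login_backdoor(const char *f)  { printf("%s plus a magic password\n", f); }
static void emit_compiler_trojan(const char *f) { printf("%s plus both of these checks\n", f); }

static void compile(const char *filename) {
    if (strstr(filename, "login.c")) {
        /* trojan 1: login also accepts a hard-wired password */
        emit_login_backdoor(filename);
    } else if (strstr(filename, "cc.c")) {
        /* trojan 2: a freshly built compiler re-inserts both checks,
           reproduced quine-style, even though they appear nowhere
           in the compiler's source */
        emit_compiler_trojan(filename);
    } else {
        emit_normal(filename);
    }
}

int main(void) {
    compile("login.c");
    compile("cc.c");
    compile("hello.c");
    return 0;
}
```

Once the trojaned binary exists, you can delete both bugs from the source and rebuild forever - the backdoor keeps reinstalling itself.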

Regarding ethics, I unfortunately think we are doomed to repeat our failings from similar advances in chemistry (explosives) and physics (atomic energy). Technology always seems to advance faster than our legal and social norms can keep up.

With the advent of machine learning and increasingly pervasive data collection, we are in territory where humans do not understand why the algorithms make certain predictions. Many such algorithms merely amplify our inherent biases, either through faulty modeling or biased training data. Cathy O'Neil's Weapons of Math Destruction is a good overview of this.

rhymes

With the advent of machine learning and increasingly pervasive data collection, we are in territory where humans do not understand why the algorithms make certain predictions.

Agreed. I don't think there's any chance of "stopping" anything here. We don't understand what we're doing; why should we stop? I hope that, after a period of "stupid things done by very smart people", we'll start thinking and talking more before implementing things. In the meantime I have faith in the new batches of programmers :-)

There's a difference between "can" and "should" :D

Ben Halpern

Added the #discuss tag for more exposure.

Folks who like this stuff specifically should follow #pwl to see more stuff like this in their feeds 🙂

Ben Halpern

1) Amazing how little has changed in so many ways.

2)

The moral is obvious. You can't trust code that you did not totally create yourself.

With the proliferation of open source, how do we reconcile this? I think responsible checks/tooling/etc. can account for this, but...

I have watched kids testifying before Congress. It is clear that they are completely unaware of the seriousness of their acts.

This remains painfully true, and with the simplicity of pulling in code from npm and elsewhere, this problem must only be getting worse.

Coming to mind:

hackernoon.com/im-harvesting-credi...

Rob Hoelz

As far as the attack described in the paper itself goes, I alluded to how the Rust team addressed this in the comments to Jess' introductory post: manishearth.github.io/blog/2016/12... I assume that the GCC/Clang/etc. teams have also incorporated this fix.

However, one problem that we weren't dealing with back then was the sheer amount of code that ends up folded into an application. Maybe we can try to compensate for this with tooling, but attackers love to find tools' blind spots!

One idea I kind of like is making permissions more granular - even more granular than the application level. So you'd have an application that needs network access, but it calls a third-party module that perhaps isn't entirely trustworthy - let's say it's a checksum calculation library. The checksum library would specify in its package metadata that it only needs compute power, so if it lied about that, the code would get SIGKILL'd/throw an exception/whatever.

I think one could pull this off in native code with some memory segmentation/call gate magic, but it would need to be supported all the way down to the kernel level, and memory segmentation is passé, so I don't think this would ever get implemented as anything other than a toy. With the rise of WASM and the ability to call functions created from different contexts, however, maybe this could happen.

That being said, I don't think this would take off - it would undoubtedly introduce too much overhead, plus it would be very tedious to work with, and our industry seems to prefer sacrificing these kinds of protections to avoid tedium.
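
Just for flavor: the closest thing we have today is process-level rather than per-library. On Linux, seccomp strict mode kills the process with SIGKILL on any syscall outside read/write/_exit/sigreturn. A rough sketch - the checksum() function here is just a stand-in for that hypothetical compute-only library:

```c
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

/* stand-in for a third-party "compute-only" checksum library */
static unsigned checksum(const unsigned char *buf, long n) {
    unsigned sum = 0;
    for (long i = 0; i < n; i++)
        sum = sum * 31 + buf[i];
    /* if this function lied and tried to open a socket instead,
       the kernel would SIGKILL the whole process right here */
    return sum;
}

int main(void) {
    unsigned char data[] = "hello";

    /* from here on, only read/write/_exit/sigreturn are permitted */
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

    unsigned sum = checksum(data, 5);
    char digit = "0123456789"[sum % 10];
    write(STDOUT_FILENO, &digit, 1);

    /* exit_group(2) isn't on the strict-mode allowlist, so use exit(2) directly */
    syscall(SYS_exit, 0);
}
```

The per-library version - with the metadata, call gates, and kernel support I described above - would be a much bigger lift, which is exactly why I doubt it'd ever ship.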

Kasey Speakman

I'm glad you posted this. I was actually looking for this paper last week for a comment I was drafting about bootstrapped languages.