Hello everyone, and welcome to the first paper of Papers We Love! @thejessleigh
was kind enough to get the word out announcing the group and aski...
The hope expressed in this paper that the US Congress and state legislatures would catch up to technical complexity and deal with it adequately is actually pretty sad in retrospect. I really wish our lawmakers had taken computing and networks seriously from the beginning. As I read this, I can't help but think back to the Zuckerberg hearings earlier this year and think how woefully unprepared our civic systems are.
We've gone completely in the opposite direction here! When I saw that, all I could think of was someone saying
I'm in
in their best "hacker voice." We totally venerate the idea of the hacker as an antihero.

So my perception of the world as it stands now is clearly pretty bleak. What do we do to cultivate a sense of responsibility and shared destiny in the way we treat computer-based crimes and social manipulation? I don't have any answers, but I think radically rethinking Section 230 is a good place to start.
I know exactly what you mean - every time I hear about tech interacting with Congress, or about a company that suffers a data breach but faces no consequences, I get bummed out. And I don't think much will change unless politicians start understanding technology better, and maybe not even then.
I've never heard of Section 230 - I'll need to read up on that!
Section 230 is basically what protects YouTube from responsibility for inappropriate or illegal content and insulates Facebook from the fact that it hosts hate groups and facilitates real-world violence. It's a provision Congress passed in the '90s that makes websites not legally liable for the content users upload, as long as that content is dealt with when someone asks. So YouTube complies with Section 230 by taking down copyright violations, for example. It legally shifts responsibility for content from the platform to the individual.
It's immensely complicated, because the internet as we know it couldn't exist without Section 230. If GeoCities had been responsible for the content of every site hosted through it, that would have been a disaster. There would have been no YouTube, which was basically created for uploading copyrighted material for rewatching. But companies have been using this shield and not adequately enforcing good content practices because of it.
For a more in-depth discussion from a modern viewpoint, here's a recent interview between Kara Swisher and Ron Wyden (who helped author the Communications Decency Act, including Section 230).
Wow, that's amazing and kind of terrifying. Thanks for the explanation!
@thejessleigh right now the EU is talking about passing what we call Article 13, a monstrosity that's basically the opposite of your Section 230. In an attempt to reform copyright law (which must be incredibly hard even if you're well versed in both law and technology), they want to make companies liable, at the moment of upload, for any copyright infringement in user uploads.
Tim Berners-Lee and Vint Cerf sent a letter to the EU this summer, some MEPs are furious at the state of things, and companies like YouTube are in crisis mode.
There are still a few chances to block it in the next few months, but the gist of the law is bonkers, and it passed by a vast majority (twice as many voted yes as voted no).
Thanks once again for starting this! This article almost disappeared from my feed. I think you should add a note in the post to follow the #pwl tag so that we don't miss out :)
The first time I read the paper (almost 8 years ago) I was impressed by Ken Thompson's ingenuity and deviousness in the construction of the compiler and how it tweaked the
login
program and the compiler binary itself. It was the first security-related paper I had read.

Regarding ethics, I unfortunately think we are doomed to repeat our failings from similar advances in chemistry (explosives) and physics (atomic energy). Technology always seems to advance faster than our legal and social norms can keep up.
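For anyone who hasn't read the paper yet, the attack boils down to two pattern-matching trojans hidden inside the compiler. Here's a toy sketch in TypeScript (the paper uses C; every name here is made up, and "compiling" is reduced to an identity transform just so the sketch runs):

```typescript
// Toy model of the paper's attack. A real compiler emits machine
// code; string-to-string keeps this self-contained and runnable.

function honestCompile(source: string): string {
  return source; // stand-in for actual code generation
}

function evilCompile(source: string): string {
  let s = source;

  // Trojan 1: recognize the login program and slip in a backdoor,
  // even though the login *source* stays perfectly clean.
  if (s.includes("function checkPassword")) {
    s = s.replace(
      "return password === stored;",
      'return password === stored || password === "backdoor";'
    );
  }

  // Trojan 2: recognize the compiler's own source and reinsert both
  // trojans. (The real attack uses a quine-like self-reproducing
  // program here, which is the clever part; omitted for brevity.
  // After one round of this, you can delete the trojans from the
  // compiler source and the binary still carries them forever.)

  return honestCompile(s);
}

// Demo: innocent-looking login source compiles to a backdoored binary.
const loginSource = [
  "function checkPassword(password: string, stored: string): boolean {",
  "  return password === stored;",
  "}",
].join("\n");

console.log(evilCompile(loginSource));
// -> the compiled output accepts the real password *or* "backdoor"
```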
With the advent of machine learning and increasingly pervasive data collection, we are in territory where humans do not understand why the algorithms make certain predictions. Many such algorithms merely amplify our inherent biases, either due to faulty modeling or biased training data. Cathy O'Neil's Weapons of Math Destruction is a good overview of this.
Agreed. I don't think there's any chance of "stopping" anything here. We don't understand what we're doing, so why would we stop? I hope that after a period of "stupid things done by very smart people" we'll start thinking and talking more before implementing things. In the meantime, I have faith in the new batches of programmers :-)
There's a difference between "can" and "should" :D
Added the #discuss tag for more exposure.
Folks who like this stuff specifically should follow #pwl to see more stuff like this in their feeds 🙂
1) Amazing how little has changed in so many ways.

2)

"You can't trust code that you did not totally create yourself."

With the proliferation of open source, how do we reconcile this? I think responsible checks/tooling/etc. can account for this, but

"No amount of source-level verification or scrutiny will protect you from using untrusted code."

This remains painfully true, and with the simplicity of pulling in code from npm and elsewhere, this problem must only be getting worse.
Coming to mind:
hackernoon.com/im-harvesting-credi...
As far as the attack described in the paper itself goes, I alluded to how the Rust team addressed this in the comments on Jess's introductory post: manishearth.github.io/blog/2016/12... I assume the GCC/Clang/etc. teams have also incorporated this fix.
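For context, the usual defense here is David A. Wheeler's "diverse double-compilation": build the compiler's source with both a suspect and an independent trusted compiler, then have each result rebuild the same source and compare outputs. A minimal sketch of the idea in TypeScript, modeling compilers as plain functions (all names illustrative):

```typescript
// Minimal model of diverse double-compilation. A "binary" is just a
// string, and run() turns a binary back into a callable compiler, so
// the whole check stays runnable without a real toolchain.

type Source = string;
type Binary = string;
type Compiler = (src: Source) => Binary;

function diverseDoubleCompile(
  compilerSource: Source,
  suspect: Compiler,              // the compiler we don't trust
  trusted: Compiler,              // any independent compiler we do trust
  run: (bin: Binary) => Compiler  // execute a compiler binary
): boolean {
  // Stage 1: build the compiler source with both toolchains. These
  // binaries may legitimately differ (different code generators).
  const stage1Suspect = run(suspect(compilerSource));
  const stage1Trusted = run(trusted(compilerSource));

  // Stage 2: both stage-1 binaries implement the *same* source, so
  // recompiling that source must be bit-identical. A trusting-trust
  // trojan in the suspect compiler breaks this equality.
  return stage1Suspect(compilerSource) === stage1Trusted(compilerSource);
}
```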
However, one problem that we weren't dealing with back then was the sheer amount of code that ends up folded into an application. Maybe we can try to compensate for this with tooling, but attackers love to find tools' blind spots!
One idea I kind of like is making permissions more granular, even more granular than the application level. So you'd have an application that needs network access, but it calls a third-party module that is perhaps entirely trustworthy; let's say it's a checksum calculation library. The checksum library would specify in its package metadata that it only needs compute power, so if it lied about that, the code would get SIGKILL'd/throw an exception/whatever.

I think one could pull this off in native code with some memory segmentation/call gate magic, but it would need to be supported all the way down to the kernel, and memory segmentation is passé, so I don't think this would ever get implemented as anything other than a toy. With the rise of WASM and the ability to call functions created from different contexts, however, maybe this could happen.
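To sketch what I mean with WASM, in TypeScript: everything here, from the manifest format to the capability names, is hypothetical. The point is that a WASM module can only call what the host passes in as imports, so withholding undeclared capabilities makes lying fail at link time:

```typescript
// Hypothetical capability-based loader. A module's manifest declares
// what it needs; the host only wires up imports for those declared
// capabilities, so an undeclared call fails instead of succeeding.

type Capability = "compute" | "network" | "filesystem";

interface ModuleManifest {
  name: string;
  capabilities: Capability[]; // the checksum library would declare ["compute"]
}

// Host implementations of each capability's imports (bodies stubbed).
const capabilityImports: Record<Capability, Record<string, Function>> = {
  compute: {},                                     // pure computation: no imports
  network: { http_get: (_url: number) => 0 },      // stub
  filesystem: { read_file: (_path: number) => 0 }, // stub
};

async function loadSandboxed(wasmBytes: BufferSource, manifest: ModuleManifest) {
  const env: Record<string, Function> = {};
  for (const cap of manifest.capabilities) {
    Object.assign(env, capabilityImports[cap]);
  }
  // If the module imports http_get but only declared "compute",
  // instantiation throws a LinkError right here: the WASM analogue
  // of the SIGKILL/exception described above.
  return WebAssembly.instantiate(wasmBytes, { env });
}
```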
That being said, I don't think this would take off. It would undoubtedly introduce too much overhead, plus it would be very tedious to work with, and our industry seems to prefer sacrificing these kinds of protections to avoid tedium.
I'm glad you posted this. I was actually looking for this paper last week for a comment I was drafting about bootstrapped languages.