Recently, someone in the ML community discovered that Siraj Raval, of quirky AI video YouTube fame, had plagiarized significant portions of a "paper". It doesn't appear to be officially published anywhere, but it did exist online and in one of his videos (both of which appear to have since been removed by Siraj). You can read more about it in the thread below.
Andrew M. Webb (@andrewm_webb), 12 Oct 2019:
"So in @sirajraval's livestream yesterday he mentioned his 'recent neural qubit paper'. I've found that huge chunks of it are plagiarised from a paper by Nathan Killoran, Seth Lloyd, and co-authors. E.g., in the attached images, red is Siraj, green is original"
Now I've never written a paper in AI, published or otherwise, but I have written a paper with a team of folks who participated in an REU (Research Experience for Undergraduates) at Trinity University on multi-agent distributed systems. It's not my core focus today, but it was a great opportunity for me to learn that I wanted to look into UI/UX work, not to mention the experience of writing and presenting an academic computer science paper as an undergrad.
If you ever want to go from zero to extreme imposter syndrome, present at a conference where everyone just assumes you're a PhD, so every email is addressed to "Dr." (when you don't even have a BS yet...).
Our research, which went through a few different versions before we arrived at what we ultimately presented, took the full summer duration of the program, followed by additional testing and simulation runs that fall. We had some results and even more ideas for future research. We also had a second opportunity to present at a conference the following semester, so naturally we tried to cram even more into that timeframe, all while taking our respective class loads at different universities.
At some point we dubbed it "finished" and our paper was published in 2010, and to our surprise and delight we saw someone referenced it in 2013. How cool is that? Our hard work inspired other researchers, and they gave us the credit we deserved.
Amara Graham, 13 Oct 2019:
"This is not ok. I've supported Siraj's efforts in the past, I thought his mission to make AI learning more accessible was in line with my own mission... but this is suspect. Do the research, or pay someone to do the research. Plagiarism is disgusting. twitter.com/andrewm_webb/s…"
But because most of us have moved on to do other things out in industry, we likely wouldn't have known our work was referenced unless we went looking for the citations ourselves. We don't, and can't, read every paper, published or otherwise. In most cases researchers have to trust the system and hope that their work will be cited and used appropriately in the future.
Amara Graham, 13 Oct 2019:
"Artificial tight deadlines don't make plagiarism ok. In my book, it actually makes it worse."
Which leads me to why Siraj's casual response, that he was under a tight deadline because he publishes videos on some artificial timeline, is ridiculous (I will not entertain it by linking it here, so see my subtweet above). Building a wealth of online AI content to make learning more accessible doesn't mean you can just copy and paste other people's hard work and call it yours. Adjusting your scope or timeline is understandable. But plagiarism is never acceptable, never excusable. It's a great way to kiss your credibility goodbye.
What makes this situation extra sad is that it doesn't appear to be an isolated incident.
So what can you do to not trip and fall into plagiarism?
Give credit where credit is due, always. Cite your sources. Reference previous researchers, authors, maintainers, etc. Adjust timelines to produce the quality work you are capable of doing without cheating. And maybe even learn to say no when overloaded.