Hey DEV Community!
As a member of the DEV staff, I read a lottttt of your articles. Over the past month or so, it's been awesome to see so many f...
I think this will go the way of Copilot, which became a hype and then died down after it became paid. ChatGPT, which costs OpenAI credits, may also start to lose its hype once people lose interest in AI-generated content. But before the hype dies down, people will abuse it as much as possible.
On a separate note, I asked ChatGPT to write a reply to this post and it gave me an answer.
AI content detectors cannot detect the AI-ness of a post generated from a custom prompt like that.
I would not encourage the use of automated AI-text detectors. I've read about how these work; they rely largely on the text being correct in spelling, punctuation, and grammar.
Well, I also write text that is correct in spelling, punctuation, and grammar. I would hate to be called an AI just because I had a good English teacher in school!
The detector I was using was based on detector models developed by OpenAI themselves - d4mucfpksywv.cloudfront.net/papers...
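For anyone curious, that model can also be run locally in a few lines. This is only a sketch: I'm assuming the Hugging Face model id `roberta-base-openai-detector` and its "Real"/"Fake" labels, and the model was trained on GPT-2 output, so treat the score as a hint rather than proof.

```python
# Hedged sketch: score one piece of text with OpenAI's released GPT-2 output
# detector (RoBERTa-based). The model id and label names are assumptions.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

result = detector("This paragraph was definitely written by a human, I promise.")[0]
print(result)  # e.g. {"label": "Real", "score": 0.97} -- illustrative output only
```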
Don't worry, Joe, looking good so far.
Jon, could you share the link to the ChatGPT detector? My latest search turned up some very suspicious sites...
Sure - I was using detectchatgpt.com/
I shouldn't be shocked, but I am. To see AI-generated content, and now counter-content and verification tools. Crazy if you ask me.
I have come across some articles that seem fishy. Mainly because the authors themselves are also fishy users.
Looks like someone was using ChatGPT before it was cool :)
Well, thanks to ChatGPT for the very thoughtful and human-like response, hahaha
Keep in mind with Copilot - it hasn't been available to GH orgs yet; many of us are waiting for that option.
GH orgs, what is that?
GH orgs = Teams and Enterprise plans.
Wait what?
I thought Copilot was offered to those plans as a priority.
But it's not.
I'm so surprised
I love these decisions. My personal policy on AI is to use it transparently for images, and to limit my use of text generation to suggesting titles for tricky articles and outlining talks or books, as a double check of my own decisions for things I might have wanted to include but forgotten.
TBH ChatGPT written articles almost always have a certain "feel" to them and are pretty easy to spot. Running a ChatGPT detector over a sample of a few suspect articles here on DEV seems to confirm my suspicion that there's already a lot of generated content on here being passed off as human written.
...Or, you've found some human authors who write in a style similar to ChatGPT.
It's great to start working on an ethical framework around the use of ChatGPT and the wave that's coming right after it.
We are at the beginning of a monumental redefinition in what it means to be an author. I think this is on the same scale as what the calculator brought about -- I'm old enough to remember a time when schools would not allow calculators; and of course now they all do. We've offloaded mundane and sophisticated calculations to computers -- it's just assumed we are not carrying out long computations by hand. Mathematicians apply their intellect to more high level work now instead of losing time in lengthy manual computations.
In the near future readers will come to assume that a person did not write an essay themselves.
I'm sure it's already happening. I along with others have thought we've read the work of an insightful person, when in reality, that work was nobody's baby.
Or everyone's baby. At least everyone whose work the transformers were trained on. ChatGPT is both all of us and none of us at the same time.
Certainly a very large mixing bowl.
However, we must agree that the time is almost upon us when AI-assisted text will become a normal thing.
We must also be careful, because AI will continue to evolve and change the dynamics of how it can be spotted.
We are only at the tip of what tomorrow will bring.
In the future humans won't be reading most of the essays, either β we'll have AIs summarizing the (mostly AI-written) content and using it to answer whatever questions they were seeking answers for directly.
So much of the Internet will be AIs writing content for other AIs. Crazy world, eh?
Just around the corner. Maybe it's already here in spots and we have not yet realized it.
Thank you for this piece:
My preference would be a tag but I appreciate the flexibility.
I will put the hashtag at the top of each article then. Thank you for accepting the use of AI. I just wanted to save my results as a post so I can read them again whenever I want. However, I still love to read the text generated by the AI because it has better grammar than mine; my English skills are not as good.
Hey, thank you for actually linking to my articles!
First of all: I never intended to make anyone look like a fool or whatever by mentioning that the Twitter API post got tweeted about.
Besides that, the article had the effect I intended it to have: Spread awareness on the topic of AI articles. I'm glad that the community guidelines got adjusted to restrict the use of ChatGPT, but not completely prohibit it.
While I think that it is a great tool, especially for people whose first language isn't English (including me), the quality of posts can be severely impacted (in creativity of topic and depth of arguments/explanations). I might do a follow-up article on this topic (taking your article into consideration), where I go more in depth about my point of view.
Of course! Thanks for being a good sport and for disclosing the nature of your AI articles. It's much more fun and engaging that way!
Just to be clear, these guidelines weren't written in reaction to your experiment; we had been discussing sharing some guidelines around AI content since ChatGPT exploded in popularity, and we've received a handful of reports on suspected AI content in that time as well.
Looking forward to reading what more you have to share.
Very fair guidelines. I am very interested in AI-assisted writing, as people are reading to get information. In the end, I don't think it really matters where it came from. As long as it is factual (I would like to see the AI incorporate citations), easy to read, and well-structured, I'm cool with it.
Yes, there is the fear of AI writing putting content writers out of a job, but then the Internet was supposedly going to close libraries, etc...
The future is coming... (again and again).
Eventually we won't be able to tell the difference between ChatGPT and legit posts. IMO it will really just come down to treating AI generated content as traditional content. If any post is low quality, plagiarized, or condoning illegal activities it should be reported then handled according to the rules of the site.
Pointing at posts and marking them as AI-generated is not going to help the situation. It will just cause readers to distrust genuine content and will cause writers to distrust moderators.
Look at Twitter and Reddit moderation and how they have evolved over the years. They have labels and rules for pretty much everything. It's at the point now where unless you have established some sort of prior credibility, it's nearly impossible to create a successful post. They pretty much prevent average users from going viral.
Tinfoil hat or not, our decisions about how we moderate AI-generated content today are going to impact us for years. All we can hope for is that we get it right the first time.
You'll definitely find this tool useful because it can help you come up with ideas, content, and even a variety of well-written sample articles:
https://chrome.google.com/webstore/detail/chatgpt-for-search-engine/feeonheemodpkdckaljcjogdncpiiban/related?hl=en-GB&authuser=0
I think a tag should be mandatory - not an either/or with a declaration in the body. That way readers can avoid or filter out trash articles upfront, instead of having to plough through them only to realise they were probably AI-written (usually pretty easy to spot) - or read right to the end (where most people would likely put a declaration).
Also, this would need to be enforced in some way... maybe run articles through a ChatGPT detector (there are some good ones around - some based on models provided by OpenAI) - and flag those that may need investigation.
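To make the flagging idea concrete, the moderation side could be as small as the sketch below. Here `ai_likelihood` is a placeholder for whichever detector you trust, and the post data and the 0.9 threshold are made up for illustration; the score should only queue a post for human review, never auto-remove it.

```python
# Sketch of a triage pass: score posts and return the ones worth a human look.
from typing import Callable, Dict, List

def triage(posts: Dict[str, str],
           ai_likelihood: Callable[[str], float],
           threshold: float = 0.9) -> List[str]:
    # The detector score is a hint for moderators, never a verdict.
    return [slug for slug, body in posts.items()
            if ai_likelihood(body) >= threshold]

# Example usage with a stand-in detector (a real one would wrap a model call).
posts = {"my-first-post": "In this article, we will explore ..."}
print(triage(posts, ai_likelihood=lambda text: 0.95))
```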
The quality of content has sunk lower and lower in recent years, and the increasing number of ChatGPT articles here is merely accelerating that trend.
My usual workflow for writing anything is:
So do I understand correctly that everything that is forbidden for AI-generated content is OK if it was written by a real human?
Not necessarily. Our content moderation is performed by humans, and we take both the Code of Conduct and any additional context into account when making decisions about content.
Human writers on DEV are still expected to adhere to the Code of Conduct, and while we prefer that they refrain from the same types of posts we're strictly prohibiting for AI-assisted and -generated articles, it's still allowed to promote your own brand or course so long as you do so without misleading readers and disclose any affiliate links.
Nice article.
Nice!
Fast Move!