Barry Michael Doyle

Posted on • Originally published at barrymichaeldoyle.com

Stop Using ChatGPT To Write Your Blog Posts For You! It's Not Working...

Disclaimer

I'm going to be frank here. I'm using ChatGPT to proofread this post. I even used ChatGPT 4's DALL-E features to generate the banner.

The State Of AI

At the time of writing, we are nearing the end of 2023 and ChatGPT has been available to the public for about a year. GPT-4 now has a knowledge cutoff date of April 2023 and browsing capabilities. Do you know what that means? It means we've hit the era of AI incest, the AI echo chamber. AI has officially started learning from the low-quality, misleading content it produced itself last year.

Now, to be fair, people have been posting low-quality garbage on the internet since its inception. But at least back then the garbage posters had to take time to produce it, because we didn't have AI to speed-run the process for us.

I Am A Hypocrite

My journey with writing blog posts using AI assistance looks a lot like this meme template that I use way too much:

Meme graph showing that beginners never use ChatGPT, mid-level users always use ChatGPT, and experts never use ChatGPT

I feel guilty writing this post now because I'm guilty of running my written work through ChatGPT to the point where my own voice gets lost in AI lingo. In fact, about a month ago I was ready to write a post about how to use ChatGPT to write your blog posts. This post is, in effect, the inverse answer to that question.

ChatGPT Is A Tool, Not A Crutch

Just like a spell-checker is pretty useless if you feed it gibberish or nothing, ChatGPT is also useless if you ask it to come up with results without any guidance.

I use ChatGPT in my programming to save me time by writing unit tests and e2e tests for me. I sometimes also use it to build out UIs for me after feeding it the context of what I'm currently using. There have been a bunch of scenarios where I know exactly what I want to accomplish but don't feel like remembering the syntax so ChatGPT comes to the rescue. I've even used it as a sounding board for ideas at crazy hours of the night when nobody else is awake to talk to me.

In all these scenarios it helps me accomplish my goals. Sometimes it gets the job done very well, and sometimes it misses the mark completely. The problem comes in when you blindly trust it.

I used to play a game where I'd hit the autocomplete button on my phone to see what strange, nonsensical string of words it would produce. The results made less sense than the drunkest text I had ever received. Autocomplete works great for speeding up typing, but unguided it becomes worthless.

The biggest problem here is that ChatGPT can "sound right", and that will mask misleading information.

The Problem With Using ChatGPT To Write For You

Writing technical blog posts has always been a great way to impart your knowledge and talk about your experiences. It's a great way to practice communication and solidify your understanding of the topic you're writing about. A big part of writing is demonstrating your knowledge in a specific area and your ability to communicate it.

This is where the issue of AI-generated content crops up. People - especially developers - are lazy. When you ask AI to come up with ideas for you, you open yourself up to writing about concepts you don't really understand. That means you have no way to vet whether the content you're generating is actually true. In the context of software development, the generated content can also point to outdated information.

Horror Story Examples

I saw a post on Dev.to the other day called "45 NPM Packages to Solve 16 React Problems". I'm not going to link to it because I hope it gets deleted. The post "sounded" legit, but it was full of information that would have served a developer well in 2018. The problem was that many packages on that list linked to projects that have since been abandoned. If the writer of that post had actually used those packages, they would know this. In fact, even the packages that weren't out of date were linked at versions that are no longer maintained.

In another post, I read the following concluding statement about TypeScript types vs interfaces:

By applying these best practices, you can ensure that your TypeScript projects are not only efficient but also outrank others in search engine results.

This is just blatantly incorrect. If you know anything about TypeScript, you know that choosing a type over an interface has absolutely no effect on your project's search engine ranking. In this case you can see that dodgy, misleading information can creep in anywhere.
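As a quick illustration (my own snippet, not taken from the post in question): types and interfaces are erased entirely when TypeScript compiles to JavaScript, so whichever one you pick never reaches the browser, let alone a search engine crawler.

```typescript
// Both declarations describe the same shape, and both are erased at compile time.
// The emitted JavaScript is identical either way, so neither can affect SEO.
interface UserInterface {
  id: number;
  name: string;
}

type UserType = {
  id: number;
  name: string;
};

const a: UserInterface = { id: 1, name: "Ada" };
const b: UserType = { id: 2, name: "Grace" };

// After running `tsc`, only the plain object literals remain:
// const a = { id: 1, name: "Ada" };
// const b = { id: 2, name: "Grace" };
console.log(a, b);
```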

Conclusion

Using ChatGPT to write on your behalf is only going to hurt you in the long run. Use it as a tool, not a crutch.

Do your part in preventing the rise of low-quality content by calling it out, so that others don't blindly follow misleading information on the internet. And be careful yourself not to be misled by content that "sounds right" but isn't.

Top comments (38)

Jan Küster

I would always propose not to use generative AI for skills you aim to master. This is because mastery requires lots of repetition, failure, and extensive engagement with the subject. The more we offload our skills onto external tools, the less mastery we can achieve.

What's the conclusion of this hypothesis? Using AI as a supplement (as you described, for review or generating images, since your goal is to write articles and not design banners) is great.

Even greater is using AI as support towards your mastery. Prompt it to give detailed feedback on your overall writing skills, instead of just reviewing a single article. Summarize that feedback and feed it back in to get a summary of your skill development over time.

Dumebi Okolo

I totally agree. Even if you must use AI to generate an article, go the extra mile of reading it thoroughly and making sure that everything it says checks out.
It's also about properly guiding the AI to give you what you want.
Sometimes you need to write about something but don't know where or how to start. AI sort of gives you a launch pad, and then it's up to you to take off from there.

Paweł bbkr Pabian

That concept of AI progressively hallucinating because it learns from previous hallucinations is a great basis for a sci-fi book / movie. We always assumed that AI would be superior, like Skynet from "Terminator" or the post-evolutionary forms in "Ideal Imperfection" by Jacek Dukaj.

But reality may be more like "Idiocracy" with AI :)

Anwar

I agree, AI for the moment is fine for proofreading.

If you really understand how the algorithm behind ChatGPT works, it becomes obvious that it is not good for creative content.

I would just nuance the analysis of other dev.to articles: in the second example you took, you can't be 100% sure the conclusion was copy/pasted from AI-generated content (even if I agree that TS is not directly responsible for search engine ranking).

Barry Michael Doyle

I glossed over some details with the TS article, but I'm pretty sure a lot of the article I was referencing used plenty of AI to fill in the blanks. And that's my problem with the use of AI: if the message is not fully your message, you shouldn't be using it to generate filler content.

Anwar

You're right, it is a pity to see people chasing ranking spots rather than providing actual value. To be honest, I noticed the quality of dev.to content had dropped well before the rise of ChatGPT, but since then it might have become even worse, unfortunately. Sad to see it!

Aatmaj

Partially right is sometimes more dangerous than completely wrong...

Barry Michael Doyle

💯💯💯

indrasisdatta

In my opinion, ChatGPT definitely helps you generate a component/function with some boilerplate code (as you mentioned with generating e2e and unit tests). But when it comes to debugging complex bugs or generating new code that has to work with an existing codebase, it can be misleading at times.

Oshrat Nir

I agree wholeheartedly with the statement: ChatGPT Is A Tool, Not A Crutch.

Every time I see another title that includes the word "unveiling" and the word "delve" somewhere in a post, it immediately discredits it for me, because those are some of the blatant telltale signs of uncurated AI usage.

Just like we moved from quills to pens to typewriters to word processors, AI is another step on this path. Say what you will, it still has a ways to go before it can replace a writer who melds unique knowledge, opinions, and aspirations.

James Batista

The problem is "AI-assisted hyperdependence". As I delve into that concept, I can't help but feel concerned about its long-term consequences. Relying heavily on artificial intelligence for a vast range of tasks, from simple daily decisions to complex analytical problems, even something as simple as writing a blog post or correcting grammar, might, I fear, lead to a gradual erosion of my own skills and critical thinking abilities. It's very possible this is slowly happening to a lot of people.

While this dependency streamlines my processes and enhances efficiency for others (like yourself when revising an article), it also makes me wonder if my judgment and creativity are becoming undervalued, and whether my ability to operate independently of AI is diminishing. Good points are made in this article. Copy-pasting straight out of ChatGPT for topical blog articles can only be recognized by someone who heavily uses it.

Shop tình yêu shoptinhyeu.vn

The content generated by AI will not be highly regarded

Barry Michael Doyle

It will not indeed, and as a trusted member here I make an active effort to flag posts, but I still see some posts get a lot of attention, with people grateful for content that'll lead them astray.

Farzad Farzanehnya

I have a strong admiration for individuals who scrutinize patterns in collective behaviors. Thank you.

Barry Michael Doyle

Thanks man :)

Barry Michael Doyle

Haha, well fair enough, that's true. But I'd rather be the whistle-blower than the person who encourages people to keep at it ;)

Joshua Amaju

Did AI generate that title for you?

Barry Michael Doyle

Nope :)

Barry Michael Doyle

I love the ratio in this reply 😅

Lars-Erik Bruce

"also outrank others in search engine results" could be a tongue in cheek sense of humor by the author as well. I laughed at least :-)

My two posts on dev.to are authored by ChatGPT, and I clearly state so as per dev.to's guidelines. Also, I make sure that I stand by every statement given (and ask ChatGPT to edit out the points I disagree with or that are wrong/outdated).

As long as I carefully monitor the output and use my experience to make sure the content is true and relevant, to the best of my knowledge, I see ChatGPT as a wonderful tool for communicating knowledge I otherwise couldn't. But of course, just spewing out its generated mumbling without editing or proofing is detrimental. Hopefully such accounts will stay mostly unpopular and unread.

Daniel Hansson

I agree with you too! Posting an AI-generated article with bad writing and links just for the sake of posting is something that I think will come back to bite the person in the foreseeable future.