Right, so it's been a minute since I last posted something.
Like many other companies this last year, we weren't spared layoffs. While I still have my job, my team has been greatly reduced, both from people leaving when they saw the signs and from people being let go. So while fighting survivor's guilt and an increased workload, posting hasn't been my first priority.
But while doing my day to day and trying to keep my head down and deliver as much as possible as quickly as possible, I noticed something. And I'm not sure how I feel about it, other than we need to talk about it.
I felt this post needed to happen. So here it is:
ChatGPT has made me lazy.
😳
Yeah.
Okay, but... you said AI tools are awesome?
Yes, I have been saying that, and I still think the improvements to GPT and other LLMs and the new features with Copilot are fantastic tools and will make many jobs more productive.
But in much the same way I imagine people using horses felt when they first started getting cars, or people who walked everywhere when they got bicycles, or carpenters with their first power tools, this increase in productivity comes at a cost.
Surely you're overreacting?
Maybe. Maybe not. And don't call me Shirley.
I think we need to start with a story. Something I caught myself doing earlier this week.
For local testing we have this little stub service which acts as the client/partner endpoint. Up until now all our data has always been JSON, but for a new partner we are sharing binary data as well, so I wanted to update the stub to handle both. All this needs is a simple if statement to look at the request headers.
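Roughly what I mean, as a minimal sketch; the names (`stubHandler`, `handleJSON`, `handleBinary`) are hypothetical, and I'm assuming the partner signals binary payloads via the `Content-Type` header:

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// handleJSON and handleBinary are stand-ins for the stub's existing
// logic; the interesting part is only the header check below.
func handleJSON(w http.ResponseWriter, body []byte)   { w.Write(body) }
func handleBinary(w http.ResponseWriter, body []byte) { w.Write(body) }

func stubHandler(w http.ResponseWriter, req *http.Request) {
	defer req.Body.Close()

	body, err := io.ReadAll(req.Body)
	if err != nil {
		http.Error(w, "failed to read body", http.StatusBadRequest)
		return
	}

	// The whole change: a simple branch on the request headers.
	if req.Header.Get("Content-Type") == "application/json" {
		handleJSON(w, body)
	} else {
		handleBinary(w, body)
	}
}

func main() {
	http.HandleFunc("/", stubHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```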
I've been doing Go for about 6 years and have built many, many microservices with REST APIs. I should be able to code this logic in my sleep, since I know the APIs of the `net/http` package. But what was the first thing I did?
So you asked ChatGPT to write some boilerplate, big deal
I know, I know, it probably doesn't seem like a big deal... yet. But here's the thing: that's not where it ended. A large part of the rest of my code was just me starting to type, looking at the Copilot suggestion in my IDE, and then pressing Tab.
Then writing a line of custom code unique to my service, before just autocompleting boilerplate from Copilot again.
And yeah, this is awesome. It reduces coding time, hopefully reduces mistakes, it makes my life so much easier, right? Right!?!
Except when it doesn't.
Something wasn't working, and usually this is where I add some debug logging, and if that doesn't work, I pull out the debugger and start stepping through code, investigating every single line and memory allocation like a detective trying to solve the perfect murder.
Only I didn't. Instead, I went straight to ChatGPT and described the problem.
As expected, it told me the obvious things; add logging on the request and receive side and print out the headers and status codes and body. Which I obviously did even before it told me to, because I'm not some n00b and this isn't my first rodeo.
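For reference, this is the kind of throwaway logging we're talking about; a sketch, not the actual stub code:

```go
import (
	"bytes"
	"io"
	"log"
	"net/http"
)

// logRequest dumps what actually arrived on the receive side.
func logRequest(req *http.Request) {
	log.Printf("method=%s path=%s headers=%v", req.Method, req.URL.Path, req.Header)

	body, err := io.ReadAll(req.Body)
	log.Printf("body=%q readErr=%v", body, err)

	// Reading drains the stream, so put the bytes back for the real handler.
	req.Body = io.NopCloser(bytes.NewReader(body))
}
```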
So I told it I already confirmed the problem is on the receive side. And it gave me some good ideas back, and to be fair to the tool, it did help me solve the problem... One of the suggestions was:

> **Is the request being read correctly?** Ensure that the request body is being read correctly. For instance, it could be that the body is being read somewhere else before your `io.ReadAll(req.Body)` line and it's not being reset.
Which was the issue. When doing some copy-pasta refactoring to add the `if-else`, I moved the reading of the body to outside the `if` statement since I only need to do that once, but then I didn't delete it from the `else` part, which caused some variable shadowing I didn't notice. And because an `io.ReadCloser` such as `req.Body` is a stream and not a buffer, the data I had already read was gone from the stream. It worked fine for the `if` branch because it didn't try to read twice.
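Continuing the earlier sketch, here's a simplified reconstruction of the buggy refactor (illustrative, not my actual diff):

```go
body, err := io.ReadAll(req.Body) // moved outside the if: read once, up front
if err != nil {
	http.Error(w, "failed to read body", http.StatusBadRequest)
	return
}

if req.Header.Get("Content-Type") == "application/json" {
	handleJSON(w, body) // fine: uses the bytes read above
} else {
	// Leftover from the refactor: `:=` declares a new body and err that
	// shadow the outer ones, and because req.Body is already drained,
	// this inner body is always empty.
	body, err := io.ReadAll(req.Body)
	if err != nil {
		http.Error(w, "failed to read body", http.StatusBadRequest)
		return
	}
	handleBinary(w, body)
}
```

The fix was simply deleting the leftover inner read so the `else` branch uses the bytes already read up front.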
And I kinda feel like I should have just picked this up by reading my code properly, instead of jumping onto ChatGPT.
Ouch. Tough break.
Indeed. And this has made me think a lot about these tools and the impact they've had on me. While I do feel more productive, and there are certain tasks these tools make much easier than before, I wonder at what cost it comes.
It's all too easy to draw parallels between our increasing reliance on AI tools and the introduction of power tools to carpenters or automobiles to those used to horses. The idea of 'progress' often lures us into believing that newer, faster, and more efficient is invariably better. But like power tools, which introduced an increased risk of injury, or automobiles which brought along pollution, AI coding assistants also have their drawbacks.
While power tools enable us to build bigger, faster, and more intricate things than we could with our bare hands, they also run the risk of creating a generation of workers who wouldn't know how to hammer a nail if the power was out. In the same way, tools like ChatGPT and Copilot can help us become more productive coders but also threaten to leave us helpless if we forget how to do even the most simple of tasks.
Of course, I'm not arguing against progress. Without the willingness to adopt new tools and adapt to changing circumstances, we'd still be etching symbols onto cave walls. But as we embrace these AI tools that promise to make our coding lives easier, we must remain vigilant about the risks of over-reliance.
We must be careful not to trade our skill and expertise for convenience.
Conclusion
I'm sure I'm not the only one who's become a little complacent, a bit lazy, in the face of these new tools. But recognizing the problem is the first step towards solving it.
So yeah, the next time you're reaching for that shiny AI tool to solve your coding problem, pause for a moment. Is this a shortcut you genuinely need, or is it a crutch you're leaning on? Are you using the tool, or is the tool using you?
Remember, the best tool any coder has is their brain. And unlike AI tools, it's 100% unique to you and perfectly tailored to solve your coding problems. So before you let AI take the wheel, make sure you're still in the driver's seat.
Top comments
Thanks for shedding some light on this. IMO AI is yet another tool, and how valuable a tool is depends on how we use it. If your first impulse is to head over to ChatGPT and generate some boilerplate, or to break your Enter key by accepting every Copilot suggestion (just kidding), then of course it potentially leads to declining skills.
So it really comes down to finding the sweet spot between applying your own skills and using AI to do the job. Looks like you should lean more towards your own skills for a while.
Thanks @dasheck0. I totally agree, as with any tool, it's about finding the right balance between use and reliance. My personal experience last week has definitely tipped towards reliance, and what I really wanted to accomplish with this post is just to have others learn from my experience and ensure they aren't making the same mistake. This week I'm making a point of using ChatGPT and Copilot less.
Yes, and this is where I see the value in this article. We have to talk about how such things change the way we work and think. The solution might be simple (or not). But you can only talk about solutions once you've identified the problem.
Much needed topic for sure. I mean, I love to write and read code, and there are certain things in software dev where I take the more purist view. But with AI code generation, I feel it's that old debate about using a calculator or using our heads. It doesn't have to be one or the other, like you mentioned.
Is it making us lazy? Probably, but I hope what it's also doing is freeing our minds to tackle issues that we are much better at solving: gathering user requirements, solving design and security issues, increasing productivity, defining goals, etc. I think software engineers, especially, have an unreasonably long list of things to address. It's a list of tasks that other engineering disciplines have divided into at least three separate jobs (technician, engineer, architect, safety inspector, etc.).
Thanks for the feedback, @freddyhm.
I like that you raised calculators as an example, which I also thought about but originally decided against using for the comparisons. But reading this again, and thinking on it a bit, it wasn't so much the calculator by itself that changed people's behaviour. Rather, it was the ease of access to a calculator on our mobile phones as they became prevalent where I, at least, started noticing people "becoming lazy", reaching for the tool rather than their heads, and even starting to struggle with basic arithmetic.
And I think that is part of what shifted for me recently and made me think about this topic; after paying for ChatGPT and keeping a tab always open, and installing the Copilot plugin in all of my IDEs and text editors, it's just become too easy to use them; there is no hesitation because there is no resistance.
My hope as well is that using these tools the right way will free up cognitive load from the more dull work so we can focus on the problem areas where humans shine. Like you say, we already have an unreasonably long list of things to address, giving some of that up to a machine will surely help in the long run.
Thanks for sharing, it's an interesting reflection.
It makes me think about the shift in how people's brains work. People used to be really good at retaining knowledge, and there's been a general shift away from this. As information is so easy to look up (e.g. Google), people stopped remembering and became adept at finding things instead.
I wonder if the same will happen here? Will we "forget" how to debug and problem solve, instead becoming overly reliant on these tools? Only time will tell...
Personally I don't use Copilot much (not sure if it's just my SublimeText setup, but it only suggests noddy stuff line by line that I can type more quickly than the suggestions appear). Sometimes I'll use ChatGPT if I need help getting started or have a problem I'm not sure how best to Google for. But I try to use it for the first hurdle or two and then stop relying on it. I often find myself yak shaving if I rely on it too much.
Thanks for the comment @ipwright83 , appreciate the insights.
The "Google effect" / digital amnesia you mention has indeed changed how we store and retrieve information. I do think something similar is happening with problem-solving and debugging skills in the face of advanced AI tools, but then as mentioned and discussed in other comments, it only really becomes a "problem" once access to the tools becomes so easy that we don't even think about it anymore, and I don't think we are there yet.
I personally didn't feel the Google Effect until I had a search engine in my pocket all the time, and didn't start struggling with arithmetic until I had a calculator in my pocket all the time. Smartphones gave us ease of access, which is where I feel things shifted, and I suspect the shift here will be inclusion of AI tools as standard in IDEs and editors.
Out of curiosity:
Do you think you would've solved the problem faster without ChatGPT?
In this particular instance, yes. If I had taken a minute to properly read my code, or just step through it with a debugger, I'd have spotted the issue since it wasn't that complex.
The back and forth between me and ChatGPT and having to sanitize code I give it meant it took longer than needed.
Late to the party here, but you addressed a really salient point. If we rely on AI to the point where we no longer have a good understanding of context, problem-solving will definitely be more difficult. When you write code which is a result of your mind thinking about how to solve a problem, you have context from the get-go, and you give your mind time to process that context further as you type/modify the code. Your brain will be much better able to identify problems after the fact than if you rely on a code generator.
Of course, there's no single "correct" way to go about all this, but it's good to at least be aware of the trade-offs. Nice article!
Thanks for the comment @squidbe, I appreciate the feedback. I like this way of thinking as well, about having the context and taking time to process.
I agree completely that if you thought through and solved the problem yourself initially, it's much easier to think and reason about any new problems that come up. It's easier to fix my own code that I kinda understand than someone else's code (whether AI or StackOverflow) that I just copy-pasta'd.
What I've been focusing on since writing this post is using the tools to make my solutions better, rather than having them solve my problem. So telling it what I did and getting suggestions on improvements. The results have been mixed; I think ChatGPT is getting dumber...
Great post
Interesting thoughts, I feel the same many times lately.