Not long ago, an AI called ChatGPT was released, and it raised many questions among software developers. The model can answer questions about almost anything that is documented on the internet. It can write a function in nearly any language you ask for, rewrite it in a different language, and modify it however you want. It's like having an extra developer always available to help you. It's quite impressive.
Naturally, this raises the question of whether software developer jobs are in danger, something that would have been hard to imagine just a few years ago. I don't think it will happen anytime soon, though. Why?
It knows only what is on the internet
ChatGPT is like a supercharged version of StackOverflow. If your problem has already been solved multiple times, you are likely to get a perfect answer. However, if you are trying to solve a very specific bug, the output may be unreliable. Additionally, there are many private communities and knowledge bases that require registration or a paid subscription, so the AI may not be able to access all the information it needs to provide an accurate answer.
Furthermore, I have noticed that ChatGPT sometimes struggles with the most recent information in its training data. For example, when I asked about a recent Python feature, the model was convinced the feature did not exist. Upon further questioning, it admitted that its training data included the Python version that introduced the feature, yet it was still unable to use it in code.
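The post doesn't name the feature, but as a hypothetical stand-in, structural pattern matching (the `match` statement, added in Python 3.10) is exactly the kind of recent addition a model trained mostly on older code might insist does not exist:

```python
# Structural pattern matching was added in Python 3.10 (PEP 634).
# A model whose training data is dominated by older Python may deny
# this syntax exists, even while claiming to know Python 3.10.
def describe(point):
    match point:
        case (0, 0):
            return "origin"
        case (x, 0):
            return f"on the x-axis at {x}"
        case (x, y):
            return f"at ({x}, {y})"

print(describe((3, 0)))  # -> on the x-axis at 3
```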
If it's not on the internet, it will make it up
Yes, ChatGPT is not always reliable, and it may provide false information. This is particularly true when you ask about topics the AI has limited knowledge of. For instance, I was working with a poorly documented Jira API in Java, and I asked ChatGPT for a code example showing how to use a specific service to retrieve and update data. It gave me code that looked like it should work, but it was entirely fictitious: it used methods and classes that did not exist in the API.
In the worst-case scenario, the AI behaves like a junior developer who blindly follows instructions, even if that means producing code that doesn't work. Having to mentor an AI in this way is not an enjoyable experience.
The AI doesn't know the full context
ChatGPT can generate a self-contained function, but that is rarely the task actually required. The role of a skilled software developer is to have a holistic understanding of the code and the software as a whole: to integrate new code into a complex, multi-layered (and sometimes very fragile) ecosystem in a way that minimizes disruption and breakage. And this is simply impossible without full knowledge of the codebase.
It is possible that a model specifically trained on programming languages could handle a codebase with hundreds of classes and thousands of lines of code, but would it be practical or desirable to do so? Storing a codebase in a cloud-based repository like Bitbucket is one thing, but would you willingly share it with an AI? This is especially concerning given the recent controversies around the use of AI in art. It is hard to imagine a large banking company willingly sharing their critical core code with an AI, as large corporations are typically cautious about even using basic cloud services like virtual servers or file hosting.
The AI can only generate code, for now
It may seem obvious, but ChatGPT is a text generation model, and software development is far more than text generation. Writing code is just the first, and often the easiest, step in the process. Testing and debugging are also crucial, as even well-written code may not always behave as expected. The job of a software developer also includes tasks such as profiling, troubleshooting, and drinking coffee. The current AI model is not capable of performing these interactive tasks.
This can be a major issue, particularly when the public documentation for an API or library is inadequate. In such cases, you may need to experiment repeatedly to determine how the code actually behaves and what inputs it expects, as in the sketch below. And relying solely on a language model will not be enough to uncover obscure bugs in your codebase.
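When the docs fall short, the only reliable oracle is execution. Here is a minimal sketch of the kind of exploratory probing I mean, with a hypothetical under-documented `parse_timestamp` helper standing in for the real thing: feed it a few candidate inputs and observe what it actually accepts. A text model can only guess at this; running the code settles it.

```python
import datetime

def parse_timestamp(raw):
    # Stand-in for a poorly documented third-party function.
    return datetime.datetime.fromisoformat(raw)

# Probe the function with candidate inputs to learn what it really accepts.
for candidate in ["2023-01-05", "2023-01-05T10:30:00", "05/01/2023"]:
    try:
        print(f"{candidate!r} -> {parse_timestamp(candidate)}")
    except ValueError as exc:
        print(f"{candidate!r} -> rejected: {exc}")
```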
It hinders creativity
Coding is a creative endeavor. While it may not always require a high level of creativity, there are certainly times when you need to come up with creative solutions to problems. Recently, I used ChatGPT to generate some non-coding ideas, but I was disappointed by the lack of originality in the output. AI models tend to favor what is currently popular or considered 'normal' or 'standard', and this bias may also impact their ability to write code creatively.
Conclusion
I believe that AI will eventually replace many jobs, including those of software developers, but not soon. Technological progress often comes in sudden leaps followed by long stretches of gradual refinement, and I think we are currently in the midst of such a leap. The capabilities of language models will certainly keep improving, but from here the progress will likely be slower and more incremental. That is why I do not believe my job will be replaced by AI in the next 20 years.
While not the main focus of this article, I believe that ChatGPT can be a valuable tool for software developers, making them more productive. For example, it can simplify and streamline code by rewriting complex conditionals. It may also introduce you to language constructs you were previously unaware of, making your code more idiomatic and expanding your knowledge of the language.
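As a hypothetical before-and-after, here is the kind of conditional cleanup it can suggest. The shipping-cost example is invented, but the pattern, replacing nested if/else blocks with early returns and conditional expressions, is representative:

```python
# Before: nested conditionals that obscure the actual decision table.
def shipping_cost(order):
    if order["country"] == "US":
        if order["total"] > 100:
            return 0
        else:
            return 5
    else:
        if order["total"] > 100:
            return 10
        else:
            return 20

# After: the same logic, flattened with conditional expressions.
def shipping_cost_flat(order):
    over_100 = order["total"] > 100
    if order["country"] == "US":
        return 0 if over_100 else 5
    return 10 if over_100 else 20

print(shipping_cost_flat({"country": "US", "total": 150}))  # -> 0
```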
I have several other ideas related to ChatGPT that I would like to explore further, and I plan to write about them in a future post with a more positive outlook.