ChatGPT. Everyone's talking about it. There's tons of content where people interact with ChatGPT in amazingly creative and even frivolous ways.
ChatGPT can help you plan your day, write an essay, and even make you some money (at least according to countless TikTok videos).
Before you get all giddy, the big question is: as A$AP Blockie on TikTok asks, will AI take your coding job?
Let's go on a journey to explore the question, "Will AI make programmers obsolete?" and more.
- 📍 How does AI write code?
- 📍 Examples of programming tasks AI is already doing well
- 📍 What limitations does AI have?
- 📍 Will learning to code be worth it in the near future?
- 📍 The verdict
There are two schools of thought here. There are people who see AI as another tool in developers' arsenals to help increase productivity.
The second school of thought is that AI will get so good that it will replace programmers entirely. Hello, ChatGPT?
Let's ask ChatGPT to generate code to connect our database to our endpoints.
ChatGPT prompts us back, asking for more information about our specific use case and the technology stack.
The assistant generates the code snippet, but you still have to tweak it to fit your particular use case, for example, by taking care of the specific versions of the tools you are using.
Depending on whether your code is in development or production, you might also want to make sure that sensitive information is hidden.
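To make this concrete, here is a sketch of the kind of snippet such a prompt might produce, assuming a Flask + SQLite stack (the framework, route, and table name are assumptions for illustration, not ChatGPT's actual output). Note that the database path is read from an environment variable so that sensitive configuration stays out of the source:

```python
import os
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)

# Read the database location from the environment so credentials and paths
# never get hard-coded into the repository.
DB_PATH = os.environ.get("DB_PATH", "app.db")


@app.route("/items")
def list_items():
    # Open a connection per request and always close it, even on errors.
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute("SELECT id, name FROM items").fetchall()
        return jsonify([{"id": r[0], "name": r[1]} for r in rows])
    finally:
        conn.close()
```

You would still adapt a snippet like this to your own stack and library versions, exactly as described above.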
You could have gotten the same information from Stack Overflow. The difference is that with ChatGPT, you don't have to read through several answers to find the one that fits.
Let's further explore the question, "Will AI make programmers obsolete?"
If all this talk about AI writing its own code sounds complex or confusing, let's take a step back and have a look at the "behind the scenes" of how it all works.
Before we define AI, let's talk about programming. Programming is the process of writing instructions for a computer to follow so as to achieve a particular goal.
This set of instructions is known as a "program."
You can write a program for an e-commerce site like Amazon, for example. The instructions would be:
- A customer logs in.
- Goes through the item catalog.
- Chooses an item.
- Adds it to the cart.
- Checks out.
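The steps above can be sketched as a handful of plain Python functions (the catalog, users, and prices here are made up for illustration):

```python
# Toy sketch of the e-commerce flow: log in, browse, add to cart, check out.
catalog = {"book": 12.99, "mug": 7.50}   # the item catalog the customer browses
users = {"jaime": "s3cret"}              # username -> password (toy data only)


def log_in(username, password):
    # The user's details need to be confirmed first.
    return users.get(username) == password


def add_to_cart(cart, item):
    if item in catalog:  # the customer chooses an item from the catalog
        cart.append(item)
    return cart


def check_out(cart):
    # Checking out totals the prices of everything in the cart.
    return round(sum(catalog[item] for item in cart), 2)


if log_in("jaime", "s3cret"):
    cart = add_to_cart([], "book")
    add_to_cart(cart, "mug")
    total = check_out(cart)  # 20.49
```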
For the above instructions to be carried out, a lot of things need to take place. The user details need to be confirmed, for example.
The instructions, therefore, need to be very specific.
AI, or artificial intelligence, can be seen as "advanced programming." In addition to writing instructions for a computer to follow, we also "teach" it some human intelligence. For example, we "teach" AI to identify cat images.
This happens through a very complex process known as machine learning (ML). In ML, there are models (which represent real-world concepts). Programmers train these models using real-world data.
For example, a programmer can train a machine learning model to identify cat images. To do so, the model has to "see" lots of cat images. Over time, the model identifies the patterns that help it distinguish between a cat and something that is not a cat (when the images are unlabeled, this is known as "unsupervised learning").
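To make "learning from examples" concrete, here is a toy perceptron in pure Python. It is a deliberately simplified, supervised sketch (the features, data, and labels are all invented): the model starts out knowing nothing and nudges its weights every time it misclassifies an example:

```python
# A toy perceptron: the "model" starts with zero weights and adjusts them a
# little each time it gets a training example wrong.
def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:
            pred = 1 if w[0] * features[0] + w[1] * features[1] + b > 0 else 0
            err = label - pred  # 0 when correct; +1/-1 when wrong
            w = [w[0] + lr * err * features[0], w[1] + lr * err * features[1]]
            b += lr * err
    return w, b


# Each example: ((has_whiskers, barks), is_cat) -- invented toy data.
data = [((1, 0), 1), ((1, 0), 1), ((0, 1), 0), ((0, 1), 0)]
w, b = train(data)


def predict(features):
    return 1 if w[0] * features[0] + w[1] * features[1] + b > 0 else 0


print(predict((1, 0)))  # 1 (cat-like)
print(predict((0, 1)))  # 0 (not cat-like)
```

Real image models learn from millions of raw pixels rather than two hand-picked features, but the train-on-examples loop is the same basic idea.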
The same thing happens with code — a model goes through lots of code or is "trained" via large amounts of code. Over time, it begins to identify the patterns that "define" code. Eventually, it will be able to generate some code.
This is what happened with ChatGPT. ChatGPT is essentially a machine learning model that was trained on a large amount of data from the internet, including code. The underlying model is called GPT-3 (Generative Pre-trained Transformer 3).
It then used an ML concept called reinforcement learning, where models are "rewarded" for accuracy and consistency.
In addition, there were also human AI trainers that helped frame conversations between humans and an AI. The conversations were then "fed" into the models. This is referred to as Reinforcement Learning from Human Feedback (RLHF).
ChatGPT is currently available for free as a research preview.
Now that AI is more advanced and there's more infrastructure to handle large data sets, will AI make programmers obsolete?
In this section, we look at some programming tasks that AI is doing well.
Coding competitions attract lots of developers, and for good reason. They are a good way for programmers to bolster their problem-solving skills. This way, they end up becoming better at their craft.
Moreover, there are also enviable prizes to be won.
AlphaCode is an ML model built for coding competitions. So far, it has ranked within the top 54% of participants in coding competitions on sites like Codeforces, which attract tens of thousands of participants.
Who knows, AI might soon change how hackathons — coding competitions where you get to build a solution within a very short period of time (between 2 and 3 days) — are done.
Debugging is a necessary part of writing code. This is because developers need to deliver code that works according to a specified user story.
Remember our "list of instructions" for an e-commerce site? They can be rewritten into user stories, which are more customer-focused.
Instead of "a customer logs in," we can have something like "as Jaime, I would like to be able to log in to my account."
User stories and debugging are some coding best practices that every developer needs to learn in order to deliver a great product.
You can imagine what would happen if anyone was able to log in using Jaime's account. They could steal their credit card information, for example.
Let's not even get started with all the regulations around PII (personally identifiable information), like GDPR.
This is why debugging is important. It helps to identify aspects of your code you might have overlooked, for example, confirming Jaime's credentials before logging in.
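A toy version of "confirming Jaime's credentials" might look like the following. It is illustrative only: a real system would use a salted, slow password-hashing scheme (e.g. bcrypt) rather than bare SHA-256, and the usernames and passwords here are invented:

```python
import hashlib


def hash_password(password: str) -> str:
    # Illustrative only: unsalted SHA-256 is NOT adequate for production.
    return hashlib.sha256(password.encode()).hexdigest()


# Store only the hash, never the plain-text password (toy data).
users = {"jaime": hash_password("correct-horse")}


def log_in(username: str, password: str) -> bool:
    # Confirm the credentials before letting anyone into Jaime's account.
    stored = users.get(username)
    return stored is not None and stored == hash_password(password)


print(log_in("jaime", "correct-horse"))  # True
print(log_in("jaime", "guess"))          # False
```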
The only downside to debugging is that it can get quite time-consuming. While writing tests (another programming best practice) helps with debugging, sometimes a developer can get absolutely "stuck."
They may not be able to identify an error or how to fix it. They spend hours on the internet and on sites like Stack Overflow trying to find out whether other people have experienced similar errors and how to solve them.
The good news is that with AI tools like ChatGPT and SonarQube (a code review tool), debugging becomes a faster process.
AI can therefore be used for pair programming, where it helps you write and debug code.
Suppose you don't understand your program's output: it keeps printing "5", even though your for loop has a condition that checks the value of i on every iteration (i < 5).
Why does it print 5 at all, when the loop should stop once the condition is no longer met?
The setTimeout() method sets a timer and executes a piece of code once that timer expires.
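The scenario above is the classic closure-over-a-loop-variable bug in JavaScript. A minimal reproduction, together with one common fix (declaring the counter with let instead of var), might look like this:

```javascript
// Classic closure bug: with `var`, every callback closes over the SAME `i`.
// By the time the timers fire, the loop has already finished and i === 5,
// so this prints "5" five times instead of 0..4.
for (var i = 0; i < 5; i++) {
  setTimeout(function () {
    console.log(i); // prints 5, five times
  }, 100);
}

// One common fix: declare the counter with `let`, which creates a fresh
// binding of `j` for each iteration, so each callback sees its own value.
for (let j = 0; j < 5; j++) {
  setTimeout(function () {
    console.log(j); // prints 0, 1, 2, 3, 4
  }, 100);
}
```

With var, all five callbacks share a single i, which is already 5 when the timers fire; let gives each iteration its own copy.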
As you can see, ChatGPT not only points out the bug or error but also suggests a way to fix it.
Note that ChatGPT was not built as a debugger. It is a model that is meant to generate language and code (since code is written in programming languages).
The fact that you can "push" it to debug code means that we already have a framework and blueprint for an AI that can eventually be built to specialize in debugging and generating code.
This would make debugging way faster and easier in the near future, saving developers copious amounts of time.
As we have seen in the previous example, AI can help you debug code. In addition, it can generate code from prompts written in human language.
Let's ask ChatGPT to generate a code snippet for the Fibonacci sequence. It responded with a snippet written in Python.
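The snippet it returned was along these lines (reconstructed from the description here; the exact response varies between sessions):

```python
def fibonacci(n):
    """Return the first n numbers of the Fibonacci sequence."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b  # each number is the sum of the two before it
    return sequence


print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```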
As you can see, it generated a code snippet in Python and explained it precisely: the different components and the expected output.
We have learned that AI can do some cool and phenomenal things. However, will AI replace web developers?
The good news is that it won't happen. At least not yet. This is because there are a few things AI has yet to figure out and some challenges it has yet to overcome. Let's explore them further.
When an AI uses data from the internet as part of its training data set, there are bound to be legal issues, especially with copyright, intellectual property, and licenses. For example, Codex's neural network (the OpenAI model Copilot is based on) was trained via billions of lines of code stored on GitHub in order to learn how to write code.
We don't know whether it used only code from public GitHub repositories or only code released under free distribution licenses.
Recent research from MIT, on the other hand, shows that machine learning models trained with synthetic data can sometimes be more accurate than those trained with real-world data.
This could help with copyright issues. However, it could raise other problems, such as a model performing excellently in isolated cases (something referred to as a "local maximum") but failing miserably at generalization.
For accurate results, a good ML model must be able to generalize data well.
Let's delve into accuracy when it comes to AI.
AI models like ChatGPT do give incorrect responses. In fact, in December 2022, answers from ChatGPT were temporarily banned from Stack Overflow due to inaccuracy.
The AI does caution about its accuracy limitations, though.
The AI gives inconsistent responses as you tweak consecutive prompts. Besides, even when it's plainly wrong, it sounds really confident.
Another aspect that affects ChatGPT's accuracy is the fact that it was trained on data only up to 2021.
This makes it unable to provide current information; if you ask for it, ChatGPT says that it cannot search the internet.
In addition, ChatGPT responses are inherently biased. It can, as a result, respond to harmful prompts or give inappropriate responses.
ChatGPT can also be too wordy, and it overuses certain phrases, such as repeatedly stating that it is a language model trained by OpenAI.
OpenAI, the company behind ChatGPT, is nevertheless eager to address its model's limitations. This could hopefully resolve the inaccuracies and biases in the future.
While AI is doing a great job at code generation, it still needs input from humans to write code that actually works. In our examples above, it also needed follow-up prompts before it generated code that worked as expected.
So, will AI replace web developers? Not really.
Despite the advancements that we are seeing in AI, coding will still be an important skill in the near future.
Let's imagine for a moment that an AI like ChatGPT can write perfect code.
First, we'll still need humans who understand code to write appropriate prompts for AI to use. Secondly, a programmer needs to "verify" that the generated code actually works for their particular use case.
Thirdly, we need humans to think about other aspects of the software being built: the user experience and other constraints that may affect performance and scalability.
For example, programmers will need to make decisions on the best software combination to go with to ensure that it can handle traffic spikes.
Code generation tools are "assistants." There's a reason why GitHub's code-generating tool is called Copilot. It helps you "navigate" through your code rather than taking over the pilot's seat.
As we mentioned, AI would work great for pair programming. You become the navigator, instructing it, and it generates code as the driver.
We cannot ignore the fact that AI is getting better at learning. Who knows? We might see a decrease in demand for coding skills.
We might actually be very wrong. There's a book called "The Optimism Bias" by Tali Sharot, which scientifically explains why humans are wired to misinterpret the reality of their circumstances and how grossly inaccurate we can be at predicting our own futures.
So, AI might actually be coming for web development jobs 🤭
We then dove deeper into the question, "Will AI make programmers obsolete?", and explored the "behind the scenes" of how AI generates code.
We also looked at some examples of where AI is thriving as well as some of its limitations. Finally, we concluded that learning to code is still a valuable skill in the wake of AI advancements.
It is important to note that AI will keep getting better, and it might be a smart move to upskill and acquire advanced skills like DevOps in the future. Or stay alert for new jobs that will be created as a result of AI.