There has been popular worry about generative AI replacing jobs. Software engineers were also said to be on the list: producing code is similar to producing language, and while generative AI is not yet very good at producing larger bodies of working code, that would only be a matter of time. Or so the thinking goes.
(So also goes my thinking: I do think that the next iteration of LLMs will be able to produce significantly better code than previous iterations, and I'm looking forward to it.)
I don't know if it really is just a matter of time until generative AI can write working code that is more than a few lines long. Whether or not it comes to be, it is a scenario worth thinking about.
The problem: AI and code inflation
Generative AI writing code doesn't mean that software engineers will be out of a job, because only part of a software engineer's job is to write code. A much larger part of their job is to decide which code to write, and how.
An engineer's job consists of making a myriad of decisions, some smaller, some larger. Many of them concern the architecture of software: how generic a function or component should be, or how specific and custom it needs to be. How to handle data across the application. How to ensure performance and security on top of functionality. How to expose functionality to the user. And so on.
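To make one of these trade-offs concrete, here is a minimal sketch (the function names and requirements are invented for illustration): the same job done by a deliberately specific function and by a more generic one.

```python
# Specific: does exactly one job and is trivially easy to read.
def monthly_report_filename(year: int, month: int) -> str:
    return f"report-{year:04d}-{month:02d}.csv"

# Generic: reusable across more cases, but with more surface area to
# design, document, and test. Deciding which version is appropriate
# is exactly the kind of judgment call an engineer makes.
def filename(prefix: str, year: int, month: int, extension: str = "csv") -> str:
    return f"{prefix}-{year:04d}-{month:02d}.{extension}"
```

Neither version is wrong; the point is that someone has to weigh readability against reusability, and code generation does not make that decision go away.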
Having an AI assistant that helps an engineer produce more code more quickly will probably also mean that less time is spent thinking about these decisions.
It also means that more code will be produced, which is not necessarily a good thing: more code creates more demand for the surrounding functions that are often understaffed already. Quality assurance and security are two that come to mind. User experience design is another.
If more productive software engineers meant that fewer of them were needed, and the freed-up budget actually went towards these surrounding functions, great.
However, I think that is highly unlikely to happen.
The rise of throwaway code
There is already an unhealthy focus on producing more code, more features, and new releases with new things, instead of improving quality and user experience. This balance will tilt further if producing code becomes cheaper.
It is similar to the horror scenario some envision for marketing: with generative AI making the cost of producing marketing content plummet, there will be a lot more of it. A lot of it will also be garbage. (A lot of it already is garbage.) This creates noise, which sucks up attention and energy, leads to equally rapidly plunging returns for content marketing, and will require a rethink of go-to-market strategies if content marketing gets "used up" and stops working.
Something similar might happen with code: when producing code becomes cheap, managing it well will become both more difficult and more valuable.
Maybe this is the advent of throwaway code: scripts written for a specific purpose that are not intended to last. They are used for as long as they work and are useful, and then they are simply replaced with a different script that fits the new requirements, which might be produced from scratch every time.
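As a sketch, a throwaway script might look something like this (the input file and its columns are made up; note what is deliberately missing: no error handling, no configuration, no tests):

```python
# one_off_revenue.py - used while it is useful, then discarded.
# If requirements change, write (or generate) a new one from scratch.
import csv

total = 0.0
with open("orders-2024-06.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += float(row["amount"])

print(f"June revenue: {total:.2f}")
```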
This might make code management or even quality assurance obsolete, at least to some degree. A plastic fork that you use once doesn't need to be particularly sturdy or pleasant to look at: you use it once, then you throw it away.
Throwaway software might be the same: unwieldy, inefficient, horrible from a technical standpoint. But if it gets you there cheaply, it might still come to dominate over human-produced software that is secure and pleasant to use, but much less flexible and a lot more expensive.
Even security gaps might become less urgent if throwaway software is used for only a day or two: not enough time to find and exploit security vulnerabilities. If the next iteration is written from scratch, it might have different vulnerabilities. A new form of security through obscurity.
However, generative AI writing software will coexist with generative AI hacking software. And since generative AI regurgitates what it reads "on the internet", it is likely to reproduce common vulnerabilities that can be quickly identified and exploited.
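For illustration, one common pattern of this kind (the table and query here are hypothetical, but string-formatted SQL appears in countless tutorials a model could have been trained on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Injectable: a name like "' OR '1'='1" returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # returns nothing
```

If a model has seen the unsafe pattern thousands of times, there is little reason to expect it to reliably produce the safe one, and an attacker's model knows exactly what to probe for.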
One AI is not enough
The obvious answer is that generative AI will only be used widely for producing code once generative (or other types of) AI can also fulfill the functions of quality assurance, security assurance, and UX design. You might get a "team" of specialized AI models, similar to a software development team composed of humans.
That seems to me to be several steps further away than generative AI models that produce working code. Writing code is pretty close to what they are already good at. Designing good user interfaces, translating those designs into something another AI model can understand and implement, and then making sure the resulting software is safe and bug-free is a whole other game, with a much higher level of complexity than "just" figuring out how to write lines of code that do what a user asked for.
The hype is dead. Long live the hype!
Looking back, AI and machine learning have gone through many hype cycles. This is the latest one. I'm excited and curious to see what impact it will have.
Previous hype cycles fizzled out without much impact. The last one at least provided real and substantial benefits in some areas: content recommendation and image processing, for example. Given the broad nature of generative AI's capabilities, I'm convinced that the effects of this hype cycle will be significantly larger than those of the last one. However, I'm also sure that it will not be fundamentally life-changing.