Wriju's Blog

Programmer vs AI

A word of caution before you read the whole article: it could be perceived as an individual's babbling. Having been in the IT industry since the 2000s and having spent a significant amount of time as a professional programmer for small and large customers, building for platforms like mobile and web, I have walked a long and winding path. Some may call it "A Journey," but that sounds philosophical to me. At times you have no choice but to take on projects that, for an individual, may seem like a step backward. Going from a modern cloud application development project to pure Excel macro automation, sustained by the hope that it might someday become an ASP.NET application, is surreal.

From hand-coding a complete multi-page HTML, CSS, and JavaScript website for a shop in Notepad to enjoying almost every world-class code editor of the day, I have watched the tooling move steadily toward programmer productivity. I enjoyed IntelliSense and statement completion, and the pop-up help that guides you through a function's parameters and return types. These things made a developer's life easier, so she could write more lines of code. When you work for a customer who pays the bill by counting what gets compiled into production, these little helpers feel like a luxury that earns its keep.

But computer programming is not just about writing code; it is about writing good, performant code. Subtle differences in the types and classes chosen for the code make a huge difference, and that knowledge comes from experience, from colleagues, and from reviewers. A senior developer's core job in any project was to first define the guardrails: a basic standard, common naming conventions, and sensible coding choices for everyone on the team to follow.

Understanding the inner workings of each type helped me figure out which one is best for the job. Knowing why, when repeatedly concatenating strings in the .NET Framework, you should avoid "string +=" and use "StringBuilder.Append()" instead was part of that experience (a rough sketch of the difference follows below). There are many such examples. They are never documented in a single source of truth, a single point of reference. A pity, but we must live with this reality.
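
To make that concrete, here is a minimal C# sketch (an illustration, not a rigorous benchmark; the iteration count and console labels are my own). System.String is immutable, so "+=" in a loop copies the whole string on every pass, while StringBuilder.Append grows an internal buffer in place.

```csharp
using System;
using System.Diagnostics;
using System.Text;

class ConcatSketch
{
    static void Main()
    {
        const int iterations = 50000;

        // Naive concatenation: each += allocates a brand-new string and
        // copies the old contents, so the loop is roughly O(n^2) overall.
        var sw = Stopwatch.StartNew();
        string concatenated = "";
        for (int i = 0; i < iterations; i++)
        {
            concatenated += "x";
        }
        sw.Stop();
        Console.WriteLine($"string +=            : {sw.ElapsedMilliseconds} ms, {concatenated.Length} chars");

        // StringBuilder: appends into a growable buffer, amortized O(1) per call.
        sw.Restart();
        var builder = new StringBuilder();
        for (int i = 0; i < iterations; i++)
        {
            builder.Append("x");
        }
        string built = builder.ToString();
        sw.Stop();
        Console.WriteLine($"StringBuilder.Append : {sw.ElapsedMilliseconds} ms, {built.Length} chars");
    }
}
```

On any recent .NET runtime the second loop finishes dramatically faster; the exact numbers depend on the machine. That is exactly the kind of nuance a reviewer passes on, and a model trained on code that merely compiles may never learn it.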

When AI joins the league of the developer community, it builds its knowledge from the many millions of lines of code available in the public domain, on GitHub or Stack Overflow for example. Most of that code, if not all, probably compiles without any significant syntactical error. What it doesn't tell us is whether the code follows known best practices, was reviewed by a senior developer, or went through thorough system-level performance testing. Even where it did, no dataset connects and chains that information together to feed this important input into AI knowledge.

Also, AI never flags its own failures through an automatic system output; it is humans who accept or reject what it produces. Most AI systems do not have a self-critical rating mechanism. A well-designed system can include a feedback loop and try to improve, but unfortunately the end users who act as examiners are not that reliable, or they simply skip that optional step.

If we start thinking about replacing human developers with AI systems, we cannot be sure that in the near future our applications won't be zero-error, cleanly compiled, yet non-performant code. Such an application would consume unnecessary memory, forcing us to build ever beefier systems to run it, the way a graphics-heavy game demands a GPU-equipped machine. What if AI generates a simple end-user utility that pulls in libraries designed for data science, simply because that is what it found in its training knowledge base?

There is no denying that AI will bring productivity, but aiming for application development with no humans at all could be a real threat. As the Spider-Man movie puts it, "With great power comes great responsibility." We need to acknowledge that building an atomic bomb may be a scientific achievement, but dropping it on millions of innocent people after the war had ended is sheer stupidity.
