DEV Community

Ranjan Dailata

Future of Generative AI


At least since OpenAI's ChatGPT was made available to the public, the term "Generative AI" has gained popularity. The near-daily release of new Large Language Models (LLMs), and upgrades to existing ones, is causing a dramatic shift in the field of generative artificial intelligence.

Disclaimer: No one can predict with certainty what the future of Generative AI will hold. But the purpose of this blog post is to provide some insight into how the field of generative AI is constantly changing.


Let's take a brief moment to understand the definition of Generative AI. Here's a definition from ChatGPT:

Generative AI refers to a class of artificial intelligence (AI) systems that are designed to generate new, original content or data rather than simply analyzing existing data or making decisions based on predefined rules. These systems use various techniques to create new content, often imitating or generating data that is similar to what they have been trained on.

Getting Deeper

Let's go a step deeper into the possibilities of Generative AI in the field of AI computing. At first glance, Generative AI can seem like an abstract thing. However, its applications are numerous across various domains, including image and video synthesis, text generation, music composition, and much more.

Fundamental Problem

One of the core fundamental problems with generative content is the limitation of the training data. No matter what unique or new content a large language model is capable of generating, there is always a limit. Consider blog creation as an example: if you ask an LLM to write something innovative or creative, it will generate it for you. However, no matter what type of content the LLM produces, the output can still feel flat or uninspiring. Although this doesn't seem like a problem at first, there are a few things to consider, especially the authenticity of the generated content. LLMs are known for hallucination: the content they generate can appear realistic and sensible, yet their tendency to produce bogus or fake content cannot be fully controlled. Depending on the field you are working in, the impact can be more or less severe.

If you are wondering about workarounds, you may choose in-context learning or fine-tuning; these are the supported techniques at the moment. Although the hallucination problem can be mitigated to an extent, human review is still required.
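As a minimal sketch of the in-context approach, a prompt can be grounded in trusted source text so the model is instructed to answer only from it. The helper below is purely illustrative (not a specific library API); the function name and prompt wording are assumptions:

```python
def build_grounded_prompt(context: str, question: str) -> str:
    """Compose a prompt that tells the model to answer only from the
    supplied context, which reduces (but does not eliminate) the chance
    of hallucinated answers."""
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example usage: the prompt would then be sent to an LLM of your choice.
prompt = build_grounded_prompt(
    context="Our product ships with a 2-year warranty.",
    question="How long is the warranty?",
)
print(prompt)
```

Even with grounding like this, the model can still drift, which is why the post argues human review remains necessary.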

Human Dependency

No matter how intelligent AI systems become, they are always constrained by the cut-off date of their training data. You could argue for automating data collection and updating the model and its dependencies, yet the fundamental problems above still exist. This is where humans come into the picture. An AI system needs to be consistently trained on clean, authentic data; this requires enormous human effort, and that will not go away in the near future, no matter how advanced computing becomes.


Let's take a hypothetical scenario of human involvement in the constant training of an AI. As discussed above, the fundamental problem remains the same. Since humans are the main producers of data, the AI is always dependent on humans. Moreover, humans are always needed to review the training data and make sure it is authentic and valid.

Some Fiction

It's possible that a future AI will have direct access to the human brain, or that it will analyze thoughts and adjust its model to produce better generative output. While this sounds theoretical at the moment, today's AI systems are already capable of detecting sentiment and emotional cues in text.


In this blog post, you were given a vision of the future of Generative AI and its challenges. Hope it all makes sense. Welcome to the era of Generative AI; we hope it will bring some good to mankind.

Top comments (3)

Sohail Pathan

Nice blog, Ranjan. I believe we have known Gen AI for a very long time; what ChatGPT did is give a UI for interacting with it, especially for a non-tech audience.

Ranjan Dailata • Edited

Generative AI technology was an evolving one, and it wasn't mature until recently with the transformer architecture. Yes, ChatGPT has changed things significantly, but the industry had been experimenting with it for a couple of years. As I understand it, OpenAI took more than 5 to 7 years to come up with the solid Generative AI models that we see today!

Sohail Pathan

Indeed, it is true. There has been a tremendous amount of effort put forth behind the scenes to create what we are witnessing today. Personally, I am very enthusiastic about what the future has in store for us.