Regardless of our opinion of whether artificial intelligence is good or bad, AI is driving change and transforming how we live and work. In today's world, where data is the new oil, AI sits on top of it and gets things done. But how does it get things done? By all of us feeding it data. Solutions like ChatGPT are growing at lightning speed on the back of their users and data feeds.

Fundamentally, LLMs are super-powerful text predictors. Given a prompt, they use machine learning to determine the most likely sequence of words (tokens) to follow it (a minimal code sketch at the end of this post illustrates the idea). Companies like OpenAI spend much time and energy tweaking their models to improve the output: an army of human labellers grades the model's responses, and the model learns and evolves.

Once you have experimented with tools like ChatGPT or Bing's AI-powered search, you've probably noticed that the responses are maybe 80% correct, yet they are delivered with absolute, unshakeable confidence. LLMs aren't able to validate their assumptions or test their hypotheses; they can only confirm whether what they're saying is true with our help. They're playing a probability game, estimating that this sequence of words is compatible with the words in the prompt. Sometimes, parts of a response are nonsensical; the OpenAI team refers to these as "hallucinations".

As the technology improves, we can expect some rough edges to be smoothed down, but fundamentally, there will always be some inaccuracy. Accuracy will improve, but it will never be "perfect", because we all have different standards for what "perfect" means.

Lately, there have been demanding calls for all AI labs to immediately pause, for at least six months, the training of AI systems more potent than GPT-4, because we are not ready with the policies and compliance, nor have we trained people to differentiate what is driven by AI from what is not. AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

AI training isn't easy, but it can be exciting. Using our imagination, we understand that, depending on the situation, a dog might even ignore the cat and choose something else to chase. Therefore, train AI with morals and values.
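
To make the "text predictor" framing concrete, here is a minimal sketch of next-token prediction. It assumes the small open-source GPT-2 model and the Hugging Face `transformers` library, neither of which this post names; it illustrates the general technique, not how ChatGPT itself is implemented.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# `transformers` library and the open-source GPT-2 model (an illustrative
# stand-in; ChatGPT's internals are not public).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Data is the new"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(input_ids).logits

# The distribution over the *next* token comes from the last position.
probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations with their probabilities.
top_probs, top_ids = torch.topk(probs, k=5)
for p, tok_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(tok_id)!r}: {p.item():.3f}")
```

Notice that nothing in this loop checks facts: the model only ranks candidate tokens by probability, which is why a fluent, confident answer can still be a hallucination.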