AI is having a moment.
AI, short for artificial intelligence, has been a buzzword in the tech world for many years.
And while its use to this point has mostly been behind the scenes, everyday folks are now able to interact with AI and see how quickly it will transform our world.
Really, how it’s transforming our world right now.
Not later. Now!
So, let’s first dive into what AI is, then look at three ways it’s changing our world: how we think about writing, how we think about art, and how it works hand in hand with machine learning.
OK, by now we’ve all heard the term “AI” thrown around. And like other technologies we’ve seen come and go, it’s touted as a technological revolution that will change everything.
But first, let’s define AI.
AI defines AI
“Artificial intelligence (AI) is a branch of computer science that aims to create intelligent machines that can think and act like humans. It is an interdisciplinary field of science that combines computer science, mathematics, psychology, linguistics, engineering, and other sciences to create intelligent systems that can analyze large amounts of data, recognize patterns and relationships, and make decisions based on the information they have. AI has the potential to revolutionize many industries and is already being used to help solve complex problems in fields like healthcare, finance, and transportation.”
This was the answer given by AI itself, using ChatGPT. More on ChatGPT in a bit.
First and foremost, this is a fantastic answer. Second, it was written to “sound human,” and it does.
How does it hold up to a definition written by a human, though?
Human definition of AI
This, from Ed Burns of Tech Target:
“Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.”
And he continues:
“In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.”
The two definitions are incredibly similar; one reader may prefer the human’s writing, while another may prefer the AI’s.
And if you told someone the first definition wasn’t written by a human, they’d likely be shocked, because it’s so well done.
3 Ways AI is already changing our world
OK, so we get the concept of AI.
Artificial intelligence is what it sounds like: a computer system that ingests a ton of data, then processes and organizes that data to make predictions about the future, including emulating human language. And because it’s a computer system, it can hold, process and organize many times more information than a human brain can.
Therefore, it’s possible to envision a world in which AI-powered robots not only help us solve incredibly complicated problems, but also replace jobs in which critical, human thinking has always been needed.
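To make that ingest-then-predict pattern concrete, here’s a toy sketch in Python. Everything in it, the review snippets, the labels, and the word-counting “model,” is invented for illustration; real AI systems use far more sophisticated statistical models trained on vastly more data, but the basic loop is the same: ingest labeled data, find patterns, predict.

```python
from collections import defaultdict

# 1. Ingest labeled training data: (text, label) pairs.
#    These examples are made up for illustration.
training_data = [
    ("great product, love it", "positive"),
    ("works perfectly, very happy", "positive"),
    ("terrible, broke after a day", "negative"),
    ("awful quality, do not buy", "negative"),
]

# 2. Analyze the data for patterns: count which words
#    appear under which label.
word_counts = defaultdict(lambda: defaultdict(int))
for text, label in training_data:
    for word in text.replace(",", "").split():
        word_counts[word][label] += 1

# 3. Use those patterns to predict the label of unseen text.
def predict(text):
    scores = defaultdict(int)
    for word in text.replace(",", "").split():
        for label, count in word_counts[word].items():
            scores[label] += count
    # Fall back to "unknown" if no word was seen in training.
    return max(scores, key=scores.get) if scores else "unknown"

print(predict("love this, great quality"))   # -> positive
print(predict("broke immediately, awful"))   # -> negative
```

The “model” here is just word counts, but the shape of the process, learning patterns from labeled examples rather than being explicitly programmed with rules, is exactly what the definitions above describe.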
ChatGPT is taking the world by storm.
The incredible AI chatbot, created by OpenAI, not only answers questions but does so in a human-like way.
That means it’s very difficult, or even impossible, to know whether what you’re reading was created by a human or by the chatbot.
From the OpenAI Playground:
“ChatGPT is a natural language processing (NLP) chatbot that utilizes the open-source GPT-3 language model. It is designed to generate human-like conversations and can be used for automated customer service, virtual assistants, and other chatbot applications.”
Simply put, the technology is tremendous. This isn’t some half-finished design still in beta; it’s a polished, impressive chatbot that will only continue to improve. In fact, GPT-4 is still in its testing phase, but it should be released soon.
Impacts of ChatGPT
So, on one hand, ChatGPT is incredible.
It can answer the kinds of complicated questions that trip up today’s customer-service chatbots.
We’ve all been there: we have a complicated question about our bill, and the chatbot doesn’t know how to handle it. We click through prompt after prompt before it finally offers, “Speak to a representative,” and only then does the human on the other end understand what we’re asking and help us through the issue.
That could extend to kiosks in airports for international travelers, to an app that helps people get around a foreign place, and to many other applications.
But the downside is just how good it is. If a student were asked to write an essay on a topic, there’s no way a teacher could tell whether the student wrote it or AI did.
For instance, I asked ChatGPT to write a blog on AI. It gave a superb, detailed, amazing answer.
Wonderfully, ChatGPT even acknowledges the ethical implications of using AI here.
“Many people are concerned about the potential for AI to automate jobs, and there are also concerns about privacy and bias in AI systems,” ChatGPT itself wrote.
Simply, if ChatGPT can write this well, what does it mean for writers? Content creators? Bloggers?
If AI is so accomplished at writing, and it can draw on a vast store of human knowledge: Why would humans become writers? How could a human be a better writer than AI?
And why would a child ever write their own essay when they can use this?
And this extends not just to creative writing but to writing code, too, which ChatGPT can also do.
Here’s Replit using ChatGPT to write code and build a website in real time.
Will AI chatbots replace not only writers, but software developers? What are the ethics of employing a free program instead of a human, putting that human out of a job?
These are some of the questions generated by this sensational new technology.
Just like with writing, AI may end up replacing artists, too.
We’re talking painters, graphic artists and much more.
Basically, the same way AI ingests words and uses machine learning to speak in a “human-like way,” it can ingest pictures instead: thousands, even millions of pictures.
But, if the artists didn’t give consent to use their artwork, is it ethical to use their style?
As NBC News reported on Dec. 6, 2022:
“Days after illustrator Kim Jung Gi died in October, a former game developer created an AI model that generates images in the artist’s unique ink and brush style. The creator said the model was an homage to Kim’s work, but it received immediate backlash from other artists. Ortiz, who was friends with Kim, said that the artist’s ‘whole thing was teaching people how to draw,’ and to feed his life’s work into an AI model was ‘really disrespectful.’”
And from that same piece comes a tweet, calling out Lensa.ai, the application being used by many to turn their personal photos into AI fantasy art.
As you can see, the original artists’ signatures are still visible in the photos. In other words, the artist’s style was ripped off wholesale, and the person’s portrait was generated in that style by the AI.
Prisma, the company that owns Lensa, responded on Twitter, saying AI-generated images “can’t be described as exact replicas of any particular artwork,” again per the NBC piece referenced above.
So, just like with writing, visual artists are already questioning the ethics of AI using their artwork.
Besides Lensa.ai, there’s also Midjourney, which lets anyone give the AI a prompt to create images like this:
Our mascot/logo at Fathym is Thinky the Octopus, and this is how we like to think he’d look in the “Pixar style.” One of our developers gave the prompt, “Octopus on a computer in Pixar style.”
So, we’ve talked extensively about AI, but what about machine learning?
ChatGPT defines machine learning as:
“Machine learning is a type of artificial intelligence that enables computers to learn from data, identify patterns, and make decisions with minimal human intervention. It is a subset of artificial intelligence that uses algorithms to analyze data and make predictions. It enables computers to learn from experience without being explicitly programmed. Machine learning can be used for a variety of tasks, such as facial recognition, natural language processing, and predictive analytics.”
For a human’s take, here’s Jay Selig at expert.ai:
“Machine learning is an application of AI that enables systems to learn and improve from experience without being explicitly programmed. Machine learning focuses on developing computer programs that can access data and use it to learn for themselves.”
So, think of AI as the intelligence of a computer system: the smarts, the data.
Machine learning is how the system uses that data to predict future events or sound like a human. Simply put, it learns from the data.
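As a miniature illustration of “learning from data without being explicitly programmed,” here’s a toy Python sketch that fits a straight line to a handful of made-up readings and then predicts an unseen value. The numbers and the scenario are invented, and real machine-learning models are far more complex, but the shape of the process is the same: fit patterns from data, then predict.

```python
# Invented example: hours of sunshine vs. road-surface temperature (C).
xs = [1, 2, 3, 4, 5]
ys = [12, 14, 16, 18, 20]

# "Learning" step: fit a line y = a*x + b by ordinary least squares.
# Nobody told the program the rule; it recovers it from the data.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# "Prediction" step: estimate the temperature for 6 hours of sunshine,
# a value that never appeared in the training data.
print(a * 6 + b)  # -> 22.0
```

The program was never given the formula behind the numbers; it derived the relationship from the data alone, which is the essence of what the two definitions above describe.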
So, how are AI and machine learning being used together?
Well, for starters, the aforementioned ChatGPT uses AI and ML.
From TechCrunch:
“ChatGPT works by using machine learning algorithms to analyze and understand the meaning of text input, and then generating a response based on that input.”
It should be noted that ChatGPT explained that itself.
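ChatGPT’s neural model is vastly more sophisticated, but the basic idea, predicting likely next words from patterns in text the model has already seen, can be sketched in miniature with a toy bigram model. The “corpus” here is invented for illustration:

```python
from collections import defaultdict, Counter

# A tiny made-up training corpus. Real language models ingest
# billions of words; the mechanism sketched here is only the gist.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the pattern: which words tend to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(word, steps):
    """Generate text by repeatedly picking the most likely next word."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break  # this word was never followed by anything in training
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 3))
```

A real model predicts over a huge vocabulary with deep context rather than a single previous word, but the analyze-patterns-then-generate loop TechCrunch describes is recognizably the same.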
All of which raises the question: Can AI be biased? Can AI use machine learning to keep certain information from us if it wants to?
Who else is using AI-ML together?
Drone Express is, per DroneDJ:
“The last-mile delivery provider says it’s on track to become one of the first companies in the US to attain a Part 135 FAA certification for autonomous drone delivery. But doing so wouldn’t have been possible without Microsoft Azure hosting Drone Express’s AI solutions and training its machine learning models.
Drone Express trains its machine learning models by feeding them photos of obstacles and scenarios that the drones could encounter and teaching them to identify safe delivery areas."
Habistack Weather Forecasting API
Fathym’s Habistack also uses a combination of AI and ML for weather forecasting, specifically for road conditions.
Habistack combines the world's best weather forecasts with statistics-based machine-learning techniques to tackle the largest datasets, including road weather. The API gives developers comprehensive forecasting capabilities for freely chosen locations and routes across the globe, delivering a unique suite of highly specialized forecast variables derived through machine learning models.
All of that data is ingested into Microsoft Azure, where machine learning models use it to create forecasts: predictions based on the data.
Conclusion: questions about AI and ethics
AI isn’t just something that will affect our everyday lives in the future. It’s something that’s impacting our lives today.
That’s both amazing and scary at the same time.
There are many more questions than answers at this time.
Some of them are:
Will AI chatbots phase out writers and other creatives?
Will AI chatbots be used to write code and therefore phase out developers, too?
What are the ethics of putting an artist’s work into AI and then creating art based on their works? Especially when the artists do not give permission?
Can AI chatbots be biased? And how would we know?
Should we trust every answer a chatbot gives us? Why?
And finally, are we ready for AI? Either way, it’s here to stay and will continue to change our lives in ways we haven’t even imagined yet. Both good and bad, unfortunately.