
Hafsa Jabeen

Posted on • Originally published at codesphere.com

Generative AI: How Self-Hosting Can Help Your Business

There is a lot of uproar about AI and its implications around the world. We do not believe that AI will be sentient anytime soon, and some of the hype around AI is just that: hype. However, companies need to understand the true impact of AI to stay ahead in a competitive market, and organizations should explore the possible use cases of generative AI and the ways it can help their businesses.

We are going to discuss what artificial intelligence is and how generative AI developed to its current state. We will also talk about the ways businesses can integrate self-hosted LLMs into their workflows and how Codesphere can help. So, let’s dive right in.

What is AI?

Artificial Intelligence is a field of computer science that has gained enormous popularity in the last two decades. AI tries to build machines capable of making human-like intelligent decisions. AI systems are designed to analyze, reason, learn, and adapt, which allows them to perform tasks such as problem-solving, language understanding, and decision-making.

An AI model is basically a program that can be trained on data and then make predictions or decisions based on it. The field of AI encompasses an array of technologies including machine learning, deep learning, natural language processing, and robotics. AI’s potential is vast: it offers solutions to complex problems, unlocks new horizons in automation and decision support, and continues to shape the future of technology.

What is Generative AI?

Generative AI is a sub-field of artificial intelligence where algorithms are used to generate plausible patterns for things like language or images. The general narrative about generative AI is that it enables machines to come up with original ideas, such as writing stories or generating images without being given specific examples. In reality, however, these machines are simply extremely efficient at recognizing patterns in the data they are trained on and recreating similar patterns. Generative AI is often used for building chatbots, translating languages, and creating content.

History of Generative AI

Much like every other field, generative AI has seen gradual and at times abrupt advances. If we trace back its history, it has been around since 1913, when Markov chains were first applied to text. A Markov chain is a statistical tool for generating new data, where the input defines what the next item in the sequence will look like.

However, the more useful generative AI era began after 2012, when we finally had computers powerful enough to train deep-learning models on large datasets. Deep learning models considerably improved many tasks, such as language translation, image-to-text, and art generation.

In 2014, a game-changing algorithm by Ian Goodfellow was released. The development of Generative Adversarial Networks (GANs) enabled deep learning models to self-evaluate using game theory, and GANs are often said to have given computers the ability to be creative while generating content.

Innovations in Generative AI

Shortly after GANs, another technique arrived: transformer networks. Transformers are based on the idea of attention; they assign an “attention score” to different parts of the input sequence, which immensely improved AI models’ ability to understand and generate content. Combined with the power of deep learning, this technique gave birth to the Large Language Models we see today.
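
To make “attention score” concrete, here is a minimal sketch of scaled dot-product attention, the mechanism at the heart of transformer networks. The shapes and random inputs are purely illustrative.

```python
# Minimal scaled dot-product attention sketch (illustrative shapes/values).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Attention scores: how strongly each token attends to every other token.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the value vectors

# Toy example: 3 tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (3, 4)
```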

What are LLMs?

Large Language Models (LLMs) are complex deep learning models trained on huge datasets. They have the ability to understand and use human language in a highly advanced way, and they can effectively perform tasks like text generation, language translation, chatbot development, and content summarization. These models are highly adaptable and efficient, which is why they are important in the development of advanced AI-driven solutions for various industries.

Large Language Model Revolution

Previously, there were many smaller language models, each specific to a different industry. These models needed months of work by AI experts and were extremely resource-intensive and time-consuming: the data required to train them had to be gathered, labeled, and prepared, and the training itself took a long time.

The innovation of large language models made this technology general-purpose and easier to use. Because LLMs do not serve one specific field, they can be applied to many use cases, and they do not require you to gather labeled datasets or build specialized engineering capabilities. This made them reliable and robust enough to be used almost anywhere.

However, LLMs needed extremely high computing power to train and were expensive. This meant big tech had a monopoly, and the only option for most organizations was to use the models through the provided APIs. Many organizations could not use them because of issues like lack of control over the models, data protection, and data sovereignty.

Small language models vs Large language models

Things took a turn after Meta released LLAMA-2, which made top-notch LLMs available that anyone could use without compromising data integrity. The open-source community has since been rolling out updated versions on a daily basis, and you can now find a ready-to-use, fine-tuned version for almost any niche or use case.

Suggested Reading: 7 Open Source LLMs you need to know about

How Can You Use LLMs?

Now you may ask how to use LLMs at an organizational or business level. Let’s talk about the different ways to do that.

Prompt Engineering

One way to work with LLMs is prompt engineering, which involves asking the LLM the right questions. Prompt engineering lets you test different models rapidly, so it is a fast and efficient way to get the results you want.

Nonetheless, there are drawbacks to consider. It sends your data out to a third-party provider such as OpenAI, which is not suitable for sensitive data. The other downside is that the prompt is capped by the model’s context window, on the order of a few thousand tokens (2,048 for earlier GPT-3 models), which is very limiting.
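
As an illustration, here is a minimal prompt-engineering sketch using the OpenAI Python client. The model name and prompts are placeholder assumptions, and an API key must be available in the environment.

```python
# Minimal prompt-engineering sketch with the OpenAI Python client.
# Assumes OPENAI_API_KEY is set; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

# The "engineering" lives in the prompt: role, constraints, output format.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a concise support assistant. Answer in at most three sentences."},
        {"role": "user",
         "content": "Summarize our refund policy for a customer."},
    ],
)
print(response.choices[0].message.content)
```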

RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation, known as RAG, is an AI framework that tries to improve the quality and accuracy of responses generated by LLMs. The framework allows you to integrate an external database from which your LLM can fetch information.

Here is how it works: when a query is made, RAG first searches the database for relevant information, and the retrieved information is then used to craft a coherent response to the query. This makes the models more factually accurate and relatively fast to implement.

However, you will still need to deploy your own LLM if you don’t want to share your data with OpenAI. Also, whether you use self-hosted or API-based LLMs, you will need MLOps or DevOps engineers.
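
Here is a minimal sketch of the RAG flow described above: retrieve the most relevant document for a query, then inject it into the prompt. TF-IDF stands in for a real embedding model, and `ask_llm` is a hypothetical placeholder for whichever LLM call you use.

```python
# Minimal RAG sketch: retrieve relevant context, then ground the prompt in it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund window is 30 days from the date of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
    "Enterprise plans include a dedicated account manager.",
]

# TF-IDF stands in for a real embedding model / vector database.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str) -> str:
    # Step 1: search the "database" for the most relevant document.
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return documents[scores.argmax()]

def answer(query: str) -> str:
    # Step 2: craft a response grounded in the retrieved context.
    context = retrieve(query)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return ask_llm(prompt)  # hypothetical placeholder for any LLM client call

print(retrieve("How long do I have to return a product?"))
```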

Fine-tuning

This approach involves improving or customizing an existing LLM with your own unique data. Multiple experiments have shown that smaller LLMs trained on specific data can perform as well as or better than larger LLMs.

The benefit of this approach is that you get complete data autonomy. It also gives you complete control over your LLM, since it won’t be impacted by any updates or changes of terms from an API provider. Moreover, at a large scale, this could be the cheapest possible solution for you. The downside is that you need your own data center and DevOps experts to implement these models.
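
For illustration, here is a minimal fine-tuning sketch using the Hugging Face Transformers Trainer. The model, dataset, and hyperparameters are placeholders, not a production recipe; a real run needs GPUs and your own prepared corpus.

```python
# Minimal causal-LM fine-tuning sketch (placeholder model/data/settings).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "gpt2"  # stand-in for a small open-source LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Your own domain-specific text goes here; wikitext is just a placeholder.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.filter(lambda row: row["text"].strip())

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the fine-tuned model ends up in ./finetuned
```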

Comparison of API-Based vs Self-Hosted LLMs

If you look at the benefits and drawbacks of API-based vs self-hosted LLMs, these are the points that will stand out.

API-Based LLMs Highlights

  • You can start using them immediately and get good performance from the get-go.
  • The initial pricing for these models is low, and you don’t need MLOps engineers for the setup.
  • You can only work with very limited amounts of data, typically a few thousand tokens, even if data is injected via RAG, and your company does not own the IP.
  • You have to send your sensitive data over the internet to large (possibly foreign) companies.
  • You do not have control over the model and can be affected by changing policies or terms of the service provider.

Self-Hosted LLMs Highlights

  • These models are customizable and fast.
  • You do not risk any data leaving the country where your data centers are located.
  • You can work with unlimited amounts of data via fine-tuning.
  • Self-hosted LLMs require expensive hardware, typically a few A100 GPUs that cost ~15k per month.
  • These models also need constant infrastructure maintenance, which can cost you around 20% of your project cost.
  • You need expensive DevOps engineers for the setup, hosting, versioning, and the application layer around the model. In addition, you need MLOps engineers to evaluate, fine-tune, and improve the model.
  • After all these financial and workforce investments, it can take up to 6 months until the infrastructure project is set up.

How Codesphere Helps Businesses Self-Host LLMs

So, the two major issues when it comes to self-hosted LLMs are cost and complexity. Here is how Codesphere can help you navigate through these challenges.

Cost

Servers where you run LLMs usually have a fixed amount of compute and memory. Running inference for AI models (serving requests, i.e., receiving queries and returning output) takes a lot of computing resources per request, considerably more than a normal website does.

With LLMs, a single request from a user takes multiple seconds and (almost) fully occupies one server. If we want to serve multiple requests in parallel, we need multiple servers or server cores (the number of virtual CPUs in a server).

The traffic and the number of requests your model gets fluctuate and are hard to predict accurately. What usually happens is that organizations estimate their peak traffic and pay for enough capacity to accommodate the peak hours, which for obvious reasons increases the cost.
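
A back-of-the-envelope sketch of that reasoning; all numbers below are hypothetical, purely to illustrate why paying for peak capacity around the clock is expensive.

```python
# Hypothetical numbers: why always-on peak provisioning costs more
# than paying only for the hours servers actually serve traffic.
hours_per_month = 730
cost_per_server_hour = 2.0   # hypothetical price of one inference server
peak_servers = 10            # fleet sized for peak traffic
avg_utilization = 0.25       # servers are actually busy 25% of the time

always_on = peak_servers * hours_per_month * cost_per_server_hour
on_demand = always_on * avg_utilization  # "off when unused" billing

print(f"always-on: ${always_on:,.0f}/mo vs on-demand: ${on_demand:,.0f}/mo")
# With these assumptions, scale-to-zero would be 75% cheaper.
```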

Codesphere offers “off when unused” server plans, which automatically turn off when you are not getting user requests. These servers have super fast cold starts: currently 1-5 seconds, with 10-20 ms planned for the future. For example, if you have a chatbot for internal use in your organization, it will only cost you computing resources during office hours.

Our educated estimate suggests it is over 90% cheaper to run low-traffic models this way. Moreover, scaling your models can be 35% cheaper because you do not have to reserve the maximum computing capacity to accommodate peak traffic hours.


Complexity

Setting up an AI model on Codesphere is easy: it takes under two minutes to deploy an open-source LLM like LLAMA2. You do not need DevOps or machine learning engineers to set it up or maintain it, so there are no maintenance costs.
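
As a rough picture of what gets deployed, here is a minimal sketch of an LLM inference endpoint. FastAPI and the placeholder model are assumptions for illustration, not Codesphere-specific configuration.

```python
# Minimal inference endpoint sketch (FastAPI + Transformers; illustrative).
from fastapi import FastAPI
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # stand-in for LLAMA2

@app.post("/generate")
def generate(prompt: str):
    out = generator(prompt, max_new_tokens=100)
    return {"completion": out[0]["generated_text"]}

# Run locally with: uvicorn app:app --port 8000
```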

If you want to fine-tune your model, anyone can do it, for example with the help of services like ChatGPT. Moreover, setting up the whole application follows Google’s software development standards: it is pre-set and can be used without any configuration.

Suggested Reading: Software Development: Moving Away from Sequential Workflows

Conclusion

Generative AI has progressed exponentially in recent years, in terms of both performance and availability. Open-source LLMs are on par with API-based LLMs and are much more beneficial in most cases. Hosting your own LLMs is no longer complex or expensive, which is a great development for businesses and organizations that want to use the technology without giving up data autonomy and control. Codesphere effectively resolves the cost and complexity issues that organizations face when they opt for self-hosting.
