Chatbot technology has undergone a dramatic transformation, evolving from basic rule-based systems to sophisticated AI-driven platforms. This progression is largely attributed to advancements in machine learning and natural language processing (NLP).
Initially, chatbots were limited to responding to simple queries with predefined rules. The emergence of Large Language Models (LLMs) like OpenAI's GPT series, notably ChatGPT, revolutionized this field. These models, which process and generate language from extensive datasets, have enabled chatbots to engage in more complex, human-like conversations. Their ability to accurately and contextually respond to a wide range of queries has significantly broadened the utility of chatbots in diverse sectors, including customer service, healthcare, and entertainment, offering nuanced and intelligent interactions.
FastAPI is renowned for its modern architecture, offering high-speed performance and ease of use in API creation. FastUI provides a streamlined approach to front-end development, enabling the creation of user-friendly interfaces with minimal coding. Additionally, MistralAI has emerged as an innovative player in AI, offering both open-source models and customized cloud-based solutions.
In this tutorial, you will build a chatbot powered by MistralAI, having FastAPI as the backend and FastUI as the front end, and all in one single code base.
You can deploy and preview the application from this guide by clicking the Deploy to Koyeb button below. Check out the application repository to see the project files and follow along with the guide.
Note: Be sure to set the `MISTRAL_API_KEY` environment variable to the API key you generate for Mistral AI after subscribing to the platform and adding billing information.
Requirements
Before diving into the construction of the chatbot using FastAPI, FastUI, and MistralAI, it's crucial to understand and gather the requirements. This section outlines the key components, software, and skills needed to embark on this journey.
- Knowledge of Python: A solid understanding of Python programming is fundamental, as it is the primary language used in FastAPI and FastUI.
- Familiarity with Basic AI Interaction: A basic grasp of AI principles, especially in chatbot interaction, will be beneficial. This knowledge will help in understanding how MistralAI's LLMs function and interact within the chatbot framework.
- Access to MistralAI: Having access to MistralAI is crucial. This involves:
  - Creating a MistralAI account
  - Subscribing to the platform
  - Adding payment details to pay for usage
  - Creating an API key to use with your application
- Koyeb Account: A Koyeb account will be required for deploying and managing the chatbot application in a cloud environment, taking advantage of Koyeb’s seamless integration and deployment capabilities.
- GitHub Account: A GitHub account is necessary for version control and management of the chatbot's codebase and to facilitate deployment to Koyeb.
Understanding the components
Overview of FastAPI
FastAPI is a contemporary, high-performance web framework for creating APIs using Python 3.6 and above. It leverages standard Python type hints to ensure a seamless development experience.
FastAPI's key features include exceptional speed, courtesy of its foundation on Starlette and Pydantic, and the automatic generation of API documentation via OpenAPI and JSON Schema. Additionally, it offers dependency injection and supports asynchronous programming. Renowned for its user-friendly design and beginner-friendly learning curve, FastAPI has established itself as an optimal choice for building scalable, maintainable APIs.
Introduction to FastUI
FastUI is an open-source framework for building user interfaces in a type-safe and declarative manner, using Python. It is inspired by popular frontend libraries like React and Vue.js, and it aims to provide a similar developer experience, but with the added benefits of Python's simplicity and readability.
FastUI allows developers to define responsive, reusable UI components entirely in Python. Each component is a Pydantic model, so the interface you declare is validated on the server and serialized to JSON, which a prebuilt React frontend then renders in the browser. This keeps the backend as the single source of truth for the UI while still producing a modern, dynamic web page.
FastUI's focus on type safety and a declarative programming style helps reduce bugs and improve code maintainability. Overall, FastUI is a powerful and versatile framework that can help developers create engaging and user-friendly web applications with minimal effort.
Introduction to the MistralAI cloud platform
Mistral AI is a distinguished entity in the artificial intelligence sector, recognized for its sophisticated language models. The company's premier product, Mixtral 8x7B, is a cutting-edge language model that exhibits remarkable proficiency in numerous practical applications. This model is celebrated for its comprehensive understanding of language and its capacity to produce text akin to that of a human, making it an excellent choice for diverse applications including content creation and chatbots.
Mistral AI has recently achieved a significant milestone by introducing its inaugural AI endpoints, currently available in early access. These endpoints present various performance and cost trade-offs, primarily concentrating on generative models and embedding models.
Steps
To build this chatbot, you will follow these steps:
- Set up the environment: Here you will set up your project folder, install any dependencies, and prepare environment variables. You should also get the MistralAI API key in this step.
- Set up FastAPI and FastUI: In this section, you will start building the application by creating the FastAPI and FastUI base in terms of the necessary objects.
- Design the chatbot interface with FastUI: Here you will define the endpoints and create the UI using FastUI components.
- Integrate MistralAI with FastAPI: For this step, you will connect the SSE (Server-Sent Events) logic by calling the MistralAI API to provide the chat response.
- Integrate FastUI with FastAPI: Finally, you will create the "secret sauce" that allows FastUI to provide the front end while FastAPI provides the back end.
Set up the environment
First, let's start by creating a new project. To keep your Python dependencies organized you should create a virtual environment.
You can create a local folder on your computer with:
# Create and move to the new folder
mkdir ChatbotFastUI
cd ChatbotFastUI
# Create a virtual environment
python -m venv venv
# Activate the virtual environment (Windows)
.\venv\Scripts\activate.bat
# Activate the virtual environment (Linux)
source venv/bin/activate
Next, you can install the required dependencies:
pip install fastapi uvicorn fastui python-multipart mistralai python-decouple
Besides the expected libraries for FastAPI, FastUI, and MistralAI, this additionally installs `python-multipart`, which is necessary to submit forms with FastUI, and `python-decouple` for loading environment variables.
Don't forget to save your dependencies to the `requirements.txt` file:
pip freeze > requirements.txt
The next step is to create a `.env` file to store the MistralAI API key:
# .env
MISTRAL_API_KEY=<YOUR MISTRALAI API KEY>
As mentioned above, you can sign up for MistralAI API early access on the Mistral AI platform. After signing up, subscribe to the platform, enter billing information, and then generate a new API key.
Set up FastAPI and FastUI
Next, you can start creating the main file that will contain the logic for FastAPI and FastUI, as well as the integration with MistralAI.
This project will consist of a single file, so go ahead and start creating a file called `main.py`:
# main.py
import asyncio
from typing import AsyncIterable, Annotated
from decouple import config
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from fastui import prebuilt_html, FastUI, AnyComponent
from fastui import components as c
from fastui.components.display import DisplayLookup, DisplayMode
from fastui.events import PageEvent, GoToEvent
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
from pydantic import BaseModel, Field
from starlette.responses import StreamingResponse
# Create the app object
app = FastAPI()
# Message history
app.message_history = []
Let's describe this initial code that sets up FastAPI:
- First, it performs the necessary imports.
- It creates the FastAPI `app` object.
- To store the chatbot message history, we attach a `message_history` list to the `app` object, essentially creating a global variable that will then be used in the different endpoints.
To start setting up the FastUI logic, you can add the following code to the bottom of the file:
# main.py
. . .
# Message history model
class MessageHistoryModel(BaseModel):
message: str = Field(title='Message')
# Chat form
class ChatForm(BaseModel):
chat: str = Field(title=' ', max_length=1000)
This code prepares:
- A Pydantic model that will store the message history, with a single field called `message`.
- Another Pydantic model that will receive the input from the user in a form, with a single field called `chat`.
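To see what these models buy you, here is a small standalone sketch (independent of the app) showing how Pydantic enforces the `max_length` constraint on the form model before your endpoint code ever runs:

```python
from pydantic import BaseModel, Field, ValidationError

# Same shape as the ChatForm model defined above
class ChatForm(BaseModel):
    chat: str = Field(title=' ', max_length=1000)

# Valid input passes through unchanged
form = ChatForm(chat='Hello, chatbot!')

# Input over 1000 characters is rejected with a ValidationError
try:
    ChatForm(chat='x' * 1001)
except ValidationError as exc:
    print('rejected:', exc.errors()[0]['type'])
```

This is the same validation FastAPI applies automatically when the form is submitted.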
Design the chatbot interface with FastUI
It is now time to start defining the UI with FastUI. FastUI uses a series of components that are combined to build a page layout. As you may have noticed in the imports, those components are made available in the variable `c`.
You will now create the root endpoint, which will be responsible for serving the page layout:
# main.py
. . .
# Root endpoint
@app.get('/api/', response_model=FastUI, response_model_exclude_none=True)
def api_index(chat: str | None = None, reset: bool = False) -> list[AnyComponent]:
if reset:
app.message_history = []
return [
c.PageTitle(text='FastUI Chatbot'),
c.Page(
components=[
# Header
c.Heading(text='FastUI Chatbot'),
c.Paragraph(text='This is a simple chatbot built with FastUI and MistralAI.'),
# Chat history
c.Table(
data=app.message_history,
data_model=MessageHistoryModel,
columns=[DisplayLookup(field='message', mode=DisplayMode.markdown, table_width_percent=100)],
no_data_message='No messages yet.',
),
# Chat form
c.ModelForm(model=ChatForm, submit_url=".", method='GOTO'),
# Reset chat
c.Link(
components=[c.Text(text='Reset Chat')],
on_click=GoToEvent(url='/?reset=true'),
),
# Chatbot response
c.Div(
components=[
c.ServerLoad(
path=f"/sse/{chat}",
sse=True,
load_trigger=PageEvent(name='load'),
components=[],
)
],
class_name='my-2 p-2 border rounded'),
],
),
# Footer
c.Footer(
extra_text='Made with FastUI',
links=[]
)
]
If you are familiar with FastAPI, you will notice some differences from the normal endpoint structure that are required for using FastUI inside FastAPI. Let's take a look at this code in more detail:
- `@app.get('/api/', response_model=FastUI, response_model_exclude_none=True)`: This decorator defines the function below it as an API endpoint. In this case, it explicitly sets `response_model=FastUI` to indicate that this endpoint will generate a UI when loaded.
- `def api_index(chat: str | None = None, reset: bool = False) -> list[AnyComponent]:`: The function receives optional parameters for the chat (message) and a reset flag that is used to clear the chat history. It returns a list of FastUI components.
- `if reset:`: This checks the flag and clears the chat message history when it is set to true.
- The UI is designed inside the `return` statement. It consists of a `c.PageTitle` to give the page a title in the browser tab and a `c.Page` definition that constructs the equivalent of the `body` tag in HTML. This contains the content of the web page.
- Inside the page there are a series of components that define the layout:
  - `c.Heading` defines a header, similar to an `<h1>` tag.
  - `c.Paragraph` defines a normal paragraph.
  - `c.Table` is used to show the list of messages from the chat history. It receives the `data` and specifies the `data_model` to use as a base to load the data. The `columns` parameter determines what to display. FastUI uses a `DisplayLookup` component to show values inside a table.
  - `c.ModelForm` is used to show a form on the page. In this case, this is the `ChatForm`. The `submit_url` points to the same page (or endpoint in this case). The `method` specifies that the values are passed but the page context is kept.
  - `c.Link` displays a link that loads this same endpoint with the `reset` flag set to true when clicked.
  - `c.Div` allows the placement of components inside it (like an HTML div). Here it is used to define a border that contains the MistralAI response.
  - `c.ServerLoad` is the main workhorse of the layout since it is responsible for calling the endpoint that generates the response with Server-Sent Events (SSE), giving that familiar chat-like appearing text. The `load_trigger` is used to control the initial load and `sse` is set to true for SSE. The `path` is set to another endpoint that you will create later on, and it passes the input from the user to that endpoint.
- Finally, `c.Footer` defines a standard footer with a message and a possible list of links.
In terms of the FastUI interface design, this is all that is needed. As you can see, the components are easy to use, and they serve a very specific purpose, allowing you to design UIs with simplicity.
It is also worth mentioning that FastUI is in a very early stage and, although functional, it has some bugs and will continue to evolve. Don’t expect to build a complete replacement of a UI defined in a more standard frontend library yet. That being said, even in this initial stage, it requires little effort to make a functional UI.
Integrate MistralAI with FastAPI
With the UI now defined and functional, it is time to work on the more "backend" logic of handling the SSE and integrating it with MistralAI.
To handle the SSE connection, create the following endpoint below the previous code:
# main.py
. . .
# SSE endpoint
@app.get('/api/sse/{prompt}')
async def sse_ai_response(prompt: str) -> StreamingResponse:
# Check if prompt is empty
if prompt is None or prompt == '' or prompt == 'None':
return StreamingResponse(empty_response(), media_type='text/event-stream')
return StreamingResponse(ai_response_generator(prompt), media_type='text/event-stream')
The construction of this endpoint might look familiar if you have experience with SSE handling with FastAPI. Let's look in more detail:
- `@app.get('/api/sse/{prompt}')`: This decorator defines a standard FastAPI endpoint. The endpoint receives the user message in the `prompt` path parameter.
- `async def sse_ai_response(prompt: str) -> StreamingResponse:`: Again, a standard FastAPI definition for a streaming endpoint, returning a `StreamingResponse`.
- `if prompt is None or prompt == '' or prompt == 'None':`: This is used when the chat history reset flag is set in order to clear the chatbot response. It calls the function `empty_response()` that handles the logic. This function will be defined next.
- Otherwise, it returns a streaming response from the `ai_response_generator` function that calls the MistralAI API. We will define this function below.
With the SSE endpoint in place, you can now work on the helper functions to show or clear the chatbot responses. You can start first with the logic for clearing the response:
# main.py
. . .
# Empty response generator
async def empty_response() -> AsyncIterable[str]:
# Send the message
m = FastUI(root=[c.Markdown(text='')])
msg = f'data: {m.model_dump_json(by_alias=True, exclude_none=True)}\n\n'
yield msg
# Avoid the browser reconnecting
while True:
yield msg
await asyncio.sleep(10)
Let’s examine the function in detail:
- `async def empty_response() -> AsyncIterable[str]`: The function returns an asynchronous iterable of strings.
- `m = FastUI(root=[c.Markdown(text='')])`: This creates a FastUI root component that contains a `c.Markdown` component, allowing us to format the chatbot response with Markdown.
- `msg = f'data: {m.model_dump_json(by_alias=True, exclude_none=True)}\n\n'`: Builds the SSE frame containing the JSON representation of the FastUI component.
- `yield msg`: Sends the message in the stream.
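The `data: ...\n\n` framing is the Server-Sent Events wire format: each event is a `data:` line terminated by a blank line. Here is a minimal, framework-free sketch (with a hypothetical payload) of how such a frame is built and then parsed on the receiving side:

```python
import json

# Build an SSE frame the same way the endpoint does (hypothetical payload)
payload = json.dumps([{"type": "Markdown", "text": "**Chatbot:** Hello!"}])
frame = f"data: {payload}\n\n"

# A browser's EventSource strips the "data: " prefix and the trailing
# blank line to recover the JSON payload
received = frame[len("data: "):].rstrip("\n")
assert json.loads(received)[0]["text"] == "**Chatbot:** Hello!"
```

In the real application, the payload is the serialized FastUI component tree, which the frontend re-renders on every event.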
Next, you will work on the helper function to show the chatbot response:
# main.py
. . .
# MistralAI response generator
async def ai_response_generator(prompt: str) -> AsyncIterable[str]:
# Mistral client
mistral_client = MistralClient(api_key=config('MISTRAL_API_KEY'))
system_message = "You are a helpful chatbot. You will help people with answers to their questions."
# Output variables
output = f"**User:** {prompt}\n\n"
msg = ''
# Prompt template for message history
prompt_template = "Previous messages:\n"
for message_history in app.message_history:
prompt_template += message_history.message + "\n"
prompt_template += f"Human: {prompt}"
# Mistral chat messages
mistral_messages = [
ChatMessage(role="system", content=system_message),
ChatMessage(role="user", content=prompt_template)
]
# Stream the chat
output += f"**Chatbot:** "
for chunk in mistral_client.chat_stream(model="mistral-small", messages=mistral_messages):
if token := chunk.choices[0].delta.content or "":
# Add the token to the output
output += token
# Send the message
m = FastUI(root=[c.Markdown(text=output)])
msg = f'data: {m.model_dump_json(by_alias=True, exclude_none=True)}\n\n'
yield msg
# Append the message to the history
message = MessageHistoryModel(message=output)
app.message_history.append(message)
# Avoid the browser reconnecting
while True:
yield msg
await asyncio.sleep(10)
Let's analyze the main parts of the above code in more detail:
- `mistral_client = MistralClient(api_key=config('MISTRAL_API_KEY'))`: Creates the integration with the MistralAI endpoints. The API key is retrieved from the `MISTRAL_API_KEY` environment variable.
- A `system_message` and a `prompt_template` are created to provide context to the chatbot. The system message determines the type of chatbot, and the prompt template supplies the chat message history.
- The `mistral_messages` list contains the prepared variables to be sent to the MistralAI API.
- `for chunk in mistral_client.chat_stream(model="mistral-small", messages=mistral_messages)`: Calls the MistralAI chat stream with the AI model defined as `mistral-small` and iterates through the stream it receives.
- Each partial message is sent with the same logic that was previously explained for the UI updates.
- Finally, the chat message (both input and output) is added to the chat message history with `message = MessageHistoryModel(message=output)` and `app.message_history.append(message)`.
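To make the history handling concrete, here is a standalone sketch (with a hypothetical one-turn history) of how the loop above assembles the prompt that is sent to the model:

```python
# Stand-in for MessageHistoryModel entries (hypothetical history)
class Msg:
    def __init__(self, message):
        self.message = message

message_history = [Msg("**User:** Hi\n\n**Chatbot:** Hello! How can I help?")]
prompt = "What is FastUI?"

# Same assembly logic as in ai_response_generator
prompt_template = "Previous messages:\n"
for entry in message_history:
    prompt_template += entry.message + "\n"
prompt_template += f"Human: {prompt}"

print(prompt_template)
```

Note that the stored messages already contain both sides of each exchange, so the model receives the full conversation context on every turn.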
Note that MistralAI offers three types of models with different price/performance tradeoffs. You can find more information in MistralAI's endpoint documentation.
Integrate FastUI with FastAPI
At this point, you have created the endpoints with FastAPI and FastUI. However, if you were to run the application at the moment, you would only be able to view the JSON that defines the components.
To fully integrate the FastAPI backend with the FastUI front end, we need to tell FastAPI, which returns the components that render the page, that we don't want "normal" API output.
This is achieved with the FastUI secret sauce:
# main.py
. . .
# Pre-built HTML
@app.get('/{path:path}')
async def html_landing() -> HTMLResponse:
"""Simple HTML page which serves the React app, comes last as it matches all paths."""
return HTMLResponse(prebuilt_html(title='FastUI Demo'))
To understand how it works, let's analyze the code:
- `@app.get('/{path:path}')`: This decorator matches all paths and returns the `prebuilt_html` HTML page from FastUI. This page is responsible for calling the other endpoints to retrieve the components to show on the page. The endpoint returns an `HTMLResponse`.
Finally, you can run the application with the normal command:
uvicorn main:app --reload
You should see an application similar to this:
<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/YHA5w1_f5V8?si=3gIgDUjVmSW9Syye"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  allowfullscreen
></iframe>
Deploy to Koyeb
Now that you have the application running locally you can also deploy it on Koyeb and make it available on the Internet.
Create a repository on your GitHub account, for instance, called `ChatbotFastUI`.
You can download a standard `.gitignore` file for Python from GitHub to exclude certain folders and files from being pushed to the repository:
curl -L https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore -o .gitignore
Run the following commands in your terminal to commit and push your code to the repository:
echo "# ChatbotFastUI" >> README.md
git init
git add :/
git commit -m "first commit"
git remote add origin [Your GitHub repository URL]
git branch -M main
git push -u origin main
You should now have all your local code in your remote repository. Now it is time to deploy the application.
Within the Koyeb control panel, while on the Overview tab, initiate the app creation and deployment process by clicking Create Web Service. On the App deployment page:
- Select GitHub as your deployment method.
- Select your code's GitHub repository from the drop-down menu. Alternatively, you can enter our public FastUI chatbot example repository, https://github.com/koyeb/example-fastui-chatbot, into the Public GitHub repository field at the bottom of the page.
- Select Buildpack as your builder option.
- Expand the Build and deployment settings section. Inside, click the override toggle associated with the Run command option and enter `uvicorn main:app --host 0.0.0.0` in the field.
- Expand the Advanced section to view additional settings.
- Click Add Variable to add your MistralAI API key as a variable named `MISTRAL_API_KEY`.
- Set the App name to your choice. Keep in mind it will be used to create the URL for your application.
- Finally, click Deploy.
Once the application is deployed, you can visit the Koyeb service URL (ending in `.koyeb.app`) to access the chatbot interface.
Conclusion
This tutorial covers the basics of creating a front-end application with FastUI. There is more to explore in this library, considering that it is a work in progress and its documentation is still limited. But the future possibilities are tremendous, as it enables the creation of frontend and backend applications in a single code base.
The integration of FastAPI, FastUI, and MistralAI presents a formidable toolkit for modern web application development, especially in the realm of AI-driven chatbots. FastAPI provides a robust and efficient backend framework capable of handling asynchronous operations and high concurrency, essential for real-time, responsive applications. FastUI enhances this ecosystem by offering an intuitive means to create dynamic and user-friendly front-ends, seamlessly bridging the gap between the backend logic and the user interface. Meanwhile, the MistralAI library opens doors to advanced AI features, allowing developers to incorporate sophisticated AI models into their applications, which is particularly beneficial in creating intelligent and adaptable chatbots.
Together, these technologies can provide a comprehensive framework that empowers developers to build, deploy, and scale cutting-edge web applications. Whether for small-scale projects or enterprise-level solutions, this combination will offer the versatility, performance, and ease of development needed to meet the challenges of modern web and AI application development.