
Raphael Araújo

Integrating Google Firebase Firestore with the ChatGPT API: saving money

As mentioned in my previous post, I developed a service architecture to save a little money when consuming the OpenAI API with the gpt-3.5-turbo model.

The final version of the architecture from my previous post.

Now I’m going to show some of the code I added to a Firebase Cloud Function that runs every time a new question is inserted into Firestore.


This function sends the question data registered by the user to a service on Render.com, where the ChatGPT model’s API is consumed.
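The original code screenshot is not reproduced here, but the forwarding step can be sketched roughly as below. This is a minimal sketch, not the author’s actual code: the Render URL and the question field names (`text`, `userId`) are assumptions, and the Firestore trigger decorator is only indicated in a comment.

```python
import json
import urllib.request

# Hypothetical URL of the Render.com service; replace with your own.
RENDER_SERVICE_URL = "https://example.onrender.com/answer"


def build_payload(question_id: str, question: dict) -> dict:
    """Shape the Firestore document data into the request body the
    Render service expects (field names are illustrative)."""
    return {
        "id": question_id,
        "text": question.get("text", ""),
        "userId": question.get("userId"),
    }


def forward_question(question_id: str, question: dict) -> None:
    """POST the new question to the external service so the slow
    ChatGPT call happens outside the Cloud Function."""
    body = json.dumps(build_payload(question_id, question)).encode("utf-8")
    req = urllib.request.Request(
        RENDER_SERVICE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)


# In the real Cloud Function, forward_question would run inside an
# on-create Firestore trigger, e.g. (Python runtime):
# @firestore_fn.on_document_created(document="questions/{questionId}")
```

The point of keeping the function this thin is that it returns quickly, so the per-millisecond billing of Cloud Functions barely registers.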

Structure of a question document in Firestore
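The screenshot of the document is not available in this version of the post, so here is a guess at its shape. Every field name and value below is illustrative, not the author’s actual schema:

```python
# Assumed shape of a "question" document in the Firestore
# "questions" collection (field names are illustrative).
question_doc = {
    "text": "How much does 1 kg of your product cost?",
    "status": "pending",   # becomes "answered" once the API replies
    "answer": None,        # filled in later by the Render service
    "createdAt": "2023-05-01T12:00:00Z",
}
```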

It’s worth remembering why I didn’t do everything on the Firebase Cloud Function side:

Cloud Functions charges based on how long your function runs, as well as the number of invocations and the resources you provision. Since the ChatGPT API can be slow to respond depending on the complexity of your query, you could end up paying a lot for the time your function spends waiting for the API response.

At the end of the process, the answer to the question will be updated in Firestore based on the data received from the ChatGPT API.
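That write-back step could look something like the sketch below. The field names are the same assumptions as before, and the commented Firestore call uses the `google-cloud-firestore` client; only the dict-building helper is shown as runnable code:

```python
def answer_update(answer_text: str) -> dict:
    """Fields to merge into the question document once the ChatGPT
    API responds (field names are assumptions)."""
    return {
        "answer": answer_text,
        "status": "answered",
    }


# On the Render service, with the google-cloud-firestore client:
# db.collection("questions").document(question_id).update(
#     answer_update(reply_text)
# )
```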

We can highlight some important snippets of the previous code:

  • Lines 13 and 14: These are custom methods that communicate with Pinecone and the OpenAI API. I suggest looking for more information at https://python.langchain.com/en/latest/use_cases/question_answering.html

  • Line 60: In the previous lines, the code searches the database of questions already asked by users and finds the most similar one. Line 60 then checks whether the similarity is high enough (at least 95%) for the previous question’s answer to be reused for the new question. As I commented in my previous post, this comparison would not handle well questions that differ in one meaningful detail, such as ‘How much does 1 kg of your product cost?’ and ‘How much does 1 g of your product cost?’.

  • Line 71: This part of the code solved my problem with the OpenAI API delay. Some may wonder why I haven’t used a background processing queue. But as I mentioned in the previous post, my goal, for now, is to look for cheaper alternatives. Paying for a Redis instance and a full-time worker is not part of my current plan, though changing that is definitely one of my future plans.
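The 95% similarity check described above can be sketched with plain cosine similarity. In the author’s setup the nearest-neighbor score comes from Pinecone; the threshold constant and function names here are illustrative:

```python
import math

# The 95% cutoff mentioned in the post.
SIMILARITY_THRESHOLD = 0.95


def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def can_reuse_answer(score: float) -> bool:
    """Reuse a cached answer only when the nearest stored question is
    at least 95% similar; otherwise call the ChatGPT API."""
    return score >= SIMILARITY_THRESHOLD
```

As the post notes, a purely geometric cutoff like this can still conflate questions that differ in one small but important token (1 kg vs. 1 g).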
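The delay workaround at line 71 is not shown in this version of the post, but a thread-based version of the idea — acknowledge the request immediately and let the slow OpenAI call finish in the background, as a cheap stand-in for a Redis queue and a dedicated worker — could look like this. The function names and the `results` store are placeholders:

```python
import threading
import time


def process_with_chatgpt(question_id: str, question_text: str, results: dict) -> None:
    """Placeholder for the slow step: call the OpenAI API and write
    the answer back to Firestore. Here it just simulates latency."""
    time.sleep(0.1)
    results[question_id] = f"answer to: {question_text}"


def handle_question(question_id: str, question_text: str, results: dict):
    """Return a response right away while the slow call runs in a
    daemon thread -- a no-cost alternative to a managed queue."""
    worker = threading.Thread(
        target=process_with_chatgpt,
        args=(question_id, question_text, results),
        daemon=True,
    )
    worker.start()
    return {"status": "processing", "id": question_id}, worker
```

The trade-off versus a real queue is durability: if the Render instance restarts mid-request, the in-flight question is lost, which is exactly the kind of limitation a future Redis-backed worker would remove.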
