Learn how to automate AI embedding creation using the PostgreSQL you know and love.
Managing embedding workflows for AI systems like RAG, search a...
Really excited about this challenge tomorrow
Thanks Ben, excited to see what you build in the OSS AI challenge!
exciting!!!!
Thanks Rob!
Really exciting
Thanks Melody!
Very interesting and timely...
We are building a RAG-based app, and this article is certainly going to help us.
Great to know, Sameer -- excited to hear what you think!
Does the image pgai-vectorizer-worker support Ollama? The documentation does not provide an example config using Ollama.

The pgai-vectorizer-worker does not support Ollama at this time, only OpenAI. But you can still use Ollama for the generation model and OpenAI as the embedding model. Our team will add Ollama support very soon!
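The split suggested here (OpenAI for embeddings, Ollama for generation) can be sketched at the SQL level. This is a minimal sketch, assuming a hypothetical `blog` table with a `contents` column; the `ai.create_vectorizer` call follows the pgai extension's documented pattern, but check the current docs for the exact parameter names in your version:

```sql
-- Sketch: create a vectorizer that embeds the contents column with OpenAI.
-- The pgai-vectorizer-worker picks this definition up and keeps the
-- embeddings table in sync as rows are inserted or updated.
SELECT ai.create_vectorizer(
    'public.blog'::regclass,                 -- hypothetical source table
    destination => 'blog_embeddings',        -- hypothetical embeddings table
    embedding => ai.embedding_openai('text-embedding-3-small', 768),
    chunking => ai.chunking_recursive_character_text_splitter('contents')
);
```

At query time you can still use an Ollama model for the generation step (either from your application or via pgai's Ollama helper functions), retrieving context with the OpenAI-built embeddings.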
Follow-up question:
I have a vectorizer and timescale-ha in a Docker Compose setup, but I keep getting a rate-limit error from OpenAI. I have set the concurrency to 1, but it still happens.
I tried to embed just a single PDF file (the "Attention Is All You Need" paper) for a RAG project, but no matter how I configure the vectorizer, it always seems to hit the OpenAI rate limit (I'm on a Tier 1 account).
Is there a way to slow down the vectorizer? This is a bit of a dilemma: I can't use Ollama in the vectorizer, and when I use OpenAI, I always get a rate-limit error.
Thanks for the info!
Anyhow, it uses a paid OpenAI key?
Then why do we use it?