Elizabeth Fuentes L for AWS

Posted on • Originally published at community.aws

Building a Multimodal Search Engine with Amazon Titan Embeddings, Aurora Serverless PostgreSQL and LangChain

🇻🇪🇨🇱 Dev.to Linkedin GitHub Twitter Instagram Youtube
Linktr

Repo: https://github.com/build-on-aws/langchain-embeddings

Unlock the power of multimodal search with Amazon Titan Embeddings and LangChain. In this two-part series, I'll guide you through building a search engine that understands both text and images using Amazon Bedrock, and implementing vector storage in Amazon Aurora PostgreSQL with the pgvector extension.

In the first part of this series, you'll dive deep into the core components of our multimodal search engine. Using a Jupyter Notebook environment, you'll explore how to:

  • Generate advanced text and image embeddings using Amazon Titan Embeddings models.
  • Leverage LangChain to segment text into meaningful semantic chunks.
  • Create and query local FAISS vector databases for efficient storage and retrieval.
  • Develop a powerful image search application utilizing Titan Multimodal Embeddings.
  • Implement vector storage in Amazon Aurora PostgreSQL with the pgvector extension.

✅ AWS Level: Advanced - 200

Prerequisites:

  • Foundational knowledge of Python
  • AWS Account
  • Enable model access for the following models:
    • Amazon Titan Embeddings V2
    • Anthropic Claude 3 (Haiku or Sonnet)
  • Set up the AWS Command Line Interface (CLI)
  • Install boto3: pip install boto3
  • Configure AWS credentials
  • Install LangChain: pip install langchain
  • Optional: Bootstrap your account/region if this is your first CDK Project
  • Read about AWS CDK "Get started with Python"
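Once model access is enabled, you can sanity-check it from Python before opening the notebooks. Below is a minimal sketch using boto3's `bedrock` client; the model-ID prefixes and the `check_access` helper are my own naming, not part of the repo:

```python
REQUIRED_PREFIXES = ("amazon.titan-embed", "anthropic.claude-3")

def missing_models(model_summaries, prefixes=REQUIRED_PREFIXES):
    """Return the required model-ID prefixes that no listed model matches."""
    available = [m["modelId"] for m in model_summaries]
    return [p for p in prefixes
            if not any(mid.startswith(p) for mid in available)]

def check_access(region="us-east-1"):
    """Live check against Bedrock; requires AWS credentials."""
    import boto3  # imported here so the helper above stays dependency-free
    bedrock = boto3.client("bedrock", region_name=region)
    summaries = bedrock.list_foundation_models()["modelSummaries"]
    return missing_models(summaries)
```

Calling `check_access()` should return an empty list when both model families are enabled in the chosen region.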

Cost to complete:

🚀 Let's build!

Follow these steps:

Step 1: App Setup

✅ Clone the repo

git clone https://github.com/build-on-aws/langchain-embeddings

✅ Go to:

cd notebooks

✅ Start browsing through the notebooks in the following order:

Semantic Search with LangChain, Amazon Titan Embeddings, and FAISS:

01_build_pdf_vector_db.ipynb

A Jupyter notebook that loads documents from PDFs, extracts and splits the text into semantically meaningful chunks using LangChain, generates embeddings for those chunks with the Amazon Titan Embeddings G1 - Text model, and stores the embeddings in a FAISS vector database for retrieval.
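Under the hood, the embedding step boils down to an `InvokeModel` call against the Bedrock runtime. Here is a hedged sketch of that call for Titan Text Embeddings V2 (the `dimensions` and `normalize` fields are optional request parameters; `embed_text` is my own wrapper name, not the notebook's):

```python
import json

TEXT_MODEL_ID = "amazon.titan-embed-text-v2:0"

def build_text_request(text, dimensions=1024, normalize=True):
    """Build the JSON body Titan Text Embeddings V2 expects."""
    return json.dumps({
        "inputText": text,
        "dimensions": dimensions,  # Titan V2 supports 256, 512, or 1024
        "normalize": normalize,
    })

def embed_text(text, region="us-east-1"):
    """Live call; requires AWS credentials and model access."""
    import boto3
    runtime = boto3.client("bedrock-runtime", region_name=region)
    resp = runtime.invoke_model(
        modelId=TEXT_MODEL_ID,
        body=build_text_request(text),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(resp["body"].read())["embedding"]
```

Each chunk produced by the text splitter goes through a call like this before being written to the FAISS index.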

Building a Multimodal Image Search App with Titan Embeddings and LangChain:

02_build_images_vector_db.ipynb

This notebook demonstrates how to combine Titan Multimodal Embeddings, LangChain, and FAISS to build a capable image search application. Titan's embeddings represent images and text in a common dense vector space, enabling natural language querying of images. FAISS provides a fast, scalable way to index and search those vectors, and LangChain offers abstractions to hook everything together and surface relevant image results based on a user's query.
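To see what the FAISS index is doing conceptually, here is a dependency-free sketch of nearest-neighbour retrieval: score every stored vector against the query by cosine similarity and return the top matches. FAISS does the same thing, just with optimized index structures; the tiny 2-d vectors here are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=3):
    """index: list of (key, vector) pairs. Returns the k most similar keys."""
    scored = sorted(index, key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [key for key, _ in scored[:k]]
```

With an index like `[("cat", [1.0, 0.0]), ("dog", [0.9, 0.1]), ("car", [0.0, 1.0])]`, querying with a vector close to "cat" ranks "cat" and "dog" above "car" - exactly the behaviour the embedding space gives you at scale.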

This image search application is a key component of our multimodal search engine. It demonstrates how you can integrate text-based queries with image retrieval, allowing users to find relevant visual content using natural language. This functionality is crucial for creating a seamless multimodal search experience, where users can easily navigate between textual and visual information.

By following the steps outlined, you'll be able to preprocess images, generate embeddings, load them into FAISS, and write a simple application that takes in a natural language query, searches the FAISS index, and returns the most semantically relevant images. It's a great example of the power of combining modern AI technologies to build applications.
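The image-indexing step mirrors the text one: each image is base64-encoded and sent to the Titan Multimodal Embeddings model, which maps it into the same vector space as text. A hedged sketch of the request body (a request may carry an image, a text caption, or both; `embed_image` is my own wrapper name):

```python
import base64
import json

IMAGE_MODEL_ID = "amazon.titan-embed-image-v1"

def build_image_request(image_bytes=None, text=None):
    """Body for Titan Multimodal Embeddings: image, text, or both."""
    body = {}
    if image_bytes is not None:
        body["inputImage"] = base64.b64encode(image_bytes).decode("utf-8")
    if text is not None:
        body["inputText"] = text
    return json.dumps(body)

def embed_image(image_bytes, region="us-east-1"):
    """Live call; requires AWS credentials and model access."""
    import boto3
    runtime = boto3.client("bedrock-runtime", region_name=region)
    resp = runtime.invoke_model(
        modelId=IMAGE_MODEL_ID,
        body=build_image_request(image_bytes),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(resp["body"].read())["embedding"]
```

Because a text-only request goes through the same model, a natural-language query and the indexed images end up in one comparable vector space.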

Supercharging Vector Similarity Search with Amazon Aurora and pgvector:

03_build_pgvector_db.ipynb

In this Jupyter Notebook, you'll explore how to store vector embeddings in a vector database using Amazon Aurora and the pgvector extension. This approach is particularly useful for applications that require efficient similarity searches on high-dimensional data, such as natural language processing, image recognition, and recommendation systems.
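As a sketch of what the notebook sets up, the pgvector workflow reduces to three SQL statements: enable the extension, create a table with a `vector` column, and query with a distance operator (`<=>` is cosine distance in pgvector). The table and column names below are illustrative, not the notebook's exact schema:

```python
DIMENSIONS = 1024  # must match the embedding model's output size

SETUP_SQL = [
    "CREATE EXTENSION IF NOT EXISTS vector;",
    "CREATE TABLE IF NOT EXISTS items ("
    f"id bigserial PRIMARY KEY, content text, embedding vector({DIMENSIONS}));",
]

def knn_query(table="items", column="embedding", k=5):
    """SQL for a top-k cosine-distance search; the query vector is a parameter."""
    return (
        f"SELECT id, content, {column} <=> %s::vector AS distance "
        f"FROM {table} ORDER BY distance LIMIT {k};"
    )
```

You would run these through any PostgreSQL driver (e.g. psycopg2) against the Aurora cluster, passing the embedding as the `%s` parameter.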

🧹 Clean the house!

When you finish testing and want to remove the application, just follow these two steps:

  1. Delete the files from the Amazon S3 bucket created during deployment.
  2. Delete the Aurora DB cluster and its DB instances.
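If you prefer to script the cleanup, both steps can be done with boto3. A hedged sketch (bucket and cluster identifiers are placeholders for the ones your deployment created; S3 `delete_objects` accepts at most 1,000 keys per call, hence the batching):

```python
def batched(keys, size=1000):
    """Split keys into chunks of at most `size` (the S3 DeleteObjects limit)."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

def clean_up(bucket, cluster_id, instance_ids, region="us-east-1"):
    """Live calls; requires AWS credentials. Irreversible - use with care."""
    import boto3
    s3 = boto3.client("s3", region_name=region)
    keys = [obj["Key"]
            for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket)
            for obj in page.get("Contents", [])]
    for batch in batched(keys):
        s3.delete_objects(Bucket=bucket,
                          Delete={"Objects": [{"Key": k} for k in batch]})
    rds = boto3.client("rds", region_name=region)
    for iid in instance_ids:  # instances must go before the cluster
        rds.delete_db_instance(DBInstanceIdentifier=iid, SkipFinalSnapshot=True)
    rds.delete_db_cluster(DBClusterIdentifier=cluster_id, SkipFinalSnapshot=True)
```

Skipping the final snapshot keeps the teardown fast for a test stack; drop that flag if you want a backup before deletion.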

Conclusion

In this post, I demonstrated how to build a powerful multimodal search engine using Amazon Titan Embeddings and LangChain in a Jupyter Notebook environment. You explored key components like generating embeddings, text segmentation, vector storage, and image search capabilities.

This foundation sets the stage for Part 2 of our series, where you'll transform this solution into a scalable, serverless architecture using AWS CDK and Lambda functions. You'll integrate with Amazon S3 and Aurora PostgreSQL to create a fully functional, production-ready multimodal search engine.

Stay tuned for the next installment to learn how to deploy and scale your search capabilities to new heights! Happy coding, and may your searches always find what you're looking for! 😉

Thanks,
Eli

