
Creating Lucy: Developing an AI-Powered Slack Assistant with Memory


The main idea was to build an AI agent named Lucy that can learn from our conversations and interact through a Slack interface. The plan was also to use a no-code tool and play with embeddings. So the whole point was to build something that takes this:

Slack conversation example

And, based on that, automatically creates a table like this:

Memories saved to db

Next, I'm able to use those memories to create vectors in a Pinecone database and use them later in conversations.
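Pinecone does the similarity search for us, but the idea behind it is easy to sketch. A minimal illustration (plain Python, not the actual Pinecone query; the memory ids and two-dimensional vectors are made up for the example):

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], memories: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return ids of the k memory vectors most similar to the query."""
    ranked = sorted(
        memories,
        key=lambda mid: cosine_similarity(query, memories[mid]),
        reverse=True,
    )
    return ranked[:k]
```

In the real scenario, embeddings have many more dimensions and Pinecone handles the ranking at scale; this just shows why "similar vectors" means "related information".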

💡 Be aware that the idea right now is to keep the conversation history only until memories are distilled from it. After that, we operate only on the memories from the conversation, which might lead to losing some information. But we can tackle this later…

Slack API

I won't dive into the nitty-gritty details here, but the basic idea is simple. Create a bot app and enable the Events API. We will use it to publish events on new messages in a certain channel. I have also turned on direct messages to the app. Then create the connection and voilà.
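If you are curious what the webhook actually has to handle, here is a rough sketch of the dispatch logic (the function name is mine; the `url_verification`/`event_callback` payload shapes come from the Slack Events API):

```python
def handle_slack_event(payload: dict) -> dict:
    """Minimal dispatch for Slack Events API payloads.

    Slack first sends a `url_verification` payload; echoing the
    `challenge` value back confirms the webhook URL. Regular messages
    then arrive as `event_callback` payloads with an inner `event`.
    """
    if payload.get("type") == "url_verification":
        return {"challenge": payload["challenge"]}
    if payload.get("type") == "event_callback" and payload["event"].get("type") == "message":
        # Hand off to the rest of the scenario; acknowledge right away
        # so Slack doesn't retry the delivery.
        return {"ok": True}
    return {"ok": True, "ignored": True}
```

This mirrors what the first few modules of the scenario below do visually.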

Scenario - Lucy Slack conversation

📘 Check blueprint here

Let’s break it down step by step.

  1. Custom Webhook - Start with a simple webhook to listen for events. Make sure to register this URL in your Slack app settings.
  2. Webhook response - there is a filter on this route checking that event.type = url_verification. This confirms the URL and sends the challenge back to Slack.
  3. Next, on the router, we check for event.type = message. Obviously, we want to process only those events here.
  4. Data store (Get a record) - fetching the user's Slack id here, for later use.
  5. Router - a few things happen here. Let's go one by one.

    1. Webhook response - first, send back the webhook response. This lets Slack know that we succeeded in processing the event and prevents it from resending the event to us. I know we haven't finished processing yet, but that's a mistake I can live with. I would rather have Lucy not respond due to some error than respond multiple times and create noise.
    2. If the message is from the user - I'm just checking the author here. If it's me → proceed. If the message comes from Lucy → ignore.

      1. First, fetch the not-yet-synced conversation history and aggregate it in a format like:

        kuba szwajka (2024-05-22T22:57:16.000Z): Hi! Whats my name? 
        Lucy AI assistant (2024-05-22T22:57:16.000Z): Hey there! I don't have your name yet. What's your name? :smile: 
        kuba szwajka (2024-05-22T22:57:26.000Z): my name is Kuba. Hi! 
        Lucy AI assistant (2024-05-22T22:57:28.000Z): Hey Kuba! How's it going? :smile: 
      2. Then embed the query sent to Slack by calling the v1/embeddings endpoint with "Make an API call". Next, based on the resulting vector, I'm querying my Pinecone database for similar vectors (assuming it will return vectors with information that might be related to my query).

      3. More on those vectors and their metadata later, but here I'm just taking the vector ids and, based on them, fetching records from the memories database by id.

      4. GPT Completion - here is the completion itself. The context at this point is built on two things: first, the not-yet-synced conversation history that might contain useful facts; second, useful memories that were found in Pinecone.

      5. Create slack message

    3. In the end, no matter who created the message, I'm saving it with its author to the ConversationHistory database and marking it synced=false.
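The history aggregation in step 1 above is simple enough to sketch as a helper (the field names `author`, `created_at`, and `text` are my assumptions about the ConversationHistory records):

```python
def aggregate_history(messages: list[dict]) -> str:
    """Render not-yet-synced messages as 'author (timestamp): text' lines,
    matching the context format Lucy receives in the GPT completion."""
    return "\n".join(
        f"{m['author']} ({m['created_at']}): {m['text']}" for m in messages
    )
```

The resulting text goes into the completion prompt together with the memories retrieved from Pinecone.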

Scenario - Lucy find memories in conversation

📘 Check blueprint here

This scenario aims to process the previously created conversation history, create memories, and save them as vectors. Right now it is triggered every 8 hours. Tbh, I'm testing whether this is enough 🤷.

Nothing fancy here. Let's go step by step:

  1. Fetch all records from ConversationHistory where synced=false.
  2. Aggregate them as a text.
  3. Let the chat completion distill the facts (fact overlapping still to be handled? Not sure if this will be a problem 🤔. Any thoughts?)
  4. For the list of facts/memories, create records in the memories table with synced=false.
  5. For each of these, create an embedding by calling v1/embeddings in the OpenAI API.
  6. Save them to Pinecone and mark them as synced.
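The six steps above can be sketched as one function. This is a sketch of the flow only: the GPT distillation and the embedding call are injected as callables (`distill`, `embed`), and the databases are plain lists standing in for ConversationHistory and the memories table:

```python
def sync_memories(history: list[dict], memories: list[dict], distill, embed) -> None:
    """One run of the 8-hourly scenario.

    history:  ConversationHistory stand-in; dicts with 'text' and 'synced'.
    memories: memories-table stand-in; collects {'fact', 'vector', 'synced'}.
    distill:  callable text -> list of fact strings (the GPT completion).
    embed:    callable fact -> vector (the v1/embeddings call).
    """
    unsynced = [m for m in history if not m["synced"]]      # step 1
    if not unsynced:
        return
    text = "\n".join(m["text"] for m in unsynced)           # step 2
    for fact in distill(text):                              # step 3
        record = {"fact": fact, "synced": False}            # step 4
        record["vector"] = embed(fact)                      # step 5
        record["synced"] = True                             # step 6: after the Pinecone upsert
        memories.append(record)
    for m in unsynced:                                      # history is now distilled
        m["synced"] = True
```

Injecting the two API calls keeps the sync logic itself testable without touching OpenAI or Pinecone.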


  • There are still a few missing pieces, like handling memory overlap and preventing data loss during the conversation-history-to-memory conversion.
  • I've included the blueprints above if you want to try it out.

Want to Know More?

Stay tuned for more insights and tutorials! Visit My Blog 🤖
