adriens for opt-nc

🗣️🤖 Ask to your Neo4J knowledge base in NLP & get KPIs

🤔 Food for thought

With this post we'll start to explore a fascinating subject:

The cost/value balance of a question/answer... and "the illusion of gratuitousness"

As you will see, AI (and especially AGI) raises a lot of worries about task encroachment... but it also pushes us to tackle deep questions about the ratio between cost and intelligence, and the impact it already has on our collaborations... and most importantly:

"How do we join our intelligence and creativity into a hybrid Human/AI taskforce?"

🗞️ Neo4j, LLMs and data

In this context, a no-code LLM integration has landed in NeoDash:

This post will cover:

  • 🧐 Why it matters
  • 🎯 What can be achieved with this new feature (and its potential impact on our organizations & workflows)

🚼 How it works

Before going further, let's understand what we are actually going to do:

  1. Ask a question in natural language (in English for now)
  2. Call OpenAI's LLM API, which relies on the APOC meta graph (the schema) to build a Cypher query
  3. Get the resulting Cypher query back
  4. Run the Cypher locally against the data
  5. Customize the resulting reports
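The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the actual NeoDash implementation: the helper name `build_prompt` and the tiny schema are hypothetical, and in reality the schema would come from `CALL apoc.meta.data()` and the prompt would be sent to OpenAI's chat completions API.

```python
# Sketch of the NL -> Cypher flow described above (helper names and schema
# are illustrative; the real NeoDash extension handles this internally).
import json

def build_prompt(question: str, meta_schema: dict) -> str:
    """Embed the APOC meta graph schema in the prompt so the LLM
    can generate Cypher against the actual data model."""
    return (
        "You are a Cypher expert. Given this graph schema:\n"
        f"{json.dumps(meta_schema, indent=2)}\n"
        f"Write a single Cypher query answering: {question}\n"
        "Return only the Cypher, no explanation."
    )

# Hand-written stand-in for the output of `CALL apoc.meta.data()`.
schema = {
    "User": {"relationships": {"MEMBER_OF": {"direction": "out", "target": "Team"}}},
    "Team": {"relationships": {"MAINTAINS": {"direction": "out", "target": "Repository"}}},
}

prompt = build_prompt("How many GitHub repositories does Team X maintain?", schema)
# `prompt` would then be sent to the LLM (step 2), and the returned Cypher
# would be run locally against the Neo4j instance (steps 3-4).
```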

🤔 Questions

In your daily activities, you may be challenged by questions that follow the same pattern:

  • How many GitHub users do we have? (as it involves costs)
  • Who are the people managed by Mr X?
  • How many GitHub repositories does Team X maintain? (involves the capacity to maintain a set of services)
  • How many virtual machines do we have? (involves the size of infrastructures, and therefore costs...)
  • How much do we spend on...
  • etc.

👉 Therefore we built an omniscient knowledge graph that is able to answer a very wide range of questions about our organization.

Below are a few zoomed-in views of the graph topology, so you (as well as the LLM we will use later) can understand how to deal with the data:


Now we are able to transform a question into a dashboard widget:
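To make the question-to-widget idea concrete, here are hypothetical examples of the kind of Cypher such questions could be translated into. The labels and relationship types are assumptions for illustration, not the actual opt-nc graph model:

```python
# Hypothetical question -> generated-Cypher pairs (labels and relationship
# types are assumptions, not the actual schema used in the post).
examples = {
    "How many GitHub users do we have?":
        "MATCH (u:User) RETURN count(u) AS users",
    "Who are the people managed by Mr X?":
        "MATCH (m:Person {name: 'Mr X'})<-[:MANAGED_BY]-(p:Person) RETURN p.name",
    "How many GitHub repositories does Team X maintain?":
        "MATCH (:Team {name: 'Team X'})-[:MAINTAINS]->(r:Repository) "
        "RETURN count(r) AS repositories",
}

for question, cypher in examples.items():
    print(f"{question}\n  -> {cypher}")
```

Each returned value (a count, a list of names) maps naturally onto a NeoDash report widget.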

🍿 Demo

⚖️ Further on strategies

With that first experimentation, a new kind of concern started to appear, in particular the following one:

"A big omniscient giant vs. a myriad of little agents equipped with custom tools."

This section very briefly summarizes these questions so we can discuss them and optimize the design of our solutions, the way we manage and answer incoming questions... and, most of all, our own learning path, so we can delegate automated tasks to AI and...

focus on the ones that can't be, or are not yet, automated.

💰 Benefits

As a matter of fact, you'll pretty quickly want to automate question-answering tasks and re-invest that time.

In short:

  • 🔗‍💥 People don't have to wait for answers from you: they go faster with more freedom, they open fewer issues, etc.
  • 🧑‍🚀 You can focus on higher value-added activities: you go further and faster, and gain expertise

📊 NO-Code Neodash

  • No skill required
  • Plug & play strategy
  • Can be quite expensive during the query-prototyping phase (but keep in mind that OpenAI regularly reduces the price of its APIs while improving them)
  • Makes it obvious that "time is money"
  • Removes the data team from potential bottleneck situations where people expect them to produce more knowledge at a faster pace
  • Very convenient for prototyping
  • Great for high-level questions, and technically affordable for non-tech people (which sets them free)
  • No need to know the knowledge graph design
  • 100% AI-driven and meta-schema-driven
  • Prone to misunderstandings and invalid Cypher generation (beware of large contexts, see below, and pick an adequate LLM model setup)
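One way to mitigate the large-context issue mentioned above is to prune the meta-schema down to the labels that look relevant to the question before building the prompt. Here is a minimal sketch of that idea; the naive keyword matching and the helper name are assumptions, not how NeoDash actually does it:

```python
def prune_schema(question: str, meta_schema: dict) -> dict:
    """Keep only node labels whose name appears in the question,
    reducing the token count of the prompt sent to the LLM."""
    q = question.lower()
    kept = {label: desc for label, desc in meta_schema.items()
            if label.lower() in q}
    # Fall back to the full schema if nothing matched.
    return kept or meta_schema

schema = {
    "User": {"count": 1200},
    "Team": {"count": 35},
    "Repository": {"count": 480},
    "VirtualMachine": {"count": 210},
}

print(prune_schema("How many repository objects does each team maintain?", schema))
# Only the labels mentioned in the question survive the pruning.
```

In a real setup you would match on synonyms and relationship types too, but even crude pruning keeps prompts (and therefore costs) under control.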

🦜🔗 Langchain & custom tools

LangChain and its Custom Tools are also a great (and very efficient) way to set up a dedicated Q&A agent (for example, for chat purposes).

Here are a few points to consider:

  • Skills are required for the implementation
  • Requires knowing a preset of questions in advance
  • Can fall back to the global approach when the chain is not able to find the answer... or simply throw "I'm sorry, but with my current toolset, I was not able to find a suitable answer."
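The fallback behaviour described above can be sketched as a simple tool-dispatch loop. This is plain Python rather than the actual LangChain API, to keep it self-contained; the tools, keywords, and hardcoded answers are all hypothetical:

```python
from typing import Callable

# Each custom tool knows which questions it can handle and how to answer
# them (a real tool would run a Cypher query instead of a canned reply).
TOOLS: dict[str, Callable[[str], str]] = {
    "github users": lambda q: "We have 1,200 GitHub users.",
    "virtual machines": lambda q: "We run 210 virtual machines.",
}

FALLBACK = ("I'm sorry, but with my current toolset, "
            "I was not able to find a suitable answer.")

def answer(question: str) -> str:
    """Dispatch to the first matching custom tool, else fall back."""
    q = question.lower()
    for keyword, tool in TOOLS.items():
        if keyword in q:
            return tool(question)
    return FALLBACK

print(answer("How many GitHub users do we have?"))
print(answer("What's the weather like?"))  # no tool matches -> fallback
```

The fallback branch is also the natural place to hand the question off to the "big omniscient" schema-driven approach discussed earlier.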

That's all for now. Hopefully you found this content useful and it made you consider new use cases around AI adoption.

📘 Book

Last but not least, reading this book also greatly helped me consider new ways of thinking about work itself and prepare for what's coming:

Top comments (4)

adriens

Add better Error Messages for the LLM Errors #529

The errors coming back from the LLM extension are very generalised, and we need a better way to communicate to users exactly what caused the error.

A potential list of specialised error messages:

  • Tell the user they ran out of OpenAI credit
  • Tell the user the query timed out
  • Tell the user that the translation generated invalid Cypher