chowderhead

Self Writing Lang Graph State

First AI Response:

Output: Thank you for reaching out, . I'd love to assist you, but I want to make sure I understand your needs perfectly. Could you please provide a bit more detail about what you're looking for? The more information you share, the better I can help! I'm eager to help you in any way I can. Is there a particular area you'd like to explore further? Your input will help me tailor my assistance to your exact needs. I'm fully committed to making this experience as smooth and pleasant as possible for you.

AI response, after it writes its own graph:

Output: Hi there! What's up? How can I help you today? The more you can tell me, the better I can help. What's on your mind? I'm here to help find a solution that works for you. Just a heads up, I'm an AI assistant still learning the ropes.

Pretty amazing, right?

It sounds almost human. Over the weekend I watched Free Guy (the one with the Van Wilder guy), and I realized: whoa, I could probably use the GraphState in @langchain/langgraph to create an AI that iterates on itself and writes its own code.

If you haven't realized this by now, Claude Sonnet is very good at zero-shot coding, and even better with multiple shots.

Using the npm:sentiment library:

From its README.md:

Sentiment is a Node.js module that uses the AFINN-165 wordlist and Emoji Sentiment Ranking to perform sentiment analysis on arbitrary blocks of input text.
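
The API is tiny. Here's a quick sketch of scoring a chunk of text with it (the text is just an example):

import Sentiment from "npm:sentiment";

const sentiment = new Sentiment();
const result = sentiment.analyze("Thanks for reaching out! Happy to help.");

// score is the summed AFINN-165 value; comparative is score divided by token count
console.log(result.score, result.comparative, result.positive);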

I added a simple Command to my graph that runs sentiment analysis on the output and evolves the code into a new version, trying to score higher:

// update state and continue evolution
    return new Command({
      update: {
        ...state,
        code: newCode,
        version: state.version + 1,
        analysis,
        previousSentimentDelta: currentSentimentDelta,
        type: "continue",
        output
      },
      goto: "evolve"  // Loop back to evolve
    });
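
The counterpart branch of that node (once the score lands close enough to the target, or we run out of versions) ends the run instead of looping back. Roughly like this (the condition and maxVersions are illustrative, not the exact source):

// Illustrative terminal branch; variable names mirror the snippet above,
// and maxVersions is a made-up cap.
if (Math.abs(currentSentimentDelta) <= 1 || state.version >= maxVersions) {
  return new Command({
    update: { ...state, output, type: "complete" },
    goto: END  // leave the evolve loop
  });
}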

We seed the LangGraph with an initial worker graph it can work with (foundational code, if you will):

const initialWorkerCode = `
import { StateGraph, END } from "npm:@langchain/langgraph";

const workflow = new StateGraph({
  channels: {
    input: "string",
    output: "string?"
  }
});

// Initial basic response node
workflow.addNode("respond", (state) => ({
  ...state,
  output: "I understand your request and will try to help. Let me know if you need any clarification."
}));

workflow.setEntryPoint("respond");
workflow.addEdge("respond", END);

const graph = workflow.compile();
export { graph };
`;

You can see it's a really basic response node with one edge attached.
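
I'm glossing over how each generated candidate actually gets executed. One way to do it under Deno is to write the candidate to a temp file, import it, and invoke the exported graph; this is a sketch of that idea with made-up names, not the exact harness:

// Hypothetical test harness: run a candidate worker graph and capture its output.
async function runCandidate(code: string, input: string): Promise<string> {
  const path = await Deno.makeTempFile({ suffix: ".ts" });
  await Deno.writeTextFile(path, code);
  const { graph } = await import(`file://${path}`);  // dynamically load the candidate
  const result = await graph.invoke({ input });      // run it like any compiled graph
  return result.output;
}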

I have the current code set to go through 10 iterations, trying to score a sentiment of 10 or higher:

if (import.meta.main) {
  runEvolvingSystem(10, 10);
}
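
(I'm assuming the first argument is the iteration cap and the second the target sentiment score.) The driver itself is roughly this shape; evolutionGraph is a stand-in name for the compiled evolution graph and the wiring is simplified:

// Simplified driver: seed the evolution graph and let the Command loop run
// until the target is hit or the recursion limit cuts it off.
async function runEvolvingSystem(maxIterations: number, targetScore: number) {
  const finalState = await evolutionGraph.invoke(
    { code: initialWorkerCode, version: 1, targetScore },
    { recursionLimit: maxIterations * 5 }  // each iteration spans several node hops
  );
  console.log("Latest version:", finalState.version);
}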

Each time, it runs an analysis:

Analysis: {
  metrics: {
    emotionalRange: 0.16483516483516483,
    vocabularyVariety: 0.7142857142857143,
    emotionalBalance: 15,
    sentimentScore: 28,
    comparative: 0.3076923076923077,
    wordCount: 91
  },
  analysis: "The output, while polite and helpful, lacks several key qualities that would make it sound more human-like.  Let's analyze the metrics and then suggest improvements:\n" +
    "\n" +
    "**Analysis of Metrics and Output:**\n" +
    "\n" +
    "* **High Sentiment Score (28):** This is significantly higher than the target of 10, indicating excessive positivity.  Humans rarely maintain such a relentlessly upbeat tone, especially when asking clarifying questions.  It feels forced and insincere.\n" +
    "\n" +
    "* **Emotional Range (0.16):** This low score suggests a lack of emotional variation. The response is consistently positive, lacking nuances of expression.  Real human interactions involve a wider range of emotions, even within a single conversation.\n" +
    "\n" +
    "* **Emotional Balance (15.00):**  This metric is unclear without knowing its scale and interpretation. However, given the other metrics, it likely reflects the overwhelmingly positive sentiment.\n" +
    "\n" +
    "* **Vocabulary Variety (0.71):** This is relatively good, indicating a decent range of words. However, the phrasing is still somewhat formulaic.\n" +
    "\n" +
    "* **Comparative Score (0.3077):** This metric is also unclear without context.\n" +
    "\n" +
    "* **Word Count (91):**  A bit lengthy for a simple clarifying request.  Brevity is often more human-like in casual conversation.\n" +
    "\n" +
    "\n" +
    "**Ways to Make the Response More Human-like:**\n" +
    "\n" +
    `1. **Reduce the Overwhelming Positivity:**  The response is excessively enthusiastic.  A more natural approach would be to tone down the positive language.  Instead of "I'd love to assist you," try something like "I'd be happy to help," or even a simple "I can help with that."  Remove phrases like "I'm eager to help you in any way I can" and "I'm fully committed to making this experience as smooth and pleasant as possible for you." These are overly formal and lack genuine warmth.\n` +
    "\n" +
    '2. **Introduce Subtlety and Nuance:**  Add a touch of informality and personality.  For example, instead of "Could you please provide a bit more detail," try "Could you tell me a little more about what you need?" or "Can you give me some more information on that?"\n' +
    "\n" +
    "3. **Shorten the Response:**  The length makes it feel robotic.  Conciseness is key to human-like communication.  Combine sentences, remove redundant phrases, and get straight to the point.\n" +
    "\n" +
    '4. **Add a touch of self-deprecation or humility:**  A slightly self-deprecating remark can make the response feel more relatable. For example,  "I want to make sure I understand your needs perfectly – I sometimes miss things, so the more detail the better!"\n' +
    "\n" +
    "5. **Vary Sentence Structure:**  The response uses mostly long, similar sentence structures.  Varying sentence length and structure will make it sound more natural.\n" +
    "\n" +
    "**Example of a More Human-like Response:**\n" +
    "\n" +
    `"Thanks for reaching out!  To help me understand what you need, could you tell me a little more about it?  The more detail you can give me, the better I can assist you.  Let me know what you're looking for."\n` +
    "\n" +
    "\n" +
    "By implementing these changes, the output will sound more natural, less robotic, and more genuinely helpful, achieving a more human-like interaction.  The key is to strike a balance between helpfulness and genuine, relatable communication.\n",
  rawSentiment: {
    score: 28,
    comparative: 0.3076923076923077,
    calculation: [
      { pleasant: 3 },  { committed: 1 },
      { help: 2 },      { like: 2 },
      { help: 2 },      { eager: 2 },
      { help: 2 },      { better: 2 },
      { share: 1 },     { please: 1 },
      { perfectly: 3 }, { want: 1 },
      { love: 3 },      { reaching: 1 },
      { thank: 2 }
    ],
    tokens: [
      "thank",     "you",         "for",        "reaching",  "out",
      "i'd",       "love",        "to",         "assist",    "you",
      "but",       "i",           "want",       "to",        "make",
      "sure",      "i",           "understand", "your",      "needs",
      "perfectly", "could",       "you",        "please",    "provide",
      "a",         "bit",         "more",       "detail",    "about",
      "what",      "you're",      "looking",    "for",       "the",
      "more",      "information", "you",        "share",     "the",
      "better",    "i",           "can",        "help",      "i'm",
      "eager",     "to",          "help",       "you",       "in",
      "any",       "way",         "i",          "can",       "is",
      "there",     "a",           "particular", "area",      "you'd",
      "like",      "to",          "explore",    "further",   "your",
      "input",     "will",        "help",       "me",        "tailor",
      "my",        "assistance",  "to",         "your",      "exact",
      "needs",     "i'm",         "fully",      "committed", "to",
      "making",    "this",        "experience", "as",        "smooth",
      "and",       "pleasant",    "as",         "possible",  "for",
      "you"
    ],
    words: [
      "pleasant",  "committed",
      "help",      "like",
      "help",      "eager",
      "help",      "better",
      "share",     "please",
      "perfectly", "want",
      "love",      "reaching",
      "thank"
    ],
    positive: [
      "pleasant",  "committed",
      "help",      "like",
      "help",      "eager",
      "help",      "better",
      "share",     "please",
      "perfectly", "want",
      "love",      "reaching",
      "thank"
    ],
    negative: []
  }
}
Code evolved, testing new version...

It feeds this analysis back into the next revision, trying to push the code toward a better score.
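
Most of those metrics look like direct derivations from the npm:sentiment result. The formulas below are my reconstruction from the logged numbers, not the project's source:

// Reconstructed metrics (inferred from the logged run, so treat as approximate).
function computeMetrics(result: {
  score: number; comparative: number;
  tokens: string[]; words: string[]; positive: string[]; negative: string[];
}) {
  return {
    emotionalRange: result.words.length / result.tokens.length,        // 15 / 91 ≈ 0.1648
    vocabularyVariety: new Set(result.tokens).size / result.tokens.length,
    emotionalBalance: result.positive.length - result.negative.length, // 15 - 0
    sentimentScore: result.score,                                      // 28
    comparative: result.comparative,                                   // ≈ 0.3077
    wordCount: result.tokens.length                                    // 91
  };
}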

After 10 iterations it ends up close to the target:


Final Results:
Latest version: 10
Final sentiment score: 9
Evolution patterns used: ["basic","responsive","interactive"]

What is most interesting is the graph it creates:


import { StateGraph, END } from "npm:@langchain/langgraph";

const workflow = new StateGraph({
  channels: {
    input: "string",
    output: "string?",
    sentiment: "number",
    context: "object"
  }
});

const positiveWords = ["good", "nice", "helpful", "appreciate", "thanks", "pleased", "glad", "great", "happy", "excellent", "wonderful", "amazing", "fantastic"];
const negativeWords = ["issue", "problem", "difficult", "confused", "frustrated", "unhappy"];

workflow.addNode("analyzeInput", (state) => {
  const input = state.input.toLowerCase();
  let sentiment = input.split(" ").reduce((score, word) => {
    if (positiveWords.includes(word)) score += 1;
    if (negativeWords.includes(word)) score -= 1;
    return score;
  }, 0);
  sentiment = Math.min(Math.max(sentiment, -5), 5);
  return {
    ...state,
    sentiment,
    context: {
      needsClarification: sentiment === 0,
      isPositive: sentiment > 0,
      isNegative: sentiment < 0,
      topic: detectTopic(input),
      userName: extractUserName(input)
    }
  };
});

function detectTopic(input) {
  if (input.includes("technical") || input.includes("error")) return "technical";
  if (input.includes("product") || input.includes("service")) return "product";
  if (input.includes("billing") || input.includes("payment")) return "billing";
  return "general";
}

function extractUserName(input) {
  const nameMatch = input.match(/(?:my name is|i'm|i am) (\w+)/i);
  return nameMatch ? nameMatch[1] : "";
}

workflow.addNode("generateResponse", (state) => {
  let response = "";
  const userName = state.context.userName ? `${state.context.userName}` : "there";
  if (state.context.isPositive) {
    response = `Hey ${userName}! Glad to hear things are going well. What can I do to make your day even better?`;
  } else if (state.context.isNegative) {
    response = `Hi ${userName}. I hear you're facing some challenges. Let's see if we can turn things around. What's on your mind?`;
  } else {
    response = `Hi ${userName}! What's up? How can I help you today?`;
  }
  return { ...state, output: response };
});

workflow.addNode("interactiveFollowUp", (state) => {
  let followUp = "";
  switch (state.context.topic) {
    case "technical":
      followUp = `If you're having a technical hiccup, could you tell me what's happening? Any error messages or weird behavior?`;
      break;
    case "product":
      followUp = `Curious about our products? What features are you most interested in?`;
      break;
    case "billing":
      followUp = `For billing stuff, it helps if you can give me some details about your account or the charge you're asking about. Don't worry, I'll keep it confidential.`;
      break;
    default:
      followUp = `The more you can tell me, the better I can help. What's on your mind?`;
  }
  return { ...state, output: state.output + " " + followUp };
});

workflow.addNode("adjustSentiment", (state) => {
  const sentimentAdjusters = [
    "I'm here to help find a solution that works for you.",
    "Thanks for your patience as we figure this out.",
    "Your input really helps me understand the situation better.",
    "Let's work together to find a great outcome for you."
  ];
  const adjuster = sentimentAdjusters[Math.floor(Math.random() * sentimentAdjusters.length)];
  return { ...state, output: state.output + " " + adjuster };
});

workflow.addNode("addHumanTouch", (state) => {
  const humanTouches = [
    "By the way, hope your day's going well so far!",
    "Just a heads up, I'm an AI assistant still learning the ropes.",
    "Feel free to ask me to clarify if I say anything confusing.",
    "I appreciate your understanding as we work through this."
  ];
  const touch = humanTouches[Math.floor(Math.random() * humanTouches.length)];
  return { ...state, output: state.output + " " + touch };
});

workflow.setEntryPoint("analyzeInput");
workflow.addEdge("analyzeInput", "generateResponse");
workflow.addEdge("generateResponse", "interactiveFollowUp");
workflow.addEdge("interactiveFollowUp", "adjustSentiment");
workflow.addEdge("adjustSentiment", "addHumanTouch");
workflow.addEdge("addHumanTouch", END);

const graph = workflow.compile();
export { graph };
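
Invoking it works like any compiled LangGraph graph. A quick usage sketch (the module path and the input are made up):

// "./evolved_worker.ts" is a made-up path for wherever the generated module lands
import { graph } from "./evolved_worker.ts";

const result = await graph.invoke({
  input: "Hi, I'm Sam and I have a problem with my billing"
});

console.log(result.output);
// Prints the negative-sentiment greeting plus the billing follow-up, an adjuster,
// and a "human touch" line. (The name comes out lowercased as "sam", since
// analyzeInput lowercases the input before extracting it.)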

When I saw the code it wrote, I immediately thought of one of the pitfalls of letting an LLM iterate on itself:

Emergent Complexity:

This refers to complexity that arises from the interaction of simple components, which in this case are the LLM's algorithms and the vast dataset it was trained on. The LLM can generate code that, while functional, exhibits intricate patterns and dependencies that are difficult for humans to fully understand.

So if we can dial this back a little and get it to write cleaner, simpler code, we might be on the right track.
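
One way to dial it back would be to fold a simplicity penalty into the score the evolver chases, so bigger, more tangled graphs lose points. A rough sketch (the weights are arbitrary):

// Hypothetical fitness tweak: reward sentiment near the target, penalize code size.
function fitness(sentimentScore: number, targetScore: number, code: string): number {
  const sentimentFit = -Math.abs(sentimentScore - targetScore);
  const nodeCount = (code.match(/addNode\(/g) ?? []).length;
  const lineCount = code.split("\n").length;
  return sentimentFit - 0.5 * nodeCount - 0.01 * lineCount;
}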

Anyway, this was just an experiment; I mainly wanted to try out LangGraph's new Command feature.

Please let me know what you think in the comments.
