Avi Chawla for Winglang

Building a Cloud-Native Spreadsheet Copilot with Winglang and LangChain

The interest in building “AI copilots” is higher than ever before.

Note: A copilot is an AI-powered application where users can ask questions in natural language and get responses specific to their context. The context could be the details in their dashboard, the code in their editor, and so on.

Almost every technology company wants to integrate AI into their products.

As a result, it’s become essential to understand the workflow of how these applications are built and what common technologies power them.

Thus, in this article, we’ll build an AI-powered spreadsheet copilot with user-interactive capabilities.

Here’s the tool stack we’ll use:

  • We will build the UI of our spreadsheet chatbot application using Next.js, a React framework.
  • We will use Winglang for cloud capabilities.
  • We will use LangChain to interact with LLM providers.

Let’s begin!


Application Workflow with LangChain + Winglang

[Diagram: high-level application workflow with LangChain and Winglang]

The above diagram provides a high-level overview of the application workflow:

  • The user will enter a prompt in the chatbot interface, something like:
  1. Add a row for Sara, whose age is 29, who works in sales.
  2. Add a row for Arya, whose age is 25, who works in marketing.
  • The prompt will be sent through LangChain to an LLM provider, say OpenAI’s GPT-3.5 or GPT-4, where the model will perform function calling.
  • The model will return a JSON response, which will be displayed to the user on the app’s frontend (an illustrative example follows this list).
  • Moreover, the response object from the LLM will also be stored in a cloud bucket defined with Wing, so that we are not tied to a single cloud provider and can migrate to any of them whenever needed.
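For illustration, the function-calling response for the first prompt might look something like this. The function name and argument fields here are assumptions for the sketch; the article doesn’t show the exact schema:

```json
{
  "function": "addRow",
  "arguments": {
    "name": "Sara",
    "age": 29,
    "department": "sales"
  }
}
```

The frontend can parse an object like this and apply the corresponding operation to the spreadsheet.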

Done!


LangChain Integration Walkthrough

Now, let’s look at the implementation, where we expose LangChain runnables and chains as REST APIs.

To get started, import RemoteRunnable from langchain/runnables/remote.

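As a minimal sketch (the endpoint URL below is a hypothetical placeholder, not from the article), the import and a client-side remote runnable look like this:

```typescript
import { RemoteRunnable } from "langchain/runnables/remote";

// Bind a RemoteRunnable to a chain that is served as a REST endpoint.
// The URL is a placeholder for wherever the chain is deployed.
const remoteChain = new RemoteRunnable({
  url: "https://example.com/spreadsheet-chain",
});

// Invoking it sends the input to the server-side chain over HTTP.
const result = await remoteChain.invoke({ input: "Add a row for Sara" });
```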

Next, we define the remoteChainToAction function, which lets us deploy our LLM chains built with LangChain as REST APIs on the server side and invoke them as remote runnables on the client side.


Here’s a breakdown of remoteChainToAction (a hedged sketch of the full function follows below):

  • The function accepts a LangChain chain object as a parameter.
  • It creates a remote runnable from the chain’s URL along with a handler function.
  • It infers and sets the action’s parameters if they are not provided.
  • Finally, it converts the chain object into a backend action object that will call the LangChain service with the provided input.

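Here’s a hedged TypeScript sketch of what remoteChainToAction could look like, based on the breakdown above. The type names, defaults, and parameter-inference details are assumptions rather than the article’s exact code:

```typescript
import { RemoteRunnable } from "langchain/runnables/remote";

// Hypothetical shapes, inferred from the prose above.
interface RemoteChain {
  url: string;                          // where the chain is served as a REST API
  name?: string;
  description?: string;
  parameters?: Record<string, unknown>; // schema-like description of the inputs
}

interface BackendAction {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  handler: (input: Record<string, unknown>) => Promise<unknown>;
}

function remoteChainToAction(chain: RemoteChain): BackendAction {
  // Create a runnable bound to the chain's URL; its invoke() is our handler.
  const runnable = new RemoteRunnable({ url: chain.url });

  // Infer and set parameters if they are not provided.
  const parameters = chain.parameters ?? {
    input: { type: "string", description: "Natural-language instruction" },
  };

  // Convert the chain into a backend action object that calls the
  // remote LangChain service with the provided input.
  return {
    name: chain.name ?? "remote-chain",
    description: chain.description ?? "Invokes a chain deployed as a REST API",
    parameters,
    handler: (input) => runnable.invoke(input),
  };
}
```

The key design choice is that the handler simply delegates to RemoteRunnable.invoke, so the backend action stays a thin client of the deployed chain.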

Almost done!

Finally, we use Winglang to store the output received from the LLM in a bucket, as sketched below.


For starters, Winglang is an open-source programming language that abstracts away the differences between cloud providers by defining a standard infrastructural API.

In other words, Winglang provides an API that’s common to all providers.

For instance, when defining a data storage bucket in Winglang, there is no “S3 bucket,” “Blob Storage,” or “Google Cloud Storage” to which we specifically tailor the application code.

Instead, there’s a common abstraction called “Bucket,” and we implement the application code specific to this “Bucket” class. This can then be compiled and deployed to any cloud provider.

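As a minimal Wing sketch of this pattern (the key name and payload are illustrative placeholders, not the article’s code):

```wing
bring cloud;

// A cloud-agnostic bucket: depending on the compile target, this becomes
// an S3 bucket, an Azure Blob Storage container, a GCS bucket, etc.
let bucket = new cloud.Bucket();

// An inflight (runtime) function that persists an LLM response in the bucket.
new cloud.Function(inflight () => {
  bucket.put("llm-response.json", "{\"function\": \"addRow\"}");
});
```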

The same abstraction is available for all cloud-specific entities like functions, queues, compute units, etc.

After developing the app in Winglang, the entire application code and the infrastructure we defined can be compiled for any cloud provider with a one-line command, and Winglang takes care of all backend procedures.

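For instance, assuming the program’s entrypoint is main.w, targeting AWS via Terraform is a single command (tf-azure and tf-gcp are analogous targets):

```bash
wing compile --target tf-aws main.w
```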

This way, one can focus on the application logic and on how the app interacts with infrastructure resources, rather than on the specifics of a cloud platform.

And we are done with the high-level implementation details.

To recap, we converted a LangChain chain object into an action object that our backend can use to integrate a remote LangChain process, which in turn invokes a remote service to process input data and return the result. Finally, we stored the LLM’s response in a cloud bucket defined in Winglang.


Spreadsheet Copilot Demo

In this section, let’s do a demo of the AI copilot.

Here’s our spreadsheet chatbot, built with the Next.js framework. By entering prompts in the chat, we can perform multiple operations, such as adding a row, deleting a row, multiplying values, or adding a month to a date, much like in Excel.

Let’s enter this prompt: “Add a row for Sara, whose age is 29, who works in sales.”

[Screenshot: the spreadsheet with a new row added for Sara]

We get the desired result within a few seconds.

Next, let’s enter this prompt: “Delete the row with Sara’s information.”

[Screenshot: the spreadsheet after Sara’s row has been deleted]

As depicted above, the row with Sara’s information has been deleted.

That’s it!

This is how we can build our AI-powered applications on top of LLMs with the above underlying architecture.


Conclusion

With that, we come to the end of this article, where we learned how to build AI copilots powered by Winglang and LangChain.

To recap:

  • We built the UI of our spreadsheet chatbot application using Next.js, a React framework.
  • We used Winglang for cloud capabilities.
  • We used LangChain to interact with LLM providers.

…And all of this with just a few lines of code.

Stay tuned for more insights and developments in AI and cloud computing.

Thanks for reading!
