GrahamTheDev

window.ai - running AI LOCALLY from DevTools! 🤯

On-device AI in the browser is here - kinda.

It is currently in Chrome Canary, which means it will be here soon(ish).

In this article I will show you how to get it running on your device, so that you can have a play with it and see what use cases you can think of.

And I will just say this: Running window.ai from DevTools without an internet connection is pretty fun, even if the results are "meh"!

Setup

Getting up and running only takes 5 minutes!

1. Download Chrome Canary

Go to the Chrome Canary site and download Chrome Canary.

2. Enable "Prompt API for Gemini Nano"

Open Chrome Canary and type "chrome://flags/" in the URL bar and press enter.

Then, in the search box at the top, type "prompt API".

You should see "Prompt API for Gemini Nano" as the only option.

(Screenshot: the chrome://flags page with "prompt API" in the search box and the "Prompt API for Gemini Nano" flag highlighted.)

Switch that to "enabled".

3. Enable "Enables optimization guide on device"

While you are on the "chrome://flags" page, you need to enable a second item.

Remove your previous search and search for "optimization guide on".

You should see "Enables optimization guide on device" as your only option.

This time you want to enable it, but with the "Enabled ByPassPerfRequirement" option.

4. Install Gemini Nano

Last step, we need to install Gemini Nano on our device.

Gemini Nano is actually part of a bigger tool, but we don't need to worry about that beyond knowing which component to download.

Warning: This file is 1.5 GB. It doesn't tell you that anywhere, so if you have a slow connection, pay per GB of data, or are low on storage space, you may not want to do this!

Head to: "chrome://components/".

Hit Ctrl + f and search for "Optimization Guide".

You will see an item "Optimization Guide On Device Model".

Click "Check for Update" and it will install the file.

(Screenshot: the chrome://components page with "Optimization Guide" in the search box, showing the "Optimization Guide On Device Model" component.)

5. DONE!

Restart Chrome Canary for the changes to take effect.

And that is it! Now we can move on to using AI locally.

Using window.ai

If everything worked as expected then you should now be able to open DevTools (F12), go to the "Console" tab and start playing!

The easiest way to check is to type window. into the console and see if ai comes up as an option.

If not, go back and check you didn't miss a step!
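If you want something a little more robust than eyeballing the autocomplete, a tiny feature-detection helper works too. This is just an illustrative sketch (hasPromptApi is my name, not part of any API), and bear in mind the experimental surface may change between Canary builds:

```javascript
// Hedged sketch: feature-detect the experimental window.ai Prompt API
// before trying to use it. Checks each level of the chain explicitly.
function hasPromptApi(globalObject) {
  // true only if the object exposes ai.createTextSession as a function
  return Boolean(
    globalObject &&
    globalObject.ai &&
    typeof globalObject.ai.createTextSession === "function"
  );
}

// In the DevTools console you would call: hasPromptApi(window)
```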

Creating our first session

Just one command is needed to start a session with our AI model.

const chatSession = await window.ai.createTextSession()

Tip: Don't forget the await. I did originally 🤦🏼‍♂️!

There is also an option of createGenericSession(), but I haven't worked out what the difference is yet!

Now we can use that session to ask questions.

Sending a prompt

For this we just use the .prompt function on our chatSession object!

const result = await chatSession.prompt("hi, what is your name?")

Again, everything is async, so don't forget the await (I didn't make the same mistake twice... honest!).

Depending on the complexity of your prompt and your hardware, this can take anywhere from a few milliseconds to several seconds, but you should eventually see undefined in the console once it is done (that is just the console echoing the result of the assignment; the response itself is stored in result).
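Since the timing varies so much by hardware, you can wrap the prompt call with a quick timer to see exactly how long yours takes. This is an illustrative helper only (timedPrompt and the session shape are my assumptions, not part of the API):

```javascript
// Hedged sketch: time how long a prompt takes on your hardware.
// Assumes a session object with an async .prompt(text) method,
// as created by window.ai.createTextSession() in the article.
async function timedPrompt(session, text) {
  const start = Date.now();
  const answer = await session.prompt(text);
  // log the elapsed wall-clock time alongside returning the answer
  console.log(`prompt took ${Date.now() - start} ms`);
  return answer;
}
```

Usage in the console would look like: `await timedPrompt(chatSession, "hi, what is your name?")`.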

Getting the response

Now we just have to console.log the result!

console.log(result)

And we get:

  As a large language model, I do not have a name.

Pretty underwhelming, but at least it works!

Quick and Dirty Reusable example

Obviously you don't want to have to keep sending multiple commands, so you can copy and paste this function into your console to make things easier:

  async function askLocalGPT(promptText) {
    // create a single shared session on first use
    if (!window.chatSession) {
      console.log("starting chat session");
      window.chatSession = await window.ai.createTextSession();
      console.log("chat session created");
    }

    // log the model's reply and return it for further use
    const response = await window.chatSession.prompt(promptText);
    console.log(response);
    return response;
  }

And now you can just type askLocalGPT("prompt text") into your console.

I personally have that saved as a snippet in Sources > snippets for quick access when I want to play with it.

Have fun!

Is it any good?

No

Really? It isn't any good?

I mean, it depends on the measuring stick you are using.

If you are trying to compare it to Claude or ChatGPT, it is terrible.

However, for local play and experimentation, it is awesome!

Also bear in mind that each time you ask a question, it does not automatically have memory of what you asked previously.

So if you want to have a conversation where the model "remembers" what was said previously you need to feed previous questions and answers in with your new question.
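One way to do that is to stitch earlier turns into each new prompt yourself. This is a minimal sketch of the idea, not an API feature: buildPromptWithHistory, the history shape, and the "User:"/"Assistant:" labels are all my illustrative choices.

```javascript
// Hedged sketch: the model has no built-in memory, so we manually prepend
// past turns to each new question. history is an array of
// { question, answer } pairs from earlier exchanges.
function buildPromptWithHistory(history, newQuestion) {
  const past = history
    .map((turn) => `User: ${turn.question}\nAssistant: ${turn.answer}`)
    .join("\n");
  // append the new question and leave the assistant turn open
  const tail = `User: ${newQuestion}\nAssistant:`;
  return past ? `${past}\n${tail}` : tail;
}
```

You would then feed the result to `chatSession.prompt(...)`, remembering that the whole transcript counts against the context window.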

Is it fun to play with?

Yes.

The fact I can get it to work locally in my browser is pretty cool. Plus it can do simple coding questions etc.

And the beauty is: no bills! You can use the full 32k context window as often as you want without worrying about racking up charges by mistake.

Oh and while I said it isn't very good, it can do summaries quite well:

  askLocalGPT("can you summarise this HTML " +
    "for me please and explain what the page is " +
    "about etc, please return a plain text response " +
    "with the summary and nothing else: " +
    document.querySelector('article').textContent)

And with a little playing it outputs:

This article explains how to run window.ai locally in your browser using Google's large language model (LGBL).

It describes the necessary steps, including enabling the "Prompt API for Gemini Nano" and "Optimization Guide on Device Model" flags in Google Chrome Canary, installing Gemini Nano, and restarting Chrome Canary.

The article then demonstrates how to use window.ai by creating a text session, prompting the AI model, and receiving the response. It concludes by discussing the possibilities and future enhancements of window.ai.
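One caveat with feeding whole pages in like that: a long article can exceed the model's context window. Since no tokenizer is exposed, a crude character budget is one way to guard against it. This is a rough sketch, and the maxChars default is an illustrative guess, not an API limit:

```javascript
// Hedged sketch: a crude character-based guard so a huge page doesn't blow
// past the context window. Real limits are in tokens, not characters, so
// treat maxChars as a rough, conservative budget.
function truncateForContext(text, maxChars = 8000) {
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```

You could then wrap the page text before prompting, e.g. `truncateForContext(document.querySelector('article').textContent)`.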

What will you build?

I have only just scratched the surface of the new API, but I can see it being really handy for creating "custom GPTs" for your own use for now.

In the future once AI is available in-browser for everybody, who knows what amazing things will be created.

Final thought

While I find this, and the possibilities it opens up, exciting as a developer, there is a large part of me that dislikes, or is at least wary of, it.

People are already throwing "AI" into everything for no reason. Having it run locally on people's machines will only encourage them to use it for even stupider things!

Plus there are probably about 50 other things around security, remote AI farms, etc. etc. that are likely to make me cry in the future the more I think about it.

Top comments (29)

Best Codes

The Opera Developer browser has a much better setup for this than Chrome. You can use literally over 100 AI models locally, with no internet, through Aria, Opera's built-in AI.

GPT4ALL.io and Ollama are great for running models from Hugging Face locally on Linux, Windows, or macOS.

Nice article!

GrahamTheDev

That is cool to know, I will have to check that out! 💗

Best Codes

😁 Let me know what you think!

Rasmus Schultz

Also bear in mind that each time you ask a question, it does not automatically have memory of what you asked previously.

There is an API for conversation/chat as well.

It doesn't look like there's any documentation for the JS API yet, though. I'm not sure this is open source, so we might not even be able to reference the C++ code.

Based on the announcement from Google, they want us to use the API wrapper for their hosted inference, which has a built-in adapter for the JS API, with hosted inference as a fallback.

I'd love a reply if you can find the docs or code?

GrahamTheDev

I had a good look, but didn't find anything when I was writing this.

I imagine documentation will come once it all starts to filter down towards production.

If I do find anything I will let you know! 💗

Marius Bongarts

Does anyone know if it's certain that this will become a standard feature in Chrome browsers? Is it safe to start building with this API for the future?

I can't imagine all Chrome users downloading the large Gemini model. How does Chrome plan to handle this?

GrahamTheDev

I certainly wouldn't recommend building anything substantial with an API that has not been announced or documented in any meaningful way. It is likely to change, evolve, features get deprecated, change model etc. etc. 💗

Devarshi Shimpi

Time to use it to summarise this article haha

GrahamTheDev

Already done that, so that tells me you didn't read it all! hahahahah. 💗

Bob James

Is it any good?
No
Is it fun to play with?
Yes
laugh out loud on this, the story of my life 🤣🤣

GrahamTheDev

hehe, well I can't go writing articles on anything that is actually useful, it would ruin my reputation! 🤷🏼‍♂️🤣💗

Bob James

i like fun things 💛💛

Henry Rutté

Hi, I'm on chrome://components/ but "Optimization Guide On Device Model" doesn't appear. Do you know why this could be happening?

GrahamTheDev

Check your Canary Chrome is up to date, other than that I am not sure I am afraid. 💗

Henry Rutté • Edited

(Two screenshots attached.)

EhsanKey_

Hi, did you find a solution?
I have this problem too

Lavi Yatziv

This is pretty neat. This could open a lot of doors for accessibility and language translation.

Loris Galler

Is this still working in the current version, 128? I tried everything but can't make it work.

GrahamTheDev

Yes still working.

Chrome version 128.0.6580.0 - up to date

Bill Baker

In the console in 128, I had to do something like this instead:

let session = undefined
window.ai.createTextSession().then(s => session = s)

Then I could do this:

await session.prompt("Say hello.")
GrahamTheDev

This is exactly the same, just in a more complicated way.

My guess is you missed an await the first time when creating the session constant. 💗

Irfannur Diah

Linux is not supported. haha

Collapse
 
grahamthedev profile image
GrahamTheDev

Oh Really? That is a shame. 💗

JEEVA D

Is this not supported on Ubuntu Linux?

GrahamTheDev

I am afraid I have no idea, sorry. If you have run the code above on Chrome Canary and it does not function, then for some reason it would seem not.