Experiment: Playing a non-English visual novel through Google Translate's camera

I'm a big fan of a video game series called Danganronpa, a Japanese visual novel about Hope and Despair, in which a "robotic bear" forces students to kill one another.

The other day I came across a trailer for a game that hooked me right away because, as Reddit user u/Little-Big-Smoke commented,

Have to admit, that trailer indeed screams "Neon Danganronpa"

After doing some research, I learned that the game is called Zetsubou Prison (Despair Prison), is developed by Studio Wasabi, and is available for Android, iOS, and Windows, but only in Japanese and Chinese.

I desperately wanted to play it... and that's exactly what I did.

This post follows this structure:

  • How I actually played it
  • What I tried
  • Two ideas for automating the process
  • Some conclusions

How I played it

Being a Linux user (Kubuntu 18.04), I ended up downloading Steam and using Steam Play (Proton 5) to run the game on PC.

For some reason, the Steam release includes only the Chinese version.

Anyway, I used Google Translate's instant camera translation functionality (on my Android phone) to read the screen. (One could say I played "through the looking glass." #funny 🤦)

Google Translate does a surprisingly good job of translating Chinese to English.
A few translation mistakes were obvious, but I "mentally fixed" them from context.

Although Chinese and Japanese write names with the same characters (kanji), the pronunciations differ. This is why I would sometimes switch to Japanese to learn the characters' names and then switch back to Chinese to continue reading the dialogue.

And that's how I played the first chapter of Despair Prison... like a savage... and I enjoyed it.

Screenshots

[Screenshot] Precious bois, from left to right: Rui, Shiro, and Kisuke.
Shiro: "Huh? What did you recall/think of?"

[Screenshot] Shiro's introduction: "I'm Fuyutsuki Shiro! I look forward to your guidance/advice."
[Screenshot] Shiro's profile: He seems like a best boi but a little suspicious.
[Screenshot] Shiro enjoying the situation.

Tried: Screen translator apps

Of the screen translator apps I tried on Android, Tranit was the most promising and had the nicest UX.

Unfortunately, it didn't work in this game.

Since Tranit uses Android's accessibility features to read and translate the screen, I tried axe for Android to "debug" it, but axe couldn't detect anything either. I assume the game developers did the equivalent of drawing with the Canvas API instead of using native, semantic elements.
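
To make that guess concrete, here's a rough, web-flavoured sketch (the game's actual engine isn't public, so this is only an analogy): text painted with the Canvas API ends up as bare pixels, while a semantic element exposes its text to the accessibility tree that tools like Tranit and axe rely on.

// Hypothetical illustration: canvas-drawn text vs. a semantic element
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
// This renders pixels only; there is no text node for accessibility
// services (or screen translators like Tranit) to find
ctx.fillText('I look forward to your guidance!', 10, 50);

// By contrast, a semantic element exposes its text to the accessibility tree
const paragraph = document.createElement('p');
paragraph.textContent = 'I look forward to your guidance!';

document.body.append(canvas, paragraph);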


Idea: Doppelganger.js

Why not recreate the "instant camera translation" thingie on the desktop?

This should be doable using the Screen Capture API, the Google Vision and Translate APIs, Electron, and maybe RobotJS.

This would work best for Japanese and Chinese visual novels and text-focused JRPGs with sparse animations and non-animated sprites.



/**
 * Doppelganger.js
 * As in those doppelgangers we find in Despair Prison
 * Pseudo-code for a program that adds a live translation overlay for the desktop
 */

const { app, BrowserWindow } = require('electron');
const robot = require('robotjs');

const vision = require('@google-cloud/vision');
const { Translate } = require('@google-cloud/translate').v2;

const visionClient = new vision.ImageAnnotatorClient();
const translateClient = new Translate();

const toEnglish = async (text) => {
  const [translation] = await translateClient.translate(text, 'en');
  return translation;
};

app.whenReady().then(async () => {
  // The overlay should support input forwarding (i.e. mouse and keyboard events),
  // either manually (listen for these events and replay them in the original window using RobotJS)
  // or hopefully automatically (see https://www.electronjs.org/docs/api/frameless-window )
  const overlayWindow = new BrowserWindow({ transparent: true, frame: false });

  // Actually, only update the overlay if there is a change in the capture stream;
  // this is better for performance and respects Google's usage limits
  while (true) {
    // Or maybe use https://developer.mozilla.org/en-US/docs/Web/API/Screen_Capture_API
    const capture = robot.screen.capture(); // a raw bitmap; encode it (e.g. to PNG) before sending
    const [result] = await visionClient.textDetection({
      image: { content: encodeAsPng(capture) }, // encodeAsPng() left as pseudo-code
    });

    const translatedAnnotations = await Promise.all(
      (result.textAnnotations || []).map(async (annotation) => ({
        ...annotation,
        description: await toEnglish(annotation.description),
      }))
    );

    // Update the "doppelganger" window's DOM with text elements that overlay the original texts
    // (with the help of each annotation's metadata, like its bounding box)
    updateOverlay(overlayWindow, translatedAnnotations);
  }
});



Idea: Emulate the Google Translate app

Trick the Google Translate app into translating the screen for us:

  1. Configure the operating system to allow creating "virtual video devices" (the v4l2loopback project on GitHub).

  2. FFmpeg: Record a specific window (question on Stack Overflow).

  3. FFmpeg: Use the desktop (or just that specific window) as a "fake webcam" (answer on Super User). Steps 1-3 are sketched as shell commands after this list.

  4. Android Studio: Set that "fake webcam" as the camera in the Android emulator (answer on Stack Overflow).

  5. Install Google Translate in the Android emulator (as a tablet or TV device, to have a large screen).

  6. Open the game and Android emulator windows side by side.

  7. Enjoy(?)
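
On Linux, steps 1 through 3 might look something like the following sketch (assuming an X11 desktop; the device number, capture offsets, and size are placeholders to adapt to your setup):

# Step 1: load v4l2loopback to create a virtual video device (/dev/video2 here)
sudo modprobe v4l2loopback video_nr=2 card_label="fake-webcam"

# Steps 2-3: grab the region of the desktop where the game window sits
# and feed it into the virtual device as a "fake webcam"
ffmpeg -f x11grab -video_size 1280x720 -framerate 30 -i :0.0+100,100 \
       -vf format=yuv420p -f v4l2 /dev/video2

For step 4, the emulator can then pick that device up as a webcam; selecting it as the back camera in the AVD settings corresponds to a line like hw.camera.back = webcam0 in the AVD's config.ini.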


Conclusion

So what did we learn?

  • Designing with accessibility and inclusivity in mind helps everyone. (Tranit issue)
  • "There's more than one way to skin a cat."
  • Solving problems is as fun as playing the actual thing.

I probably won't implement these solutions/ideas, but I might update this post and add some diagrams to better explain them, just because.

Thank you for reading.
