Yuiko Koyanagi

Using OpenCV: Developed a web app to convert images to manga style

Hey guys 👋👋👋

I have developed a web app that converts images to manga style.

website: https://manga.art-creator.net/
github:
frontend: https://github.com/yuikoito/manga-creator-frontend
backend: https://github.com/yuikoito/manga-backend

Like this!

※ This article is the ninth week of my challenge to write at least one article every week.

Usage

Visit https://manga.art-creator.net/, then upload whatever image you want to convert to manga style.

It takes just four steps.

  • Select an image

  • Choose effects you want

Once you have selected an image, you are free to choose from a variety of effects.
You can put the effect on the background behind the person or on top of the image.
By default, no effect is selected, so you can change it if necessary.

If you want to add an effect to the background, you need to choose an image that has a clear boundary between the person and the background. This is because I am not using machine learning to detect people, but rather implementing contour extraction and replacing the background.

  • Click the Convert button

  • Then, wait

After selecting the effect, click the Convert button; the image will be converted after a few seconds.
Since I don't use machine learning this time, the conversion is quite fast.

You don't even need to log in, so have a look and enjoy freely!

Composition

It consists of the following:

  • Frontend: Nuxt.js + Tailwind CSS
  • Backend: Python
  • API: AWS
  • Hosting: Vercel

I've used the same configuration for almost all of the applications I've released so far.

For how to build the API, you can see this article.
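
For reference, here is a minimal sketch of what an AWS Lambda-style handler for this kind of API could look like. This is not the actual backend code: the handler signature, the base64 request format, the field names, and the to_manga placeholder are all assumptions; the real setup follows the linked article and applies the effect filters shown below.

import base64
import json

import cv2
import numpy as np


def to_manga(src):
    # Placeholder conversion: grayscale + binarization.
    # The real app applies the effect filters described below.
    gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    return binary


def handler(event, context):
    # Decode the uploaded image from the request body
    body = json.loads(event["body"])
    img_bytes = base64.b64decode(body["image"])
    src = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_COLOR)

    # Convert the image and return the result as base64
    manga = to_manga(src)
    _, buf = cv2.imencode(".jpg", manga)
    return {
        "statusCode": 200,
        "body": json.dumps({"image": base64.b64encode(buf.tobytes()).decode()}),
    }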

How to add effects

For the background effect, the person's contour is extracted and used as a mask, and the effect is then combined with the image.

import cv2
import numpy as np

# Add a background effect
def back_filter(src, manga, effect, th):
    # Invert the input, then convert it and the effect image to grayscale
    img = cv2.bitwise_not(src)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    effect = cv2.cvtColor(effect, cv2.COLOR_BGR2GRAY)

    # Resize the screen tone image to the same size as the input image.
    effect = cv2.resize(effect,(img_gray.shape[1],img_gray.shape[0]))

    # binarization
    ret, img_binary = cv2.threshold(img_gray, th, 255,cv2.THRESH_BINARY)

    # Contour extraction
    contours, _ = cv2.findContours(img_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Get the contour with the largest area
    contour = max(contours, key=lambda x: cv2.contourArea(x))

    # Fill the largest contour to build a mask of the person
    mask = np.zeros_like(img_binary)
    cv2.drawContours(mask, [contour], -1, color=255, thickness=-1)

    # Put the effect on the background and keep the manga image on the person
    effect = np.where(mask == 0, effect, manga)

    return effect
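
For example, back_filter can be called like this. The file names are just placeholders, manga is assumed to be a grayscale manga-style version of the same photo at the same size, and the threshold value depends on the image:

# Example usage of back_filter (file names are placeholders)
src = cv2.imread("portrait.jpg")                                # original photo (BGR)
manga = cv2.imread("portrait_manga.jpg", cv2.IMREAD_GRAYSCALE)  # manga-style version, same size
effect = cv2.imread("speed_lines.jpg")                          # background screen tone (BGR)

result = back_filter(src, manga, effect, th=120)
cv2.imwrite("result.jpg", result)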

The effect on top of the image is much simpler: it is just a composite.

# front effect
def front_filter(manga, effect):

    effect = cv2.resize(effect,(manga.shape[1], manga.shape[0]))
    # Use the alpha channel of the effect PNG as a mask
    mask = effect[:,:,3]

    # Grayscale the effect
    effect = cv2.cvtColor(effect, cv2.COLOR_BGR2GRAY)

    # Keep the manga image where the effect is transparent, use the effect elsewhere
    manga = np.where(mask == 0, manga, effect)
    return manga

For the above to work, the background effects are saved as JPG images, and the front effects as PNG images with a transparent background.
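
One point worth noting if you try this locally: since front_filter reads the mask from effect[:,:,3], the PNG has to be loaded with its alpha channel intact, e.g. with cv2.IMREAD_UNCHANGED. The file names below are placeholders:

manga = cv2.imread("portrait_manga.jpg", cv2.IMREAD_GRAYSCALE)
# IMREAD_UNCHANGED keeps the PNG's alpha channel, which is used as the mask
effect = cv2.imread("focus_lines.png", cv2.IMREAD_UNCHANGED)

result = front_filter(manga, effect)
cv2.imwrite("result.jpg", result)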

That's it!

Thanks for reading.
This is a bit of a joke app, but I would be very happy if you enjoy it!

๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ

Please send me a message if you need anything.

yuiko.dev@gmail.com
https://twitter.com/yui_active

๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ๐ŸŽ
