
Clarifai Team for Clarifai

Originally published at blog.clarifai.com

Introducing Landscape Quality, Portrait Quality, and Textures & Patterns Visual Recognition Models

Being in the business of computer vision, we deal a lot with photos – good and bad. But what makes a photo “good” vs. “bad” – the composition? The lighting? The way it makes you feel? We decided to try to distill those elements into an algorithm that helps photographers and media managers sort through large volumes of content to find the highest quality images and video.

Show me the models

Being in the business of computer vision, we deal a lot with photos. These photos can range from selfies taken with a cell phone camera, to computer-generated images created by designers, to professional photographs shot on high-end DSLR cameras. Our broad range of models helps computers understand “what’s in an image” and “where an object is located in an image”. For the first time, we’re releasing models that help computers understand image quality, or “is this image good or bad?” We’re happy to release the Landscape Quality Model and Portrait Quality Model into Beta; each assesses the quality of an image and responds with a confidence level for whether the image is “high quality” or “low quality”.
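To make that concrete, here is a minimal sketch of calling one of the quality models through Clarifai’s v2 predict endpoint. The API key, the `landscape-quality` model ID, and the image URL below are placeholders rather than the real identifiers, and the exact concept names in the response may differ:

```python
import requests

# Placeholder credentials and model ID -- swap in your own values.
API_KEY = "YOUR_CLARIFAI_API_KEY"
MODEL_ID = "landscape-quality"  # hypothetical model ID

def quality_scores(image_url):
    """Return the model's confidence per concept, e.g. high quality vs. low quality."""
    resp = requests.post(
        f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs",
        headers={"Authorization": f"Key {API_KEY}"},
        json={"inputs": [{"data": {"image": {"url": image_url}}}]},
    )
    resp.raise_for_status()
    concepts = resp.json()["outputs"][0]["data"]["concepts"]
    return {c["name"]: c["value"] for c in concepts}

print(quality_scores("https://example.com/landscape.jpg"))
```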

Good quality photo attributes:

  • Good lighting
  • Sharp and in focus
  • If retouching is present, it is not obvious (no completely airbrushed skin)
  • Not too much grain/noise (unless it’s the artist’s intention)

Poor quality photo attributes:

  • Severe chromatic aberration
  • Red eyes
  • Extremely backlit
  • Unnatural vignetting, often digitally added

“With our computer vision capabilities, we want photographers to focus on what they do best: capture amazing moments.”

Professional photographers and even photography enthusiasts can take thousands (if not tens of thousands) of photos on a daily basis. They then go through each photo and decide whether it is worth post-processing. Assuming a photographer takes 5,000 photos in a day and spends 10 seconds deciding whether each one should be post-processed, this filtration process could take over 13 hours for a single day’s worth of photos. With our computer vision capabilities, we want photographers to focus on what they do best: capture amazing moments.

Speaking from personal experience, I was overwhelmed by the number of photos I had on my camera after my wildlife photography trip to Nairobi a few years ago. To this day, I still haven’t had the chance to go through every single image to separate the high-quality photos from the low-quality ones. Speaking with some professional fashion photographers, they spend hours manually going through every image they’ve captured during a runway show or a photoshoot. “Having computers make the initial pass at filtering would save me tens of hours on a weekly basis,” said Kimal Lloyd-Phillip, a photographer for Toronto Women’s and Men’s Fashion Week.

“Having computers make the initial pass at filtering would save me tens of hours on a weekly basis.” – Kimal Lloyd-Phillip, photographer for Toronto Women’s and Men’s Fashion Week

Our Developer Evangelism team hacked away at these models and created a tool that sorts the photos in a folder into two separate folders: good and bad.
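The tool itself isn’t included here, but a minimal sketch of the same idea, assuming a hypothetical model ID and a 0.5 confidence cutoff on the “high quality” concept, could look something like this:

```python
import base64
import os
import shutil

import requests

# Not the team's actual tool: a rough sketch with placeholder credentials,
# a hypothetical model ID, and an assumed 0.5 confidence cutoff.
API_KEY = "YOUR_CLARIFAI_API_KEY"
MODEL_ID = "portrait-quality"  # hypothetical model ID
THRESHOLD = 0.5

def sort_folder(src):
    """Copy each image in `src` into src/good or src/bad based on the model's score."""
    for sub in ("good", "bad"):
        os.makedirs(os.path.join(src, sub), exist_ok=True)
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        with open(path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        resp = requests.post(
            f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs",
            headers={"Authorization": f"Key {API_KEY}"},
            json={"inputs": [{"data": {"image": {"base64": image_b64}}}]},
        )
        concepts = resp.json()["outputs"][0]["data"]["concepts"]
        # Treat the "high quality" concept's confidence as the photo's score.
        score = next((c["value"] for c in concepts if "high" in c["name"]), 0.0)
        dest = "good" if score >= THRESHOLD else "bad"
        shutil.copy(path, os.path.join(src, dest, name))

sort_folder("photos")
```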

In addition to the Landscape and Portrait Quality Models, we are also introducing a Textures & Patterns Model that helps photographers and designers identify common textures (feathers, woodgrain), unique/fresh texture concepts (petrified wood, glacial ice), and overarching descriptive texture concepts (veined, metallic).

We have partnered with a global consumer apparel manufacturer to integrate the Textures & Patterns Model into their design workflow. They are using the model to inspire creativity amongst their designers and to develop their design ideas further. They indexed their design database (internal and external images) using our model; they then input new, raw design ideas into our platform and ran our Visual Search tool to explore and discover the various ways a design could evolve.
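Roughly, that index-then-search loop looks like the sketch below, using Clarifai’s v2 inputs and searches endpoints; the API key and image URLs are placeholders, and the exact query shape can vary by API version:

```python
import requests

API_KEY = "YOUR_CLARIFAI_API_KEY"
BASE = "https://api.clarifai.com/v2"
HEADERS = {"Authorization": f"Key {API_KEY}"}

def index_design(image_url):
    """Add an existing design image to the app so it becomes searchable."""
    resp = requests.post(
        f"{BASE}/inputs",
        headers=HEADERS,
        json={"inputs": [{"data": {"image": {"url": image_url}}}]},
    )
    resp.raise_for_status()
    return resp.json()

def similar_designs(query_image_url):
    """Search indexed designs for images visually similar to a new design idea."""
    query = {"ands": [{"output": {"input": {"data": {"image": {"url": query_image_url}}}}}]}
    resp = requests.post(f"{BASE}/searches", headers=HEADERS, json={"query": query})
    resp.raise_for_status()
    return resp.json().get("hits", [])

# Hypothetical URLs for illustration only.
index_design("https://example.com/designs/archive-001.jpg")
for hit in similar_designs("https://example.com/designs/new-sketch.jpg"):
    # Each hit is assumed to carry a similarity score and the matched input.
    print(hit.get("score"), hit.get("input", {}).get("id"))
```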

We’re excited to apply artificial intelligence to the arts and to provide tools that empower creators to be more effective at their work. We hope our broader customer base enjoys using the new set of models as much as our initial testers did. If you have any feedback or additional requests, feel free to shoot us a message at feedback@clarifai.com.

Show me the models!

