Foreground/Background Segmentation for Style Transfer
Project Description
We own the IP for many movies and TV shows and want to convert existing movies to cartoon style. One issue we encountered is background flickering, which distracts attention from the characters. To address this, we segment the foreground (characters) from the background, so that style transfer can be controlled separately for each.
We first run a human detector to obtain a bounding box for each character in the foreground, then apply the Segment Anything Model 2 (SAM2) to segment out all detected humans. I built an automatic pipeline that takes a video as input and outputs three corresponding videos: foreground, background, and mask.
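To make the last step concrete, here is a minimal sketch of how a single frame can be split into foreground, background, and mask images once a per-person mask is available. The detector and SAM2 calls are stubbed out: the mask is passed in directly, and the function name `split_frame` is illustrative rather than taken from the actual pipeline.

```python
import numpy as np

def split_frame(frame: np.ndarray, mask: np.ndarray):
    """Split one RGB frame into foreground, background, and mask images.

    frame: (H, W, 3) uint8 image.
    mask:  (H, W) boolean array, True where a person was segmented.
    In the real pipeline this mask would come from SAM2, prompted with
    the human detector's bounding boxes; here it is supplied directly.
    """
    m3 = mask[..., None]                    # (H, W, 1), broadcasts over RGB
    foreground = np.where(m3, frame, 0)     # characters on a black background
    background = np.where(m3, 0, frame)     # scene with characters blanked out
    mask_img = np.repeat(m3.astype(np.uint8) * 255, 3, axis=2)
    return foreground, background, mask_img
```

Running this per frame and writing the three streams out (for example with OpenCV's `VideoWriter`) produces the three output videos described above.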
Since the project is for commercial use, I demo only a few selected clips here.