Face editing using sketches and images
DeepFaceEditing allows users to intuitively edit a face image by manipulating its geometry and appearance, which requires a representation that disentangles these two properties. Given an input image (a), panel (b) shows two examples of using a reference image to edit the appearance of the input image. Alternatively, panel (c) first shows an example of replacing the geometry of the face with a sketch while keeping the appearance, followed by an example of using both a sketch and a reference image to edit the geometry and appearance of the input image.
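The editing modes above follow directly from the disentangled representation: each output recombines a geometry code from one source with an appearance code from another. The sketch below is a toy illustration of that recombination, not the paper's networks; the encoder and generator functions are hypothetical placeholders.

```python
import numpy as np

# Hypothetical stand-ins for the disentangled encoders and the
# combining generator (names and internals are illustrative only).
def encode_geometry(image_or_sketch):
    # Toy geometry code: per-pixel structure, colour discarded.
    return image_or_sketch.mean(axis=-1)          # shape (H, W)

def encode_appearance(image):
    # Toy appearance code: global colour statistics, structure discarded.
    return image.mean(axis=(0, 1))                # shape (3,)

def generate(geo_code, app_code):
    # Recombine the two codes into an output image (toy broadcast).
    return geo_code[..., None] + app_code         # shape (H, W, 3)

input_img = np.random.rand(64, 64, 3)
ref_img = np.random.rand(64, 64, 3)
sketch = np.random.rand(64, 64, 3)

# (b) appearance editing: input geometry + reference appearance.
out_b = generate(encode_geometry(input_img), encode_appearance(ref_img))
# (c) geometry editing: sketch geometry + input appearance.
out_c = generate(encode_geometry(sketch), encode_appearance(input_img))
```

Because the codes are independent, any geometry source (image or sketch) can be paired with any appearance source, which is exactly what makes the editing modes in (b) and (c) possible.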
Hair editing using text and images
HairManip builds on the hierarchical division of facial attributes in StyleCLIP, and further divides hair information into four categories: coarse, medium, fine, and extra fine. In our experience, coarse- and medium-level semantic information corresponds to hairstyle attributes, while fine- and extra-fine-level semantic information corresponds to hair colour attributes. We train separate hairstyle and hair colour editing sub-networks to handle these sources of information independently. Our approach supports hairstyle and hair colour editing individually or jointly, and the conditional inputs can be either images or text.
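The division of labour above can be sketched as routing edit directions to different bands of a layered latent code. The following is a minimal illustration, assuming a StyleGAN-style 18-layer W+ latent; the band boundaries and the `edit_hair` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical layer bands for an 18-layer W+ latent code (512-d per layer).
# Coarse/medium bands are routed to hairstyle edits; fine/extra-fine
# bands are routed to hair colour edits.
BANDS = {
    "coarse": slice(0, 4),        # hairstyle: overall shape
    "medium": slice(4, 8),        # hairstyle: structure
    "fine": slice(8, 13),         # hair colour
    "extra_fine": slice(13, 18),  # hair colour detail
}

def edit_hair(w_plus, style_dir=None, colour_dir=None, alpha=1.0):
    """Apply hairstyle and/or colour edit directions only to the
    latent bands they govern; all other bands are left untouched."""
    w = w_plus.copy()
    if style_dir is not None:
        for band in ("coarse", "medium"):
            w[BANDS[band]] += alpha * style_dir[BANDS[band]]
    if colour_dir is not None:
        for band in ("fine", "extra_fine"):
            w[BANDS[band]] += alpha * colour_dir[BANDS[band]]
    return w

# Toy demo: a hairstyle-only edit changes layers 0-7 but not 8-17,
# so hair colour is preserved by construction.
w = np.zeros((18, 512))
d_style = np.ones((18, 512))
w_edit = edit_hair(w, style_dir=d_style)
```

Routing the two sub-networks to disjoint latent bands is what lets hairstyle and hair colour be edited individually or jointly without interfering with each other.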
More details are given in:
- S.Y. Chen, F.L. Liu, Y.K. Lai, P.L. Rosin, C. Li, H. Fu, L. Gao,
"DeepFaceEditing: Deep Face Generation and Editing with Disentangled Geometry and Appearance Control",
ACM Transactions on Graphics, vol. 40, no. 4, art. 90, pp. 1–15, 2021.
Post-print | DOI: 10.1145/3450626.3459760
- H. Zhao, L. Zhang, P.L. Rosin, Y.K. Lai, Y. Wang,
"HairManip: High Quality Hair Manipulation via Hair Element Disentangling",
Pattern Recognition, vol. 147, art. 110132, 2024.
Post-print | DOI: 10.1016/j.patcog.2023.110132