
Arbitrary Font Generation by Encoder Learning of Disentangled Features


Bibliographic Details

Main Authors: Lee, Jeong-Sik, Baek, Rock-Hyun, Choi, Hyun-Chul

Format: Online Article Text

Language: English

Published: MDPI 2022

Subjects:

Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8950682/
https://www.ncbi.nlm.nih.gov/pubmed/35336547
http://dx.doi.org/10.3390/s22062374
_version_ 1784675201248657408
author Lee, Jeong-Sik
Baek, Rock-Hyun
Choi, Hyun-Chul
author_facet Lee, Jeong-Sik
Baek, Rock-Hyun
Choi, Hyun-Chul
author_sort Lee, Jeong-Sik
collection PubMed
description Making a new font requires graphical designs for all base characters, and this design process consumes a great deal of time and human resources. For languages with a large number of consonant-vowel combinations in particular, designing every combination independently is a heavy burden. Automatic font generation methods have been proposed to reduce this labor-intensive design problem. Most of these methods are GAN-based approaches, and they are limited to generating the fonts they were trained on. Some previous methods used two encoders, one for content and one for style, but their disentanglement of content and style is not effective enough to generate arbitrary fonts. Arbitrary font generation is a challenging task because each font image carries both text content and font style, and learning the two separately from such images is very difficult. In this paper, we propose a new automatic font generation method to solve this disentanglement problem. First, we use two stacked inputs: images with the same text but different font styles as the content input, and images with the same font style but different texts as the style input. Second, we propose new consistency losses that force any combination of encoded features of the stacked inputs to have the same values. In our experiments, we showed that, by separating the content and style encoders, our method extracts consistent features of text content and font style, and that this works well for generating unseen font designs from a small number of human-designed reference font images. Compared with previous methods, the font designs generated by our method showed better quality, both qualitatively and quantitatively, for Korean, Chinese, and English characters, e.g., a 17.84 lower FID on unseen fonts.
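The consistency losses described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual implementation: the function name, feature shapes, and the specific choice of mean-squared deviation from the batch mean are assumptions. The idea is that encodings of a stack of inputs sharing the same text content (or the same font style) should all agree, so any spread among them is penalized.

```python
import numpy as np

def consistency_loss(features: np.ndarray) -> float:
    """Penalize disagreement among feature vectors that should be identical.

    features: array of shape (k, d), holding k encodings of stacked inputs
    that share the same text content (for the content encoder) or the same
    font style (for the style encoder).
    """
    consensus = features.mean(axis=0, keepdims=True)   # (1, d) average feature
    return float(np.mean((features - consensus) ** 2))  # MSE to the average

# Toy check: identical encodings incur zero loss; divergent ones do not.
same = np.ones((3, 4))
diff = np.stack([np.zeros(4), np.ones(4), 2 * np.ones(4)])
print(consistency_loss(same))  # → 0.0
print(consistency_loss(diff))  # positive
```

In training, a loss of this kind would be added to the usual reconstruction or adversarial objectives, pushing each encoder to discard the factor it is not responsible for.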
format Online
Article
Text
id pubmed-8950682
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8950682 2022-03-26 Arbitrary Font Generation by Encoder Learning of Disentangled Features Lee, Jeong-Sik Baek, Rock-Hyun Choi, Hyun-Chul Sensors (Basel) Article MDPI 2022-03-19 /pmc/articles/PMC8950682/ /pubmed/35336547 http://dx.doi.org/10.3390/s22062374 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Lee, Jeong-Sik
Baek, Rock-Hyun
Choi, Hyun-Chul
Arbitrary Font Generation by Encoder Learning of Disentangled Features
title Arbitrary Font Generation by Encoder Learning of Disentangled Features
title_full Arbitrary Font Generation by Encoder Learning of Disentangled Features
title_fullStr Arbitrary Font Generation by Encoder Learning of Disentangled Features
title_full_unstemmed Arbitrary Font Generation by Encoder Learning of Disentangled Features
title_short Arbitrary Font Generation by Encoder Learning of Disentangled Features
title_sort arbitrary font generation by encoder learning of disentangled features
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8950682/
https://www.ncbi.nlm.nih.gov/pubmed/35336547
http://dx.doi.org/10.3390/s22062374
work_keys_str_mv AT leejeongsik arbitraryfontgenerationbyencoderlearningofdisentangledfeatures
AT baekrockhyun arbitraryfontgenerationbyencoderlearningofdisentangledfeatures
AT choihyunchul arbitraryfontgenerationbyencoderlearningofdisentangledfeatures