This is the Tensorflow implementation of the SKFont: Skeleton-Driven Korean Font Synthesis with Conditional Deep Adversarial Networks.
Will be added soon
- Windows
- CPU or NVIDIA GPU + CUDA cuDNN
- python 3.6.8
- tensorflow-gpu 1.13.1
- pillow 6.1.0
- conda create --name tutorial-TF python=3.6.8
- conda activate tutorial-TF (or activate tutorial-TF)
- conda install -c anaconda tensorflow-gpu=1.13.1
- conda env update --file tools.yml
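A quick way to confirm the environment is set up correctly (this snippet is only a convenience check, not part of the repository):

```python
# Prints the installed versions and whether TensorFlow can see a GPU.
import PIL
import tensorflow as tf

print("TensorFlow:", tf.__version__)                 # expect 1.13.1
print("Pillow:", PIL.__version__)                    # expect 6.1.0
print("GPU available:", tf.test.is_gpu_available())  # False on CPU-only setups
```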
Our model consists of three sub-models, namely F2F, F2S, and S2F. For each sub-model we have to prepare a paired dataset, i.e. a source-to-target font dataset, a target font to corresponding skeleton dataset, and a target skeleton to corresponding font dataset. To do this, place any Korean font in the src_font directory and N target fonts in the trg_font directory, then run the commands listed below for data preprocessing.
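To give an idea of what the image-generation scripts listed next produce, here is a rough Pillow sketch that renders a single Hangul syllable from a .ttf into a fixed-size grayscale image; the font path, canvas size, and file naming are illustrative assumptions, not the repo's exact code.

```python
import os
from PIL import Image, ImageDraw, ImageFont

out_dir = "src-image-data/images"               # matches --input_dir of the combine step
os.makedirs(out_dir, exist_ok=True)

# Font file, canvas size, and output naming are assumptions for illustration.
font = ImageFont.truetype("src_font/source.ttf", size=200)
char = "가"                                      # one Hangul syllable
canvas = Image.new("L", (256, 256), color=255)   # white background
draw = ImageDraw.Draw(canvas)
w, h = draw.textsize(char, font=font)
draw.text(((256 - w) // 2, (256 - h) // 2), char, fill=0, font=font)
canvas.save(os.path.join(out_dir, "0_가.png"))
```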
- Generate Source font images: python ./tools/src-font-image-generator.py
- Generate Target font images: python ./tools/trg-font-image-generator.py
- Generate Target font skeleton images: python ./tools/trg-skeleton-image-generator.py
- Combine source, target, and target skeletons: python ./tools/combine_images.py --input_dir src-image-data/images --b_dir trg-image-data/images --c_dir skel-image-data/images --operation combine
- Convert images to TFRecords: python ./tools/images-to-tfrecords.py
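Conceptually, the combine step stitches each matching source glyph, target glyph, and target skeleton into one side-by-side strip, in the spirit of the pix2pix data format. A minimal sketch of that idea; the grayscale mode, 256x256 panel size, and matching filenames are assumptions:

```python
from PIL import Image

def combine_triplet(src_path, trg_path, skel_path, out_path, size=256):
    """Paste source glyph, target glyph, and target skeleton side by side (A|B|C)."""
    panels = [Image.open(p).convert("L").resize((size, size))
              for p in (src_path, trg_path, skel_path)]
    strip = Image.new("L", (size * 3, size), color=255)
    for i, panel in enumerate(panels):
        strip.paste(panel, (i * size, 0))
    strip.save(out_path)

# Example call; the paths assume the directory layout used by the commands above.
# combine_triplet("src-image-data/images/0_가.png",
#                 "trg-image-data/images/0_가.png",
#                 "skel-image-data/images/0_가.png",
#                 "combined-image-data/0_가.png")
```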
python main.py --mode train --output_dir trained_model --max_epochs 25
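For reference, the training input pipeline plausibly reads the combined strips back from the TFRecords and splits them into the three conditioning images. The feature key, filename, and record layout below are assumptions about the schema, not the repository's actual code.

```python
import tensorflow as tf  # 1.13.x, graph-mode API

def parse_example(serialized, size=256):
    # The "image" feature key and PNG encoding are assumed here.
    feats = tf.parse_single_example(
        serialized, {"image": tf.FixedLenFeature([], tf.string)})
    img = tf.image.decode_png(feats["image"], channels=1)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.reshape(img, [size, 3 * size, 1])
    # Split the A|B|C strip into source glyph, target glyph, and target skeleton.
    src, trg, skel = tf.split(img, num_or_size_splits=3, axis=1)
    return src, trg, skel

dataset = (tf.data.TFRecordDataset("train.tfrecords")  # filename is an assumption
           .map(parse_example)
           .shuffle(1000)
           .batch(16))
```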
To learn an unseen font style, you can fine-tune an already trained model with the command below. If you only want to generate the font styles the model has already learnt, skip this step.
python main.py --mode train --output_dir finetuned_model --max_epochs 500 --checkpoint trained_model/
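Under the hood, passing --checkpoint in a TF 1.x project typically means restoring the latest saved variables into the newly built graph before training continues. A minimal sketch of that mechanism; the variable below is only a stand-in for the real generator/discriminator graph:

```python
import tensorflow as tf  # 1.13.x

# Stand-in variable; the real code would build the full model graph first.
w = tf.get_variable("generator/conv1/weights", shape=[3, 3, 1, 64])

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    latest = tf.train.latest_checkpoint("trained_model/")  # directory given to --checkpoint
    if latest is not None:
        saver.restore(sess, latest)  # continue from the pre-trained weights
    # ... fine-tuning iterations on the new font's TFRecords would follow here ...
```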
Generate images just as before, but this time use a different module for creating the test TFRecords, as shown below.
- Convert images to TFRecords: python ./tools/test-images-to-tfrecords.py
python main.py --mode test --output_dir testing_results --checkpoint finetuned_model
This code is inspired by the pix2pix tensorflow project.
Special thanks to the following works for sharing their code and dataset.
Citation will be added soon. Please cite our work if you find it useful.
The code and other supporting modules are allowed for PERSONAL and ACADEMIC use only.



