Cartoonify your selfies
Toonify Yourself was made by Doron Adler and Justin Pinkney for fun and amusement, using deep learning and Generative Adversarial Networks (StyleGAN to generate the training image pairs, and Pix2PixHD trained on them).
Last touched September 01, 2020

TL;DR: If you want a Colab Notebook to toonify yourself, click here: If you're interested in how the website Toonify Yourself works, see this follow-up post.

In a previous post I introduced the idea of Layer Swapping (or, more generally, network blending) for StyleGAN models.
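The linked post describes layer swapping in detail; the core idea can be sketched in a few lines. This is a toy illustration only (the layer names and flat-dict format are hypothetical stand-ins, not the actual StyleGAN checkpoint structure): take the low-resolution layers from one generator and the high-resolution layers from another.

```python
# Toy sketch of network blending / layer swapping between two models.
# Real StyleGAN checkpoints are nested PyTorch state dicts; here each
# model is a flat dict mapping a layer name to a list of weights, and a
# layer's resolution is parsed from its (made-up) name.

def blend_models(base, swap, swap_from_res=32):
    """Keep layers below `swap_from_res` from `base` (e.g. a face model,
    controlling pose and structure) and take layers at or above it from
    `swap` (e.g. a cartoon-finetuned model, controlling texture/style)."""
    blended = {}
    for name, weights in base.items():
        res = int(name.split("x")[0])  # e.g. "64x64_conv" -> 64
        blended[name] = list(swap[name] if res >= swap_from_res else weights)
    return blended

ffhq = {"4x4_conv": [0.1, 0.2], "32x32_conv": [0.3, 0.4], "64x64_conv": [0.5, 0.6]}
toon = {"4x4_conv": [1.1, 1.2], "32x32_conv": [1.3, 1.4], "64x64_conv": [1.5, 1.6]}

blended = blend_models(ffhq, toon, swap_from_res=32)
print(blended["4x4_conv"])   # -> [0.1, 0.2]  (structure from the face model)
print(blended["64x64_conv"]) # -> [1.5, 1.6]  (texture from the toon model)
```

Varying the resolution at which the swap happens trades off how "toon-like" the output looks against how well it preserves the input face.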
Making Toonify Yourself
Last touched September 20, 2020

If you'd like to keep Toonify Yourself free for everyone to play with, please consider donating to cover running costs at Ko-fi:

So Doron Adler and I recently released our toonification translation model at our Toonify Yourself website.
I Have Seen the Face of God
Artists have trained a machine to reveal its psychedelic inner visions, unleashing human-Pixar hybrids which provoke profound fear, disgust, revulsion, nausea, and an immediate recognition of humanity's highest potential. Dubbed "Toonify," the project allows us to stare into the eyes of God.
Emily Snowdon (née Hodgins)
Head Of Operations @ Product Hunt
Amazing StyleGAN project. Fun for all the family. *the website does not store image data*

How does this work?
This toonification system is made using deep learning. It's based on distillation of a blended StyleGAN model into a pix2pixHD image-to-image translation network.

How did you come up with this idea?
This all started from some earlier experiments Doron shared on Twitter that got a lot of interest. For a description of those early experiments, see this blog post on self-toonification; the idea is the same, but the method is different (and not really suitable for hosting as a webapp). Going even further back in time, this was all based on some new methods of mixing StyleGAN models I came up with while trying to make realistic Ukiyo-e portraits.

How are you paying for this? Wasn't it costing too much?
We are now generously supported by DeepAI, who are running the neural network backend for us. Although DeepAI are now doing most of the heavy lifting, there is still a bit of cost in running the site, and we're very grateful to those who have supported us on Ko-fi.

How do I get good results?
The algorithm works best with high-resolution images without much noise. Looking straight on at the camera also seems to work best. Something like a corporate headshot tends to work well.

Do you store my photo?
We don't store any of the images uploaded or generated. We send the image to DeepAI's servers to run the network and then show you the results. No imagery is stored on our systems.

My face wasn't found!
We use the open-source dlib face detector to find faces. It's designed to pick up frontal faces but isn't perfect.

Where did my glasses go?
Not many characters from animated films wear glasses, so the model seems to have learnt to mostly remove them.

Can I use this model for my own project?
DeepAI provide an API that you can use to integrate this into your own project. It's what we're using to run this site!

Original tweet by Doron:
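The "distillation" step mentioned above can be illustrated with a toy example. This is a minimal NumPy sketch of the general idea only, not the actual pix2pixHD training code: a fixed "teacher" (standing in for the blended StyleGAN, which produces face-to-toon image pairs) generates synthetic input/output pairs, and a small "student" (standing in for pix2pixHD) is trained purely on those pairs until it mimics the teacher.

```python
import numpy as np

# Toy distillation: a fixed teacher generates (input, output) training
# pairs, and a student model is fitted only to those synthetic pairs.

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for the blended generator: some fixed transformation.
    return 3.0 * x + 1.0

# 1. Sample inputs and let the teacher produce the paired outputs.
X = rng.uniform(-1.0, 1.0, size=(256, 1))
Y = teacher(X)

# 2. Train a tiny linear student on the synthetic pairs by gradient
#    descent on the mean squared error.
w, b = 0.0, 0.0
for _ in range(2000):
    pred = w * X + b
    grad_w = 2.0 * np.mean((pred - Y) * X)
    grad_b = 2.0 * np.mean(pred - Y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# 3. The student now approximates the teacher on new inputs.
print(round(w, 3), round(b, 3))  # close to 3.0 and 1.0
```

The practical point is the same as for Toonify: once distilled, the student is a single cheap feed-forward network, which is far easier to serve behind a web app than the original StyleGAN pipeline.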
See additional coverage here:
A technical deep dive:
I make funny content
I feel devastated looking at the result.
CTO at MySky
Good work and idea, like :] The result for me is not as good as I'd hoped. Will return in a few months to try again)
Playing with deep learning
Well, it's been a few weeks, and there's a prototype of an upgraded high-resolution version you can try out at:
Fun! Great creativity
Programmer at Glide Talk, Ltd.
Thank you 😊