Paper Insight: Image-to-Image Translation - Pix2pix and CycleGAN
Many problems in computer graphics can be framed as translating an input image into a corresponding output image, such as colorizing a black-and-white photo or turning horses into zebras. This article covers two very interesting approaches to this problem based on generative adversarial networks (GANs).
Paper Insight: Recent Progress in Self-Supervised Image Animation
In this paper insight we discuss two recent papers that tackle the following problem: how to animate a still image using the motion from a target video, in a self-supervised manner. Along the way, this involves unsupervised keypoint detection, segmentation, and optical flow estimation.
Learning to play Heroic - Magic Duel with Deep RL
We trained a deep RL agent for our 1v1 real-time action strategy game, Heroic - Magic Duel, via ensemble self-play with only a simple ±1 reward. The agent achieves a win rate of over 50% against a top human player and over 60% against the existing AI.
Contextual Bandits for In-App Recommendation
In this post, we introduce some of the theory behind contextual bandits, which have become a popular approach to recommender problems where there are many possible items to offer and frequent interaction with users, e.g. via click/no-click or purchase/no-purchase events.
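To make the setting concrete, here is a minimal epsilon-greedy contextual bandit sketch, not the system from the post: the item count, context dimension, learning-rate schedule, and the simulated click model are all made-up assumptions for illustration.

```python
import math
import random

random.seed(0)

N_ITEMS, CTX_DIM, EPSILON = 5, 3, 0.1  # made-up sizes for the sketch

# One linear weight vector per item, estimating expected reward from context.
weights = [[0.0] * CTX_DIM for _ in range(N_ITEMS)]
counts = [0] * N_ITEMS

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def choose(context):
    """Epsilon-greedy policy: explore a random item with probability EPSILON,
    otherwise exploit the item with the highest predicted reward."""
    if random.random() < EPSILON:
        return random.randrange(N_ITEMS)
    scores = [dot(w, context) for w in weights]
    return scores.index(max(scores))

def update(item, context, reward):
    """SGD step pulling the chosen item's prediction toward the observed
    click (1.0) / no-click (0.0) reward, with a decaying learning rate."""
    counts[item] += 1
    lr = 1.0 / counts[item]
    error = reward - dot(weights[item], context)
    weights[item] = [w + lr * error * x for w, x in zip(weights[item], context)]

# Simulated interaction loop: a hidden preference vector per item drives
# Bernoulli clicks through a squashed linear score.
true_w = [[random.gauss(0, 1) for _ in range(CTX_DIM)] for _ in range(N_ITEMS)]
clicks = 0.0
for _ in range(5000):
    context = [random.gauss(0, 1) for _ in range(CTX_DIM)]
    item = choose(context)
    p_click = 1.0 / (1.0 + math.exp(-dot(true_w[item], context)))
    reward = 1.0 if random.random() < p_click else 0.0
    update(item, context, reward)
    clicks += reward
```

In practice one would use a calibrated model per item (e.g. LinUCB or Thompson sampling) rather than this plain epsilon-greedy update, but the explore/exploit loop over contexts and click rewards is the same.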
Paper Insight: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
MuZero is a new reinforcement learning algorithm that achieves state-of-the-art results on the Atari benchmark and matches the performance of AlphaZero in chess, shogi, and Go. It does all of this with greater sample efficiency and, consequently, less training time.
Paper Insight: Face-to-Parameter Translation via Neural Network Renderer
In this post, we will focus on a talk held at GDC 2020 about Justice, an MMORPG by NetEase currently published in China. The authors presented an algorithm that generates an in-game rendered face for a character from a given real-life input image.
Paper Insight: Generating Motion with Neural Networks & Motion Capture
In this post, we give an overview of some of the recent advancements in deep learning-based character animation. In particular, we discuss two papers that provide a way to use neural networks to turn motion capture data into interactive character controllers.
The AAA Graphics of Spellsouls: The Journey to 60FPS on Mobile - Unite Austin 2017
Listen in on how we created a PBR-like shader that runs on mobile, scaled the game across a wide range of devices (low-end toasters included), and implemented the various other optimizations needed to reach silky-smooth 60 FPS.