Doom has been run on just about everything: a calculator, a lawn mower, even inside standard Windows applications. But run with the help of a generative neural network? That is a first for the legendary shooter from id Software.

Image source: id Software

A group of four current and former employees of Google Research and Google DeepMind has presented GameNGen, a game engine powered by a neural model that can simulate complex interactive scenes in high quality.

GameNGen can interactively simulate Doom gameplay at 20 frames per second. When predicting the next frame, it reaches a peak signal-to-noise ratio (PSNR) of 29.4 dB, comparable to lossy JPEG compression.

Image source: Google Research
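For context, PSNR measures how closely a predicted frame matches the ground-truth frame on a logarithmic decibel scale. Below is a minimal sketch of the standard formula with synthetic images; it is not the evaluation pipeline used for GameNGen, just an illustration of what a figure around 29 dB corresponds to in per-pixel error.

```python
import numpy as np

def psnr(reference: np.ndarray, prediction: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - prediction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(max_value ** 2 / mse)

# Synthetic example: a "predicted" frame that deviates from the original by
# Gaussian noise with a standard deviation of about 9 gray levels lands near
# 29 dB -- roughly the fidelity cited above.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(240, 320, 3)).astype(np.float64)
predicted = np.clip(original + rng.normal(0.0, 9.0, original.shape), 0, 255)
print(f"PSNR: {psnr(original, predicted):.1f} dB")
```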

GameNGen was trained in two stages: in the first, an AI agent played Doom and its sessions were recorded; in the second, a diffusion model was trained to generate the next frame, conditioned on the sequence of previous frames and input commands.
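The two stages can be pictured roughly as follows. This is only a sketch of the idea under assumed interfaces; `env`, `agent`, `model` and their methods are placeholders, not GameNGen's actual code.

```python
# Stage 1: an RL agent plays the game while its frames and actions are recorded.
def collect_trajectories(env, agent, num_episodes):
    episodes = []
    for _ in range(num_episodes):
        frame, done, episode = env.reset(), False, []
        while not done:
            action = agent.act(frame)
            episode.append((frame, action))   # store (frame, action) pairs
            frame, done = env.step(action)
        episodes.append(episode)
    return episodes

# Stage 2: a diffusion model learns to predict the next frame from a window
# of previous frames and the corresponding input commands.
def train_next_frame_model(model, episodes, context_len=64):
    for episode in episodes:
        for t in range(context_len, len(episode)):
            past_frames = [f for f, _ in episode[t - context_len:t]]
            past_actions = [a for _, a in episode[t - context_len:t]]
            target_frame = episode[t][0]
            model.training_step(past_frames, past_actions, target_frame)
```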

The result is that GameNGen does not compute game logic on the fly; it reproduces what it has already seen during training. Instead of rendering, the neural model produces a stream of frames that changes in response to the player's actions.
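Playback then amounts to an autoregressive loop: read the player's input, sample the next frame from the model, display it, repeat. Again a rough sketch with hypothetical names; `model.sample`, `decode` and `get_player_action` are placeholders rather than a real API.

```python
from collections import deque

def interactive_playback(model, decode, get_player_action, seed_frames,
                         context_len=64, num_steps=1200):
    """Generate frames one at a time, each conditioned on a sliding window of
    previous frames and player inputs; no conventional renderer is involved."""
    frames = deque(seed_frames, maxlen=context_len)
    actions = deque([None] * len(seed_frames), maxlen=context_len)
    for _ in range(num_steps):
        actions.append(get_player_action())                     # live input for this tick
        next_frame = model.sample(list(frames), list(actions))  # denoise the next frame
        frames.append(next_frame)
        yield decode(next_frame)                                # display at ~20 fps
```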

So far, GameNGen suffers from a number of limitations, such as a very short memory (a little over three seconds of context) and a mismatch between the behavior of the training agent and that of a real player.

Although GameNGen is currently far from ideal, its creators hope that in the future these developments will help make the video game production process less expensive and more accessible.
