
Devlog #5: Shadows and Depth
Synergy
10.11.23 17:00 Community Announcements
Hello everyone!


Welcome to this new Devlog, which will be a little more technical than the others as we will tell you about the process of creating shadows and depth in Synergy!


While creating the game, we faced a challenge common to top-down 2D games: sorting rendered elements. The go-to strategy is to draw elements from top to bottom based on the texture's baseline, but it doesn't work in every case!

Buildings and environmental elements vary in size and don't always have a square base. As a result, larger elements may need to be rendered behind smaller decorations, even when they are lower on the screen. Simple fixes, like dividing long decorations into sections, can resolve these issues.
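To give an idea of what that baseline sort looks like, here is a simplified sketch (illustrative names, not our actual engine code):

```cpp
#include <algorithm>
#include <vector>

// A sprite's draw order is driven by the screen-space Y of its baseline
// (the "feet" of the texture). Illustrative structure, not the real one.
struct Sprite {
    float baselineY;
    // ... texture, position, animation state, etc.
};

// Classic top-down sort: sprites higher on the screen (smaller Y) are drawn
// first, so anything lower on the screen ends up drawn on top of them.
void sortForDrawing(std::vector<Sprite>& sprites) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) {
                  return a.baselineY < b.baselineY;
              });
}
```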



But this approach isn't quite suitable for us because in Synergy, the buildings are animated. Cutting them into sections would complicate production.

We opted for a solution similar to what is done in 3D games: sorting the entire game environment with a Z-buffer.



This not only allows us to sort buildings and the environment without issues but also gives us the ability to position characters realistically within the buildings!



For the Z-buffer to work, we needed a basic 3D model of each building or environmental element, and a way to store its depth information in a texture.
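In practice this means every rendered pixel carries a depth value and is tested against a shared Z-buffer, just like in a 3D renderer. A rough CPU-side sketch of the idea (simplified, with made-up names):

```cpp
#include <vector>

// Hypothetical screen-sized depth buffer; smaller values mean "closer".
struct DepthBuffer {
    int width = 0;
    int height = 0;
    std::vector<float> depth; // one value per screen pixel
};

// A sprite pixel is only kept if it is closer than what is already stored.
// 'spriteDepth' would be sampled from the per-building depth texture baked
// from its simplified 3D model.
bool depthTestAndWrite(DepthBuffer& zbuf, int x, int y, float spriteDepth) {
    float& stored = zbuf.depth[y * zbuf.width + x];
    if (spriteDepth < stored) {
        stored = spriteDepth; // this pixel wins: write its depth (and color)
        return true;
    }
    return false;             // hidden behind something closer
}
```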



We use Blender for our 3D models, and an in-house tool generates a standardized file for each of them, ensuring consistent depth textures across all buildings. We then run the texture through a filter in Substance Designer so it integrates smoothly into the engine.



With a depth representation of the building, we can explore various visual effects by accurately determining the world position of each pixel in the environment!
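For example, under a simple set of assumptions (orthographic top-down camera, depth texture storing each pixel's height above the ground), the world position can be recovered roughly like this (a sketch, not our actual shader code):

```cpp
struct Vec3 { float x, y, z; };

// In this kind of projection, a pixel's screen Y mixes its position along
// the ground (world Y) with its height (world Z): tall pixels end up drawn
// higher on screen. If the depth texture stores the height, world Y can be
// recovered by adding that height back. Conventions here are illustrative.
Vec3 reconstructWorldPosition(int px, int py, float height,
                              float pixelsPerUnit, Vec3 cameraOrigin) {
    float sx = px / pixelsPerUnit;
    float sy = py / pixelsPerUnit;
    return Vec3{
        cameraOrigin.x + sx,           // world X maps straight from screen X
        cameraOrigin.y + sy + height,  // world Y, pushed back by the height
        height                         // world Z (height) from the depth texture
    };
}
```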



This allowed us to implement our atmosphere and fog system, enhancing the world's sense of depth and visual appeal.
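A fog effect of this kind boils down to blending each pixel toward a fog color based on its distance or height; a minimal, generic version could look like this (not the actual Synergy effect):

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

// Exponential fog: the further away (or deeper into the scene) a pixel is,
// the more it blends toward the fog color. 'distance' can be the view
// distance or a height-based value computed from the reconstructed position.
Color applyFog(Color base, Color fogColor, float distance, float density) {
    float amount = std::clamp(1.0f - std::exp(-density * distance), 0.0f, 1.0f);
    return Color{
        base.r + (fogColor.r - base.r) * amount,
        base.g + (fogColor.g - base.g) * amount,
        base.b + (fogColor.b - base.b) * amount,
    };
}
```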



Access to 3D information in the game allows for dynamic shadow casting on the screen. The process is a bit intricate.

In typical 3D games, shadows are rendered by comparing the Z-buffer from the game camera with one rendered from the light source's perspective. However, our depth textures only cover the game's view, not the light's. So instead, we cast rays from each surface toward the light and check whether they hit a building; if they do, that surface is in shadow.

It may not be the most efficient approach, but it produces interesting results in our 2D world. To keep the game running smoothly, we generate intermediate images that estimate distances, which keeps the performance cost manageable.
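In broad strokes, the ray test can be sketched like this (a simplified height-field version with illustrative names, not the real implementation):

```cpp
#include <functional>

struct Vec3 { float x, y, z; };

// March from a lit surface point toward the light. 'sceneHeight' would be
// sampled from the baked depth/height textures: it returns the height of
// the tallest geometry at a given ground position. If any step of the ray
// dips below that height, a building blocks the light and the point is in
// shadow. Step count and length trade shadow quality for performance.
bool isInShadow(Vec3 surface, Vec3 lightDir, int steps, float stepLength,
                const std::function<float(float, float)>& sceneHeight) {
    Vec3 p = surface;
    for (int i = 0; i < steps; ++i) {
        p.x += lightDir.x * stepLength;
        p.y += lightDir.y * stepLength;
        p.z += lightDir.z * stepLength;
        if (p.z < sceneHeight(p.x, p.y)) {
            return true;  // the ray entered a building: occluded
        }
    }
    return false;         // reached max distance without being blocked
}
```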



Depth information alone doesn't tell us the thickness of a decorative element. For accurate shadows, we have to estimate that thickness; otherwise, the shadow appears too large and looks wrong.

The building's thickness is estimated based on the rendered element's width. This way, a pole or column looks less solid than the body of a house.
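That heuristic can be expressed very simply (again, illustrative names and values): the estimated thickness grows with the rendered element's width, up to a cap, and the shadow test can then treat the occluder as solid for only roughly that many units behind its front surface.

```cpp
#include <algorithm>

// Estimate how "deep" an element is from how wide it appears when rendered,
// so a pole or column blocks less light than the body of a house. The
// mapping and the cap are illustrative, not the values used in the game.
float estimateThickness(float renderedWidthInPixels, float pixelsPerUnit,
                        float maxThickness) {
    float widthInWorldUnits = renderedWidthInPixels / pixelsPerUnit;
    return std::min(widthInWorldUnits, maxThickness);
}
```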



There is still some work left to enhance shadow quality and performance, but it's a visual bonus well worth the effort.

We hope you enjoyed this 5th Devlog and thank you for your support during this project. See you soon for the next one!

https://store.steampowered.com/app/1989070/Synergy/