Tilemap: Use vertex-shader-based autotile animation strategy
Previously, we stuffed the entire tilemap vertex data into the buffers four times, with only the autotile vertices offset according to the animation frame. This meant we could prepare the buffers once and then simply bind a different offset for each animation frame without any shader changes, but it also led to a huge amount of duplicated data and blew up the buffer sizes. The new method requires only one buffer, and instead animates by recognizing vertices belonging to autotiles in a custom vertex shader, which offsets them on the fly according to the animation index. With giant tilemaps this method is a little less efficient, but since the Tilemap is planned to be rewritten to hold only the range of tiles visible on screen in its buffers, the cost of the on-the-fly offsetting will become negligible, while the amount of data we have to send to the GPU every time the tilemap is updated is greatly reduced; a net win in the end.
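The new tilemap.vert itself is not part of the hunks shown below, but the idea can be sketched. A minimal GLSL sketch of such a vertex shader, assuming autotiles are packed into a fixed region of the texture atlas with their animation frames laid out at a fixed horizontal stride; only the aniIndex uniform name is taken from the C++ side of this commit, while the layout constants and the recognition test are illustrative assumptions, not the actual shader:

/* A minimal sketch, not the actual tilemap.vert: only the aniIndex
 * uniform name comes from this commit; the atlas layout constants
 * and the autotile recognition test below are assumptions. */
uniform mat4 projMat;       /* assumed projection uniform */
uniform float aniIndex;     /* current autotile animation frame */

attribute vec2 position;
attribute vec2 texCoord;    /* assumed to be in atlas pixel coordinates */

varying vec2 v_texCoord;

/* Assumption: autotiles live in a fixed top-left region of the atlas,
 * with each animation frame stored side by side at a fixed stride. */
const float atAreaW  = 384.0;  /* width of the autotile region */
const float atAreaH  = 896.0;  /* height of the autotile region */
const float atFrameW = 96.0;   /* horizontal stride per animation frame */

void main()
{
	vec2 tex = texCoord;

	/* Recognize autotile vertices by their atlas position and shift
	 * them right by one frame width per animation index. */
	if (tex.x < atAreaW && tex.y < atAreaH)
		tex.x += atFrameW * aniIndex;

	gl_Position = projMat * vec4(position, 0.0, 1.0);
	v_texCoord = tex;
}

With this approach the C++ side only has to update a single float uniform per frame (setAniIndex in the diff below) instead of rebinding duplicated buffers.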
parent 033d46a293
commit 56226c40c6
6 changed files with 136 additions and 76 deletions
@@ -40,6 +40,7 @@
 #include "../simple.vert.xxd"
 #include "../simpleColor.vert.xxd"
 #include "../sprite.vert.xxd"
+#include "../tilemap.vert.xxd"
 
 #ifdef RGSS2
 #include "../blur.frag.xxd"
@@ -422,6 +423,22 @@ void PlaneShader::setOpacity(float value)
 }
 
 
+TilemapShader::TilemapShader()
+{
+	INIT_SHADER(tilemap, simple, TilemapShader);
+
+	ShaderBase::init();
+
+	GET_U(aniIndex);
+}
+
+void TilemapShader::setAniIndex(int value)
+{
+	gl.Uniform1f(u_aniIndex, value);
+}
+
+
+
 FlashMapShader::FlashMapShader()
 {
 	INIT_SHADER(simpleColor, flashMap, FlashMapShader);