Client-side Renderer
Aeonetica features an OpenGL-based 2D general purpose renderer as the main part of the client application.
- Basic Usage
- Textures
- Shaders
- Material System
- Custom Render Pipeline
- Advanced Features
- Internal Workings
At the center of the whole rendering process stands the Renderer object. You can acquire an instance by implementing a Layer.
Layers hold all data about the rendering process. They also handle window events and are responsible for deciding which ClientHandles get drawn.
A basic layer implementation may look like this:
// mods/mymod/src/client.rs
use aeonetica_client::renderer::{Renderer, Layer};
use aeonetica_engine::math::camera::Camera;
struct MyLayer {
}
impl Layer for MyLayer {
fn instanciate_camera(&self) -> Camera {
// create a Camera instance
// Example: give this camera a field of view of 48 horizontal and 27 vertical (16:9 aspect ratio)
// and align the coordinate center to the center of the screen
Camera::new(-24.0, 24.0, -13.5, 13.5, -1.0, 1.0)
}
}
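The constructor arguments above follow directly from the intended 48x27 field of view: each bound is half the horizontal or vertical extent, mirrored around zero so the origin sits at the screen center. A quick sketch of that arithmetic (the helper name is illustrative, not part of the engine):

```rust
// Sketch: derive the orthographic bounds used above from a desired
// field of view, with the coordinate origin at the screen center.
fn ortho_bounds(fov_h: f32, fov_v: f32) -> (f32, f32, f32, f32) {
    (-fov_h / 2.0, fov_h / 2.0, -fov_v / 2.0, fov_v / 2.0)
}

fn main() {
    // 48 x 27 world units => a 16:9 aspect ratio
    assert_eq!(ortho_bounds(48.0, 27.0), (-24.0, 24.0, -13.5, 13.5));
}
```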
Optionally, you can implement the following functions to access more functionality:
// in mods/mymod/src/client.rs
// ...
impl Layer for MyLayer {
// ...
// run once on layer creation; initialize your rendering data and settings here
fn attach(&mut self, renderer: &mut Renderer) {}
// run once on client exit
fn quit(&mut self, renderer: &mut Renderer) {}
// run once every frame before rendering to update the camera position, rotation or FOV
fn update_camera(&mut self, store: &mut DataStore, camera: &mut Camera, delta_time: f64) {}
// run once every frame before ClientHandles receive their render call
fn pre_handles_update(&mut self, store: &mut DataStore, renderer: &mut Renderer, delta_time: f64) {}
// run once every frame after ClientHandles have received their render call
fn post_handles_update(&mut self, store: &mut DataStore, renderer: &mut Renderer, delta_time: f64) {}
// run on window event; return if the event was processed
fn event(&mut self, event: &Event, store: &mut DataStore) -> bool { false|true }
// set activeness of the layer, if the return value is false, the layer will no longer receive event and update calls
fn active(&self) -> bool { true|false }
// set the name of the layer, this is especially useful for debugging
fn name(&self) -> &'static str { "MyLayer" }
// check if the layer is marked as an overlay (UI layer)
fn is_overlay(&self) -> bool { true|false }
}
Window events are processed in the Layer's event() function. The following events exist:
KeyPressed(KeyCode) // emitted on keyboard key press, `KeyCode` denotes the pressed key.
KeyReleased(KeyCode) // emitted on keyboard key release, `KeyCode` denotes the released key.
MouseButtonPressed(MouseButton) // emitted on mouse button press
MouseButtonRelease(MouseButton) // emitted on mouse button release
MouseScrolled(Vector2<f32>) // emitted on mouse scrolling, value denotes the amount that was scrolled, vertical and horizontal
MouseMoved(Vector2<f32>) // emitted on mouse movement, value denotes the position of the mouse pointer relative to the window
WindowClose() // emitted on window close
WindowResize(Vector2<i32>) // emitted on window resize, value denotes the new window size in pixels
Note: If an event was handled, return true from the event() function to mark the event as handled. This way it will not be processed further by other Layers.
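To illustrate the dispatch pattern, here is a self-contained sketch of an event() body. The Event enum below is a minimal mock with simplified payload types for illustration only, not the engine's actual definition:

```rust
// Minimal mock of the engine's Event enum, for illustration only;
// the real enum lives in client/src/renderer/window/event.rs.
enum Event {
    KeyPressed(u32),
    MouseMoved((f32, f32)),
}

// Sketch of a Layer::event() body: handle what you care about and
// return true to stop propagation to other Layers, false otherwise.
fn handle_event(event: &Event) -> bool {
    match event {
        Event::KeyPressed(code) => {
            println!("key {code} pressed");
            true // consumed; other Layers won't see it
        }
        Event::MouseMoved(_) => false, // not handled, propagate
    }
}

fn main() {
    assert!(handle_event(&Event::KeyPressed(65)));
    assert!(!handle_event(&Event::MouseMoved((0.0, 0.0))));
}
```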
VSync capabilities are provided by the underlying glfw3 windowing library. Set the environment variable AEONETICA_VSYNC to 1 or TRUE to enable VSync.
On Linux, running the client executable then looks like this:
$ AEONETICA_VSYNC=1 /path/to/client
On Windows, you have to set the variable beforehand:
$ set AEONETICA_VSYNC=1
$ C:\...\path\to\client.exe
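How the variable's value might be interpreted can be sketched in plain Rust (illustrative only; the engine's actual parsing may differ):

```rust
// Sketch: how the AEONETICA_VSYNC environment variable could be
// interpreted; accepts "1" or a case-insensitive "true".
fn vsync_enabled(value: Option<&str>) -> bool {
    matches!(value, Some(v) if v == "1" || v.eq_ignore_ascii_case("true"))
}

fn main() {
    // in the client this value would come from
    // std::env::var("AEONETICA_VSYNC").ok()
    assert!(vsync_enabled(Some("1")));
    assert!(vsync_enabled(Some("TRUE")));
    assert!(!vsync_enabled(None)); // unset: vsync off
}
```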
The easiest way of using the Aeonetica rendering system is via the so-called Renderable objects. The following objects are built-in by default:
Quad // draw quads and rectangles either with a flat color, a texture or a sprite.
Line // draw simple flat-colored lines
TextArea // draw simple, flat-colored strings of text
ParticleEmitter // basic particle system implementation, *WIP*
The Renderer interacts with Renderables using the following four functions:
// client/src/renderer/mod.rs, in impl Renderer
fn add(renderer: &mut Renderer, item: &mut impl Renderable); // add an item to the render buffers
fn modify(renderer: &mut Renderer, item: &mut impl Renderable) -> ErrorResult<()>; // apply modifications of an item to the render buffers
fn draw(renderer: &mut Renderer, item: &mut impl Renderable) -> ErrorResult<()>; // add or modify an item as needed
fn remove(renderer: &mut Renderer, item: &mut impl Renderable); // remove an item from the render buffers
Example: creating a simple line and adding it to the renderer:
// mods/mymod/src/client.rs, in impl Layer for MyLayer, function attach()
// arguments: start position, end position, weight, z_index, color
// returns: Line
let mut line = Line::new(Vector2::new(0.0, 0.0), Vector2::new(20.0, 10.0), 0.1, 2, [1.0, 0.0, 1.0, 1.0]);
renderer.add(&mut line);
Note: Creating custom Renderable implementations is supported. See more in 6. Advanced Features.
In the renderer, Textures are used to display images and text. In combination with a material supporting textures, they can be attached to most Renderable items. (For example, a textured quad can be created using the type Quad<FlatTexture> with the with_texture or with_sprite constructor function.)
In general, textures are represented using the Texture struct. Most renderer functions, however, accept only the ID of the texture as a RenderID. Use Texture::id() to retrieve this ID.
Textures can be loaded from images either directly from a file:
fn from_file(image_path: &str) -> ErrorResult<Texture>;
// Example usage:
let texture = Texture::from_file("/your/image/path").expect("error loading image");
Or from an inline stream of bytes, best used in conjunction with include_bytes!():
fn from_bytes(bytes: &[u8]) -> ErrorResult<Texture>;
// Example usage:
let texture = Texture::from_bytes(include_bytes!("your/image/path")).expect("error loading image");
Note: For loading images, the only currently supported formats are rgba8 (32 bits) and rgb8 (24 bits).
Optionally, you can also create an empty texture:
fn create(size: Vector2<u32>, format: Format) -> Texture;
// Example usage:
let empty_texture = Texture::create(Vector2::new(64, 64), Format::RgbaU8);
In this function, format denotes the data type of the pixels. Supported data types are:
- RgbaU8: 32 bits, transparency, 4 channels, 8 bits per channel
- RgbU8: 24 bits, no transparency, 3 channels, 8 bits per channel
- RgbF16: 64 bits, transparency, 4 channels, 16 bits per channel
Writing data to a texture works via the Texture::set_data()
method:
fn set_data(&Texture, data: &[u8]);
// example usage:
let texture = ...; // create texture here
let data = ...; // create image data here
let bytes = data.as_bytes(); // convert data to bytes (might be different depending on data type)
texture.set_data(bytes);
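Since set_data() takes raw bytes, the slice length must match the texture's size and pixel format. A small sketch of that bookkeeping (helper names are illustrative, not the engine's API):

```rust
// Sketch: the byte slice passed to set_data() must match the texture's
// size and pixel format; for RgbaU8 that is width * height * 4 bytes.
fn expected_len(width: u32, height: u32, bytes_per_pixel: u32) -> usize {
    (width * height * bytes_per_pixel) as usize
}

fn main() {
    // hypothetical solid-magenta 64x64 RGBA image
    let (w, h) = (64u32, 64u32);
    let data: Vec<u8> = std::iter::repeat([255u8, 0, 255, 255])
        .take((w * h) as usize)
        .flatten()
        .collect();
    assert_eq!(data.len(), expected_len(w, h, 4)); // 16384 bytes
}
```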
Note: Textures may also be used as a render target. See under FrameBuffer in 5. Custom Render Pipeline.
Since switching between many textures during rendering is slow, most games use so-called sprite-sheets to combine many sprites into a single big texture.
By ordering all sprites in a fixed grid, you can access each one individually using the SpriteSheet wrapper around a simple Texture.
A correct sprite-sheet may look like this:
To create a SpriteSheet from a texture, use:
fn from_texture(texture: Texture, sprite_size: Vector2<u32>) -> ErrorResult<SpriteSheet>;
// example usage
let texture = ...; // load texture here
let sprite_sheet = SpriteSheet::from_texture(texture, Vector2::new(16, 16)).expect("error creating sprite-sheet from texture");
The sprites are now indexed on the texture from left to right and top to bottom, and can be retrieved using get():
fn get(&SpriteSheet, index: u32) -> Option<Sprite>;
// example usage
let sprite_sheet = ...; // create sprite-sheet here
let sprite = sprite_sheet.get(0 /*any valid sprite index*/).expect("error retrieving sprite 0");
Here, the successful return value is of type Sprite. This struct contains all the information the renderer needs about where and on which texture the sprite is located.
To, for example, create a Quad using a sprite, use the with_sprite function:
fn with_sprite(position: Vector2<f32>, size: Vector2<f32>, z_index: u8, sprite: Sprite) -> Quad<FlatTexture>;
// example usage
let sprite = ...; // create sprite here
let quad = Quad::with_sprite(Vector2::new(0.0, 0.0), Vector2::new(1.0, 1.0), 0, sprite);
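The left-to-right, top-to-bottom indexing described above can be sketched as plain arithmetic (the helper is illustrative, not the SpriteSheet API):

```rust
// Sketch: sprites are indexed left-to-right, top-to-bottom. For a
// 64x32 texture with 16x16 sprites (4 columns, 2 rows), index 5 is
// the sprite at column 1, row 1. Names here are illustrative only.
fn sprite_cell(index: u32, tex_width: u32, sprite_width: u32) -> (u32, u32) {
    let columns = tex_width / sprite_width;
    (index % columns, index / columns) // (column, row)
}

fn main() {
    assert_eq!(sprite_cell(0, 64, 16), (0, 0)); // top-left sprite
    assert_eq!(sprite_cell(5, 64, 16), (1, 1)); // second row, second column
}
```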
Working similarly to sprite-sheets, Aeonetica comes with a basic font renderer based on bitmap textures.
Note: When referring to bitmap, we currently mean monochrome textures, as they still use the RgbaU8 layout right now (subject to change).
For drawing text, you first need to load a font. Aeonetica fonts consist of two parts: the font texture and the character lookup table. The character lookup table can either be a .ron file (here renamed to .bmf) or a simple HashMap<char, u32>.
Example font texture:
Example font character lookup (.bmf/.ron file):
// font.bmf
(
texture: "<your font>.png",
monospaced: false, // set to true if monospaced
char_size: (5, 10), // monospaced character size, comparable to the sprite size
characters: { // character lookup (ascii char and sprite index on the texture)
"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5, "G": 6, "H": 7, "I": 8, "J": 9,
"K": 10, "L": 11, "M": 12, "N": 13, "O": 14, "P": 15, "Q": 16, "R": 17, "S": 18, "T": 19,
"U": 20, "V": 21, "W": 22, "X": 23, "Y": 24, "Z": 25,
"a": 40, "b": 41, "c": 42, "d": 43, "e": 44, "f": 45, "g": 46, "h": 47, "i": 48, "j": 49,
"k": 50, "l": 51, "m": 52, "n": 53, "o": 54, "p": 55, "q": 56, "r": 57, "s": 58, "t": 59,
"u": 60, "v": 61, "w": 62, "x": 63, "y": 64, "z": 65,
"0": 80, "1": 81, "2": 82, "3": 83, "4": 84, "5": 85, "6": 86, "7": 87, "8": 88, "9": 89,
"(": 100, ")": 101, "[": 102, "]": 103, "{": 104, "}": 105, "#": 106, "'": 107, "`": 108, "´": 109,
"\"": 110, "°": 111, "^": 112, "|": 113, ".": 114, ",": 115, ":": 116, ";": 117, "!": 118, "?": 119,
"/": 120,"\\": 121, "*": 122, "+": 123, "-": 124, "<": 125, ">": 126, "~": 127, "@": 128, "&": 129,
" ": 160
}
)
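The same lookup can also be built directly as the HashMap<char, u32> alternative mentioned above; a sketch reproducing a few ranges of the table (the helper name is illustrative):

```rust
use std::collections::HashMap;

// Sketch: the character lookup built as a HashMap<char, u32>, the
// alternative to a .bmf file; only the letter ranges are shown here.
fn build_lookup() -> HashMap<char, u32> {
    let mut characters = HashMap::new();
    for (i, c) in ('A'..='Z').enumerate() {
        characters.insert(c, i as u32); // uppercase starts at index 0
    }
    for (i, c) in ('a'..='z').enumerate() {
        characters.insert(c, 40 + i as u32); // lowercase starts at index 40
    }
    characters.insert(' ', 160); // space glyph, required by TextArea padding
    characters
}

fn main() {
    let characters = build_lookup();
    assert_eq!(characters[&'C'], 2);
    assert_eq!(characters[&'b'], 41);
}
```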
To now create a font from these files, use BitmapFont::from_texture_and_fontdata():
fn from_texture_and_fontdata(texture: Texture, fontdata: &str) -> ErrorResult<BitmapFont>;
// example usage:
let font_texture = ...; // load font texture here
let font = BitmapFont::from_texture_and_fontdata(font_texture, include_str!("/path/to/font.bmf")).expect("error generating bitmap font");
Drawing text is made easy via the TextArea Renderable implementation:
fn with_string<S: Into<String>>(position: Vector2<f32>, z_index: u8, font_size: f32, spacing: f32, font: Rc<BitmapFont>, material: Rc<FlatTexture>, string: S) -> TextArea<_, _>;
// example usage:
let font = Rc::new(...); // load font here
// generics allocate the buffer space for the renderer:
// - first is the number of vertices, which has to be exactly 4x the number of chars (compile-time checked)
// - second is the number of characters
let mut text_area = TextArea::<48, 12>::with_string(Vector2::new(0.0, 0.0), 0, 10.0, 2.0, font, FlatTexture::get(), "Hello, World");
// adding the text_area to the renderer
let renderer = &mut ...; // retrieve renderer from somewhere
renderer.add(&mut text_area);
Note: If the string is shorter than the allocated space, the rest will be filled with spaces. Make sure your texture provides a ' ' character!
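The two generic parameters relate as 4 vertices per character, so TextArea::<48, 12> holds 12 characters. A trivial sketch of that relation:

```rust
// Sketch: TextArea::<V, C> requires V == 4 * C, since every glyph
// is drawn as one quad with 4 vertices.
const fn vertices_for_chars(chars: usize) -> usize {
    chars * 4
}

fn main() {
    assert_eq!(vertices_for_chars(12), 48); // matches TextArea::<48, 12>
}
```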
Shaders form the central part of the rendering process. They are small programs that run directly on your graphics card and determine the color of each individual pixel.
In Aeonetica, shaders are written in the OpenGL Shading Language (GLSL).
Aeonetica supports two types of shaders: vertex and fragment shaders (see 3.1 / 3.2 respectively). Both shaders are located in the same source file. To differentiate between them, tags are used to denote the start of a specific shader.
The following tags are required to always be present in a shader source file:
- #[vertex]: beginning of the vertex shader
- #[fragment]: beginning of the fragment shader
You can, however, have an arbitrary number of tags in your code to add descriptions, longer comments, etc.
For example, the shader used in the built-in FlatColor material looks like this:
// client/assets/default-flat-color.glsl
#[description]
default shader for the FlatColor material
#[vertex]
#version 450 core
layout (location = 0) in vec2 a_Position;
layout (location = 1) in vec4 a_Color;
uniform mat4 u_ViewProjection;
out vec4 v_Color;
void main() {
v_Color = a_Color;
gl_Position = u_ViewProjection * vec4(a_Position, 0.0, 1.0);
}
#[fragment]
#version 450 core
in vec4 v_Color;
layout (location = 0) out vec4 r_Color;
void main() {
r_Color = v_Color;
}
Vertex shaders run once per vertex, i.e. per point in space (for example, a normal quad has 4 vertices). Their task is to accept all data from the vertex attributes, usually set by the material in use, and to output the correct position of the vertex by applying the camera's projection matrix.
Aeonetica provides support for the following vertex attribute data types by default:
Name | Rust type | GLSL type | Usage
---|---|---|---
Vertex | [f32; 2] | vec2 | 2D point in space
TexCoord | [f32; 2] | vec2 | 2D point on a texture
Color | [f32; 4] | vec4 | RGBA color in floating-point representation
TextureID | Sampler2D(0) | int | OpenGL texture slot (0..16)
Float | f32 | float | Single-precision floating-point number
You can add more types to this list by implementing the ShaderLayoutType
trait.
Note: Sampler2D(0) types will be filled automatically by the renderer, so just create the value using default-zero: Sampler2D(0).
Fragment shaders are executed after the vertex shader, once per pixel. Their task is to calculate the correct color value for the current pixel using data provided via uniforms, the vertex shader, or textures. You can also program them to calculate lighting, water effects and much more.
Note: Keep in mind that this shader runs once for every pixel, so this part of the code has to be especially performant. If possible, precompute values on the CPU or in the vertex shader.
In code, shaders are represented using the shader::Program struct. To create an instance, use the from_source() function:
fn from_source(src: &str) -> ErrorResult<shader::Program>;
// example usage
let shader = shader::Program::from_source(include_str!("/path/to/your/shader.glsl")).expect("error loading shader");
Shaders like this are best used in conjunction with a Material. See 4. Material System for more information.
Shader uniforms are variables in the shader code, separate from vertex attributes, that can be uploaded from the CPU to the GPU at any time. To do this, use the upload_uniform() function:
fn upload_uniform<U: Uniform + ?Sized>(&self, name: &UniformStr, data: &U);
// example usage
const UNIFORM_NAME: UniformStr = uniform_str!("u_YourUniformName"); // special null-byte-terminated string for identifying uniform variables
let shader = ...; // load/get shader here
let value = ...; // assign uniform value
shader.bind(); // don't forget this!
shader.upload_uniform(&UNIFORM_NAME, &value);
shader.unbind(); // optional, omit if not needed for better performance
Naming schemes:
While not enforced, we advise you to use the following naming scheme within your shader code to keep it in line with other Aeonetica shader code. Generally, all constants are in UPPER_SNAKE_CASE, local variables in snake_case and global variables in PascalCase with one of the following prefixes:
Prefix | Meaning/Usage
---|---
a_ | Vertex attribute
v_ | Passthrough value between vertex and fragment shaders
u_ | Uniform variable
r_ | Final fragment shader output result variable
Another use for shaders apart from drawing geometry is the post-processing step. This step takes a single big texture with all geometry pre-rendered and outputs it to another texture or buffer. This way you can add nice visual effects like bloom, blur, distortion, etc.
Post processing shaders always accept the following vertex attributes:
// in #[vertex]
layout (location = 0) in vec2 a_Position;
layout (location = 1) in vec2 a_FrameCoord;
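Under these conventions, a minimal post-processing pass might look like the following illustrative grayscale shader. It is not shipped with Aeonetica; it assumes the pre-rendered frame is provided through the u_Frame sampler uniform described below:

```glsl
#[description]
illustrative grayscale post-processing shader (not part of Aeonetica)
#[vertex]
#version 450 core
layout (location = 0) in vec2 a_Position;
layout (location = 1) in vec2 a_FrameCoord;
out vec2 v_FrameCoord;
void main() {
    v_FrameCoord = a_FrameCoord;
    gl_Position = vec4(a_Position, 0.0, 1.0);
}
#[fragment]
#version 450 core
in vec2 v_FrameCoord;
uniform sampler2D u_Frame; // the pre-rendered frame
layout (location = 0) out vec4 r_Color;
void main() {
    vec4 color = texture(u_Frame, v_FrameCoord);
    // standard luma weights for perceptual grayscale
    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    r_Color = vec4(vec3(gray), color.a);
}
```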
There are two ways a post-processing shader may be realized:
1. PostProcessingLayer
The PostProcessingLayer is the easiest way of creating a post-processing shader. Simply implement the PostProcessingLayer trait:
struct MyPostProcessingLayer {
// struct data
}
impl MyPostProcessingLayer {
fn new() -> Self { Self { } } // initialize here
}
impl PostProcessingLayer for MyPostProcessingLayer {
fn attach(&self) { /* run on layer attachment */ }
fn detach(&self) { /* run on layer detachment */ }
fn enabled(&self) -> bool { true|false /* is the layer enabled? */ }
fn post_processing_shader(&self) -> &shader::Program { /* provide your desired shader here */ }
}
Alternatively, you can also implement uniforms() to set custom shader uniform variables:
const MY_UNIFORM_NAME: UniformStr = uniform_str!("u_MyUniform");
// in impl PostProcessingLayer for MyPostProcessingLayer
fn uniforms<'a>(&self) -> &'a [(&'a UniformStr, &'a dyn Uniform)] {
&[(&MY_UNIFORM_NAME, &3.14), ...] /* return uniform names and values here */
}
A post-processing layer, similar to a standard Layer implementation, can be applied using the set_post_processing_layer() function of the RenderContext struct. Do this in the client mod's start() function:
fn set_post_processing_layer(&mut RenderContext, post_processing_layer: Rc<dyn PostProcessingLayer>);
// example usage:
// in start() in impl ClientMod for MyModClient
context.set_post_processing_layer(Rc::new(MyPostProcessingLayer::new()));
Note: This way, the post-processing shader's fragment shader has to provide a u_Frame texture uniform: uniform sampler2D u_Frame;
2. In a FrameBuffer's draw() function:
Another way of rendering with a post-processing shader is via FrameBuffers. The FrameBuffer struct provides a draw() function accepting a shader:
fn draw<const N: usize>(&FrameBuffer, texture_attachments: [(usize, &UniformStr); N], target: &Target, shader: &shader::Program);
// example usage
const TEXTURE_ATTACHMENT0_UNIFORM_NAME: UniformStr = uniform_str!("u_Frame"); // change this to the accepting uniform name in the shader
let target = ...; // get your render target here (framebuffer or raw)
let shader = ...; // get your post-processing shader here
let fb_to_draw = ...; // get your source framebuffer here -> has to have at least one color attachment
fb_to_draw.draw([(0, &TEXTURE_ATTACHMENT0_UNIFORM_NAME)], &target, &shader);
Note: Look into 5. Custom Render Pipeline for more information about FrameBuffers.
As you probably noticed, working with raw shaders is a bit complicated. To make using shaders with different Renderable items easy, Aeonetica comes with its own material system, providing a unified approach to shaders and shader vertex attribute layouts, as well as other shader data handling.
The following two materials are built-in:
- FlatColor: draw shapes with a simple, static color
- FlatTexture: draw shapes with a simple, static texture or sprite
Using these materials with Renderable objects is especially easy, since they provide custom wrapper functions as initializers. Creating a simple textured quad can be achieved using the with_texture() function:
fn with_texture(position: Vector2<f32>, size: Vector2<f32>, z_index: u8, texture: RenderID) -> Quad<FlatTexture>;
// example usage
let texture = ...; // load texture here
let quad = Quad::with_texture(Vector2::new(0.0, 0.0), Vector2::new(10.0, 10.0), 0, texture.id());
Note: This also applies to sprites and FlatColor. The following functions exist for Quad:
fn with_sprite(position: Vector2<f32>, size: Vector2<f32>, z_index: u8, sprite: Sprite) -> Quad<FlatTexture>;
fn with_color(position: Vector2<f32>, size: Vector2<f32>, z_index: u8, color: [f32; 4]) -> Quad<FlatColor>;
Usually, it's best to keep materials as singletons (one shared instance) for the best possible performance. Therefore, both FlatColor and FlatTexture feature a get() function:
fn get() -> Rc<FlatColor>;
fn get() -> Rc<FlatTexture>;
// example usage:
let texture_material = FlatTexture::get();
You can, however, create custom instances with your own shader using the with_shader() function:
fn with_shader(shader: Rc<shader::Program>) -> FlatColor;
fn with_shader(shader: Rc<shader::Program>) -> FlatTexture;
// example usage:
let texture_shader = Rc::new(...); // get your shader here
let texture_material = FlatTexture::with_shader(texture_shader);
This is especially useful when dealing with Renderable implementations that don't support all materials, e.g.:
- Line supports only FlatColor
- TextArea supports only FlatTexture
Creating custom materials is possible too, by simply implementing the Material trait:
struct MyMaterial {
shader: Rc<shader::Program>
}
impl Material for MyMaterial {
type Layout = ...; // buffer layout in the form of `BufferLayoutBuilder`
type Data<const N: usize> = ...; // vertex data (not including vertices!) with a given length N
type VertexTuple = ...; // actual vertex attribute layout used by the shader in `VertexTupleN<...>` representation
fn shader(&self) -> &Rc<shader::Program> { &self.shader } // shader getter
fn texture_id<const N: usize>(data: &Self::Data<N>) -> Option<RenderID> { ... } // return `Some(texture_id)` if your vertex attributes support textures
fn layout<'a>() -> &'a Rc<BufferLayout> { ... } // return the built BufferLayout instance
fn vertices<const N: usize>(&self, vertices: [[f32; 2]; N], data: &Self::Data<N>) -> [Self::VertexTuple; N] { ... } // combine both vertex points and data together
fn data_slice<const N: usize, const NN: usize>(&self, data: &Self::Data<N>, offset: usize) -> Self::Data<NN> { ... } // return a slice of a `Data<N>` variable of length `NN`
fn default_data<const N: usize>(&self) -> Self::Data<N>; // return a pre-filled default value of `Data<N>`
}
Note: See client/src/renderer/material/mod.rs for example implementations.
To be able to use your custom material easily with, for example, Quad, you may define a custom trait to extend the default constructor functions:
trait WithMyMaterial {
fn with_my_material(position: Vector2<f32>, size: Vector2<f32>, z_index: u8, my_material_data: ...) -> Self;
}
impl WithMyMaterial for Quad<MyMaterial> {
fn with_my_material(position: Vector2<f32>, size: Vector2<f32>, z_index: u8, my_material_data: ...) -> Self {
Self::new(position, size, z_index, MyMaterial::get() /* implement a get() method */, (
// insert your material's data here (colors, textures, etc.)
))
}
}
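For illustration, the singleton pattern behind such a get() method could be sketched in plain Rust like this. This is a thread-local OnceCell sketch with a stand-in MyMaterial type, not the engine's mechanism (as noted, Aeonetica itself recommends the DataStore for sharing across mods):

```rust
use std::cell::OnceCell;
use std::rc::Rc;

struct MyMaterial; // illustrative stand-in for a material type

thread_local! {
    // one shared instance per thread; Rc mirrors the Rc<FlatTexture>
    // returned by the built-in materials' get() functions
    static INSTANCE: OnceCell<Rc<MyMaterial>> = OnceCell::new();
}

impl MyMaterial {
    fn get() -> Rc<MyMaterial> {
        // lazily create the instance on first access, then clone the Rc
        INSTANCE.with(|cell| cell.get_or_init(|| Rc::new(MyMaterial)).clone())
    }
}

fn main() {
    let a = MyMaterial::get();
    let b = MyMaterial::get();
    assert!(Rc::ptr_eq(&a, &b)); // both handles point to the same instance
}
```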
Note: For the material's get() function, it is best to use the DataStore type to avoid duplication through multiple mod references!
The render pipeline describes the process the renderer takes to generate the final image. The default pipeline is quite basic and looks like this:
fn pipeline(&mut DefaultPipeline, renderer: &mut Renderer, camera: &Camera, target: &Target, mut updater: LayerUpdater, time: Time) {
renderer.begin_scene(camera); // begin the renderer scene -> apply camera transformations
updater.update(renderer, time); // update the scene
renderer.draw_vertices(target); // draw all submitted vertices
renderer.end_scene(); // end the renderer scene
}
For many visual effects, however, it is necessary to have a few more steps in the rendering process. For this case, you can implement the Pipeline trait, which provides a single function, pipeline():
struct MyPipeline;
impl Pipeline for MyPipeline {
fn pipeline(&mut self, renderer: &mut Renderer, camera: &Camera, target: &Target, mut updater: LayerUpdater, time: Time) {
// your rendering pipeline code here
}
}
Here, you can utilize features like framebuffers, stencils and other OpenGL functionality.
In this code, there are two types of interest:
1. Target:
Target is an enum defined like this:
enum Target<'a> {
Raw,
FrameBuffer(&'a FrameBuffer)
}
It defines the specific target the renderer renders to. Raw represents the default framebuffer of the window and is rarely used. FrameBuffer() points to a specific FrameBuffer used as the target.
2. LayerUpdater:
LayerUpdater is a small struct whose only purpose is to call the update methods of all active Layers and ClientHandles. Use update() for this purpose. You can also access other parameters, like the DataStore, via this struct:
fn update(&mut LayerUpdater<'_>, renderer: &mut Renderer, time: Time);
fn store<'a>(&mut LayerUpdater<'a>) -> &mut &'a mut DataStore;
Framebuffers are a combination of textures, buffers and stencils that can be used as a target to render to.
Note: Aeonetica has a few built-in FrameBuffers for scaling the canvas. All of these buffers have a resolution of 1920x1080 pixels.
A framebuffer can be created using its new() function:
fn new<const N: usize>(attachments: [Attachment; N], freestanding: bool) -> ErrorResult<FrameBuffer>;
// example usage:
let frame_buffer = FrameBuffer::new(
[
Attachment::Color(Texture::create(Vector2::new(1920, 1080), Format::RgbaU8))
], true
).expect("error creating FrameBuffer");
- attachments is an array of features you'd like the FrameBuffer to support:
  - Color is a standard color attachment used as a target render texture. It takes a texture as its argument. At least one color attachment is required per FrameBuffer!
  - DepthStencil takes a RenderBuffer used as the depth and stencil buffer at the same time. This is used when layering 3D graphics and is generally supported by default in Aeonetica.
  - (...)
- freestanding, if true, allows the framebuffer to be rendered without any other geometry or the Renderer struct, using the FrameBuffer's render() function:
fn render<const N: usize>(&FrameBuffer, texture_attachments: [(usize, &UniformStr); N], target: &Target, shader: &shader::Program);
// example usage:
const COLOR_ATTACHMENT0_NAME: UniformStr = uniform_str!("u_YourUniformName"); // name of the uniform the color attachment gets uploaded to in the shader
let target = ...; // get your render target here
let shader = ...; // get your shader here
let frame_buffer = ...; // get your source framebuffer here
frame_buffer.render([(0usize, &COLOR_ATTACHMENT0_NAME)], &target, &shader);
In this example, the framebuffer was instructed to upload its first color attachment to the shader uniform u_YourUniformName for drawing. You can have and draw as many color attachments as you like by simply expanding the array.
To set a framebuffer as the render target, bind it using the bind() function. To unbind it, use unbind():
fn bind(&FrameBuffer);
fn unbind(&FrameBuffer);
// example usage:
let frame_buffer = ...;
frame_buffer.bind();
// do rendering here
frame_buffer.unbind();
// do sth with the framebuffer
To delete a framebuffer, use the delete() function.
Renderbuffers are used as storage space for the DepthStencil attachment in framebuffers. To create a RenderBuffer, use the new() function:
fn new(size: Vector2<u32>) -> ErrorResult<RenderBuffer>;
// example usage:
let render_buffer = RenderBuffer::new(Vector2::new(1920, 1080)).expect("error creating renderbuffer");
Renderbuffers also support binding (bind()), unbinding (unbind()) and deletion (delete()).
In this chapter you will find more advanced rendering functions that are only rarely needed.
By implementing the Renderable trait for a struct, you can make it compatible with Aeonetica's rendering system. A typical implementation may look like this:
Note: This is pseudo-code! Use the shapes defined in client/src/renderer/builtin as a reference.
struct MyShape {
position: Vector2<f32>,
z_index: u8,
// other setting values here, like size, rotation, etc.
material: Rc<impl Material>, // either make the material generic or define a fixed material made to use with your shape
vertices: Option<[Material::VertexTuple; N /* for quads, this is 4 */]>,
params: Material::Data<N /* same as above */>,
location: Option<VertexLocation>
}
const MY_SHAPE_INDICES: [u32; X /* for quads, this is 6 */] = [...]; // indices describe the order in which the vertices get drawn, for quads this is 0, 1, 2, 2, 3, 0
impl MyShape {
pub fn new(/* constructor arguments */) -> Self {
// constructor here
}
pub fn set_dirty(&mut self) {
self.vertices = None;
}
pub fn set_position(&mut self, position: Vector2<f32>) {
self.position = position;
self.set_dirty(); // set dirty so the changes in the values can be applied to the vertices
}
/*
* Other getters and setters here
*/
fn recalculate_vertex_data(&mut self) {
self.vertices = Some(self.material.vertices(
[
// insert all vertices (points in space) here that make up your shape. They are expected in the [f32; 2] array type.
],
&self.params
));
}
}
// actual `Renderable` implementation:
impl Renderable for MyShape {
fn vertex_data(&mut self) -> VertexData<'_> {
if self.is_dirty() {
self.recalculate_vertex_data();
}
let vertices = self.vertices.as_ref().unwrap();
VertexData::from_material::<Material, N /* insert your material and number of vertices here */>(
util::to_raw_byte_slice!(vertices),
MY_SHAPE_INDICES.as_slice(),
&self.material,
&self.params,
self.z_index
)
}
fn texture_id(&self) -> Option<crate::renderer::RenderID> {
None|Some(texture) // if you have a texture bound to your shape, return its RenderID here
}
fn location(&self) -> &Option<VertexLocation> {
&self.location
}
fn set_location(&mut self, location: Option<VertexLocation>) {
self.location = location;
}
fn is_dirty(&self) -> bool {
self.vertices.is_none()
}
fn has_location(&self) -> bool {
self.location.is_some()
}
}
Note: In contrast to, for example, position, some values like z_index or the number of vertices/indices cannot be changed at runtime. Set these before object creation.
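The index order used for quads (0, 1, 2, 2, 3, 0) describes two triangles sharing a diagonal; a small self-contained sketch of that decomposition (helper names are illustrative):

```rust
// Sketch: how the quad indices 0, 1, 2, 2, 3, 0 split 4 vertices
// into the two triangles the GPU actually draws.
fn quad_triangles(vertices: [[f32; 2]; 4]) -> [[[f32; 2]; 3]; 2] {
    const INDICES: [usize; 6] = [0, 1, 2, 2, 3, 0];
    [
        [vertices[INDICES[0]], vertices[INDICES[1]], vertices[INDICES[2]]],
        [vertices[INDICES[3]], vertices[INDICES[4]], vertices[INDICES[5]]],
    ]
}

fn main() {
    // a unit quad, vertices ordered counter-clockwise
    let unit = [[0.0f32, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]];
    let tris = quad_triangles(unit);
    // both triangles share the diagonal between vertices 0 and 2
    assert_eq!(tris[0][2], tris[1][0]);
    assert_eq!(tris[0][0], tris[1][2]);
}
```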
This chapter focuses more on the implementation than on the usage of the renderer. The whole renderer is implemented in the client/src/renderer folder (in Rust, the aeonetica_client::renderer module):
Default shaders:
- client/assets/default-shader.glsl: default post-processing shader
- client/assets/flat-color-shader.glsl: default shader for FlatColor materials
- client/assets/flat-texture-shader.glsl: default shader for FlatTexture materials
Icons:
- client/assets/logo-XXX.png: used as the main window icon, available in four resolutions (15px, 30px, 60px and 120px)
Mathematical utilities:
- engine/src/math
  - engine/src/math/vector: implementations of the generic vector types Vector2 and Vector3
  - engine/src/math/camera.rs: implementation of the viewport Camera struct
  - engine/src/math/matrix.rs: implementation of the generic 4x4 matrix type Matrix4
Other utilities:
- engine/src/collections/ordered_map.rs: an implementation of a HashMap with fixed value order
- engine/src/util/generic_assert.rs: implementation of compile-time generic assertions via data types (used in TextArea)
Main implementation:
- client/src/renderer
  - util.rs: OpenGL utility wrapper functions around glViewport, glScissor and glPolygonMode
  - pipeline.rs: definition of the Pipeline trait and implementation of DefaultPipeline
  - glerror.rs: OpenGL error handler
  - layer.rs: definition of the Layer trait and associated utilities
  - batch.rs: definition of the batch renderer
  - context.rs: definition of RenderContext, tasked with dispatching events and handling layers
  - mod.rs: implementation of the main renderer frontend
- client/src/renderer/window
  - event.rs: definition of window events
  - mod.rs: wrapper around the glfw3 library; the most basic wrapper around draw and event callbacks
- client/src/renderer/texture
  - font.rs: definition of the BitmapFont system
  - sprite_sheet.rs: definition of the SpriteSheet system
  - texture.rs: Texture wrapper and image loader
- client/src/renderer/shader
  - uniform.rs & data_type.rs: used for interaction between shader and Rust data types
  - postprocessing.rs: implementation of PostProcessingLayer
  - mod.rs: shader wrapper and loader
- client/src/renderer/material/mod.rs: material system and both built-in materials
- client/src/renderer/builtin: implementations of basic Renderable items, like Quad, Line, TextArea and ParticleEmitter (WIP!)
- client/src/renderer/buffer
  - framebuffer.rs: wrapper around OpenGL frame buffers
  - renderbuffer.rs: wrapper around OpenGL render buffers
  - vertex_array.rs: wrapper around OpenGL vertex arrays; handles vertex attributes
  - mod.rs: basic general-purpose OpenGL buffer implementation, containing utilities to programmatically define the BufferLayout