In the previous blog post, we had an empty scene being rendered to two framebuffers, which were used to texture two quads. In this blog post I will describe how I created the scene which is rendered to those framebuffers.
For anyone following along at home, here is the code as it stood at the end of the last post.
The first thing I did was create a mutable list of AbstractGameObjects, which is looped through in the onDrawFrame function to draw each object to the scene.
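As a rough sketch of that structure (everything here other than AbstractGameObject and onDrawFrame is my guess at the surrounding code, not the post's actual implementation):

```kotlin
// Minimal sketch: a base class with a draw function, and a renderer
// holding a mutable list that is iterated each frame.
abstract class AbstractGameObject {
    abstract fun draw(viewProjectionMatrix: FloatArray)
}

class GameRenderer {
    private val gameObjects = mutableListOf<AbstractGameObject>()

    fun drawScene(viewProjectionMatrix: FloatArray) {
        // Draw every object in the scene with the current camera's matrix.
        for (gameObject in gameObjects) {
            gameObject.draw(viewProjectionMatrix)
        }
    }
}
```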
Then I modified the AbstractGameObject class to store the position in the scene at which the object is to be rendered, and I modified the TriangleGameObject class to take an Int parameter in its constructor, which is used to determine the colour of the rendered triangle.
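A sketch of those two changes might look like this (the field names and the colour lookup table are my assumptions; the post only says an Int selects the colour):

```kotlin
// Position lives on the base class so every game object can be placed
// in the scene; the colour index is specific to TriangleGameObject.
abstract class AbstractGameObject {
    var x = 0f
    var y = 0f
    var z = 0f

    fun setPosition(x: Float, y: Float, z: Float) {
        this.x = x
        this.y = y
        this.z = z
    }

    abstract fun draw(viewProjectionMatrix: FloatArray)
}

class TriangleGameObject(colourIndex: Int) : AbstractGameObject() {
    // Pick an RGBA colour from a small table using the constructor Int.
    private val colour = COLOURS[colourIndex % COLOURS.size]

    override fun draw(viewProjectionMatrix: FloatArray) {
        // Upload the matrix and colour, then issue the triangle draw call.
    }

    companion object {
        private val COLOURS = arrayOf(
            floatArrayOf(1f, 0f, 0f, 1f), // red
            floatArrayOf(0f, 1f, 0f, 1f)  // green
        )
    }
}
```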
In the onSurfaceCreated function, I set the right camera to x = 1.0, y = 0.0, z = -3.0 and the left camera to x = -1.0, y = 0.0, z = -3.0. I then added two TriangleGameObjects to the list of objects to render, one at the same x position as each camera.
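Something along these lines, assuming a simple GameCamera class holding a position (the exact shape of GameCamera is not shown in the post):

```kotlin
// Sketch of the setup described above: two cameras one unit either
// side of the origin, and one triangle directly in front of each.
data class GameCamera(var x: Float, var y: Float, var z: Float)

val rightCamera = GameCamera(1.0f, 0.0f, -3.0f)
val leftCamera = GameCamera(-1.0f, 0.0f, -3.0f)

gameObjects.add(TriangleGameObject(0).apply { setPosition(1.0f, 0.0f, 0.0f) })
gameObjects.add(TriangleGameObject(1).apply { setPosition(-1.0f, 0.0f, 0.0f) })
```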
Also in the onSurfaceCreated function, I set the projection matrix based on the aspect ratio of the framebuffer texture. As the scene will always be rendered at the same dimensions, this only needs to be done once, when the surface is created.
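Using Android's android.opengl.Matrix helpers, that one-off setup could look like this (the framebuffer dimension variables and the near/far plane values are assumptions):

```kotlin
import android.opengl.Matrix

// Build a perspective projection once from the framebuffer texture's
// aspect ratio; the frustum extends from z = 1 to z = 10 here.
val projectionMatrix = FloatArray(16)
val ratio = framebufferWidth.toFloat() / framebufferHeight.toFloat()
Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1f, 1f, 1f, 10f)
```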
Next, in the onDrawScene function, the view matrix is set based on the passed GameCamera. This is then combined with the projection matrix to generate the view projection matrix, which is passed to the draw function of each of the objects to be rendered.
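A sketch of that function, assuming the camera looks straight ahead along the z axis (the look direction and up vector are my assumptions, as the post doesn't yet cover rotation):

```kotlin
import android.opengl.Matrix

fun onDrawScene(camera: GameCamera) {
    val viewMatrix = FloatArray(16)
    val viewProjectionMatrix = FloatArray(16)

    // View matrix from the camera's position, looking along +z.
    Matrix.setLookAtM(
        viewMatrix, 0,
        camera.x, camera.y, camera.z,      // eye position
        camera.x, camera.y, camera.z + 1f, // point straight ahead
        0f, 1f, 0f                         // up vector
    )

    // Combine projection and view into a single view projection matrix.
    Matrix.multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0)

    for (gameObject in gameObjects) {
        gameObject.draw(viewProjectionMatrix)
    }
}
```

Computing the view projection matrix once per camera, rather than per object, saves one matrix multiply for every object in the scene.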
The draw function was modified to generate a model matrix (currently just the translation portion of the model matrix), which is then multiplied by the view projection matrix passed in to generate the model view projection matrix, which is passed to the vertex shader.
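The matrix work in draw might be sketched like this (the uniform handle name is an assumption, and the actual vertex binding and draw call are elided):

```kotlin
import android.opengl.GLES20
import android.opengl.Matrix

fun draw(viewProjectionMatrix: FloatArray) {
    val modelMatrix = FloatArray(16)
    val mvpMatrix = FloatArray(16)

    // Model matrix is currently translation only, from the stored position.
    Matrix.setIdentityM(modelMatrix, 0)
    Matrix.translateM(modelMatrix, 0, x, y, z)

    // MVP = viewProjection * model, then upload it to the vertex shader.
    Matrix.multiplyMM(mvpMatrix, 0, viewProjectionMatrix, 0, modelMatrix, 0)
    GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false, mvpMatrix, 0)

    // ... bind vertex data and issue the draw call here ...
}
```

Note the multiplication order: the model matrix must be on the right so that vertices are transformed into world space before the view and projection are applied.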
After these changes we have a simple scene being rendered twice, from two different camera positions; the code can be found here.
For the next post, I shall be looking at using the phone's sensors to rotate the camera in the direction the player is looking.