When we last left off we basically just drew pixels to the screen and not much else. If you played around with the vertices you could draw shapes, but only in 2D. Perhaps it's worth reiterating that WebGL doesn't actually know much about 3D; it's really just about drawing things to the screen. We have to put in more work to make things 3D.

# Expanding vertex data

Our last vertices were just 2D locations in clip space, but to make interesting things we need more information. This brings us to one of the better-known parts of an OpenGL tutorial: the GL triangle (in this case we'll keep it a quad since we already have one). It's useful because it shows how data passes from the vertex shader to the fragment shader, and also how we can use multiple data streams.

We'll start by adding a method that looks almost identical to the other that added the location vertex data:

```
createColors(){
    const colorBuffer = this.context.createBuffer();
    this.context.bindBuffer(this.context.ARRAY_BUFFER, colorBuffer);
    const colors = new Float32Array([
        1.0, 0.0, 0.0,
        0.0, 1.0, 0.0,
        0.0, 0.0, 1.0,
        1.0, 1.0, 0.0
    ]);
    this.context.bufferData(this.context.ARRAY_BUFFER, colors, this.context.STATIC_DRAW);
    const vertexColorLocation = this.context.getAttribLocation(this.program, "aVertexColor");
    this.context.enableVertexAttribArray(vertexColorLocation);
    this.context.vertexAttribPointer(vertexColorLocation, 3, this.context.FLOAT, false, 0, 0); //3 components at a time
}
```

This buffer holds color information for the vertices. Aside from the naming changes, we're taking 3 elements at a time instead of 2. The values should be familiar if you've worked with color before: they are float representations of red, green, and blue. Each color here corresponds to a vertex in the position buffer, so they must be in the same order.

You may wonder: if the buffer layout is configurable, could we do all of this in one step with 5-value vertices? We could, but it's more complex. The maximum size of a vertex vector is 4 and we need 5, so to do this in one step we'd need to interleave the data. That's what the offset and stride parameters of vertexAttribPointer are for. Being an amateur myself I don't know all the trade-offs, but I've found that separate buffers work well, especially if you want to turn features on and off.
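For the curious, here's a sketch of what interleaving might look like (we won't use this in the tutorial; the attribute locations are the ones from our existing code, and the vertex values are just an example):

```javascript
// A sketch of interleaving: each vertex packs [x, y, r, g, b] into a single
// buffer, and stride/offset tell vertexAttribPointer how to step through it.
const interleaved = new Float32Array([
    // x,    y,    r,   g,   b
    -1.0, -1.0,   1.0, 0.0, 0.0,
    -1.0,  1.0,   0.0, 1.0, 0.0,
     1.0,  1.0,   0.0, 0.0, 1.0,
     1.0, -1.0,   1.0, 1.0, 0.0
]);

const FLOAT_BYTES = Float32Array.BYTES_PER_ELEMENT; // 4
const stride = 5 * FLOAT_BYTES;      // 20 bytes from one vertex to the next
const positionOffset = 0;            // x, y start at byte 0 of each vertex
const colorOffset = 2 * FLOAT_BYTES; // r, g, b start 8 bytes in

// With the buffer bound, the pointers would be set up roughly like this:
// this.context.vertexAttribPointer(positionLocation, 2, this.context.FLOAT, false, stride, positionOffset);
// this.context.vertexAttribPointer(vertexColorLocation, 3, this.context.FLOAT, false, stride, colorOffset);
```

Note that stride and offset are measured in bytes, not elements, which is a common source of bugs.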

Anyway, we need to call `this.createColors()` to set up our new color buffer, and in our shader we now have access to `aVertexColor`. We'll also make some tiny edits to the vertex shader code.

```
attribute vec2 aVertexPosition;
attribute vec3 aVertexColor;
varying mediump vec4 vColor;
void main(){
    gl_Position = vec4(aVertexPosition, 0.0, 1.0);
    vColor = vec4(aVertexColor, 1.0);
}
```

Let's break this down. Line 1 we've seen; it's the position input. Line 2 is our new color input. The 3rd line is almost like a return value: it's additional data we're passing on to the fragment shader, and it's defined by `varying`. There's a second keyword, `mediump`, or "medium precision". This specifies the minimum precision of the values we pass. We can choose `lowp`, `mediump` or `highp`; the underlying hardware can pick something higher but not lower. To be honest you don't need to worry about this now, as it's all about fine-tuned optimization. You can use `highp` for everything until it's a problem, just note you might not see a difference over `mediump` and `lowp`. And it's probably obvious, but we're passing a vec4 worth of data.

Next is the main shader program. gl_Position is another return-value-like thing: it passes position information to the hardware so the vertex lands in the correct spot in screen space. We'll come back to this, but right now we're taking our 2D values and simply augmenting them with 0.0 and 1.0, since we need vec4s. 0.0 corresponds to Z, and 1.0 corresponds to W, the homogeneous coordinate used for perspective division (for now, just leave it at 1.0). Finally we assign to the varying value we defined. Again we take our 3-component color and augment it to 4 components, since that's the typical form; the 4th value is alpha.

And we get something more interesting:

If you've seen my other posts on gradients this type of effect is something we still can't quite do yet.

# Index buffers

Full screen rectangles are getting a little old. Let's try something else. Perhaps the most natural thing to expand into 3 dimensions is a cube. The problem with even something simple like a cube is the vertices rapidly become hard to deal with.

For one thing, we have 24 vertices:

```
[
    //Front
    -0.5, -0.5, -0.5,
    -0.5, 0.5, -0.5,
    0.5, 0.5, -0.5,
    0.5, -0.5, -0.5,
    //Right
    0.5, -0.5, -0.5,
    0.5, 0.5, -0.5,
    0.5, 0.5, 0.5,
    0.5, -0.5, 0.5,
    //Back
    0.5, -0.5, 0.5,
    0.5, 0.5, 0.5,
    -0.5, 0.5, 0.5,
    -0.5, -0.5, 0.5,
    //Left
    -0.5, -0.5, 0.5,
    -0.5, 0.5, 0.5,
    -0.5, 0.5, -0.5,
    -0.5, -0.5, -0.5,
    //Top
    -0.5, -0.5, 0.5,
    -0.5, -0.5, -0.5,
    0.5, -0.5, -0.5,
    0.5, -0.5, 0.5,
    //Bottom
    0.5, 0.5, -0.5,
    -0.5, 0.5, -0.5,
    -0.5, 0.5, 0.5,
    0.5, 0.5, 0.5
]
```

Plus you'd want to add colors for each vertex and have it all match up nicely. It would be easier to define just the 8 corner vertices of the cube, which also saves space.

```
createPositions() {
    const positionBuffer = this.context.createBuffer();
    this.context.bindBuffer(this.context.ARRAY_BUFFER, positionBuffer);
    const positions = new Float32Array([
        //Front
        -0.5, -0.5, -0.5,
        0.5, -0.5, -0.5,
        0.5, 0.5, -0.5,
        -0.5, 0.5, -0.5,
        //Back
        0.5, -0.5, 0.5,
        -0.5, -0.5, 0.5,
        -0.5, 0.5, 0.5,
        0.5, 0.5, 0.5
    ]);
    this.context.bufferData(this.context.ARRAY_BUFFER, positions, this.context.STATIC_DRAW);
    const positionLocation = this.context.getAttribLocation(this.program, "aVertexPosition");
    this.context.enableVertexAttribArray(positionLocation);
    this.context.vertexAttribPointer(positionLocation, 3, this.context.FLOAT, false, 0, 0);
}
```

In order to do this we're going to use something called an index buffer. It's similar to the buffers we've created thus far:

```
createIndices() {
    const indexBuffer = this.context.createBuffer();
    this.context.bindBuffer(this.context.ELEMENT_ARRAY_BUFFER, indexBuffer);
    const indices = new Uint16Array([
        0, 1, 2,
        0, 2, 3,
    ]);
    this.context.bufferData(this.context.ELEMENT_ARRAY_BUFFER, indices, this.context.STATIC_DRAW);
}
```

The new thing to note here is the ELEMENT_ARRAY_BUFFER target when we bind, which tells WebGL this is a special array buffer used for indexing elements. As such we don't assign it to a shader parameter; instead we'll change the draw call so it draws via these indices (the position and color buffers are still used, they're just accessed through the indices).

The data is a list of indices into our position and color buffers, which allows us to reuse vertices. We have defined 2 triangles, one using the vertices at indices 0, 1, 2 and one using 0, 2, 3.
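To make the indexing concrete, here's a pure-JavaScript illustration of what indexed drawing effectively does (`expandIndices` is a hypothetical helper, just for demonstration):

```javascript
// Each index picks a whole vertex out of the position data, so vertices
// 0 and 2 get reused by both triangles without duplicating their data.
function expandIndices(positions, componentsPerVertex, indices) {
    const out = [];
    for (const i of indices) {
        for (let c = 0; c < componentsPerVertex; c++) {
            out.push(positions[i * componentsPerVertex + c]);
        }
    }
    return out;
}

// 4 unique 2D vertices...
const quad = [-1, -1,  1, -1,  1, 1,  -1, 1];
// ...expand into the 6 vertices of two triangles
const triangles = expandIndices(quad, 2, [0, 1, 2, 0, 2, 3]);
```

The savings are small for a quad, but grow quickly for meshes where many triangles share vertices.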

We can now call this before render. Let's update the draw call:

```
render() {
    this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT);
    this.context.drawElements(this.context.TRIANGLES, 6, this.context.UNSIGNED_SHORT, 0);
}
```

Instead of `drawArrays` we use `drawElements`. The parameters are mostly the same: a primitive type, the number of elements, the type of the indices (UNSIGNED_SHORT is a 16-bit unsigned integer and the most widely supported type), and the offset into the array.

You can update the colors in the color buffer as you see fit, but you will need 8 of them to match up. If we render now, nothing should really have changed. We have a 50% smaller rectangle due to the changes in the vertices, but mostly we've changed how we go about using our buffers, which will come in handy later.

# Making a Cube

Let's update `createIndices`:

```
createIndices() {
    const indexBuffer = this.context.createBuffer();
    this.context.bindBuffer(this.context.ELEMENT_ARRAY_BUFFER, indexBuffer);
    const indices = new Uint16Array([
        0, 1, 2, //front
        0, 2, 3,
        1, 4, 7, //right
        1, 7, 2,
        4, 5, 6, //back
        4, 6, 7,
        5, 0, 3, //left
        5, 3, 6,
        3, 2, 7, //top
        3, 7, 6,
        0, 1, 5, //bottom
        1, 4, 5
    ]);
    this.context.bufferData(this.context.ELEMENT_ARRAY_BUFFER, indices, this.context.STATIC_DRAW);
}
```

Now we have a cube. While you don't have to lay everything out in terms of individual triangles, I find it's easier to reason about. You'll probably want to draw the cube on paper and manually check all the vertices. I start in the bottom-left corner of each face, and vertices are ordered counter-clockwise. Something like:

It's actually important that you define the triangles in counter-clockwise order. We'll see why later. Let's render:

This doesn't look right. One of the hardest things about WebGL (and many other graphics APIs) is that you'll often get an unexpected result that's really hard to debug, especially if you don't know what you're doing yet. What happened here is that we did in fact render a cube, but the back of the cube was drawn last, and therefore on top of the front. This is one reason I made the vertices different colors. If you get stuck in a situation like this, try turning things off or changing colors and positions in observable ways.

## Backface Culling

So how do we fix this? I alluded to it earlier: the order in which a triangle's vertices are defined matters for a feature called "backface culling." This takes the order of a triangle's vertices, called the "winding order," and uses it to determine whether the polygon faces the screen. If it doesn't, we don't bother drawing it. This is an optimization, and it's why in games, when you accidentally walk through things, you can see through them. We can enable this feature in `bootGpu`:

```
async bootGpu(){
    //... omitted
    this.context.enable(this.context.CULL_FACE);
    this.context.cullFace(this.context.BACK);
}
```

First we need to enable the feature using `enable` plus the enum for the feature. Then we tell the GPU which faces to cull, specifically the back faces. With this enabled we can check again:

This seems right. It's not really 3D and it's still a bit elongated but you can imagine looking at a cube from straight on the side you won't see anything else.
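As an aside, if you're ever unsure which way a triangle winds, you can check it with the sign of a 2D cross product. A small JavaScript sketch (using X/Y in clip space, where +Y is up):

```javascript
// The sign of the cross product of two edge vectors tells you the winding:
// positive means counter-clockwise in a Y-up coordinate system.
function isCounterClockwise(a, b, c) {
    const cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    return cross > 0;
}

// The front face's first triangle from our index buffer (X/Y of vertices 0, 1, 2)
const ccw = isCounterClockwise([-0.5, -0.5], [0.5, -0.5], [0.5, 0.5]);
```

This is essentially the test the GPU performs (in window coordinates) when deciding whether to cull a face.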

But as mentioned above, do we actually know we're drawing things correctly? We don't have a lot of clues, but could we prove it? One way is to rotate the cube so that an edge faces us. This is where we can leverage our vertex shader!

```
attribute vec3 aVertexPosition;
attribute vec3 aVertexColor;
float angle = 3.14159265 / 4.0;
mat3 rotationY = mat3(
    cos(angle), 0, sin(angle),
    0, 1, 0,
    -sin(angle), 0, cos(angle)
);
varying mediump vec4 vColor;
void main(){
    gl_Position = vec4(rotationY * aVertexPosition, 1.0);
    vColor = vec4(aVertexColor, 1.0);
}
```

The first thing I add is the angle to rotate. There is no built-in constant for PI, so you need to define it. `PI/4` is 45 degrees, so that should show it off. Next we make a matrix to rotate around the Y axis. This is a very standard thing; you can find a listing of them here: https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions. Lastly, we multiply that matrix by our vertex position before passing it to `gl_Position`. GLSL handily provides matrix multiplication out of the box, but make sure you do it in the right order.
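If the constructor order looks odd, note that GLSL matrices are filled column by column. Here is the same rotation written out in plain JavaScript (a sketch just for checking the math, not code the tutorial needs):

```javascript
// The same Y-axis rotation as the shader. The matrix is stored column-major
// to match GLSL's mat3 constructor order.
function rotationY(angle) {
    const c = Math.cos(angle);
    const s = Math.sin(angle);
    return [
        c, 0, s,  // column 0
        0, 1, 0,  // column 1
        -s, 0, c  // column 2
    ];
}

// Multiply a column-major 3x3 matrix by a 3-component vector
function mulMat3Vec3(m, v) {
    return [
        m[0] * v[0] + m[3] * v[1] + m[6] * v[2],
        m[1] * v[0] + m[4] * v[1] + m[7] * v[2],
        m[2] * v[0] + m[5] * v[1] + m[8] * v[2]
    ];
}

// Rotating the +X axis by 90 degrees swings it onto the Z axis
const rotated = mulMat3Vec3(rotationY(Math.PI / 2), [1, 0, 0]);
```

You can use this to spot-check a vertex or two on paper before trusting what the shader draws.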

This makes sense. We rotated 45 degrees to the left, so we should see half the front face and half the right face, which is what this shows. You can play around with other rotation matrices and values, and you should get something expected, minus the weird elongation. What's the deal with that, anyway?

# Uniforms

But first, let's introduce a new thing. So far we've passed data to the shaders via buffers, accessed in the shader as "attributes". Attribute values vary per vertex, and the vertex shader runs once for each one. There's another way to pass data that does not vary, called "uniforms".

Let's set up another stage in `bootGpu` called `setupUniforms`:

```
setupUniforms(){
    const colorMatrix = new Float32Array([1,0,0,1, 0,1,0,1, 0,0,1,1, 0,0,0,1]);
    const colorLocation = this.context.getUniformLocation(this.program, "uColorMatrix");
    this.context.uniformMatrix4fv(colorLocation, false, colorMatrix);
}
```

Here, for testing, we'll pass in a 4x4 matrix which represents 4 colors. The data could be used for many purposes (which we'll get to), but colors are easy to test and experiment with. This looks not too dissimilar from setting up a buffer. We first set up our data as a typed array. Then we ask the shader program for the location of our uniform; this uses `getUniformLocation` instead of `getAttribLocation`, but it works the same way. Finally, we call `uniformMatrix4fv`, passing the location, whether or not the matrix should be transposed (this must always be false according to the documentation), and the data. There are other types and shapes of uniforms if you need them, but 4x4 matrices are the largest. The full interface is `uniform[1234][if][v]` and `uniformMatrix[234]fv`, where the numbers are component counts, `i` is integer, `f` is float, and `v` is vector (pass an array). From there you can find the right combination to fit the shape of your value.

In our vertex shader we can access it:

```
uniform mat4 uColorMatrix;
attribute vec3 aVertexPosition;
attribute vec3 aVertexColor;
float angle = 3.14159265 / 4.0;
mat3 rotationY = mat3(
    cos(angle), 0, sin(angle),
    0, 1, 0,
    -sin(angle), 0, cos(angle)
);
varying mediump vec4 vColor;
void main(){
    gl_Position = vec4(rotationY * aVertexPosition, 1.0);
    vColor = uColorMatrix[2];
}
```

I'm just passing the 3rd column of values through (GLSL matrices are column-major, so indexing gives you a column), and this should make all the drawn pixels blue.

Cool. We could have also used it directly from the fragment shader (but there you need to specify the precision by putting this line at the top of the fragment shader: `precision highp float;`).
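For reference, a minimal fragment shader that consumes `vColor` might look like this (a sketch; your actual fragment shader from the earlier post may differ):

```
varying mediump vec4 vColor;

void main(){
    gl_FragColor = vColor;
}
```

If you instead read `uColorMatrix` directly here, declare `precision highp float;` at the top and add the `uniform mat4 uColorMatrix;` declaration to the fragment shader as well.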

# Perspective

The reason we get a stretched-out cube is that we haven't really told WebGL what to do with the Z value. We have it, but it doesn't mean anything. Even when we fixed the faces being drawn over each other, that was just winding order; we didn't consider Z. To fix it we need to "project" the 3D values into 2D space. You can think of this like casting a shadow: the object is 3D but we cast it onto a 2D surface. I won't go into the math because it's not really necessary, but the end result is that we can construct a 4x4 matrix that takes the Z value and squishes everything into 2D space. Intuitively, objects in the distance look smaller, and this matrix represents that operation, also called a "perspective" transform.

We could define this in the shader itself, but since we have our new uniform-passing feature, it's handy to set it from the outside, because it depends on things like the screen/canvas size and field of view:

```
export function getProjectionMatrix(screenHeight, screenWidth, fieldOfView, zNear, zFar){
    const aspectRatio = screenHeight / screenWidth;
    const fieldOfViewRadians = fieldOfView * (Math.PI / 180);
    const fovRatio = 1 / Math.tan(fieldOfViewRadians / 2);
    return [
        aspectRatio * fovRatio, 0       , 0                             , 0,
        0                     , fovRatio, 0                             , 0,
        0                     , 0       , zFar / (zFar - zNear)         , 1,
        0                     , 0       , (-zFar * zNear)/(zFar - zNear), 0
    ];
}
```

Again we'll gloss over how it works and just talk about what you need to provide. We compute the aspect ratio from the screen height and width in pixels, take the field of view in *degrees*, and construct the special matrix. zNear and zFar determine the clipping planes in the Z dimension; basically, these are the closest and furthest things can be from the camera.
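As a quick sanity check, a 90-degree field of view makes fovRatio exactly 1, which makes the matrix easy to eyeball (same math as above, repeated without the `export` so this snippet stands alone; the 600x800 screen size is just an example):

```javascript
// Same projection math as getProjectionMatrix above
function getProjectionMatrix(screenHeight, screenWidth, fieldOfView, zNear, zFar) {
    const aspectRatio = screenHeight / screenWidth;
    const fieldOfViewRadians = fieldOfView * (Math.PI / 180);
    const fovRatio = 1 / Math.tan(fieldOfViewRadians / 2);
    return [
        aspectRatio * fovRatio, 0, 0, 0,
        0, fovRatio, 0, 0,
        0, 0, zFar / (zFar - zNear), 1,
        0, 0, (-zFar * zNear) / (zFar - zNear), 0
    ];
}

// 90 degrees: tan(45 deg) = 1, so fovRatio = 1 and the X scale is just the
// aspect ratio (600 / 800 = 0.75)
const m = getProjectionMatrix(600, 800, 90, 0.1, 100);
```

Widening the field of view shrinks the X and Y scales (more of the world fits on screen), which matches the intuition of a wide-angle lens.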

We can then pass this matrix to the shader as a uniform.
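On the JavaScript side that might look like the following (a sketch; `setProjectionUniform` is a hypothetical helper name, and in our class this logic would live in `setupUniforms` using `this.context` and `this.program`):

```javascript
// Pass a 16-element projection matrix to the shader's uProjectionMatrix uniform
function setProjectionUniform(context, program, matrix) {
    const location = context.getUniformLocation(program, "uProjectionMatrix");
    // The transpose argument must always be false in WebGL 1
    context.uniformMatrix4fv(location, false, new Float32Array(matrix));
}
```

Remember to call this after the program is linked and in use, or the location lookup will fail.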

```
uniform mat4 uProjectionMatrix; //here
attribute vec3 aVertexPosition;
attribute vec3 aVertexColor;
float angle = -3.14159265 / 4.0;
mat4 rotationY = mat4(
    cos(angle), 0, sin(angle), 0,
    0, 1, 0, 0,
    -sin(angle), 0, cos(angle), 0,
    0, 0, 0, 1
);
varying mediump vec4 vColor;
void main(){
    gl_Position = uProjectionMatrix * rotationY * vec4(aVertexPosition, 1.0); //and use it here
    vColor = vec4(aVertexColor, 1.0);
}
```

I've also made a small change to the rotation matrix. Before it was 3x3, but it's actually easier to deal with everything as standard 4x4 matrices. To convert it, you add an extra row and column that are all zeros except for a 1 on the diagonal. Then we can multiply by the projection matrix.

And let's preview....

Oh no, there's just a white screen. What happened? As it turns out, we implicitly chose to look at the cube from the point (0,0,0), which, as we've defined the cube, is right inside it. As we learned with backface culling, from inside the cube all sides face away from the camera and are not drawn. This is the type of thing you can waste hours on if you aren't expecting it. Anyway, we can just push the cube away from the camera a bit:

```
uniform mat4 uProjectionMatrix;
attribute vec3 aVertexPosition;
attribute vec3 aVertexColor;
float angle = -3.14159265 / 4.0;
mat4 rotationY = mat4(
    cos(angle), 0, sin(angle), 0,
    0, 1, 0, 0,
    -sin(angle), 0, cos(angle), 0,
    0, 0, 0, 1
);
vec4 translateZ = vec4(0.0, 0.0, 2.0, 0.0);
varying mediump vec4 vColor;
void main(){
    gl_Position = uProjectionMatrix * (translateZ + rotationY * vec4(aVertexPosition, 1.0));
    vColor = vec4(aVertexColor, 1.0);
}
```

I'm just adding a constant amount of 2 to the Z component.
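As an aside, the more conventional way to express this is a 4x4 translation matrix, which composes cleanly with the other matrices. In GLSL's column-by-column constructor order, the equivalent of the vector addition above would look something like:

```
mat4 translateZ = mat4(
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 2.0, 1.0 //4th column holds the translation (x, y, z, 1)
);
//...
gl_Position = uProjectionMatrix * translateZ * rotationY * vec4(aVertexPosition, 1.0);
```

Either form works here; the matrix version just scales better once you start chaining transforms.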

And there we have it. A "real" looking cube.
