
Tristan Elliott


Creating an OpenGL square using C++ in Android. A reference guide

Table of contents

  1. The goal of this blog post
  2. Before we get started
  3. Getting started
  4. The JNI (Java Native Interface)
  5. Full C++ OpenGL ES code
  6. Making it run with Jetpack Compose

Resources

My app on the Google play store

My app's GitHub code

TLDR (Too long, didn't read)

  • Follow this tutorial and only use this blog post as a reference guide

The Goal of this blog post

  • The goal of this blog post is simple: use C++ code and OpenGL ES to create a blue square, and show said blue square inside of some Jetpack Compose code
  • I have called this blog post a reference guide mainly due to the fact that I am not very confident in its ability to communicate how to effectively use OpenGL ES to create a blue square. Yes, the code works; however, I am still very new to the world of C++ graphics programming
  • As I mentioned earlier, I believe that you, the reader, should follow this tutorial and only use this blog post as a reference guide

  • A UI demonstration of the blue square working with Jetpack Compose can be found here, on my Reddit page

Before we get started

  • Before we can get started, this blog post assumes you already have the Android NDK and CMake installed.
  • If you do not have these installed, you can follow the official documentation for installing them, HERE

CMakeLists.txt

  • Documentation
  • The CMakeLists.txt file is extremely important. Basically, it works with Gradle and allows our C++ code to be compiled. We need to make sure we have done two things before moving on.
    • 1) Tell Gradle where to find the CMakeLists.txt file. We can do so like this inside of the build.gradle file:
android {
  ...
  defaultConfig {...}
  buildTypes {...}

  // Encapsulates your external native build configurations.
  externalNativeBuild {

    // Encapsulates your CMake build configurations.
    cmake {

      // Provides a relative path to your CMake build script.
      path 'src/main/cpp/CMakeLists.txt'
    }
  }
}

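    • Note: if your module uses the Kotlin DSL (a build.gradle.kts file) instead of Groovy, the equivalent configuration looks roughly like this (a sketch, assuming the same src/main/cpp/CMakeLists.txt location):
android {
  // ...
  externalNativeBuild {
    cmake {
      // Relative path to your CMake build script, same as the Groovy version above
      path = file("src/main/cpp/CMakeLists.txt")
    }
  }
}
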
  • 2) Modify the CMakeLists.txt file to include our C++ code. We can do that like this:
add_library(square_code SHARED
        square_code.cpp)

target_link_libraries(square_code
        android
        log
        EGL
        GLESv2)

  • add_library(): as the documentation states, this adds "a library to the project using the specified source files", which basically means we are creating a place for our C++ code

  • square_code: has to be a globally unique identifier and is how we will refer to our C++ code throughout the application. If you change this name, you must clean your project and then rebuild it for it to work again

  • SHARED: Just means it is a dynamic library that may be linked by other targets and loaded at runtime.

  • square_code.cpp: This is the source file that will be compiled into the library (the file that holds our C++ code).
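
  • Putting it together, a minimal CMakeLists.txt for this setup could look something like the sketch below (the cmake_minimum_required version and the project name are my assumptions; adjust them to match your own project):
# Minimum CMake version; use whatever your Android Studio / NDK setup provides
cmake_minimum_required(VERSION 3.22.1)

project("square_code")

# Compile square_code.cpp into a shared library named "square_code"
add_library(square_code SHARED
        square_code.cpp)

# Link against the Android, logging, EGL and OpenGL ES 2.0 system libraries
target_link_libraries(square_code
        android
        log
        EGL
        GLESv2)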

Getting started

  • Documentation
  • If you want to use OpenGL ES and C++ in your application to create UI, the very first thing you should do is read about GLSurfaceView and GLSurfaceView.Renderer in the documentation listed above.

    • GLSurfaceView: This class is a View where you can draw and manipulate objects using OpenGL API calls
    • GLSurfaceView.Renderer: This interface defines the methods required for drawing graphics in a GLSurfaceView. Of this interface, we want to focus on two methods in particular:

    1) onSurfaceCreated(): called during the initial creation of the GLSurfaceView. This is where we call the code responsible for initializing our OpenGL view and graphics

    2) onDrawFrame(): The system calls this method on each redraw of the GLSurfaceView. This method will contain all of the logic responsible for doing the actual frame by frame drawing of the graphics
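
  • A small related note: by default a GLSurfaceView renders continuously, so onDrawFrame() is called over and over. If you only want to redraw on demand, you can switch the render mode after the renderer has been set (a sketch; requestRender() then triggers a single redraw):
// Inside the GLSurfaceView subclass, after setRenderer(...) has been called:
renderMode = GLSurfaceView.RENDERMODE_WHEN_DIRTY // redraw only when requestRender() is called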

The JNI (Java Native Interface)

  • Documentation
  • As the documentation states: It defines a way for the bytecode that Android compiles from managed code (written in the Java or Kotlin programming languages) to interact with native code (written in C/C++). JNI is vendor-neutral, has support for loading code from dynamic shared libraries, and while cumbersome at times is reasonably efficient.
  • Which is really just nerd talk for: We can create a Kotlin/Java class and use said class to run some C++ code
  • We can do that like this:
object NativeSquareLoading{

    init{
        System.loadLibrary("square_code");
    }


    /**
     * @param width the current view width
     * @param height the current view height
     */
    external fun init(width: Int, height: Int)
    external fun step()

}

  • The System.loadLibrary("square_code") call is what allows us to access the compiled square_code.cpp file (which we will create next). Note that the name passed in must match the library name we declared in add_library()
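  • One thing worth spelling out: the external functions declared in NativeSquareLoading are matched to C++ functions by a JNI naming convention. The native function name is the fully qualified Kotlin name with dots replaced by underscores and a Java_ prefix, which is why the functions in the C++ file below are named the way they are:
// Kotlin side: com.example.clicker.nativeLibraryClasses.NativeSquareLoading.init(width, height)
// C++ side: the JNI symbol that Kotlin call binds to
extern "C" JNIEXPORT void JNICALL
Java_com_example_clicker_nativeLibraryClasses_NativeSquareLoading_init(JNIEnv *env, jobject thiz,
                                                                       jint width, jint height);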

The GLSurfaceView and Renderer code

internal class GL2JNIView(context: Context?) : GLSurfaceView(context) {
    init {
        init()
    }


    private fun init() {

       // Create an OpenGL ES 2.0 context
        setEGLContextClientVersion(2)

        /* Set the renderer responsible for frame rendering */
        setRenderer(Renderer())
    }




    // The renderer that delegates all drawing to our native C++ code
    private class Renderer : GLSurfaceView.Renderer {
        override fun onDrawFrame(gl: GL10) {
            // The system calls this method on each redraw of the GLSurfaceView
            //this is called constantly
            NativeSquareLoading.step()
        }

        override fun onSurfaceChanged(gl: GL10, width: Int, height: Int) {
            // The system calls this method when the GLSurfaceView geometry changes,
            // including changes in size of the GLSurfaceView or orientation of the device screen.
            NativeSquareLoading.init(width, height)
        }

        override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
            // The system calls this method once, when creating the GLSurfaceView.
            // Do nothing.
        }
    }


}


Fragment shader vs Vertex shader

  • documentation

  • First, let's talk briefly about what a shader is: a shader is a small set of instructions that runs on the GPU, and those instructions are executed in parallel, for example once for every pixel on the screen. Basically, shaders let the code we write on the CPU side run on the GPU. Why the GPU? Not only does it have access to special math functions accelerated via hardware, but it can process many vertices and pixels in parallel, meaning our instructions can all run at the same time

  • Also, the vertex shader deals with the individual data points (vertices) in the stream of geometry, while the fragment shader deals with colors. So vertex means point data and fragment means color; the exact pair used in this post is shown below
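
  • To make that concrete, here is the shader pair this post uses (the same strings embedded in the C++ code below). The vertex shader passes each incoming vPosition straight through to gl_Position, and the fragment shader colors every covered pixel blue:
// Vertex shader: runs once per vertex and outputs its position
attribute vec4 vPosition;
void main() {
  gl_Position = vPosition;
}

// Fragment shader: runs once per covered pixel and outputs its color
precision mediump float;
void main() {
  gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0); // opaque blue (R, G, B, A)
}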

Full C++ OpenGL ES code


#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <android/log.h>
#include <jni.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>


// 1) CREATE A SINGLE RED TRIANGLE                       (DONE)
// 2) CREATE TWO TRIANGLES AND HAVE THEM FORM A SQUARE   (DONE)




/**
 * loadShader() is a function meant to create and compile a shader.
 *
 * @param shaderType Specifies the type of shader to be created.
 * Can be GL_VERTEX_SHADER or GL_FRAGMENT_SHADER.
 *
 * @param pSource A C-string containing the source code of the shader.
 * Can be gVertexShader or gFragmentShader
 *
 * @return The shader object handle (GLuint) if the shader is successfully compiled,
 *         or 0 if compilation fails.
 */
GLuint loadShader(GLenum shaderType, const char* pSource) {


    GLuint shader = glCreateShader(shaderType);
    if (shader) {
        glShaderSource(shader, 1, &pSource, NULL);
        glCompileShader(shader);
        GLint compiled = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);

        //on a successful compile, the conditional below does not run
        if (!compiled) {
            GLint infoLen = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
            if (infoLen) {
                char* buf = (char*)malloc(infoLen);
                if (buf) {
                    glGetShaderInfoLog(shader, infoLen, NULL, buf);
                    //log the compile error so it is visible in Logcat
                    __android_log_print(ANDROID_LOG_ERROR, "square_code",
                                        "Could not compile shader %d:\n%s", shaderType, buf);
                    free(buf);
                }
                glDeleteShader(shader);
                shader = 0;
            }
        }
    }
    return shader;
}

GLuint createProgram(const char* pVertexSource, const char* pFragmentSource) {
    //load up each shader and validate each of them
    GLuint vertexShader = loadShader(GL_VERTEX_SHADER, pVertexSource);
    if (!vertexShader) {
        return 0;
    }
    GLuint pixelShader = loadShader(GL_FRAGMENT_SHADER, pFragmentSource);
    if (!pixelShader) {
        return 0;
    }

    GLuint program = glCreateProgram();
    if (program) {
        //attach the shaders to the program object. Which is how we specify what objects are to be linked together
        //ie. attaching these shaders to the program object, tells openGL that these shaders are to be linked together
        // when linking operations occur on the program object
        glAttachShader(program, vertexShader);
        glAttachShader(program, pixelShader);

        glLinkProgram(program);
        GLint linkStatus = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);

        if (linkStatus != GL_TRUE) {
            GLint bufLength = 0;
            glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
            if (bufLength) {
                char* buf = (char*)malloc(bufLength);
                if (buf) {
                    glGetProgramInfoLog(program, bufLength, NULL, buf);
                    //log the link error so it is visible in Logcat
                    __android_log_print(ANDROID_LOG_ERROR, "square_code",
                                        "Could not link program:\n%s", buf);
                    free(buf);
                }
            }
            glDeleteProgram(program);
            program = 0;
        }
    }

    return program;
}




//This is the source code for your vertex shader.
auto gVertexShader =
        "attribute vec4 vPosition;\n"
        "void main() {\n"
        "  gl_Position = vPosition;\n"
        "}\n";

//This is the source code for your fragment shader.
auto gFragmentShader =
        "precision mediump float;\n"
        "void main() {\n"
        "  gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);\n"
        "}\n";


GLuint gProgram; // the shader program
GLuint gvPositionHandle;// holds the location where the GPU expects the vertex data required for the shader's vPosition attribute

bool setupGraphics(int w, int h) {

    gProgram = createProgram(gVertexShader, gFragmentShader);
    if (!gProgram) {
        return false;
    }
    gvPositionHandle = glGetAttribLocation(gProgram, "vPosition");

    glViewport(0, 0, w, h);
    return true;
}

const GLfloat gSquareVertices[] = {
        // Triangle one (bottom-left half of the square)
        -0.5f, 0.25f,   // Top-left corner
        -0.5f, -0.25f,  // Bottom-left corner
        0.5f, -0.25f,   // Bottom-right corner

        // Triangle two (top-right half of the square)
        0.5f, 0.25f,    // Top-right corner
        -0.5f, 0.25f,   // Top-left corner
        0.5f, -0.25f    // Bottom-right corner
};

void renderFrame() {

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);



    glUseProgram(gProgram);//We select which program we want to use

    // Get the location of the color uniform
   // GLuint colorUniform = glGetUniformLocation(gProgram, "vColor");


    // We then need to link the attribute we declared in the shader to the actual vertex data defined above,
    //so we need to link the gvPositionHandle and the gSquareVertices data
    glVertexAttribPointer(
            gvPositionHandle,
            2, //each vertex has 2 components: the X,Y positions
            GL_FLOAT, //Specifies the data type of each component in the array
            GL_FALSE, //the data is not normalized
            0, //no stride between our vertices (they are tightly packed)
            gSquareVertices //pointer to the actual square vertices
    );

    glEnableVertexAttribArray(gvPositionHandle);

    glDrawArrays(GL_TRIANGLES, 0, 3);  // First triangle (vertices 0, 1, 2)
    glDrawArrays(GL_TRIANGLES, 3, 3);  // Second triangle (vertices 3, 4, 5)


}


extern "C"
JNIEXPORT void JNICALL
Java_com_example_clicker_nativeLibraryClasses_NativeSquareLoading_init(JNIEnv *env, jobject thiz,
                                                                       jint width, jint height) {
    setupGraphics(width, height);
}
extern "C"
JNIEXPORT void JNICALL
Java_com_example_clicker_nativeLibraryClasses_NativeSquareLoading_step(JNIEnv *env, jobject thiz) {
    renderFrame();
}

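  • A small aside on renderFrame(): since the six vertices in gSquareVertices are laid out back to back, the two glDrawArrays() calls could, as far as I can tell, be collapsed into a single call that draws both triangles at once:
glDrawArrays(GL_TRIANGLES, 0, 6); // draw both triangles (vertices 0 through 5) in one call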

Making it run with Jetpack Compose

@Composable
fun GLSurfaceViewComposable(context: Context) {
    AndroidView(
        factory = {
            GL2JNIView(context)
        },
        modifier = Modifier.fillMaxSize()
    )
}

val context = LocalContext.current
GLSurfaceViewComposable(context)

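  • For completeness, here is roughly where those last two lines would live, for example inside an Activity's setContent block (the Activity name here is just a placeholder):
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // grab the current Context and hand it to our GLSurfaceView wrapper
            val context = LocalContext.current
            GLSurfaceViewComposable(context)
        }
    }
}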

Conclusion

  • Thank you for taking the time out of your day to read this blog post of mine. If you have any questions or concerns please comment below or reach out to me on Twitter.
