Android - Displaying Graphics with OpenGL ES


The OpenGL ES APIs provided by the Android framework offer a set of tools for displaying high-end, animated graphics that are limited only by your imagination, and that can also benefit from the acceleration of the graphics processing units (GPUs) provided on many Android devices.

GLSurfaceView is a view container for graphics drawn with OpenGL, and GLSurfaceView.Renderer controls what is drawn within that view.

In order for your application to use the OpenGL ES 2.0 API, you must add the following declaration to your manifest:

<uses-feature android:glEsVersion="0x00020000" android:required="true" />

If your application uses texture compression, you must also declare which compression formats your app supports, so that it is only installed on compatible devices.

<supports-gl-texture android:name="GL_OES_compressed_ETC1_RGB8_texture" />
<supports-gl-texture android:name="GL_OES_compressed_paletted_texture" />
To get started, create an Activity that instantiates a GLSurfaceView and sets it as the content view:

public class OpenGLES20Activity extends Activity {

    private GLSurfaceView mGLView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Create a GLSurfaceView instance and set it
        // as the ContentView for this Activity.
        mGLView = new MyGLSurfaceView(this);
        setContentView(mGLView);
    }
}

Note: OpenGL ES 2.0 requires Android 2.2 (API Level 8) or higher, so make sure your Android project targets that API or higher.
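If you also want to check for OpenGL ES 2.0 support at runtime (for example, to fall back to a simpler rendering path on older devices), one common approach is to query the device configuration info. The following is only a sketch, and assumes it is called from within an Activity such as OpenGLES20Activity above:

import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;

// Returns true if the device reports support for OpenGL ES 2.0 or higher.
// Sketch only: assumes it is called from within an Activity.
private boolean supportsEs2() {
    ActivityManager activityManager =
            (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
    ConfigurationInfo configurationInfo =
            activityManager.getDeviceConfigurationInfo();
    return configurationInfo.reqGlEsVersion >= 0x20000;
}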

The essential code for a GLSurfaceView is minimal, so for a quick implementation, it is common to just create an inner class in the activity that uses it:

class MyGLSurfaceView extends GLSurfaceView {

    private final MyGLRenderer mRenderer;

    public MyGLSurfaceView(Context context){
        super(context);

        // Create an OpenGL ES 2.0 context
        setEGLContextClientVersion(2);

        mRenderer = new MyGLRenderer();

        // Set the Renderer for drawing on the GLSurfaceView
        setRenderer(mRenderer);
    }
}

One other optional addition to your GLSurfaceView implementation is to set the render mode so that the view is drawn only when there is a change to your drawing data, using the GLSurfaceView.RENDERMODE_WHEN_DIRTY setting:

// Render the view only when there is a change in the drawing data
setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
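For context, this call belongs in the MyGLSurfaceView constructor shown above, after setRenderer(); the sketch below repeats that constructor only to show the placement, since GLSurfaceView requires the render mode to be set after a renderer has been attached:

public MyGLSurfaceView(Context context) {
    super(context);
    setEGLContextClientVersion(2);

    mRenderer = new MyGLRenderer();
    setRenderer(mRenderer);

    // Render the view only when there is a change in the drawing data.
    // setRenderMode() may only be called after setRenderer().
    setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
}

With this mode set, the view is redrawn only when requestRender() is called, as the touch-handling code later in this article does.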

There are three methods in a renderer that are called by the Android system to determine what and how to draw on a GLSurfaceView (a minimal renderer skeleton follows the list):

  • onSurfaceCreated() - Called once to set up the view's OpenGL ES environment.
  • onDrawFrame() - Called for each redraw of the view.
  • onSurfaceChanged() - Called if the geometry of the view changes, for example when the device's screen orientation changes.
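The article refers to a MyGLRenderer class but never shows it. For reference, a minimal skeleton implementing these three methods might look like the following sketch; it only clears the screen, and the shape and matrix code from the later sections plugs into these methods:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

public class MyGLRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // One-time OpenGL ES setup: set the background frame color.
        // Shapes such as Triangle and Square are typically created here.
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // Redraw the background color; per-frame drawing goes here.
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        // Adjust the viewport to the new surface geometry.
        GLES20.glViewport(0, 0, width, height);
    }
}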
Being able to define shapes to be drawn in the context of an OpenGL ES view is the first step in creating your high-end graphics masterpiece.

For maximum efficiency, you write these coordinates into a ByteBuffer, which is then passed into the OpenGL ES graphics pipeline for processing.

public class Triangle {

    private FloatBuffer vertexBuffer;

    // number of coordinates per vertex in this array
    static final int COORDS_PER_VERTEX = 3;
    static float triangleCoords[] = {   // in counterclockwise order:
             0.0f,  0.622008459f, 0.0f, // top
            -0.5f, -0.311004243f, 0.0f, // bottom left
             0.5f, -0.311004243f, 0.0f  // bottom right
    };

    // Set color with red, green, blue and alpha (opacity) values
    float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };

    public Triangle() {
        // initialize vertex byte buffer for shape coordinates
        ByteBuffer bb = ByteBuffer.allocateDirect(
                // (number of coordinate values * 4 bytes per float)
                triangleCoords.length * 4);
        // use the device hardware's native byte order
        bb.order(ByteOrder.nativeOrder());

        // create a floating point buffer from the ByteBuffer
        vertexBuffer = bb.asFloatBuffer();
        // add the coordinates to the FloatBuffer
        vertexBuffer.put(triangleCoords);
        // set the buffer to read the first coordinate
        vertexBuffer.position(0);
    }
}

A square is drawn as two triangles that share two vertices. To avoid defining those shared coordinates twice, use a drawing list to tell the OpenGL ES graphics pipeline how to draw these vertices. Here is the code for this shape:

public class Square {

    private FloatBuffer vertexBuffer;
    private ShortBuffer drawListBuffer;

    // number of coordinates per vertex in this array
    static final int COORDS_PER_VERTEX = 3;
    static float squareCoords[] = {
            -0.5f,  0.5f, 0.0f,   // top left
            -0.5f, -0.5f, 0.0f,   // bottom left
             0.5f, -0.5f, 0.0f,   // bottom right
             0.5f,  0.5f, 0.0f }; // top right

    private short drawOrder[] = { 0, 1, 2, 0, 2, 3 }; // order to draw vertices

    public Square() {
        // initialize vertex byte buffer for shape coordinates
        ByteBuffer bb = ByteBuffer.allocateDirect(
        // (# of coordinate values * 4 bytes per float)
                squareCoords.length * 4);
        bb.order(ByteOrder.nativeOrder());
        vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(squareCoords);
        vertexBuffer.position(0);

        // initialize byte buffer for the draw list
        ByteBuffer dlb = ByteBuffer.allocateDirect(
        // (# of coordinate values * 2 bytes per short)
                drawOrder.length * 2);
        dlb.order(ByteOrder.nativeOrder());
        drawListBuffer = dlb.asShortBuffer();
        drawListBuffer.put(drawOrder);
        drawListBuffer.position(0);
    }
}
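The article does not show how the Square is actually drawn with this draw list. The sketch below uses glDrawElements() for that purpose; it is not part of the original example and assumes the Square class sets up the same mProgram, mPositionHandle, mColorHandle, color, and vertexStride members that the Triangle class sets up in the following sections:

public void draw() {
    // Assumes mProgram, mPositionHandle, mColorHandle, color and
    // vertexStride are set up exactly as in the Triangle class below.
    GLES20.glUseProgram(mProgram);

    // Prepare the square coordinate data
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                                 GLES20.GL_FLOAT, false,
                                 vertexStride, vertexBuffer);

    // Set color for drawing the square
    mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
    GLES20.glUniform4fv(mColorHandle, 1, color, 0);

    // Draw the square as two triangles, reusing the two shared vertices
    // through the draw list instead of repeating their coordinates
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                          GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

    // Disable vertex array
    GLES20.glDisableVertexAttribArray(mPositionHandle);
}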
Drawing shapes with the OpenGL ES 2.0 API takes a bit more code than you might imagine, because the API provides a great deal of control over the graphics rendering pipeline.

You need at least one vertex shader to draw a shape and one fragment shader to color that shape.

public class Triangle {

    private final String vertexShaderCode =
        "attribute vec4 vPosition;" +
        "void main() {" +
        "  gl_Position = vPosition;" +
        "}";

    private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform vec4 vColor;" +
        "void main() {" +
        "  gl_FragColor = vColor;" +
        "}";

    ...
}

Shaders contain OpenGL Shading Language (GLSL) code that must be compiled prior to using it in the OpenGL ES environment. To compile this code, create a utility method in your renderer class:

public static int loadShader(int type, String shaderCode){

    // create a vertex shader type (GLES20.GL_VERTEX_SHADER)
    // or a fragment shader type (GLES20.GL_FRAGMENT_SHADER)
    int shader = GLES20.glCreateShader(type);

    // add the source code to the shader and compile it
    GLES20.glShaderSource(shader, shaderCode);
    GLES20.glCompileShader(shader);

    return shader;
}
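The loadShader() method above does not verify that compilation succeeded. During development it can help to check the compile status and log the shader info log; the variant below is a sketch, not part of the original example:

import android.util.Log;

public static int loadShader(int type, String shaderCode) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, shaderCode);
    GLES20.glCompileShader(shader);

    // Optional: check the compile status so broken GLSL fails loudly
    // during development instead of silently drawing nothing.
    final int[] compileStatus = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
    if (compileStatus[0] == 0) {
        Log.e("MyGLRenderer", "Shader compile failed: "
                + GLES20.glGetShaderInfoLog(shader));
        GLES20.glDeleteShader(shader);
        return 0;
    }
    return shader;
}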

Note: Compiling OpenGL ES shaders and linking programs is expensive in terms of CPU cycles and processing time, so you should avoid doing this more than once. If you do not know the content of your shaders at runtime, you should build your code such that they only get created once and then cached for later use.
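One simple way to follow this advice, assuming your shader source strings do not change at runtime, is to compile each shader once and reuse the cached handle. The helper below is hypothetical (loadShaderCached() and its map are not part of the original example) and simply wraps the loadShader() method above:

import java.util.HashMap;
import java.util.Map;

// Hypothetical cache of compiled shader handles, keyed by shader source.
// Assumes all calls happen on the GL thread and the EGL context is not
// lost; real code would also clear the cache when the context is recreated.
private static final Map<String, Integer> sShaderCache =
        new HashMap<String, Integer>();

public static int loadShaderCached(int type, String shaderCode) {
    Integer cached = sShaderCache.get(shaderCode);
    if (cached != null) {
        return cached.intValue();
    }
    int shader = loadShader(type, shaderCode);
    sShaderCache.put(shaderCode, shader);
    return shader;
}

In this article, the compiled shaders are attached to an OpenGL ES program and linked once, in the Triangle constructor: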

public class Triangle {
    ...

    private final int mProgram;

    public Triangle() {
        ...

        int vertexShader = MyGLRenderer.loadShader(GLES20.GL_VERTEX_SHADER,
                                        vertexShaderCode);
        int fragmentShader = MyGLRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,
                                        fragmentShaderCode);

        // create empty OpenGL ES Program
        mProgram = GLES20.glCreateProgram();

        // add the vertex shader to program
        GLES20.glAttachShader(mProgram, vertexShader);

        // add the fragment shader to program
        GLES20.glAttachShader(mProgram, fragmentShader);

        // creates OpenGL ES program executables
        GLES20.glLinkProgram(mProgram);
    }
}

Create a draw() method for drawing the shape. This code passes the position and color values to the shape's vertex shader and fragment shader, and then executes the drawing function.

private int mPositionHandle;
private int mColorHandle;

private final int vertexCount = triangleCoords.length / COORDS_PER_VERTEX;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per coordinate (float)

public void draw() {
    // Add program to OpenGL ES environment
    GLES20.glUseProgram(mProgram);

    // get handle to vertex shader's vPosition member
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");

    // Enable a handle to the triangle vertices
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Prepare the triangle coordinate data
    GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                                 GLES20.GL_FLOAT, false,
                                 vertexStride, vertexBuffer);

    // get handle to fragment shader's vColor member
    mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");

    // Set color for drawing the triangle
    GLES20.glUniform4fv(mColorHandle, 1, color, 0);

    // Draw the triangle
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);

    // Disable vertex array
    GLES20.glDisableVertexAttribArray(mPositionHandle);
}
Define a Projection

The following example code takes the height and width of the GLSurfaceView and uses them to populate a projection transformation matrix using the Matrix.frustumM() method:
// mMVPMatrix is an abbreviation for "Model View Projection Matrix"
private final float[] mMVPMatrix = new float[16];
private final float[] mProjectionMatrix = new float[16];
private final float[] mViewMatrix = new float[16];

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);

    float ratio = (float) width / height;

    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method
    Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}

Define a Camera View

In the following example code, the camera view transformation is calculated using the Matrix.setLookAtM() method and then combined with the previously calculated projection matrix. 

@Override
public void onDrawFrame(GL10 unused) {
    ...

    // Set the camera position (View matrix)
    Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);

    // Draw shape
    mTriangle.draw(mMVPMatrix);
}

Apply Projection and Camera Transformations


In order to use the combined projection and camera view transformation matrix shown in the previous sections, first add a matrix variable to the vertex shader previously defined in the Triangle class:

public class Triangle {

    private final String vertexShaderCode =
        // This matrix member variable provides a hook to manipulate
        // the coordinates of the objects that use this vertex shader
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "void main() {" +
        // the matrix must be included as a modifier of gl_Position
        // Note that the uMVPMatrix factor *must be first* in order
        // for the matrix multiplication product to be correct.
        "  gl_Position = uMVPMatrix * vPosition;" +
        "}";

    // Use to access and set the view transformation
    private int mMVPMatrixHandle;

    ...
}

Next, modify the draw() method of your graphic objects to accept the combined transformation matrix and apply it to the shape:

public void draw(float[] mvpMatrix) { // pass in the calculated transformation matrix
    ...

    // get handle to shape's transformation matrix
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");

    // Pass the projection and view transformation to the shader
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);

    // Draw the triangle
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);

    // Disable vertex array
    GLES20.glDisableVertexAttribArray(mPositionHandle);
}
Drawing objects on screen is a pretty basic feature of OpenGL, but you can also do this with other Android graphics framework classes, including Canvas and Drawable objects. OpenGL ES provides additional capabilities for moving and transforming drawn objects in three dimensions or in other unique ways to create compelling user experiences.

Rotating a drawing object with OpenGL ES 2.0 is relatively simple. In your renderer, create another transformation matrix (a rotation matrix) and then combine it with your projection and camera view transformation matrices:

private float[] mRotationMatrix = new float[16];

public void onDrawFrame(GL10 gl) {
    float[] scratch = new float[16];

    ...

    // Create a rotation transformation for the triangle
    long time = SystemClock.uptimeMillis() % 4000L;
    float angle = 0.090f * ((int) time);
    Matrix.setRotateM(mRotationMatrix, 0, angle, 0, 0, -1.0f);

    // Combine the rotation matrix with the projection and camera view
    // Note that the mMVPMatrix factor *must be first* in order
    // for the matrix multiplication product to be correct.
    Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);

    // Draw triangle
    mTriangle.draw(scratch);
}
What if you want to have users interact with your OpenGL ES graphics? The key to making your OpenGL ES application touch-interactive is expanding your implementation of GLSurfaceView to override the onTouchEvent() method and listen for touch events.

In order to make your OpenGL ES application respond to touch events, you must implement the onTouchEvent() method in your GLSurfaceView class.

private final float TOUCH_SCALE_FACTOR = 180.0f / 320;
private float mPreviousX;
private float mPreviousY;

@Override
public boolean onTouchEvent(MotionEvent e) {
    // MotionEvent reports input details from the touch screen
    // and other input controls. In this case, you are only
    // interested in events where the touch position changed.

    float x = e.getX();
    float y = e.getY();

    switch (e.getAction()) {
        case MotionEvent.ACTION_MOVE:

            float dx = x - mPreviousX;
            float dy = y - mPreviousY;

            // reverse direction of rotation above the mid-line
            if (y > getHeight() / 2) {
                dx = dx * -1;
            }

            // reverse direction of rotation to left of the mid-line
            if (x < getWidth() / 2) {
                dy = dy * -1;
            }

            mRenderer.setAngle(
                    mRenderer.getAngle() +
                    ((dx + dy) * TOUCH_SCALE_FACTOR));
            requestRender();
    }

    mPreviousX = x;
    mPreviousY = y;
    return true;
}
Expose the Rotation Angle
This example code requires that you expose the rotation angle through your renderer by adding a public member. Since the renderer code is running on a separate thread from the main user interface thread of your application, you must declare this public variable as volatile.

public class MyGLRenderer implements GLSurfaceView.Renderer {
    ...

    public volatile float mAngle;

    public float getAngle() {
        return mAngle;
    }

    public void setAngle(float angle) {
        mAngle = angle;
    }
}
Apply Rotation

To apply the rotation generated by touch input, comment out the code that generates a time-based angle and use mAngle, which contains the angle generated by touch input:

public void onDrawFrame(GL10 gl) {
    ...
    float[] scratch = new float[16];

    // Create a rotation for the triangle
    // long time = SystemClock.uptimeMillis() % 4000L;
    // float angle = 0.090f * ((int) time);
    Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, -1.0f);

    // Combine the rotation matrix with the projection and camera view
    // Note that the mMVPMatrix factor *must be first* in order
    // for the matrix multiplication product to be correct.
    Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);

    // Draw triangle
    mTriangle.draw(scratch);
}