(this is Part 3; Part 1 has an index of all the posts)

We finished Part 2 with the most basic drawing of all: we filled the screen with a background colour (pink/magenta), but no 3D objects.



There’s also a standalone library on GitHub, with the code from these articles as a demo app. It uses the ‘latest’ version of the code, so the early articles are quite different – but it’s an easy starting point.

Points in 3D are not Vertices

If you’ve done any 2D geometry or Maths, you probably think of a “vertex” as “the corner of an object”. OpenGL defines “vertex” differently, in a more abstract (and powerful) way.

Imagine a cube. If it were merely “a collection of points in space” it would have no colour. And no texture. Technically, it wouldn’t even have visible edges joining the points together (I drew them here to make it obvious it’s a cube).

(image: a cube drawn as a bare collection of points/vertices)

In practice … 3D objects are a collection of labelled items, where each item has multiple pieces of information attached – usually with a different value for each item.

(image: the same cube, where each vertex carries two attributes)

One of those “pieces of information” is the 3D location (x,y,z) of the “labelled item”. Another piece of information is “what colour should the pixel be here?”. You can have arbitrarily many “pieces of information”. They could be the same for all the items in an object, or all be different.

OpenGL Vertex: A point in space that has no particular position, and isn’t drawn itself, but instead has a LIST of tags/values attached to it. Your shaders take a set of vertices and render “something” using the information attached to the vertices.
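
To make that concrete, here’s a hypothetical sketch of “one labelled item with a list of values” as it might look on the CPU, using Apple’s GLKit types (which we’ll meet properly later in this post). The struct name and fields are invented purely for illustration – OpenGL itself defines no such struct:

[objc]
typedef struct
{
GLKVector3 position; // "where is this labelled item?" (x, y, z)
GLKVector4 colour; // "what colour should the pixel be here?" (r, g, b, alpha)
} ExampleVertex; // hypothetical; OpenGL doesn't define this, you do

ExampleVertex corner = { GLKVector3Make( 0, 1, 0 ), GLKVector4Make( 1, 0, 0, 1 ) };
[/objc]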

A vertex could be part of an object, without being on the surface

This example doesn’t work directly in OpenGL, but it’s the same concept. Look at a bezier curve:

(image: a bezier curve, with two end points and two control points)

It has two ordinary points (a start point, and an end point), and two “control points”. The control points aren’t drawn as part of the curve, but you have to know their positions in order to draw the curve.

Likewise, OpenGL vertices: they are more than just “corners”, they are meta-data about the object itself. And … OpenGL is able to “fill the space between vertices”, without drawing the vertices themselves.

Gotcha: This is modern OpenGL; old OpenGL was a bit more prescriptive, and didn’t allow so much freedom. There is still one part of OpenGL that’s a hangover from those days: each vertex has to be given a position sooner or later, and if that position is “off the edge of the screen”, it will be PARTIALLY ignored by the renderer.

How many vertices for an object?

  • In most cases, every “corner” of a 3D object has a vertex
  • In many cases, each “corner” has multiple vertices (usually for special 3D lighting; see the sketch after this list)
  • In rare cases, a “corner” can exist without having any vertices at all: it is calculated procedurally on the GPU itself (c.f. the Bezier curve above)
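
To make the second bullet concrete: a cube has 8 corners, but for flat per-face lighting you’d typically upload each corner once per face that touches it – 24 vertices in total. A sketch of one corner (the GLKit types are real; the layout is invented illustration):

[objc]
// One corner of a cube is shared by 3 faces: that's 3 vertices with the SAME
// position attribute, each with a DIFFERENT "normal" attribute (used for lighting)
GLKVector3 sharedPosition = GLKVector3Make( 1, 1, 1 );
GLKVector3 normalPlusX = GLKVector3Make( 1, 0, 0 ); // for the +X face
GLKVector3 normalPlusY = GLKVector3Make( 0, 1, 0 ); // for the +Y face
GLKVector3 normalPlusZ = GLKVector3Make( 0, 0, 1 ); // for the +Z face
[/objc]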

OpenGL Shaders

We’re now into the world of the GPU … the code we’re talking about will execute on-board the GPU (not the CPU). We’ll have to:

  1. write that code in a special language (one that the GPU understands)
  2. upload it to the GPU
  3. ask the GPU to run it for us at the appropriate time

Shaders are written in a custom, highly simplified, programming language. There’s lots of FUD about how complex it is, but really: it’s a lot easier/simpler than C (it’s like “C, with bits removed”). GLSL (the language) is fully described on just two pages of this GL ES 2 crib-sheet.

There are two kinds of Shader in GL ES 2:

  • Vertex Shaders
  • Pixel Shaders (or “Fragment Shaders” as they’re known in OpenGL)

Vertex Shaders

Vertex shaders “do stuff with vertices” (plural of vertex). Hence the name. They can work in 1D, 2D, 3D, or 4D.

Most of the time, they work in 3D or 4D, but their output is always in 4D. If you’re working in 3D, there’s an automatic “up-conversion” to 4D that happens for you, so you never need to understand the 4D part – except that a lot of your variables will be of type “4D vector”.
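
As a sketch, the up-conversion amounts to this (in GLSL, which we’ll meet below; the 4th co-ordinate is named w, and defaults to 1.0 for positions):

[objc]
attribute vec3 position3D; // a 3D input...

void main()
{
// ...padded out to 4D: OpenGL effectively does this for you when you work in 3D
gl_Position = vec4( position3D, 1.0 );
}
[/objc]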

Vertex Shaders typically do a lot of calculations on the input data (vertices) before handing the results to the next stage (see below). Examples include:

  • Move your geometry around: translate, rotate, scale
  • Different “3D” projections: convert to “perspective”, bend the universe, simulate fish-eye lens
  • Physical simulation of “light”: reflections, radiosity, skin, etc
  • Animate bones and joints of a 3D skeleton

The simplest possible Vertex Shader is one that does this:

  1. Ignores the data you send in, and generates the same output 3D position, no matter what

With later versions of GL (not GL ES 2!) there are techniques that use this to do all your rendering without any vertices at all. But ES 2 runs your code “once per vertex you uploaded”, so unless you upload some vertices … your code won’t run at all.

So, in GL ES 2, the simplest usable Vertex Shader does this:

  1. For each vertex, reads a single vertex-attribute that gives a “position in 3D”
  2. Returns that point unaltered

Vertex Shaders: output

For each vertex the GPU has, it:

  1. …runs your source code once to calculate a 4D point.
  2. …takes the 4D point, and converts it to a 2D point positioned on-screen, using the X and Y co-ords
    • nearly always: also using the Z co-ord to make sure nothing appears “in front of” things that are hiding it from the camera
  3. …that point is used to create 0 or more pixels, and handed over to the Pixel Shader
  4. …the Pixel shader is executed once for each Pixel created

Vertex Shaders and Co-ordinates

By default, where does a particular 3D point appear on screen when OpenGL draws it?

When you write a 3D engine, you almost never use the OpenGL default co-ordinate system. But when you’re starting out, you want to debug step-by-step, and need to know the defaults.

By default, OpenGL vertex shaders throw away all points with x, y, or z co-ordinate greater than 1 or less than -1

By default, OpenGL vertex shaders do NOT draw 3D with perspective; they use Orthographic rendering, which appears to be 2D

It’s slightly more complex than “throw away all points”, but with simple examples and test cases, you should make sure all your 3D co-ordinates lie within the cube of width 2 centered on the origin (i.e. all co-ords are between -1 and +1).

NB: this also makes some debugging a lot easier, because “(0,0,0)” is the exact center of your screen. If it isn’t, you’ve screwed up something very basic in your OpenGL setup…

Pixel / Fragment Shaders

Pixel shaders turn 3D points (NOT vertices – but the things created from vertices) into pixels on screen. A Pixel Shader always uses the output of a Vertex Shader as its input; it receives both “the 3D point” from the Vertex Shader, and also any extra calculations the Vertex Shader did.

Technically, OpenGL names them “Fragment” shaders. From here on that’s what I’ll call them. But I find it’s easier to understand if you think of them as “pixel” shaders at first.

Allegedly, the reason for calling them “Fragment” shaders is that the source code for a Fragment Shader is run for “one part-of-a-pixel at a time”, generating part of the final colour for that pixel. In default rendering, each pixel has only one part, and a Fragment Shader might as well be called “a Pixel Shader”; but with multi-sampling / super-sampling (e.g. for anti-aliasing), the Fragment Shader might be invoked multiple times for the same pixel, with slightly different inputs and outputs.

An alternative view: just as an OpenGL “vertex” isn’t quite the same as traditional “points” in geometry, an OpenGL “fragment” isn’t quite the same as a monitor’s/screen’s “pixel”. A Fragment often has an Alpha/transparency value (impossible with a screen pixel!), and might not be a colour (when you get into advanced shader-programming); also … it might have more than one value associated with it (it retains some of the metadata that came from the Vertex).

In simple terms, fragment shaders:

  1. receive the converted data from multiple vertices at once (1, 2, or 3, depending on the Draw call you issued), including the 2D position created by OpenGL
  2. look at the specific Draw call to decide what to do (see the sketch after this list):
    1. if the Draw call specified “draw triangles”, fill in the area between the 3 points
    2. if it specified “draw lines”, interpret the vertices as bends in the line and paint pixels like a join-the-dots puzzle
    3. if it specified “draw points” … merely draw a dot at each point (where a “dot” can cover multiple pixels and be multiple colours; it’s really a “sprite” rather than a “dot”)
  3. Finally … once it knows which pixels it’s actually going to fill in … a Fragment Shader gives each pixel a colour; usually a different colour for each one. This is where 99% of your source code goes when writing a Fragment Shader: deciding which colour to fill each pixel with
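
The “specific Draw call” boils down to the first argument you pass to glDrawArrays (which we’ll use for real later in this post). A sketch of the three modes, for the same 3 vertices:

[objc]
glDrawArrays( GL_TRIANGLES, 0, 3 ); // fill the area between the 3 points
glDrawArrays( GL_LINE_STRIP, 0, 3 ); // join-the-dots (GL_LINES / GL_LINE_LOOP are variants)
glDrawArrays( GL_POINTS, 0, 3 ); // one "dot"/sprite per vertex
[/objc]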

The simplest possible Fragment Shader does this:

  1. Any Fragment Shader can have a pixel coloured any colour that it wants so long as it is Blue.

Black would work fine too, but … in OpenGL examples, we never use the colour black.

Black (and white) are reserved for “OpenGL failed internally (usually because you gave it invalid parameters)”. If you use black as a colour in your app, it’s very hard to know whether it’s working correctly or not. Use black/white only when you’re sure your code is working correctly.

Adding Geometry, Shaders, and a Draw call

In OpenGL, remember that you can and should use multiple draw-calls when rendering each “frame”. Ignore the people who tell you that iOS/mobile can’t handle multiple draw-calls; they’re using long-outdated info, or unambitious graphics routines. Modern OpenGL’s real power and awesomeness kicks-in when you have hundreds of draw calls per frame.

Each time you want to add a feature to your app, a good starting point is:

“if I’m about to try something new, I’ll make a new Draw call”

…if it has unexpected effects, it’s easy to toggle it on/off at runtime while you debug it.

Insert a new draw call into the stack of calls:

ViewController.m
[objc]
-(void) viewDidLoad
{

GLK2DrawCall* draw1Triangle = [[GLK2DrawCall new] autorelease];

[self.drawCalls addObject: draw1Triangle];

}
[/objc]

(we’ll be using this to draw a triangle, nothing more. In case the name wasn’t obvious enough)

Check the original is still visible, and that nothing appears any different:

(screenshot: the same magenta background as before – nothing visibly different)

OK. We want to check that the new Draw call is actually working. Give the new one a new clear colour:

[objc]

[draw1Triangle setClearColourRed:0 green:1.0 blue:0 alpha:0.5];

[/objc]

…and tell it to start clearing the screen each time it draws:

[objc]

draw1Triangle.shouldClearColorBit = TRUE;

[/objc]

Check the screen now fills to the new colour. Note that the “alpha” part is ignored.

(screenshot: the screen now fills with the new green clear colour)

Alpha generally kills hardware performance of all known GPUs, so you just have to accept it needs “special” handling at the API level. By default, alpha will often be ignored by the hardware until you explicitly enable it (to ensure that the default rendering performance is good).
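
When you do want alpha, you opt in explicitly. In GL ES 2 that’s a couple of global-state calls; a sketch of the classic “transparency” setup:

[objc]
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ); // blend using the fragment's alpha value
[/objc]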

Our new Draw call is working. But we only want it to draw a triangle, so … turn off the “clear” part again:

[objc]

draw1Triangle.shouldClearColorBit = FALSE;

[/objc]

Defining a 3D object using data

It’s time for GLKit to start helping in a bigger way.

OpenGL has an internal concept of “3D position”, but only on the GPU (i.e. in Shaders). On the CPU, where we have to create (or load) the 3D positions in the first place, OpenGL doesn’t have a vector/vertex/point type. This is a little absurd.

I don’t know the full reasons, but it’s partly historical: early desktop OpenGL, being a C API, worked in the lowest-level data it could: everything was a float or an int. A 3D point was “3 floats in a row, one after the other, in memory”. OpenGL could “get away with” not defining a type for “3D point” etc.

With C, every platform was free to add their own version of “3d point type” so long as it was implemented on top of 3 x float.

Shaders were added 10 years later, and with their custom programming language (GLSL) they needed … some kind of type … to represent this. Hence we got the standard of vec2 / vec3 / vec4, and mat2 / mat3 / mat4. But OpenGL’s authors don’t like going back and altering existing API calls, so the “old” API’s, which only understand float (or int, etc), are still with us.

Apple fixed that by creating the following structs in GLKit:

  • GLKVector2 (for 2D rendering, and for lots of special effects, and general 2D geometry and Maths)
  • GLKVector3 (used for most things when doing 3D; both 3D points and RGB colours (even though a “point” and a “colour” are very different concepts!))
  • GLKVector4 (for 3D rotations using Quaternions, and other Maths tricks which make life much easier. ALSO: for storing “colours” as Red + Green + Blue + Alpha)
  • GLKMatrix2 (for 2D geometry)
  • GLKMatrix3 (for simple 3D manipulations – but you’ll probably never use this)
  • GLKMatrix4 (for complex 3D manipulations, with GLKit doing some clever stuff for you behind the scenes)

Jeff Lamarche’s popular GL ES 1 tutorials (and his update to GL ES 2) use a custom “Vector3D” struct instead. For ES 1, Apple hadn’t released GLKit yet, and so he had to invent his own. But now that we have GLKit, you should always use the GLKVector* and GLKMatrix* classes instead:

  1. GLKit structs are a standard, identical on all iOS versions, across all apps
  2. They come with a LOT of essential, well-written, debugged code from Apple that only works with these structs – code you’d otherwise have to write yourself (see the sketch after this list)
  3. If you really need to … they are easy enough to port to other platforms (but you’ll be writing the boilerplate code yourself)
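
As a sketch of point 2, here’s the kind of code Apple gives you for free with these structs (all of these are real GLKit functions):

[objc]
GLKVector3 a = GLKVector3Make( 1, 2, 3 );
GLKVector3 b = GLKVector3Make( 4, 5, 6 );
GLKVector3 sum = GLKVector3Add( a, b ); // component-wise addition
float length = GLKVector3Length( sum ); // Euclidean length
GLKMatrix4 spin = GLKMatrix4MakeRotation( M_PI / 2.0, 0, 0, 1 ); // 90 degrees about the Z axis
[/objc]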

We want to create the geometry “only once, but every time we re-initialize the ViewController (re-start OpenGL)”. Your ViewController’s viewDidLoad is fine for now.

The numbers are chosen to fit inside the 2x2x2 “default” clipping cube used by Vertex shaders (see explanation above, “Vertex Shaders and Co-ordinates”):

ViewController.m:
[objc]
-(void) viewDidLoad
{

/** Make some geometry */

GLfloat z = -0.5; // must be more than -1 * zNear, and ABS() less than zFar
GLKVector3 cpuBuffer[] =
{
GLKVector3Make(-1,-1, z), // screen (left, bottom)
GLKVector3Make( 0, 1, z), // screen (middle, top)
GLKVector3Make( 1,-1, z) // screen (right, bottom)
};

…upload the contents of cpuBuffer to the GPU…

[/objc]

Note that “GLKVector3 cpuBuffer[]” decays to a “GLKVector3* cpuBuffer” pointer when passed around, and that we could have malloc’d and free’d the array manually. But on the CPU, we won’t need this data again – as soon as it’s uploaded to the GPU, it can be thrown away on the CPU. So we use a local array declaration that will be automatically cleaned-up/released when the current method ends.

Note OpenGL’s definitions of X,Y,Z:

  • X: as expected, positive numbers are to the right, negative numbers to the left
  • Y: positive numbers are TOP of screen (opposite of UIKit drawing, but same as Quartz/CoreAnimation; Quartz/CA are deliberately the same as OpenGL)
  • Z: positive numbers OUT OF the screen, negative numbers INTO the screen

Upload the 3D object to the GPU

Look back at Part 2 and our list of “where” code runs:

  • either: CPU
  • or: GPU
  • or: CPU-then-GPU
  • (technically also “or: GPU-then-CPU”, but on GL ES this is weakly supported and rarely used)

Your shaders are code that runs 100% on GPU. The points/vertices are data you create on the CPU, then upload to the GPU. It turns out that transferring vertex-data from CPU to GPU is one of the top 3 biggest performance sinks in OpenGL (probably the biggest is: sending a texture from CPU to GPU). So, OpenGL has a special mechanism that breaks the core GL mindset, and lets you “cache” this data on the GPU.

…Buffers and BufferObjects: sending “data” of any kind to the GPU

Early OpenGL has a convention for “uploading data to the GPU”. You might expect a method named “glUploadData[something]” that takes an array of floats, but no, sorry.

Instead, OpenGL uses a convention of “Buffers” and “Buffer Objects”. The convention is:

A BufferObject is a something that lives on the GPU, and appears to your CPU code as an int “name” that lets you interact with it. You can “create” these objects, you can “upload” to them (and in later versions of GL: download from them), you can “select” which one you’re using for a given Draw call, “delete” them, etc.

Each Draw call can read from multiple BufferObjects simultaneously, to grab the vertex-data it needs.

The key commands are:

  • Create a buffer-object: glGenBuffers
  • Select which object to use right now: glBindBuffer
  • Upload data from CPU to GPU: glBufferData (using buffer as a noun, not verb. I.e. this method name means “put data into a buffer”, not “buffer some data that’s being transferred”)
  • Delete a buffer: glDeleteBuffers

Here are three lines that create, select, and fill/upload to a buffer:

[objc]
GLuint VBOName; // the name of the BufferObject so we can later delete it, swap it with another, etc

glGenBuffers( 1, &VBOName );
glBindBuffer(GL_ARRAY_BUFFER, VBOName );
glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);
[/objc]
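
(And when nothing needs the buffer any more, the fourth command from the list above releases the GPU-side memory:)

[objc]
glDeleteBuffers( 1, &VBOName ); // only once nothing is using the VBO any more
[/objc]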

We use the term “VBO”, short for “Vertex Buffer Object”. A VBO is simply:

VertexBufferObject: A BufferObject that we’ve filled with raw data for describing Vertices (i.e.: for each Vertex, this buffer has values of one or more ‘attribute’s)

Each 3D object will need one or more VBO’s.

When we do a Draw call, before drawing … we’ll have to “select” the set of VBO’s for the object we want to draw.

Add an explicit Draw call to draw our object

If we don’t select anything, a Draw call will use “whatever VBO is currently active”. We’ll tell OpenGL to treat our VBO as containing “Triangles”, and tell it there are “3 vertices in the VBO”, starting from “index 0” (i.e. start at the beginning of the VBO, and read 3 vertices – i.e. 3 × 3 floats):

ViewController.m
[objc]
-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{

glDrawArrays( GL_TRIANGLES, 0, 3); // For now: Ignore the word “arrays” in the name
}
[/objc]

(screenshot: still just the magenta background – no triangle)

Note that nothing changes. Why not? Because you have no shaders installed. Recap, you’ve:

  • ONCE only:
    1. created some geometry on the CPU (the GLKVector3* pointer)
    2. alloc/init’d space on the GPU to hold the geometry (glGenBuffers( 1, … ) + glBindBuffer( GL_ARRAY_BUFFER, … ))
    3. uploaded the geometry from CPU -> GPU (glBufferData)
  • EVERY frame:
    1. called “glClear”
    2. called “draw” and said you wanted the 3 vertices interpreted as a “triangle”.
      • (GL knows this means “flood-fill the space between the 3 points”)

It appears OpenGL is “drawing nothing”. In old-fashioned GL, this wouldn’t happen: it would have drawn your object in solid black. If you’d set your object’s colour to black, it would have seemed to be working; if you’d set a black background, it would have seemed broken (see what I mean? never use black when debugging…).

In modern GL, including GL ES 2, it’s different:

Nothing appears on your background because there is no “shader program” installed on the GPU to generate fragment-colours for a “flood fill”

Creating and uploading a simple “default” ShaderProgram

First, we’ll make the world’s simplest Vertex shader. We assume there’s a “position” attribute attached to each vertex, and we output it unchanged. GLSL has a very small number of global variables you can assign to, and with Vertex Shaders you’re required to assign to the “gl_Position” one:

[objc]
gl_Position = position;
[/objc]

…but the complete file is a bit longer, since (as always in C languages) you have to “declare” your variables before you use them, and you have to put your code inside a function named “main”:

VertexPositionUnprojected.vsh … make a new file for this!
[objc]
attribute vec4 position;

void main()
{
gl_Position = position;
}
[/objc]

…and: the world’s simplest Fragment Shader

Create the world’s simplest Fragment shader. Similarly, a Fragment Shader is required to either discard the Fragment (i.e. say “don’t output a colour/value for this pixel”), or else write “something” into the global “gl_FragColor”:

[objc]
gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 ); // R,G,B,Alpha – i.e. solid blue
[/objc]

…as a complete file:

FragmentColourOnly.fsh … note the extension “fsh” instead of “vsh”
[objc]
void main()
{
gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
}
[/objc]

OpenGL: Compiling/Linking a Shader

The OpenGL design committee screwed the pooch when it came to compiling Shaders. You, the programmer, are required to write stupid boilerplate code that is error-prone and yet identical for all apps. There’s no benefit to this, it’s simply bad API design. The steps are:

Note: the “compile” and “link” we’re talking about here happen inside OpenGL and/or on-board the GPU; it’s not your IDE / desktop machine doing the compiling/linking

  1. Compile each of your Shaders (Vertex, Fragment)
  2. Create a ShaderProgram to hold the combined output, and attach the Shaders
  3. Link the ShaderProgram, finalising it
  4. Enable the ShaderProgram using a method call that has the wrong name
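
(The wrongly-named method in step 4 is glUseProgram – despite the name, it really means “select this ShaderProgram for all subsequent Draw calls”. Here programName stands for the GLuint you got back from glCreateProgram:)

[objc]
glUseProgram( programName ); // "use" really means: select for subsequent draw calls
[/objc]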

Compiling and storing Shaders

You don’t have to store the type of shader – but it’s easier to avoid bugs if you track which was which:

[objc]
typedef enum GLK2ShaderType
{
GLK2ShaderTypeVertex,
GLK2ShaderTypeFragment
} GLK2ShaderType;
[/objc]

Our GLK2Shader class is almost entirely a data-holding class:

[objc]
@interface GLK2Shader : NSObject

@property(nonatomic) GLuint glName;
@property(nonatomic) GLK2ShaderType type;
/** Filename for the shader with NO extension; assumes all Vertex shaders end .vsh, all Fragment shaders end .fsh */
@property(nonatomic,retain) NSString* filename;

[/objc]

Since we’ll never subclass GLK2Shader (there’s no point), we create a convenience one-line constructor for it:

[objc]
/** Convenience method that sets up the shader, ready to be compiled */
+(GLK2Shader*) shaderFromFilename:(NSString*) fname type:(GLK2ShaderType) type
{
GLK2Shader* newShader = [[[GLK2Shader alloc]init]autorelease];
newShader.type = type;
newShader.filename = fname;
return newShader;
}
[/objc]

Compiling is simple, but uses a non-standard approach in OpenGL.

First we read the file from disk, loading it into a C-string (required by OpenGL):

[objc]
const GLchar *source = (GLchar *)[[NSString stringWithContentsOfFile:file encoding:NSUTF8StringEncoding error:nil] UTF8String];
[/objc]

…then we tell the GPU to create an empty shader for us, and we upload the raw C-string to the GPU for it to compile:
[objc]
GLuint shader = glCreateShader(type);
glShaderSource(shader, 1, &source, NULL);
[/objc]

…finally, we tell the GPU to compile it:

[objc]
glCompileShader(shader);
[/objc]
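
Strictly, you should also ask the GPU whether the compile succeeded – failures are otherwise silent. A sketch of the standard boilerplate you’d run after glCompileShader:

[objc]
GLint status;
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
if( status == GL_FALSE )
{
GLint logLength;
glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &logLength );
char* log = malloc( logLength );
glGetShaderInfoLog( shader, logLength, NULL, log );
NSLog( @"Shader compile failed: %s", log ); // the GPU's error message, e.g. a syntax error
free( log );
}
[/objc]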

Apple’s version of the compile method is a static method “compileShader:type:file:”, and I’ve made as few changes to it as possible, it just contains the above code. All well and good, except … Apple’s methods for locating files in a project are arcane and ugly. To make things easier, we wrap this in an INSTANCE method “compile” which looks through the Xcode project to find the right files, and errors if it can’t find them:

[objc]
switch( self.type )
{
case GLK2ShaderTypeFragment:
{
glShaderType = GL_FRAGMENT_SHADER;
shaderPathname = [[NSBundle mainBundle] pathForResource:self.filename ofType:@"fsh"];
stringShaderType = @"fragment";
}break;

case GLK2ShaderTypeVertex:
{
glShaderType = GL_VERTEX_SHADER;
shaderPathname = [[NSBundle mainBundle] pathForResource:self.filename ofType:@"vsh"];
stringShaderType = @"vertex";
}break;
}
[/objc]

Source for: GLK2Shader.h and GLK2Shader.m

Linking 2 Shaders into a single ShaderProgram

GLK2ShaderProgram (remember: a ShaderProgram is the combination of multiple Shaders) contains a bunch of simple data, and a private internal method (“link”):

GLK2ShaderProgram.h
[objc]
@interface GLK2ShaderProgram : NSObject

/**
Load a pair of Shaders, compile them, put them together into a complete ShaderProgram
*/
+(GLK2ShaderProgram*) shaderProgramFromVertexFilename:(NSString*) vFilename fragmentFilename:(NSString*) fFilename;

@property(nonatomic) GLuint glName;
@property(nonatomic,retain) GLK2Shader* vertexShader, * fragmentShader;
[/objc]

First, we do something slightly special in the init/dealloc:

[objc]
- (id)init
{
self = [super init];
if (self)
{
self.glName = glCreateProgram();
}
return self;
}

-(void)dealloc
{
self.vertexShader = nil;
self.fragmentShader = nil;

if (self.glName)
{
glDeleteProgram(self.glName); // MUST go last (it's used by other things during dealloc side-effects)
NSLog(@"[%@] dealloc: Deleted GL program with GL name = %i", [self class], self.glName );
}
else
NSLog(@"[%@] dealloc: NOT deleting GL program (no GL name)", [self class] );

[super dealloc];
}
[/objc]

OpenGL does retain/release management of Shaders and ShaderPrograms. This is unique in OpenGL (part of the “OpenGL committee screwed the pooch when adding Shaders” problem). To make life easy, our GLK2ShaderProgram class matches the ObjC retain/release to doing the same in OpenGL, so we don’t have to worry about it.

We also override the setFragmentShader: / setVertexShader: setters so that they use glAttachShader and glDetachShader as required. This is part of the retain/release management – again: needs doing once ever, and never worry about it afterwards.
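
As a sketch, such a setter can look like this (assuming manual retain/release, as the rest of this code does, and the usual underscore-prefixed ivar):

[objc]
-(void) setVertexShader:(GLK2Shader*) vertexShader
{
if( vertexShader == _vertexShader )
return;

if( _vertexShader != nil )
glDetachShader( self.glName, _vertexShader.glName ); // un-hook the old shader, GPU-side

if( vertexShader != nil )
glAttachShader( self.glName, vertexShader.glName ); // hook up the new shader, GPU-side

[_vertexShader release];
_vertexShader = [vertexShader retain];
}
[/objc]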

Most of the time, since OpenGL requires exactly 2 shaders, the creation and setup code will be identical. After you’ve created the two Shaders you compile them, add them to the GLK2ShaderProgram, and finally … tell the ShaderProgram to “link” itself:

[objc]
… // the guts of our shaderProgramFromVertexFilename:fragmentFilename: method

[vertexShader compile];
[fragmentShader compile];

newProgram.vertexShader = vertexShader;
newProgram.fragmentShader = fragmentShader;

[newProgram link];

[/objc]

Linking is simple, just one line of code:

[objc]
+(void) linkProgram:(GLuint) programRef
{
glLinkProgram(programRef);
}
[/objc]

…but again, we wrap it in an instance method (“link”) which adds some checks before and after, and automatically detaches/deletes the unused Shader objects if the linking fails.
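
The “checks after” look much like the compile-status check from earlier; a sketch:

[objc]
GLint linkStatus;
glGetProgramiv( programRef, GL_LINK_STATUS, &linkStatus );
if( linkStatus == GL_FALSE )
{
GLint logLength;
glGetProgramiv( programRef, GL_INFO_LOG_LENGTH, &logLength );
char* log = malloc( logLength );
glGetProgramInfoLog( programRef, logLength, NULL, log );
NSLog( @"ShaderProgram link failed: %s", log );
free( log );
}
[/objc]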

Source for: GLK2ShaderProgram.h and GLK2ShaderProgram.m

Adding Shaders and a ShaderProgram to our draw call

Finally, with all that Shader/ShaderProgram code written … we can compile and link the shaders, and upload them to the GPU.

Add the Shader and ShaderProgram imports:

ViewController.h
[objc]
#import "GLK2Shader.h"
#import "GLK2ShaderProgram.h"
[/objc]

…create the ShaderProgram and tell the GPU to start using it, because some GL methods will fail if there’s no ShaderProgram currently selected:

[objc]
-(void) viewDidLoad
{

/** — Draw Call 2: draw a triangle onto the screen */
GLK2DrawCall* draw1Triangle = [[GLK2DrawCall new] autorelease];

/** Upload a program */
draw1Triangle.shaderProgram = [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexPositionUnprojected" fragmentFilename:@"FragmentColourOnly"];
glUseProgram( draw1Triangle.shaderProgram.glName );

}
[/objc]

Upgrade our “drawcall” class so that it can store a ShaderProgram:

GLK2DrawCall.h
[objc]
#import "GLK2ShaderProgram.h"
@property(nonatomic,retain) GLK2ShaderProgram* shaderProgram;

[/objc]

Now that a Draw call can have a ShaderProgram, our render method needs to make use of this.

NOTE: OpenGL re-renders everything, every frame, and after each Draw call, the OpenGL state is NOT reset automatically. For each draw-call, if it doesn’t use a given OpenGL feature, we must manually re-set OpenGL’s global state to say “not using that feature right now, kthnkbye”

ViewController.m
[objc]
-(void) renderSingleDrawCall:(GLK2DrawCall*) drawCall
{

glClear( (drawCall.shouldClearColorBit ? GL_COLOR_BUFFER_BIT : 0) );

if( drawCall.shaderProgram != nil )
glUseProgram( drawCall.shaderProgram.glName);
else
glUseProgram( 0 /** 0 means "none" */ );

[/objc]

Going forwards, every time you add a new “OpenGL feature” to your code, you will do “if( drawcall uses it ) … do it … else … disable it”. If you don’t, OpenGL will start to “leak” settings between your draw-calls; most leaks are cosmetic (the wrong stuff appears on screen), but in extreme cases they can crash the renderer.

Compile failure … WTF?

Build and run … and you’ll find that the shader fails to compile.

What’s happened is that Apple has treated your shader file as a “source” file, and removed it from your project’s output (you don’t want source files appearing in your shipping binary / .ipa file!). You have to go into your target’s “Build Phases”, find the file inside “Compile Sources”, and drag it out of there and into “Copy Bundle Resources”.

This is a bug in Xcode 4, pretty embarrassing IMHO. Apple always converts shader files into “unusable” source files, and then promptly removes them from your project’s output. I haven’t found a way to prevent it doing this. Every time you make a new shader, you have to manually “unbreak” your Xcode project. Only once per file, fortunately, but it’s a huge annoyance that Apple didn’t notice and fix this.

Fix that, re-build, and try again…

Telling OpenGL how to extract values of VertexShader “attribute’s” from its Buffers (VBO’s)

(screenshot: still nothing but the background colour)

ARGH! It appears that OpenGL is still “drawing nothing”! Why?

Nothing appears on your background because:

  1. check you have successfully loaded and compiled 2 x shaders (yes!)
  2. … and created a program from them, and uploaded a program (yes!)
  3. … and selected it (yes!)
  4. check you have uploaded some data-attached-to-vertices (yes!)
  5. … and the program reads that data as ‘attribute’ variables (yes!)
  6. … and you’ve told OpenGL how to convert your raw uploaded bytes of data into the specific ‘attribute’ pointers it needs (wait, what?)

OpenGL can clearly see that you’ve uploaded a whole load of GLfloat values, and that you’ve got exactly the right number of them to use “1 per attribute in the shader, and 1 per vertex we told GL to draw” … but it plays dumb, and refuses to simply “use them”.

Instead, it waits for us to give it permission to use THOSE floats as the ACTUAL floats that map to the “attribute” we named “[whatever]” (in this case, we named it “position”, but we could have called it anything at all). In desktop GL, this mucking around has been improved (a little) – you can put the mapping into your Shader source files – but that’s not allowed in GL ES yet.

We have to do three things:

  1. While compiling/linking the shaders, tell OpenGL to save the list of ‘attribute’ lines it found in the source code
  2. Find the OpenGL-specific way of referencing “the attribute in the source file, that I named ‘position’”
  3. Tell OpenGL how to “interpret” the data we uploaded so that it knows, for each vertex, which bits/bytes/offset in the data correspond to the attribute value for that vertex

In the long-run, we’ll want to and need to do some more advanced/clever stuff with these “vertex attributes”, so we create a class specifically for them. For now, it’s purely a data-holding class:

GLK2Attribute.h
[objc]
#import <Foundation/Foundation.h>

@interface GLK2Attribute : NSObject

+(GLK2Attribute*) attributeNamed:(NSString*) nameOfAttribute GLType:(GLenum) openGLType GLLocation:(GLint) openGLLocation;

/** The name of the variable inside the shader source file(s) */
@property(nonatomic,retain) NSString* nameInSourceFile;

/** The magic key that allows you to “set” this attribute later by uploading a list/array of data to the GPU, e.g. using a VBO */
@property(nonatomic) GLint glLocation;
@end
[/objc]

The two variables should be obvious, except for:
[objc]
@property(nonatomic) GLenum glType;
[/objc]

…but ignore that one for now. It’s included in the class because you can’t read the other data for an ‘attribute’ in OpenGL without also reading this, but we have no use for it here.

Add code to the end of the “link” method that will find + save all the ‘attribute’ lines in the source code. First add a Dictionary to store them:

[objc]
@interface GLK2ShaderProgram()
@property(nonatomic,retain) NSMutableDictionary * vertexAttributesByName;
@end
[/objc]

… and initialize it:

[objc]
- (id)init
{

self.vertexAttributesByName = [NSMutableDictionary dictionary];

}

-(void)dealloc
{

self.vertexAttributesByName = nil;

}
[/objc]

…then finally use this code to iterate across all the Attributes (by name, strangely), and store them:

[objc]
-(void) link
{

/********************************************************************
*
* Query OpenGL for the data on all the “Attributes” (anything
* in your shader source files that has type “attribute”)

* Allocate enough memory to store the string name of each attribute
(OpenGL is a C API. C is a horrible, dead language. Deal with it)
*/
GLint numCharactersInLongestName;
glGetProgramiv( self.glName, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &numCharactersInLongestName);
char* nextAttributeName = malloc( sizeof(char) * numCharactersInLongestName );

/** how many attributes did OpenGL find? */
GLint numAttributesFound;
glGetProgramiv( self.glName, GL_ACTIVE_ATTRIBUTES, &numAttributesFound);

NSLog(@"[%@] ---- WARNING: this is not recommended; I am implementing it to check it works, but you should very rarely use glGetActiveAttrib – instead you should be using an explicit glBindAttribLocation BEFORE linking", [self class]);
/** iterate through all the attributes found, and store them on CPU somewhere */
for( int i = 0; i < numAttributesFound; i++ )
{
GLint attributeLocation, attributeSize;
GLenum attributeType;
NSString* stringName; // converted from GL C string, for use in standard ObjC calls and classes

/** From two items: the glProgram object, and the text/string of attribute-name … we get all other data, using 2 calls */
glGetActiveAttrib( self.glName, i, numCharactersInLongestName, NULL /**length of string written to final arg; not needed*/, &attributeSize, &attributeType, nextAttributeName );

attributeLocation = glGetAttribLocation( self.glName, nextAttributeName );
stringName = [NSString stringWithUTF8String:nextAttributeName];

GLK2Attribute* newAttribute = [GLK2Attribute attributeNamed:stringName GLType:attributeType GLLocation:attributeLocation];

[self.vertexAttributesByName setObject:newAttribute forKey:stringName];
}

free( nextAttributeName ); // important: in C, memory-managing of strings is clunky. Leaking this here would be a tiny, tiny leak, so small you'd never notice. But that's no excuse to write bad source code. So we do it the right way: "free" the thing we "malloc"-ed.
}
[/objc]

That's great, now OpenGL is saving the list of attributes. There's an NSLog warning in the middle there – we're going to ignore glBindAttribLocation for now. Personally, in real apps, I prefer to use glBindAttribLocation (it makes life easier to experiment with different shaders at runtime), but you don't need it yet, and it requires more code.
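
(For reference, the glBindAttribLocation alternative is one call per attribute, made after the shaders are attached but BEFORE glLinkProgram – a sketch:)

[objc]
glBindAttribLocation( self.glName, 0, "position" ); // force attribute "position" to location 0
glLinkProgram( self.glName ); // the binding only takes effect at link time
[/objc]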

Back to your ViewController, where we’ll have to read back the saved GLK2Attribute, just after we’ve compiled and linked the ShaderProgram:

[objc]

draw1Triangle.shaderProgram = [GLK2ShaderProgram shaderProgramFromVertexFilename:@"VertexPositionUnprojected" fragmentFilename:@"FragmentColourOnly"];
glUseProgram( draw1Triangle.shaderProgram.glName );

GLK2Attribute* attribute = [draw1Triangle.shaderProgram attributeNamed:@"position"]; // will fail if you haven’t called glUseProgram yet

/** Make some geometry */

[/objc]

…which enables us to tell OpenGL ‘THIS attribute is stored in the uploaded data like THAT’. But there are two parts to this. If we simply “tell” OpenGL this information, it will immediately forget it, and if we try to render a different object (in a different Draw call), the mapping will get over-written.

Obviously, the sensible thing to do would be to attach this metadata about the “contents” of a BufferObject / VBO to the VBO itself. OpenGL took a different path (for historical reasons, again), and invented a new GPU-side “object” whose purpose is to store the metadata for one or more VBO’s. This is the VertexArrayObject, or VAO.

It’s not an Array, and it’s not a modern Object, nor does it contain Vertices. But … it is a GPU-side thing (or “object”) that holds the state for an array-of-vertices. It’s a clunky name, but you get used to it.

After you’ve finished storing the VBO’s state/settings/mapping-to-VertexShader-attributes in the VAO … you can de-select the VAO so that any later calls don’t accidentally overwrite the stored state. In standard OpenGL style, deselection is done by “binding” (glBindVertexArrayOES) the number “0”. We won’t do this here because it’s unnecessary, but lots of tutorials will do it as a precaution against typos elsewhere in the app.
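
(For reference, that deselection is a single call – “binding” VAO number 0:)

[objc]
glBindVertexArrayOES( 0 ); // deselect, so later calls can't accidentally modify this VAO
[/objc]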

[objc]

glBufferData(GL_ARRAY_BUFFER, 3 * sizeof( GLKVector3 ), cpuBuffer, GL_DYNAMIC_DRAW);

/** … Create a VAO (state) to save the metadata/state for the VBO (vertex data) */
GLuint VAOName;
glGenVertexArraysOES(1, &VAOName );
glBindVertexArrayOES( VAOName );

/** Tell OpenGL “how” the attribute “position” is stored/packed into the stream of bytes we just uploaded */
glEnableVertexAttribArray( attribute.glLocation );
glVertexAttribPointer( attribute.glLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);

[/objc]
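
That last call, glVertexAttribPointer, is the actual “interpret the data” step; here it is again with each argument annotated:

[objc]
glVertexAttribPointer(
attribute.glLocation, // which shader 'attribute' this data feeds
3, // 3 components per vertex (x, y, z)
GL_FLOAT, // each component is a float
GL_FALSE, // don't rescale/normalize the values
0, // stride: 0 = "components are tightly packed"
0 ); // byte offset into the currently-bound VBO to start reading from
[/objc]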

Complete! Geometry on-screen!

At last, we have a triangle:

(screenshot: a solid blue triangle, at last)

You can turn the background-clear on and off for the new Draw call, and see that without it, the triangle appears on the old magenta background, and with it, it appears on the new green background.

In this post, we’ve added three new classes to the project. No need to type them all out yourself. GitHub link to a commit that has this exact version of the full project (I’ll update this link if I make a new commit with changes/fixes).

Next tutorial…

What will be next? I think it has to be … texture-mapping!

UPDATE: actually, I did a quick time-out to cover Multiple objects on screen at once, and an improved set of classes for VAO and VBOs