UPDATED 24/09/13: Added some essential details to the class files at end of post, and corrected typos
Github project with full source from this article: the code now lives in a standalone library on GitHub, with the code from the articles as a Demo app. It uses the ‘latest’ version of the code, so the early articles are quite different – but it’s an easy starting point.
NB: if you’re reading this on AltDevBlog, the code-formatter is currently broken on the server. Until the ADB server is fixed, I recommend reading this (identical) post over at T-Machine.org, where the code-formatting is much better.
I’ve been using OpenGL ES 2 less than a year, so if you see anything odd here, I’m probably wrong. Might be a mix-up with desktop OpenGL, or just a typo. Comment/ask if you’re not sure.
2D APIs, Windowing systems, Widgets, and Drawing
Windowing systems draw like this:
- The app uses a bunch of Widget classes (textboxes (UILabel/UITextView), buttons, images (UIImageView), etc)
- Each widget is implemented the same way: it draws colours onto a background canvas
- The canvas (UIView + CALayer) is a very simple class that provides a rectangular area of pixels, and gives you various ways of setting the colours of any/all of those pixels
- A window displays itself using one or more canvases
- When something “changes”, the windowing system finds the highest-level data to change, re-draws that, and sticks it on the screen. Usually: the canvas(es) for a small set of views
Under the hood, windowing systems draw like this:
- Each canvas saves its pixels on the GPU
- The OS and GPU keep sending those raw pixels at 60Hz onto your monitor/screen
- When a widget changes, the canvas DELETES its pixels, re-draws them on the CPU, uploads the new “saved” values to the GPU, and goes back to doing nothing
The core idea is: “if nothing has changed, do nothing”. The best way to slow down an app is to keep telling the OS/windowing system “this widget/canvas has changed, re-draw it” as fast as you can. Every time you do that, NOT ONLY do you have to re-draw it (CPU cost), BUT ALSO the CPU has to upload the saved pixels onto the GPU, so that the OS can draw it to the screen.
OpenGL and Drawing
That sounds great. It leads to good app design. Clean OOP. etc.
But OpenGL does it differently.
Instead, OpenGL starts out by saying:
We’ll redraw everything, always, every single refresh of the monitor. If you change your code to re-draw something “every frame”, then with OpenGL … there is no change in performance, because we were doing that anyway.
(Desktop graphics chips usually have dedicated hardware for each different part of the OpenGL API. There’s no point in “not using” features frame-to-frame if the hardware is there and sitting idle. With mobile GPUs, some hardware is duplicated, some isn’t. Usually the stuff you most want is “shared” between the features you’re using, so just like normal CPU code: nothing is free. But it’s worth checking on a feature-by-feature basis, because sometimes it is exactly that: free)
When people say “it’s fast” this is partly what they mean: OpenGL is so blisteringly fast that every frame, at full speed, it can do the things you normally do “as sparingly as possible” in your widgets.
Multiple processors: CPUs vs GPUs … and Shaders
This worked really well in the early days of workstations, when the CPU was no faster than the motherboard, and everything in the computer ran at the same speed. But with modern computer hardware, the CPU normally runs many times faster than the rest of the system, and it’s a waste to “slow it down” to the speed of everything else – which is partly why windowing systems work the way they do.
With modern systems, we also have a “second CPU” – the GPU – which is also running very fast, and is also slowed down by the rest of the system. Current-gen phones have multiple CPU cores *and* multiple GPU cores. That’s a lot of processors you have to keep fed with data… It’s something you’ll try to take advantage of a lot. For instance, in Apple’s performance guide for iOS OpenGL ES, they give the example of having the CPU alternate between multiple rendering tasks to give the GPU time to work on the last lot.

[Figure: Apple’s diagram of CPU and GPU work overlapping across frames]
OpenGL approaches this by running your code in multiple places at once, in parallel:
1. A lot of your code runs on the CPU, like normal
2. A lot of your code appears to run on the CPU, but is a facade: it’s really running on the GPU
3. Some of your code runs on the GPU, and you have to “send” it there first
4. Most of your code *could be* running on CPU or GPU, but it’s up to your hardware + hardware drivers to decide exactly where
Most “gl” functions that you call in your code don’t execute code themselves. Instead, they’re in item 2: they run on the CPU, but only for a tiny fraction of time, just long enough to signal to the GPU that *it* should do some work “on your behalf”, and to do that work “at some time in the future. When you’re ready. KTHNXBYE”.
The third item is Shaders (ES 2 only has two kinds: Vertex Shaders + Fragment Shaders; desktop GL and later ES versions add more), written in GLSL (the GL Shading Language, a separate C-like language defined as part of the OpenGL spec).
Of course, multi-threaded programming is more complex than single-threaded programming. There are many new and subtle ways that it can go wrong. It’s easy to accidentally destroy performance – or worse: destroy correctness, so that your app does something different from what your source code seems to be telling it to do.
Thankfully, OpenGL simplifies it a lot. In practice, you usually forget that you’re writing multi-threaded code – all the traditional stuff you’d worry about is taken care of for you. But it leads (finally) to OpenGL’s core paradigm: Draw Calls.
Draw calls (and Frames)
Combine multi-threaded code with parallel processors, and combine that with facade code that pretends to run on the CPU but actually runs on the GPU … and you have a recipe for source-code disasters.
The OpenGL API effectively solves this by organizing your code around a single recurring event: the Draw call.
(NB: not the “frame”. Frames (as in “Frames Per Second”) don’t exist. They’re something that 3D engines (written on top of OpenGL) create as an abstraction – but OpenGL doesn’t know about them and doesn’t care. This difference matters when you get into special effects, where you often want to blur the line between “frames”)
It’s a simple concept. Sooner or later, if you’re doing graphics, you’ll need “to draw something”. OpenGL ES can only draw 3 things: triangles, lines, and points. OpenGL provides many methods for different ways of doing this, but each of them starts with the word “draw”, hence they’re collectively known as “draw calls”.
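In GL ES 2 code, the draw calls you’ll see most often are `glDrawArrays` and `glDrawElements`. A fragment for illustration only – it assumes a valid GL context, a bound shader program, and vertex data already configured, so it won’t run on its own:

```c
/* The three primitive types GL ES 2 can draw: */
glDrawArrays( GL_TRIANGLES, 0, 3 );  /* 1 triangle, from vertices 0..2 */
glDrawArrays( GL_LINES,     0, 2 );  /* 1 line */
glDrawArrays( GL_POINTS,    0, 1 );  /* 1 point */

/* ...or the indexed variant, drawing via an index buffer: */
glDrawElements( GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0 );
```

Everything else you might want to draw – quads, circles, text, 3D models – has to be built out of those three primitives.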
When you execute a Draw call, the hardware could do anything. But conceptually it does this:
- The CPU sends a message to the GPU: “draw this (set of triangles, lines, or points)”
- The GPU gathers up *all* messages it’s had from the CPU (since the last Draw call)
- The GPU runs them all at once, together with the Draw call
Technically, OpenGL’s multiprocessor paradigm is “batching”: it takes all your commands, caches (but doesn’t yet execute) them … until you give it the final “go!” command (a Draw call). It then runs them all in the order you sent them.
(understanding this matters a lot when it comes to performance, as we’ll soon see)
Anatomy of a Draw call
A Draw call implicitly or explicitly contains:
- A Scissor (masks part of the screen)
- A Background Colour (wipe the screen before drawing)
- A Depth Test (in 3D: if the object in the Draw call is “behind” what’s on screen already, don’t draw it)
- A Stencil Test (Scissors 2.0: much more powerful, but much more complex)
- Geometry (some triangles to draw!)
- Draw settings (performance optimizations for how the triangles are stored)
- A Blend function, usually for Alpha/transparency handling
- Dithering on/off (very old feature for situations where you’re using small colour palettes)
- Culling (automatically ignore the “far side” of 3D objects (the part that’s invisible to your camera))
- Clipping (if something’s behind the camera: don’t waste time drawing it!)
- Lighting/Colouring (in OpenGL ES 2: lighting and colouring a 3D object are *the same thing*. NB: in GL ES 1, they were different!)
- Pixel output (something the monitor can display … or you can do special effects on)
NB: everything in this list is optional! If you don’t do ANY of them, OpenGL won’t complain – it won’t draw anything, of course, but it won’t error either.
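Put together, a typical GL ES 2 draw sets only the state from that list that it cares about, and everything omitted keeps its previous/default value. Again a fragment for illustration – it assumes a context, plus a compiled `program` and vertex attributes set up elsewhere:

```c
/* Background colour + wipe the screen before drawing: */
glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

glEnable( GL_DEPTH_TEST );   /* depth test: on */
glEnable( GL_CULL_FACE );    /* culling: ignore the far side of 3D objects */

/* Blend function, for standard alpha/transparency handling: */
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

glUseProgram( program );     /* lighting/colouring: i.e. your shaders */

/* Geometry: the actual Draw call */
glDrawArrays( GL_TRIANGLES, 0, vertexCount );
```

Scissor, stencil, dithering, etc are all left untouched here – and as the NB above says, OpenGL won’t complain.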
That’s a mix of simple data, complex data, and “massively complex data (the geometry) and source code (the lighting model)”.
OpenGL ES 1 had all of the above, but the “massively complex” bit was proving way too complex to design an API for, so the OpenGL committee adopted a whole new programming language for it, wrapped it up, and shoved it off to the side as Shaders.
In GL ES 2, all the above is still there, but half of “Geometry” and half of “Lighting/Colouring” have been *removed* : bits of OpenGL that provided them don’t exist any more, and instead you have to do them inside your shaders.
(the bits that stayed in OpenGL are “triangle/vertex data” (part of Geometry) and “textures” (part of Lighting/Colouring). This also explains why those two parts are two of the hardest bits of OpenGL ES 2 to get right: they’re mostly unchanged from many years ago. By contrast, Shaders had the benefit of being invented after people had long experience with OpenGL, and they were simplified and designed accordingly)
Apple’s EAGLContext and CAEAGLLayer
At some point, Apple has to interface between the cross-platform, hardware-independent OpenGL … and their specific language/platform/Operating System.
The EAGL* classes are where the messy stuff happens; they’ve been around since the earliest versions of iOS, and they’re pretty basic.
These days, EAGLContext only handles two things of any interest to us:
- Allow multiple CPU threads to run independently, without needing any complex threading code (OpenGL doesn’t support multi-threading on the CPU)
- Load textures in the background, while continuing to RENDER TO SCREEN the current 3D scene (the hardware is capable of doing both at once)
In practice … all you need to remember is:
All OpenGL method calls will fail, crash, or do nothing … unless there is a valid EAGLContext object in memory AND you’ve called “setCurrentContext” on it
For fancy stuff later on, you might need to pre-create an EAGLContext, rather than create one on the fly
Later on, when we create a ViewController, we’ll insert the following code to cover both of these:
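A minimal sketch of that standard setup, e.g. in the ViewController’s viewDidLoad (the variable name is ours, for illustration):

```objc
// Create a GL ES 2 context; without one, every gl* call will
// fail, crash, or silently do nothing:
EAGLContext* localContext = [[EAGLContext alloc]
                initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSAssert( localContext != nil, @"Failed to create an OpenGL ES 2 context" );

// ...and make it current, so that subsequent gl* calls on this
// thread actually have somewhere to go:
[EAGLContext setCurrentContext:localContext];
```

Keeping a reference to the context (rather than creating it on the fly) is what lets you do the fancy stuff later – background texture loading, sharegroups, etc.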
Next: Part 3: Vertices, Shaders, and Geometry