OpenGL notes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

You may also be looking for

Compute: OpenCL
Graphics: OpenGL
Sound: OpenAL · OpenSL

OpenGL versions and derivatives

OpenGL 1 comes from the nineties and early noughties.

OpenGL 2 introduced shaders (or at least a standard high-level language for them)

OpenGL 3 added geometry shaders (core since 3.2)

deprecated immediate mode, display lists, GLSL 1.1 and 1.2
you can still use most of that, but shouldn't assume that will stay true forever

OpenGL 4

  • added tessellation shaders (4.0) and compute shaders (4.3)


OpenGL has some fun history (way more complex than what's mentioned here). Version 2 already came with some disagreement.

Very roughly speaking, this was eventually resolved by passing OpenGL to Khronos in 2006. Khronos[1] controls various open APIs dealing with graphics (and some audio), aiming for some unification. Its membership includes big names like Nvidia, AMD, Intel, ARM, gaming companies, phone companies, and various other contributing members.[2]

(Apple, notably, has since deprecated OpenGL and OpenCL on its products, going instead for its own Metal API, which is neither open nor compatible. That, and the fact that Metal is about as involved to work with as OpenGL, means game developers are unlikely to opt for supporting it unless they think they would otherwise miss a market, which they probably won't.)


Up to OpenGL 2 it was immediate-mode (with a few retained-mode-like features). It also had a fairly explicit state system: you bind an object, you say how it changes, you tell GL how to draw that state. This was initially intended to separate data from drawing and make it easy to hand the actual computation off to the hardware, while still retaining flexibility. It also implies most things are mutable, so caching is more complex, multithreading is hard, and the faster the hardware became, the harder it was to fully utilize it.

So it was useful that OpenGL 3 focused on shaders and new data management, which gave OpenGL useful momentum.

But it also started to feature-freeze, deprecate, and remove roughly everything that came before. This pissed a lot of people off, roughly because, depending on how that works out in the real world, everything previously called OpenGL could just stop working one day in the near-ish future.

The reason both are currently called OpenGL is ease of transition: a period in which both styles are mostly supported.

On desktop, anyway, because OpenGL ES (which is by no means new) abandoned immediate mode entirely, so GL in phones and browsers has never been able to run OpenGL code written in the first two decades of OpenGL's existence.


Which is basically the nicest possible way to break a thing.

For more on versions, see e.g. http://en.wikipedia.org/wiki/OpenGL


Derivations, profiles, and such:

  • OpenGL ES ('embedded systems')
    • Makes it easier to implement on more restricted hardware - removes a number of fancy features and alternatives, adds fixed-point data types. Also removes glBegin...glEnd style in favour of vertex arrays (and, since ES 1.1, VBOs).
    • OpenGL ES 1.0 and 1.1 were based on OpenGL 1.3 and 1.5, respectively.
    • OpenGL ES 2.0 is based on OpenGL 2.0, and switched to doing many more things with shaders.


  • WebGL adds ES-style GL to the (HTML5) <canvas> element, controlled via JavaScript.
    • WebGL (v1) is based on OpenGL ES 2.0, so also relies on shaders, and doesn't do glBegin/glEnd style drawing.
    • WebGL 2 is based on OpenGL ES 3.0, and in practice makes a few things easier (e.g. guaranteed availability of extensions that were optional in WebGL 1.0)


  • OpenGL SC ('Safety Critical') - restricted profile that may be required in safety-critical contexts, such as avionics, military, medical, automotive, and other industry settings. Expect it to come at the cost of speed and features.

The ways of telling OpenGL to draw objects (roughly)

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
  • glBegin...glEnd, encompassing (primarily) glVertex, glTexCoord, glColor, glNormal calls
    • hands in data to render for each frame
    • is referred to as the immediate-mode API (though note that much of the below fits the general concept of being immediate mode)
    • commonly seen in tutorials because its verbosity makes its intent clear to newcomers, it's harder to make mistakes in, and easy to play with (a minimal sketch follows this list)
    • for complex objects this means a lot of call overhead, which limits the speed at which you can tell the GPU about complex scenes - at some point, the call overhead will become the bottleneck.
    • not present in OpenGL ES and WebGL
    • Deprecated in OpenGL 3.0, removed from the core profile in 3.1
  • display lists
    • an optimization of the previous. You let GL record the data commands (into GL state), which can later be rendered as a batch via one GL call
    • still immediate mode, but the data tends to sit closer to where it needs to be
    • immutable, which means you can't change the object, but also that the driver is free to precompile and cache on hardware if it likes to
    • for static objects, DLs are the easiest speed increase over glBegin/glEnd style. In the past it was the main answer to 'how more speed?', these days that question is more complex.
    • for simpler objects, DLs may outperform VBOs (verify)
    • Deprecated in OpenGL 3.0, removed from the core profile in 3.1
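
For concreteness, a minimal sketch of both styles in C (function names are made up; this assumes a classic GL 1.x/2.x context already exists, e.g. via GLUT):

  #include <GL/gl.h>

  static GLuint triangle_list;   /* id of our display list, filled in below */

  /* Immediate mode: hand in every vertex, every frame. */
  void draw_triangle_immediate(void) {
      glBegin(GL_TRIANGLES);
      glColor3f(1.0f, 0.0f, 0.0f);  glVertex3f(-1.0f, -1.0f, 0.0f);
      glColor3f(0.0f, 1.0f, 0.0f);  glVertex3f( 1.0f, -1.0f, 0.0f);
      glColor3f(0.0f, 0.0f, 1.0f);  glVertex3f( 0.0f,  1.0f, 0.0f);
      glEnd();
  }

  /* Display list: record the same calls once (GL_COMPILE records rather than draws)... */
  void build_triangle_list(void) {
      triangle_list = glGenLists(1);
      glNewList(triangle_list, GL_COMPILE);
      draw_triangle_immediate();
      glEndList();
  }

  /* ...then replay them each frame with a single call. */
  void draw_triangle_list(void) {
      glCallList(triangle_list);
  }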


  • vertex arrays, a.k.a. vertex buffers (OpenGL 1.5)
    • you allocate well-typed arrays (host side), store vertex/color/normal/texturecoord data in them
    • ...and in each frame you make a call that makes the OpenGL implementation copy that data to the GPU
    • For the same amount of 3D information, this makes for (much) less call overhead than glBegin/glEnd style each frame, but it may have to send all data to the GPU every frame, which can be a bottleneck (a sketch of plain and indexed use follows this list)
    • Much less driver optimization than DLs
    • ...but the data is allowed to change so you can use it for (nontrivial) dynamic stuff
  • index-based vertex arrays
    • you upload vertex (coordinate data), and refer to them via indices
    • less transfer if vertices are significantly reused between primitives (e.g. in meshes)
    • may better exploit the vertex cache: if you order the indices so that the same references are close together, the GPU is likelier to be able to use a cached lookup of index to data.
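
A minimal sketch of both plain and indexed vertex arrays (the example data is made up; again assumes a classic context):

  #include <GL/gl.h>

  static const GLfloat verts[] = {
      -1.0f, -1.0f, 0.0f,
       1.0f, -1.0f, 0.0f,
       0.0f,  1.0f, 0.0f,
  };
  static const GLushort indices[] = { 0, 1, 2 };

  /* Plain vertex array: one draw call hands in the whole array. */
  void draw_with_vertex_array(void) {
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, verts);   /* 3 floats per vertex, tightly packed */
      glDrawArrays(GL_TRIANGLES, 0, 3);
      glDisableClientState(GL_VERTEX_ARRAY);
  }

  /* Indexed variant: vertices are referenced by index, so vertices shared
     between triangles only need to be stored (and transferred) once. */
  void draw_with_indices(void) {
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, verts);
      glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices);
      glDisableClientState(GL_VERTEX_ARRAY);
  }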


  • compiled vertex arrays (CVA) (GL extension)
    • vertex arrays, which you can lock (partly or wholly(verify))
    • primarily to say "okay driver, actually you can do pre-compilation and caching right now" (a rough sketch follows below)
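
Roughly what that looks like, assuming the EXT_compiled_vertex_array extension is actually present and its entry points have been fetched (GLEW is used here, and would need to have been initialized after context creation):

  #include <GL/glew.h>

  void draw_locked(const GLfloat *verts, GLsizei vertex_count,
                   const GLushort *indices, GLsizei index_count) {
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, verts);

      if (GLEW_EXT_compiled_vertex_array)
          glLockArraysEXT(0, vertex_count);    /* "this range won't change for a while" */

      /* reuse is where locking pays off, e.g. multi-pass rendering */
      glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);
      glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);

      if (GLEW_EXT_compiled_vertex_array)
          glUnlockArraysEXT();
      glDisableClientState(GL_VERTEX_ARRAY);
  }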


  • Vertex Buffer Objects (VBO)
    • The driver may choose to cache VBOs on the 3D hardware, swap them in and out, and may deny allocation
    • Allows for more retained-mode style rendering (the others are much closer to immediate mode)
    • May sometimes be the only way to get all the vertex processing speed your hardware is capable of (a rough sketch follows this list)
  • Index Buffer Objects (IBO)
    • index-style VBOs
  • Note: Buffer objects are the general concept of "we store the data buffers in a place where the GPU can get it directly using DMA, so that the GPU can fetch it later" (e.g. while the CPU is working on the command queue)
    • the CPU can update this data - with minimal arbitration from OpenGL
    • that arbitration does need to happen, since the data may be in video memory or not
    • applicable to things beyond vertex data (VBO, IBO), e.g. also to pixel data (PBOs)
    • (has some other side effects, such as not necessarily polluting the CPU cache)
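
A rough sketch of VBO + IBO use in the GL 1.5 style (on platforms whose headers stop at GL 1.1 these entry points come via an extension loader; the data is made-up example data):

  #include <GL/gl.h>

  static const GLfloat verts[] = {
      -1.0f, -1.0f, 0.0f,
       1.0f, -1.0f, 0.0f,
       0.0f,  1.0f, 0.0f,
  };
  static const GLushort indices[] = { 0, 1, 2 };
  static GLuint vbo, ibo;

  /* Done once: hand the data to the driver, which may park it in video memory. */
  void upload_once(void) {
      glGenBuffers(1, &vbo);
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);   /* hint: won't change */

      glGenBuffers(1, &ibo);
      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
      glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
  }

  /* Each frame: no vertex data crosses the bus, just the draw call. */
  void draw_each_frame(void) {
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, (void *)0);   /* an offset into the bound VBO, not a pointer */
      glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (void *)0);
      glDisableClientState(GL_VERTEX_ARRAY);
  }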


  • Vertex Array Range, VAR (GL extension)
    • apparently much like buffer objects (cacheable, DMA-able)
    • but a little more streamlined - in a way that usually only matters on high-vertex scenes
    • mainly, you can only do this with specifically allocated and marked memory ranges



See also:

Graphics pipelines

GL coding style

On binding and context

GLSL

Semi-sorted

On GLUT

GLUT handles interaction with the OS such as keyboard, mouse, creating a window, when to redraw, that sort of thing.

It's convenient boilerplate, particularly for an easy cross-platform way of creating a GL context. (GPGPU projects may use a few GLUT calls for just that reason.)
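
The typical boilerplate looks roughly like this (window title and callback names are made up):

  /* Minimal GLUT program: create a window + GL context, redraw on demand.
     Compile with something like: cc main.c -lglut -lGL (varies per platform). */
  #include <GL/glut.h>

  static void display(void) {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      /* ...draw your scene here... */
      glutSwapBuffers();               /* we asked for double buffering below */
  }

  int main(int argc, char **argv) {
      glutInit(&argc, argv);
      glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
      glutInitWindowSize(640, 480);
      glutCreateWindow("GL context via GLUT");
      glutDisplayFunc(display);        /* called whenever a redraw is needed */
      glutMainLoop();                  /* GLUT owns the main loop from here on */
      return 0;
  }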


That said, it is not the prettiest way to do things, and has a few irritating limitations.


If you want a little more, look at SDL[3]; 3D engines, meanwhile, typically have their own thing instead.


Redrawing

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Design decisions

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
Immediate versus retained mode


In graphics and animation in general, immediate mode refers to telling the library what to draw each frame, while retained mode means updating the graphics state and leaving the library to decide when and how to draw.

Often enough, retained mode is more likely to be able to completely change and optimize the way it goes about dealing with the data - for speed, for minimal memory, or whatnot. This is particularly relevant for 3D hardware, where the driver actually does this.

Immediate mode

  • means your application keeps track of the scene, and asks the graphics library to draw it.
  • OpenGL's glBegin/glEnd style is often strongly associated with immediate mode, as are vertex arrays, because each frame you (have no option but to) call functions to hand in the data
  • geometry can be rendered incrementally (things like PostScript work this way; you could rasterize just a few lines at a time, though this isn't always the most efficient use of resources)
  • when you spend a lot of time sending data through a bottleneck (bus) to a relative powerhouse (GPU), this is slower than necessary. The glBegin/glEnd style most easily leads to such bottlenecks (and has since been deprecated)


Retained mode

  • means the time at which to actually render can be chosen somewhat more adaptively (which can sometimes look better, and/or make for an easier quality/speed tradeoff, and/or be more efficient(verify))
  • may be easier to use (depending a little on case) -- but may also mean the API/library you use is more restrictive than you'd wish (and than is technically possible)
  • can have higher memory requirements


In terms of speed and features, the distinction isn't so simple. Immediate mode lets you do some targeted optimizations, retained mode can do some clever things via delayed rendering.


OpenGL (particularly pre-3.0) is primarily immediate mode, but with a number of more retained-mode features, with a limited amount of state (mostly convenience), and extensions that let you e.g. reuse objects without additional transfer.

Direct3D is more specifically designed to do both styles.


Imperative versus declarative

Imperative roughly means you give the graphics library commands to execute, down to individual draw commands; declarative means you describe the scene and camera, and let the library figure out how to draw it.

It's an arguable distinction in this context, though still useful in the back of your mind.


You could argue it's declarative, because while the application figures out the scene to send in, it's the graphics pipeline that does a lot more processing and eventually does the drawing.

And you can argue it's imperative because you don't get "draw a cube over there", you get "these are the vertices. This is how they make up triangles, in this order so that they face out. This is the color at each point. This is the texture," and any application that wants to be clever or fast has to think about a bunch of relatively low-level details of how it will be drawn - dealing with level of detail, occlusion, mip mapping, and such. All the 3D engines out there exist mostly to take the bulk of that low-level bother out - some of which can certainly assist more declarative-style setups - though the core of these libraries is imperative.

Graphics pipeline

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


The graphics pipeline refers to how exactly we go from scene information to final picture.

While the basic setup is the same in most places, the details are far from fixed; they evolve and will vary with time, hardware, libraries, and versions. Complete diagrams of modern pipelines look pretty damn complex, in part because recent hardware capabilities and extensions are often tacked on.


A graphics pipeline now regularly looks like:

  • Application/Command
    • alter and maintain scene state, and the implied graphics state that is exchanged with 3D hardware
    • the imperative/declarative distinction mostly applies at this level.
  • Geometry
    • the most basic geometry is triangles, based on vertices and how to attach those vertices
    • objects are defined in their own coordinate system, and transformed into global coordinates (exactly how depends a bit on the library; a fixed-function sketch follows this list)
    • some operations on every vertex
    • some operations on every triangle
      • clipping, culling,
    • vertex shaders can be used here
    • result: triangles in screen space (coordinate system associated with the screen)
    • (much of the more basic geometry stuff can be done in parallel, which is what GPUs are good at)
  • Rasterization
    • color and lighting per vertex
    • texture per fragment
  • Texturing
    • historically combined with colored vertices in a few predefined ways. Now can be more flexible via fragment shaders
  • Fragment
    • owner, scissor, depth, alpha, stencil tests
  • Composite
  • Display
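
In the classic fixed-function style, the host-side part of setting up those transforms looks roughly like this (all numbers are made up):

  #include <GL/gl.h>

  void set_up_transforms(void) {
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);   /* camera space -> clip space */

      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();
      glTranslatef(0.0f, 0.0f, -5.0f);               /* place the object in front of the camera */
      glRotatef(30.0f, 0.0f, 1.0f, 0.0f);            /* the object's own orientation */

      /* ...then hand in geometry in the object's own coordinates,
         via glBegin/glEnd, vertex arrays, or VBOs as above. */
  }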


Rendering small bits of geometry, often triangles, has proven the common and often the easiest choice for real-time, parallelized rendering. There are other methods, such as raytracing, that tend to look better but have historically been too expensive to do in real time. Much of the recent (optional) fanciness like shadows, reflections, refraction and such is ray-based, computationally expensive, and hacked onto the basic geometry-based pipeline (and regularly done in cheating, cheaper-to-render ways).


Glossary:

  • fragment - like a pixel, but before occlusion decisions, so there are often various fragments for what to you is a single screen pixel. Fragments often include depth information, as this is necessary for blending and occlusion/opaque rendering.
    • ...means geometry state is necessary even in immediate mode, and is hard to parallelize.
  • Framebuffer - data structure that holds information for each pixel.
    • What you see on screen is a framebuffer. There can be supporting framebuffers.
    • Double buffering basically means drawing into one framebuffer while the other is on-screen.
  • Shaders are programs that reside on the GPU. They are simple and often single-purpose
    • have been part of core OpenGL since 2.0, and of DirectX since 8.0. Later hardware and later versions of these libraries support later shader models (a host-side sketch of setting shaders up follows this list)
    • This style may well replace some of the older styles of programming and scene rendering, largely because this is more flexible in the long run (though a little more complex to beginners)
    • In currently common graphics pipelines there are per-vertex shaders, and per-fragment shaders (a.k.a. pixel shaders).
      • Vertex shaders are run for each vertex, primarily to project into 2D.
      • (geometry shading?)
      • Fragment shaders, part of rasterization, are used for fancier effects like lighting, bump mapping, shadows, reflections
      • some effects may require planned combination of both to work
    • General-purpose use (scientific calculation, password cracking) now usually uses shaders in their more general sense of 'a program running on the GPU', bypassing the graphics pipeline and any rendering. For some cross-reference there, see this table.
      • Getting high throughput often requires not so much specific optimization as some awareness of the hardware, e.g. how it executes things, moves memory, caches, and such. Code that you simply port just enough to get it running may perform miserably, and many things cannot be parallelized well in the first place.
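
As a rough host-side sketch of that setup (GL 2.0 style, placeholder shader strings, minimal error checking; on some platforms these entry points come via an extension loader):

  #include <GL/gl.h>
  #include <stdio.h>

  static const char *vs_src =
      "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
  static const char *fs_src =
      "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }";

  GLuint build_program(void) {
      GLuint vs = glCreateShader(GL_VERTEX_SHADER);
      GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
      GLuint prog = glCreateProgram();
      GLint linked = 0;

      glShaderSource(vs, 1, &vs_src, NULL);
      glCompileShader(vs);
      glShaderSource(fs, 1, &fs_src, NULL);
      glCompileShader(fs);

      glAttachShader(prog, vs);
      glAttachShader(prog, fs);
      glLinkProgram(prog);
      glGetProgramiv(prog, GL_LINK_STATUS, &linked);
      if (!linked)
          fprintf(stderr, "shader program failed to link\n");

      glUseProgram(prog);   /* from here on, these shaders replace the fixed-function stages */
      return prog;
  }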


Unsorted

http://www.opengl.org/resources/features/KilgardTechniques/oglpitfall/


Note on Threading

GL related libraries

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Note that none of these are really part of OpenGL. At best they are bundled in the same download because they are common conveniences, but they are not entangled.

  • GLU - ...Utility Library
    • graphics functions, mostly higher-level convenience tools built on the more basic GL things.
    • Includes things like mapping between screen and world coordinates, mipmap generation, tessellation, quadric surfaces, NURBS, and parametric generation of objects like spheres, cylinders, and disks (a small example follows at the end of this page).
    • frequently seen distributed with OpenGL.
  • GLUT - ...Utility Toolkit
    • Mostly OS interaction, such as window stuff and interaction with keyboard and mouse
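
A small sketch of that GLU convenience, assuming a context and viewport are already set up (all numbers made up):

  #include <GL/gl.h>
  #include <GL/glu.h>

  void set_up_camera(int width, int height) {
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      gluPerspective(60.0, (double)width / height, 0.1, 100.0);   /* fov, aspect, near, far */

      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();
      gluLookAt(0.0, 0.0, 5.0,    /* eye position */
                0.0, 0.0, 0.0,    /* looking at the origin */
                0.0, 1.0, 0.0);   /* up vector */
  }

  void draw_sphere(void) {
      GLUquadric *quad = gluNewQuadric();
      gluSphere(quad, 1.0, 32, 16);   /* radius, slices, stacks */
      gluDeleteQuadric(quad);
  }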