OpenGL notes

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

You may also be looking for

Compute: OpenCL
Graphics: OpenGL · OpenVG
Sound: OpenAL · OpenSL


Contrast with: DirectX, Vulkan, Apple Metal


OpenGL versions and derivatives

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)


OpenGL 1 (1992 through 2003ish) comes from the nineties and early noughties.

OpenGL 2 (2004) started introducing shaders (or at least a standard high-level language for them)

OpenGL 3 (2008) added geometry shaders

deprecated immediate mode (glBegin/glEnd), display lists, GLSL 1.1 and 1.2
you can still use most of them, but assume that won't stay true much longer

OpenGL 4 (2010 and later)

  • added tessellation and compute stuff



Which looks simple, but ignores a complex history.

Version 2 (2004) already came with some, ahem, political disagreement.


Very roughly speaking, this was eventually resolved by passing control of the OpenGL spec to Khronos in 2006. Khronos[1] controls various open APIs dealing with graphics (and some audio) trying for some unification.

Khronos membership includes some big names like Nvidia, AMD, Intel, ARM, gaming companies, phone companies, and various other contributing members[2]. (This notably excludes Apple)




For more on versions, see e.g. http://en.wikipedia.org/wiki/OpenGL


Up to version 2, OpenGL was immediate mode (with a few retained-mode-like features).

It also had a fairly explicit state system: you define what an object is, you say how it changes, you tell GL how to draw it. This was largely intended to separate data from drawing, and makes it easy to hand all the drawing-related computation off to hardware, while still retaining flexibility.

Such a system of complex state also implies many things are mutable, which makes GPU-side caching complex, and makes multithreading hard to combine with GL. The faster the hardware became, the harder it was to fully utilize it with such a state description system.


The optimistic take

So it was useful that OpenGL 3 focused on a new area - shaders (which had basically been there since version 2; not a fundamentally different way of working, but a tool that makes other ways easier) and new data management, which gave OpenGL room to grow to fancier and faster things, and momentum in that direction. Which is great...


The critical take

...but at the same time, it also immediately started to feature-freeze, deprecate, and remove things, and this process seemed to target roughly everything that came before.

It'd be one thing to call what came before 'GL classic' or something, and make support for it optional, of secondary concern, even separately installed - yes, old games would rely on it (something we can't change, because it's proprietary code), but those also don't need the maximum processing power, so given the nature of faster hardware it wouldn't matter much if that code got little attention.


It's a whole other thing to just quietly start removing the building blocks that GL used to entirely consist of.

This is only somewhat dramatized.

Buckets of OpenGL code you will find in tutorials will absolutely not run on, say, your phone, or in your browser.

This has to do with OpenGL ES (any version of it, so basically since the early 2000s), which is basically a series of forks of standard GL - "simplified" versions that had already dropped much of this.

WebGL is basically just ES.

Phone 3D is basically just ES.


It is interesting to mention Vulkan (introduced 2015 as a more modern open, cross-platform API), which seems to basically do this "be honest about starting over" thing years later - yet how it fits into this mix is... not well understood.




So if you use "standard" to mean "if you adhere to the standard, then it will run", OpenGL has just stopped being a standard.

Again, it's partly just confusing naming. 'OpenGL ES' is clearly a subset, 'OpenGL' could be any version ever, but there is no clear name for the subset that will start disappearing completely, and soon. 'We call it OpenGL classic, and it's going away staggered, removing X by version Y' would have at least made it clearer, but the reality seems a lot closer to 'we use OpenGL as a brand, have defined incompatible sub-standards, and customers will have to accept whatever the fuck we do'.

There is already varied emulation of deprecated OpenGL, because -- surprise -- various things depend on it.

This pissed a bunch of people off, not least because there is a lot of example GL code out there, from the last two decades, that is absolutely never going to run on mobile or in browsers.


And even if you can get it to run on a desktop today, it'll stop working there too in the near-ish future.

They call it "transition period", I call it "breaking shit because it's easier this way".





Some named derivations, profiles, and such:

  • OpenGL ES ('embedded systems')
    • Makes it easier to implement on more restricted hardware - removes a number of fancy features and alternatives, adds fixed-point data types. Also removes glBegin...glEnd style in favour of vertex arrays (and VBOs?).
    • OpenGL ES before 2.0 was based on OpenGL 1.3, 1.5.
    • OpenGL ES 2.0 is based on OpenGL 2.0, and switched to doing many more things with shaders.


  • WebGL adds (only) ES-style GL to the (HTML5) <canvas> element, controlled via JavaScript.
    • WebGL (v1) is based on OpenGL ES 2.0, so also relies on shaders, and doesn't do glBegin/glEnd style drawing.
    • WebGL2 is based on OpenGL ES 3.0, and in practice makes a few things easier (e.g. guaranteed availability of extensions that were optional in WebGL 1)


  • OpenGL SC ('Safety Critical') - restricted profile that may be required in safety-critical contexts, such as avionics, military, medical, automotive, and other industry settings. Expect it to come at the cost of speed and features.




The ways of telling OpenGL to draw objects (roughly)

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)
  • glBegin...glEnd, encompassing (primarily) glVertex, glTexCoord, glColor, glNormal calls
    • hands in data to render for each frame
    • is referred to as the immediate-mode API (though note that much of the below fits the general concept of being immediate mode)
    • commonly seen in tutorials because its verbosity makes its intent clear to newcomers, it's harder to make mistakes in, and it's easy to play with (a minimal sketch follows below).
    • for complex objects this means a lot of call overhead, which limits the speed at which you can tell the GPU about complex scenes - at some point, the call overhead will become the bottleneck.
    • not present in OpenGL ES and WebGL
    • Deprecated in OpenGL 3.0? (verify)
  • display lists
    • an optimization of the previous. You let GL record the data commands (into GL state), which can later be rendered as a batch via one GL call
    • still immediate mode, but the data tends to sit closer to where it needs to be
    • immutable, which means you can't change the object, but also that the driver is free to precompile and cache on hardware if it likes to
    • for static objects, DLs are the easiest speed increase over glBegin/glEnd style. In the past it was the main answer to 'how more speed?', these days that question is more complex.
    • for simpler objects, DLs may outperform VBOs (verify)
    • Deprecated in OpenGL 3.0? (verify)
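
For concreteness, a minimal sketch of both styles in C (plain desktop GL; this assumes a context already exists, e.g. created via GLUT as described further down):

 #include <GL/gl.h>
 
 /* Immediate mode: hand in the data again on every frame. */
 void drawTriangle(void) {
     glBegin(GL_TRIANGLES);
         glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
         glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
         glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
     glEnd();
 }
 
 /* Display list: record those same calls once into GL state... */
 GLuint makeTriangleList(void) {
     GLuint list = glGenLists(1);
     glNewList(list, GL_COMPILE);
     drawTriangle();
     glEndList();
     return list;
 }
 
 /* ...and replay the whole batch with a single call per frame. */
 void drawFrame(GLuint list) {
     glCallList(list);
 }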


  • vertex arrays (core since OpenGL 1.1)
    • you allocate well-typed arrays (host side), store vertex/color/normal/texturecoord data in them
    • ...and in each frame you make a call that makes the OpenGL implementation copy that data to the GPU
    • For the same amount of 3D information, this makes for (much) less call overhead than glBegin/glEnd style each frame, but it may have to send all data to the GPU every frame, which can be a bottleneck
    • Much less driver optimization than DLs
    • ...but the data is allowed to change so you can use it for (nontrivial) dynamic stuff
  • index-based vertex arrays
    • you upload vertex (coordinate data), and refer to them via indices
    • less transfer if vertices are significantly reused between primitives (e.g. in meshes)
    • may better exploit the vertex cache: if you order the indices so that the same references are close together, the GPU is likelier to be able to use a cached lookup of index to data (a sketch of both follows below).
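
A minimal sketch of a host-side vertex array drawn via indices (reusing the two shared corners of a quad):

 #include <GL/gl.h>
 
 /* A quad as four vertices, drawn as two triangles via six indices,
    so the two shared corners are stored (and sent) only once. */
 static const GLfloat verts[] = {
     -1.0f, -1.0f, 0.0f,
      1.0f, -1.0f, 0.0f,
      1.0f,  1.0f, 0.0f,
     -1.0f,  1.0f, 0.0f,
 };
 static const GLubyte indices[] = { 0, 1, 2,   0, 2, 3 };
 
 void drawQuad(void) {
     glEnableClientState(GL_VERTEX_ARRAY);
     glVertexPointer(3, GL_FLOAT, 0, verts);  /* point GL at the host-side array */
     /* without indices, you would use glDrawArrays() here instead */
     glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
     glDisableClientState(GL_VERTEX_ARRAY);
 }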


  • compiled vertex arrays (CVA) (GL extension)
    • vertex arrays, which you can lock (partly or wholly(verify))
    • primarily to say "okay driver, actually you can do pre-compilation and caching right now" (a sketch follows below)
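
A sketch, assuming the EXT_compiled_vertex_array extension is present and its prototypes are available (e.g. via an extension loader, or GL_GLEXT_PROTOTYPES plus GL/glext.h):

 #define GL_GLEXT_PROTOTYPES
 #include <GL/gl.h>
 #include <GL/glext.h>
 
 /* Lock the currently set arrays, promising the driver they won't change,
    so it may precompile/cache them - useful for multi-pass rendering. */
 void drawMultiPass(const GLfloat *verts, GLsizei vertexCount) {
     glEnableClientState(GL_VERTEX_ARRAY);
     glVertexPointer(3, GL_FLOAT, 0, verts);
 
     glLockArraysEXT(0, vertexCount);
     /* ...several draw calls over the same locked vertices... */
     glUnlockArraysEXT();
 
     glDisableClientState(GL_VERTEX_ARRAY);
 }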


  • Vertex Buffer Objects (VBO)
    • The driver may choose to cache VBOs on the 3D hardware, swap them in and out, and may deny allocation
    • Allows for more retained-mode style rendering (the others are much closer to immediate mode)
    • May sometimes be the only way to get all the vertex processing speed your hardware is capable of
  • Index Buffer Objects (IBO)
    • index-style VBOs (a sketch of both follows below)
  • Note: Buffer objects are the general concept of "we store the data buffers in a place where the GPU can get it directly using DMA, so that the GPU can fetch it later" (e.g. while the CPU is working on the command queue)
    • the CPU can update this data - with minimal arbitration from OpenGL
    • that arbitration does need to happen, since the data may be in video memory or not
    • applicable to things beyond vertex data (VBO, IBO), e.g. also to pixel data (PBOs)
    • (has some other side effects, such as not necessarily polluting the CPU cache)
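
A minimal sketch of VBO plus IBO use, assuming a GL 1.5+ header or an extension loader (GLEW, glad, and such) so the buffer-object entry points are available:

 #include <GL/gl.h>
 
 static GLuint vbo, ibo;
 
 /* One-time upload; the driver may now keep this in video memory. */
 void setupBuffers(const GLfloat *verts, GLsizeiptr vertBytes,
                   const GLushort *indices, GLsizeiptr indexBytes) {
     glGenBuffers(1, &vbo);
     glBindBuffer(GL_ARRAY_BUFFER, vbo);
     glBufferData(GL_ARRAY_BUFFER, vertBytes, verts, GL_STATIC_DRAW);
 
     glGenBuffers(1, &ibo);
     glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
     glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indices, GL_STATIC_DRAW);
 }
 
 /* Per frame: no vertex data crosses the bus, just these commands. */
 void drawBuffers(GLsizei indexCount) {
     glBindBuffer(GL_ARRAY_BUFFER, vbo);
     glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
     glEnableClientState(GL_VERTEX_ARRAY);
     /* with a buffer bound, the last argument is a byte offset, not a pointer */
     glVertexPointer(3, GL_FLOAT, 0, (void *)0);
     glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)0);
     glDisableClientState(GL_VERTEX_ARRAY);
 }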


  • Vertex Array Range, VAR (GL extension)
    • apparently much like buffer objects (cacheable, DMA-able)
    • but a little more streamlined - in a way that usually only matters on high-vertex scenes
    • mainly, you can only do this with specifically allocated and marked memory ranges



See also:

Graphics pipelines

GL coding style

On binding and context

GLSL

Semi-sorted

On GLUT

GLUT is an API that handles interaction with the OS - keyboard, mouse, creating a window, deciding when to redraw, that sort of thing - and can be assumed to be cross-platform.

It's convenient boilerplate, particularly for an easy cross-platform way of creating a GL context. (GPGPU projects may use a few GLUT calls for just that reason.)
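
A minimal sketch of that boilerplate - roughly the least code that gets you a window with a GL context and a draw loop:

 #include <GL/glut.h>
 
 void display(void) {
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
     /* ...draw your scene here... */
     glutSwapBuffers();  /* double buffering: show the freshly drawn buffer */
 }
 
 int main(int argc, char **argv) {
     glutInit(&argc, argv);
     glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
     glutInitWindowSize(640, 480);
     glutCreateWindow("GL context, the easy way");
     glutDisplayFunc(display);
     glutMainLoop();  /* hands control to GLUT's event loop */
     return 0;
 }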


That said, it is not the prettiest way to do things, and has a few irritating limitations.


If you want a little more, look at SDL[3]; 3D engines typically have their own thing instead.


Redrawing

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

Design decisions

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)
Immediate versus retained mode


In graphics and animation in general, immediate mode refers to going through, for each frame, a bunch of instructions that tell the library how to draw that frame, while retained mode means updating the state of the things to be drawn, and leaving it up to the library to figure out how and when they should be drawn.


Immediate mode may be more direct, more low-level, more flexible and controllable, and perhaps more easily understood, but the fancier things get, the more work you put on your own shoulders.

Retained mode does more for you, and can apply varied automatic optimizations (particularly relevant for 3D hardware, where the driver actually does this), at the cost of APIs that force you into specific ways of doing things. Programmers will also care that such APIs have proven to change somewhat frequently.


Immediate mode

  • means your application keeps track of the scene, and asks the graphics library to draw it.
  • early OpenGL's glBegin/glEnd style is often strongly associated with immediate mode, as are vertex arrays, because for each frame you call functions to hand in data
  • geometry can be rendered incrementally (things like PostScript work this way; you could rasterize just a few lines at a time, though this isn't always the most efficient use of resources)
  • the more data you have, the more that sending that data matters. It doesn't matter if you have a relative powerhouse (GPU) when it takes significant time just to tell it what to do. The glBegin/glEnd style can most easily lead to such bottlenecks (and is becoming deprecated)


Retained mode

  • means the time at which to actually render can be chosen somewhat more adaptively (which can sometimes look better, and/or make for an easier quality/speed tradeoff, and/or be more efficient(verify))
  • may be easier to use (depending a little on case) -- but may also mean the API/library you use is more restrictive than you wish (and than is necessary)
  • can have higher memory requirements


In terms of speed and features, the distinction isn't so simple. Immediate mode lets you do some targeted optimizations, retained mode can do some clever things via delayed rendering.


OpenGL (particularly pre-3.0) is primarily immediate mode, but with a number of more retained-mode features, with a limited amount of state (mostly convenience), and extensions that let you e.g. reuse objects without additional transfer.

Direct3D is more specifically designed to do both styles.
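
As a sketch of the contrast - the immediate-mode half is real GL, while the retained-mode half uses a hypothetical scene-graph API (SceneNode, scene_set_rotation, and scene_render are made-up names standing in for whatever your engine provides):

 #include <GL/gl.h>
 
 /* Immediate mode: the application owns the scene and re-issues
    the drawing work every frame. */
 void frame_immediate(float angle) {
     glClear(GL_COLOR_BUFFER_BIT);
     glLoadIdentity();
     glRotatef(angle, 0.0f, 1.0f, 0.0f);
     glBegin(GL_TRIANGLES);
         glVertex3f(-1.0f, -1.0f, 0.0f);
         glVertex3f( 1.0f, -1.0f, 0.0f);
         glVertex3f( 0.0f,  1.0f, 0.0f);
     glEnd();
 }
 
 /* Retained mode: the library owns the scene; the application only
    updates state, and the library decides how and when to redraw. */
 typedef struct SceneNode SceneNode;              /* hypothetical */
 void scene_set_rotation(SceneNode *n, float a);  /* hypothetical */
 void scene_render(void);                         /* hypothetical */
 
 void frame_retained(SceneNode *triangle, float angle) {
     scene_set_rotation(triangle, angle);
     scene_render();
 }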


Imperative versus declarative

Imperative roughly means you give the graphics library commands to execute, down to individual draw commands; declarative means you describe scene and camera, and let the library figure out how to draw it.

It's an arguable distinction in this context, though still useful in the back of your mind.


You could argue it's declarative, because while the application figures out the scene to be drawn, it's the graphics pipeline that does a lot more processing and eventually does the drawing.

And you can argue it's imperative, because you don't get to say "draw a cube over there"; you get "these are the vertices, this is how they make up triangles, in this order so that they face out, this is the color at each point, this is the texture" - and any application that wants to be clever or fast has to think about a bunch of relatively low-level details of how it will be drawn, dealing with level of detail, occlusion, mipmapping, and such. The 3D engines out there exist mostly to take the bulk of that low-level bother away - some of which can certainly assist more declarative-style setups - though the core of those libraries is imperative.

Graphics pipeline

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)


The graphics pipeline refers to how exactly we go from scene information to final picture.

While the basic setup is the same in most places, the details are far from fixed; they evolve and will vary with time, hardware, libraries, and versions. Complete diagrams of modern pipelines look pretty damn complex, in part because recent hardware capabilities and extensions are often tacked on.


A graphics pipeline now regularly looks like:

  • Application/Command
    • alter and maintain scene state, and the implied graphics state that is exchanged with 3D hardware
    • the imperative/declarative distinction mostly applies at this level.
  • Geometry
    • the most basic geometry is triangles, based on vertices and how to attach those vertices
    • objects are defined in their own coordinate system, and transformed into global coordinates (exactly how depends a bit on the library)
    • some operations on every vertex
    • some operations on every triangle
      • clipping, culling
    • vertex shaders can be used here
    • result: triangles in screen space (coordinate system associated with the screen)
    • (much of the more basic geometry stuff can be done in parallel, which is what GPUs are good at)
  • Rasterization
    • color and lighting per vertex
    • texture per fragment
  • Texturing
    • historically combined with colored vertices in a few predefined ways. Now can be more flexible via fragment shaders
  • Fragment
    • owner, scissor, depth, alpha, stencil tests
  • Composite
  • Display


Rendering small bits of geometry, often triangles, has proven the common and often the easiest choice for real-time, parallelized rendering. There are other methods, such as raytracing, that tend to look better but are too complex to do in real time. Much of the recent (optional) fanciness like shadows, reflections, refraction and such is ray-based, computationally expensive, and hacked onto the basic geometry-based pipeline (and regularly done in cheating, cheaper-to-render ways).


Glossary:

  • fragment - like a pixel, but before occlusion decisions, so there are often various fragments for what to you is a single screen pixel. Fragments often include depth information, as this is necessary for blending and occlusion/opaque rendering.
    • ...which means geometry state is necessary even in immediate mode, and is hard to parallelize.
  • Framebuffer - data structure that holds information for each pixel.
    • What you see on screen is a framebuffer. There can be supporting framebuffers.
    • Double buffering basically means drawing into one framebuffer while the other is on-screen.
  • Shaders are programs that reside on the GPU. They are simple and often single-purpose
    • have been part of OpenGL since 2.0, and in DirectX since 8.0. Later hardware and later versions of these libraries support later shader model versions
    • This style may well replace some of the older styles of programming and scene rendering, largely because it is more flexible in the long run (though a little more complex for beginners)
    • In currently common graphics pipelines there are per-vertex shaders and per-fragment shaders (a.k.a. pixel shaders) (a minimal sketch follows after this list)
      • Vertex shaders are run for each vertex, primarily to project into 2D.
      • (geometry shading?)
      • Fragment shaders, part of rasterization, are used for fancier effects like lighting, bump mapping, shadows, reflections
      • some effects may require planned combination of both to work
    • General-purpose use (scientific calculation, password cracking) now usually uses shaders in their more general sense of 'a program running on the GPU', and bypasses the graphics pipeline and any rendering. (For some cross-reference there, see this table.)
      • Getting high throughput often requires no specific optimization, but does need some awareness of the hardware, e.g. how it executes things, moves memory, caches, and such. Code that you simply port just enough to get it running may perform miserably, and many things cannot be parallelized well in the first place.
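
A minimal sketch of the GL 2.0-era shader setup, assuming GL 2.0+ headers or an extension loader: one vertex shader that projects each vertex, one fragment shader that colors each fragment, using the legacy GLSL built-ins of that era (error checking via glGetShaderiv and friends omitted for brevity):

 #include <GL/gl.h>
 
 static const char *vsrc =
     "void main() {"
     "  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"  /* per-vertex: project into clip space */
     "}";
 static const char *fsrc =
     "void main() {"
     "  gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);"  /* per-fragment: a flat orange */
     "}";
 
 GLuint buildProgram(void) {
     GLuint vs = glCreateShader(GL_VERTEX_SHADER);
     glShaderSource(vs, 1, &vsrc, NULL);
     glCompileShader(vs);
 
     GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
     glShaderSource(fs, 1, &fsrc, NULL);
     glCompileShader(fs);
 
     GLuint prog = glCreateProgram();
     glAttachShader(prog, vs);
     glAttachShader(prog, fs);
     glLinkProgram(prog);
     return prog;  /* later: glUseProgram(prog), then draw as usual */
 }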


Unsorted

http://www.opengl.org/resources/features/KilgardTechniques/oglpitfall/


Note on Threading

GL related libraries

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

Note that none of these are really part of OpenGL. At best they are bundled in the same download, because they are common convenience, but they are not entangled.

  • GLU - ...Utility Library
    • graphics functions, mostly higher-level convenience tools built on the more basic GL things.
    • Includes things like mapping between screen and world coordinates, mipmap generation, tessellation, quadric surfaces, NURBS, and parametric generation of objects like spheres, cylinders, and disks (a sketch follows below).
    • frequently seen distributed with OpenGL.
  • GLUT - ...Utility Toolkit
    • Mostly OS interaction, such as window stuff and interaction with keyboard and mouse
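
A small sketch of GLU's convenience: a perspective camera and a sphere, without computing any vertices yourself (assumes an existing GL context):

 #include <GL/gl.h>
 #include <GL/glu.h>
 
 void setupCameraAndSphere(void) {
     glMatrixMode(GL_PROJECTION);
     glLoadIdentity();
     gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);  /* fov, aspect, near, far */
 
     glMatrixMode(GL_MODELVIEW);
     glLoadIdentity();
     gluLookAt(0.0, 0.0, 5.0,   /* eye position */
               0.0, 0.0, 0.0,   /* look-at point */
               0.0, 1.0, 0.0);  /* up vector */
 
     GLUquadric *q = gluNewQuadric();
     gluSphere(q, 1.0, 32, 16);  /* radius, slices, stacks */
     gluDeleteQuadric(q);
 }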