Category Archives: OpenGL

[OpenGL part 5] Matrix operations ( translate, scale, rotate )

Introduction to Matrix Operations


In this part, we’ll be looking at basic matrix math: how we create the various transformation matrices we use for rotating, moving and scaling. If you really want, you can skip to the second part, where we look at some code and let glm do the maths for us. But I really recommend actually learning the basics of matrix operations.

Matrices


Matrices are basically small tables of numbers. They are used in graphics programming for transforming vectors and objects; we use them to move, rotate and scale objects. The math behind them is relatively simple, though some of it can be hard to wrap your mind around at first. And there are several mistakes that are easy to make and very hard to debug, so it’s very useful to know a bit about them and how they work.

The basics


In all simplicity, a matrix is just a table of numbers, like the one you see below. With it comes a lot of simple operations we can use to transform objects ( like we saw with the model/view/projection matrix in the last part. ) But even though the idea is simple and the operations are basic ( usually just addition and multiplication ), they can quickly become confusing.

\begin{bmatrix} 1\quad0\quad0\quad0 \\0\quad1\quad0\quad0 \\0\quad0\quad1\quad0\\0\quad0\quad0\quad1\end{bmatrix}

The unit matrix


What you see above is what we call the “unit matrix”. You can look at it like the base or default matrix. The idea behind it is that anything you multiply with it will remain the same ( we’ll look at this soon. ) That makes the unit matrix the natural starting point, and it’s also used in a few different, more complex matrix expressions.

Matrix – vector multiplication


The simplest operation we’ll be looking at is multiplying a matrix with a vector. This is quite straightforward, though there will be a lot of numbers to keep track of, so read through it a few times and get comfortable with it before proceeding. The formula for multiplying a 3×3 matrix with a 3d vector is as follows :
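\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} a_{11}x + a_{12}y + a_{13}z\\ a_{21}x + a_{22}y + a_{23}z\\ a_{31}x + a_{32}y + a_{33}z \end{bmatrix}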

Let’s isolate just the top row ( all rows are multiplied in the same way )
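\begin{bmatrix} a_{11} & a_{12} & a_{13} \end{bmatrix} \begin{bmatrix} x\\ y\\ z \end{bmatrix} = a_{11}x + a_{12}y + a_{13}z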

We can multiply 4×4 matrices and 4d vectors in the same way :
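\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ w \end{bmatrix} = \begin{bmatrix} a_{11}x + a_{12}y + a_{13}z + a_{14}w\\ a_{21}x + a_{22}y + a_{23}z + a_{24}w\\ a_{31}x + a_{32}y + a_{33}z + a_{34}w\\ a_{41}x + a_{42}y + a_{43}z + a_{44}w \end{bmatrix}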

Step-by-step guide


  1. Look at the first row
    1. Multiply the first matrix value on that row with the first vector value (x)
    2. Multiply the second matrix value on that row with the second vector value (y)
    3. Multiply the third matrix value on that row with the third vector value (z)
    4. Continue until there are no more numbers on that row of the matrix
    5. Add all the numbers together
  2. Repeat for the next row until there are no more rows
  3. Done!

You might have noticed that this requires the number of values in each row of the matrix to be the same as the number of values in the vector. If we have 3 values in each row of the matrix, the vector needs to have 3 values as well. This isn’t really an issue for us, so we won’t be looking at the other cases here.
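In code, the step-by-step guide above could look something like this minimal sketch ( using plain C arrays instead of glm ) :

// Multiply a 4×4 matrix with a 4d vector, row by row
void MultiplyMatrixVector( const float matrix[4][4], const float vector[4], float result[4] )
{
    for ( int row = 0; row < 4; ++row )
    {
        result[row] = 0.0f;

        // Multiply every value on this row with the matching vector value and add them together
        for ( int col = 0; col < 4; ++col )
            result[row] += matrix[row][col] * vector[col];
    }
}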

The unit matrix


As mentioned earlier, the unit matrix won’t change anything we multiply with it. We can see this by doing a multiplication with placeholder values for x, y, z and w.

So now that we’ve learned how to do the multiplication, let’s test it out and see if it’s really true! Let’s try to multiply the unit matrix by the vector [1.03, -4.2, 9.81, 13] :
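\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \phantom{-}1.03\\ -4.2\\ \phantom{-}9.81\\ \phantom{-}13 \end{bmatrix} = \begin{bmatrix} \phantom{-}1.03\\ -4.2\\ \phantom{-}9.81\\ \phantom{-}13 \end{bmatrix}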

As you can see, we end up with the same vector as we began with, [1.03, -4.2, 9.81, 13].

Matrix – matrix multiplication


Multiplying a matrix with a vector was quite easy. Now let’s make things a tiny bit more difficult and multiply a matrix with a different matrix. This is one of the most important basic operations we need to do. It plays a huge role in moving/scaling/rotating objects. And, as we’ll see in a later part, it’s a very important part of lighting.

Matrix multiplication depends a bit on the sizes of the two matrices. But we’ll simplify things and say that we’ll always be working with square matrices ( 2×2, 3×3, 4×4 ). Firstly, let’s look at the generic formula for 2×2 matrices :
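\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22}\\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}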

Now this seems a bit complicated, so let’s look at how to calculate just the first number :

matrix multi first cell

As you can see, for the first part of the multiplication we use the first numbers of both matrices ( A11 and B11 ). But for the second part, we use the next number in the same row for matrix A ( A12 ) and the next number in the same column for matrix B ( B21 ). This pattern repeats for the next number on that row like this :

matrix multi second cell

Now we move to the next row and repeat the process :

matrix multi third cell

And finally, the last cell :

matrix multi fourth cell


This can also be extended to a 3×3 matrix like this :
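\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13}\\ b_{21} & b_{22} & b_{23}\\ b_{31} & b_{32} & b_{33} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} & a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32} & a_{11}b_{13} + a_{12}b_{23} + a_{13}b_{33}\\ \cdots & \cdots & \cdots\\ \cdots & \cdots & \cdots \end{bmatrix}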

This is a lot of numbers, but if you look closely, you’ll see it’s a lot like the previous one, only with an extra number in each row and column. So in this case, for the first number, all we did was add a13 * b31 to the original operation ( which was a11 * b11 + a12 * b21 ). For the second number on the top row, we added a13 * b32 to the original operation. The third number on the top row is new, but it follows the same pattern :

For all the numbers on the same row from matrix A

a11, a12, a13

and all the numbers in the same column from matrix B

b13, b23, b33

multiply each pair in the same position and add them together.

a11 * b13 + a12 * b23 + a13 * b33
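
In code, this could look something like the following minimal sketch ( again with plain C arrays ) : every cell of the result pairs a row from the first matrix with a column from the second.

// Multiply two 4×4 matrices
void MultiplyMatrices( const float a[4][4], const float b[4][4], float result[4][4] )
{
    for ( int row = 0; row < 4; ++row )
    {
        for ( int col = 0; col < 4; ++col )
        {
            result[row][col] = 0.0f;

            // Pair this row of a with this column of b
            for ( int i = 0; i < 4; ++i )
                result[row][col] += a[row][i] * b[i][col];
        }
    }
}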

The unit matrix again


Now let’s try multiplying a matrix with the unit matrix again. We should get the same result as we started with, but let’s see…
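\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \begin{bmatrix} a & b\\ c & d \end{bmatrix} = \begin{bmatrix} 1a + 0c & 1b + 0d\\ 0a + 1c & 0b + 1d \end{bmatrix} = \begin{bmatrix} a & b\\ c & d \end{bmatrix}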

Success! This product is exactly the same as we started with!

Ordering matters


A very important part of matrix operations is the ordering. With plain numbers, ordering is not important: it does not matter in which order you multiply two numbers, 123 * 321 gives the exact same result as 321 * 123. But when it comes to matrices, this is not true. Let’s look at a very simple example with two small matrices :
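\begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1\\ 4 & 3 \end{bmatrix}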

But if we flip the ordering…
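\begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4\\ 1 & 2 \end{bmatrix}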

..we end up with something completely different.

Because of this, it is important to keep track of the ordering, otherwise, you’ll end up spending hours debugging!

Using matrices to change vertexes


Now that we’ve learned the basic math behind matrices, it’s time to learn how to use them. We’ll be using them to move, scale and rotate objects. We do this by multiplying each vertex by a matrix. After multiplying the vertex, we get a new vertex back that has been moved/scaled/rotated. After doing this to all the vertexes of the object, the entire object will have been moved/scaled/rotated. Creating these matrices is quite easy, though there will be a few numbers to keep track of. Let’s look at the operations one by one.

Moving (translating)


The matrix we use for moving an object is quite simple. It is defined like this :

\begin{bmatrix}
1 & 0 & 0 & dx\\
0 & 1 & 0 & dy\\
0 & 0 & 1 & dz\\
0 & 0 & 0 & 1\\
\end{bmatrix}

Where dx is the movement in the x direction, dy the movement in the y direction and dz the movement in the z direction. So, for example, this matrix :

\begin{bmatrix}
1 & 0 & 0 & \phantom{-}9.2\\
0 & 1 & 0 & \phantom{-}1.2\\
0 & 0 & 1 & -3.7\\
0 & 0 & 0 & \phantom{-}1\\
\end{bmatrix}

Will move the object 9.2 in the x direction, 1.2 in the y direction and -3.7 in the z direction.

Now this might look a bit familiar. Let’s compare it to the unit matrix :

\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{bmatrix}

That’s right. It’s the same except for the dx, dy and dz part. This will come into effect when we do the actual translation.

Since we are using a 4×4 matrix, it is easier to use a 4d vector. But that raises a new question; what about the last value? The x, y and z are, of course, the position. But there is a final number we haven’t cared about yet. As it turns out, this has to be 1, and we’ll find out why now.

Let’s look at an example. Say we have the vector [11, 1.5, -43]. First we need to add the last value, 1, so we end up with :

\begin{bmatrix}
\phantom{-}11\\
\phantom{-}1.5\\
-43\\
1
\end{bmatrix}

Now for the translation matrix. Let’s use the one from above, which will move the object 9.2 in the x direction, 1.2 in the y direction and -3.7 in the z direction.

\begin{bmatrix}
1 & 0 & 0 & \phantom{-}9.2\\
0 & 1 & 0 & \phantom{-}1.2\\
0 & 0 & 1 & -3.7\\
0 & 0 & 0 & \phantom{-}1\\
\end{bmatrix}

Finally we can try the translation. Translating an object is simply multiplying the vertex by the translation matrix :
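\begin{bmatrix} 1 & 0 & 0 & \phantom{-}9.2\\ 0 & 1 & 0 & \phantom{-}1.2\\ 0 & 0 & 1 & -3.7\\ 0 & 0 & 0 & \phantom{-}1 \end{bmatrix} \begin{bmatrix} \phantom{-}11\\ \phantom{-}1.5\\ -43\\ \phantom{-}1 \end{bmatrix} = \begin{bmatrix} \phantom{-}11 + 9.2\\ \phantom{-}1.5 + 1.2\\ -43 - 3.7\\ \phantom{-}1 \end{bmatrix} = \begin{bmatrix} \phantom{-}20.2\\ \phantom{-}2.7\\ -46.7\\ \phantom{-}1 \end{bmatrix}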

This might seem like a bit of an overcomplication. Why not just add the numbers? Just adding the numbers would be practical if we were only moving the object. But we can do other things like scaling and rotating, and by using a matrix, we can combine all of these into a single operation. So let’s look at the next operation, scaling.

Making things bigger or smaller (scaling)


The second operation we’ll look at is how to make objects larger or smaller. This is quite similar to translating objects. For scaling we have the base matrix:

\begin{bmatrix}
sx & 0 & 0 & 0\\
0 & sy & 0 & 0\\
0 & 0 & sz & 0\\
0 & 0 & 0 & 1
\end{bmatrix}

And just like with translation matrices, we multiply our vector with this matrix to get the scaled vector back.

Here sx, sy, sz are the scale factors, which are the numbers we need to multiply with in order to get the result :

  • If you don’t want to scale it at all, you set the scale factor to 1.
  • If you want to double the size you have a scale factor of 2, for tripling you have 3, etc…
  • If you want to make it 50% larger, you have a scale factor of 1.5, 25% larger the scale factor is 1.25, etc…
  • If you want to halve it, you have a scale factor of 0.5, to make it 75% smaller the scale factor is 0.25, etc…

Let’s first look at an example:

Say we have the vertex [2.1, 3.4, -9.5] and we want to scale it like the following :

  • Make it 70% smaller in x direction
    • Scale factor becomes 1.0 - 0.7 = 0.3
  • Make it 80% larger in y direction
    • Scale factor becomes 1.8
  • Triple the size in z direction
    • Scale factor becomes 3.0

This gives us the scale factors [0.3, 1.8, 3.0] and the vertex [2.1, 3.4, -9.5]. Let’s plug these into the matrix operation :
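\begin{bmatrix} 0.3 & 0 & 0 & 0\\ 0 & 1.8 & 0 & 0\\ 0 & 0 & 3.0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \phantom{-}2.1\\ \phantom{-}3.4\\ -9.5\\ \phantom{-}1 \end{bmatrix} = \begin{bmatrix} \phantom{-}0.63\\ \phantom{-}6.12\\ -28.5\\ \phantom{-}1 \end{bmatrix}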

This gives us the vertex [0.63, 6.12, -28.5]… which tells us that the vertex has been moved :

  • Closer to the center in the x direction ( because the object gets smaller in the x direction )
  • A little further away from the center in the y direction ( which means the object gets larger in the y direction )
  • A lot further away from the center in the z direction ( getting a lot larger in the z direction )

If we apply this to all the vertices in an object, we find that the center of the object remains the same. So we’re not actually moving the object, we’re just moving the individual vertices closer to or further away from the center.

    Rotating


    Now this is where things get a little complicated. We need to rotate the object using numbers calculated with sin and cos. The formula for calculating the rotated x and y is as follows :


    x2 = cos β * x1 − sin β * y1
    y2 = sin β * x1 + cos β * y1

    I won’t go into details about why this formula works, but you can read about it here.

    Specifying axis


    In order to rotate a 3d object, we need an axis to rotate it around. Take a look at the dice below :

    It is laid out like the following :

    Now imagine we want to rotate it so that we see other numbers. In order to do this, we need an axis to rotate it around. Imagine we stick a toothpick through this dice from 5 to 6 like the following :

    Now we can rotate the dice 90° down and we end up with something like this :

    [Note: If anyone has any tips or can in any way help me improve these illustrations, it’d be much appreciated]

    The math


    When it comes to the actual math, it’s a bit more complicated. I won’t be explaining where we get the matrices for rotation, but if you’re interested, you can read more about it here.

    Like with translating and scaling, we use a matrix to do the rotation. But the matrix itself is a bit complex, and it’s a little different depending on which axis you rotate around :

    For X axis

    \begin{bmatrix}
    1 & \phantom{-}0 & 0 & 0\\
    0 & \phantom{-}\cos θ & \sin θ & 0\\
    0 & -\sin θ & \cos θ & 0\\
    0 & \phantom{-}0 & 0 & 1\\
    \end{bmatrix}

    For Y axis

    \begin{bmatrix}
    \cos θ & 0 & -\sin θ & 0\\
    0 & 1 & \phantom{-}0 & 0\\
    \sin θ & 0 & \phantom{-}\cos θ & 0\\
    0 & 0 & \phantom{-}0 & 1\\
    \end{bmatrix}

    For Z axis

    \begin{bmatrix}
    \cos θ & -\sin θ & 0 & 0\\
    \sin θ & \phantom{-}\cos θ & 0 & 0\\
    0 & \phantom{-}0 & 1 & 0\\
    0 & \phantom{-}0 & 0 & 1\\
    \end{bmatrix}

    Why are they so different?


    The reason why they look like this becomes clearer if we compare them with the unit matrix :

    \begin{bmatrix}
    1 & 0 & 0 & 0\\
    0 & 1 & 0 & 0\\
    0 & 0 & 1 & 0\\
    0 & 0 & 0 & 1\\
    \end{bmatrix}

    You’ll see that the formula for rotating around the x axis :

    \begin{bmatrix}
    1 & \phantom{-}0 & 0 & 0\\
    0 & \phantom{-}\cos θ & \sin θ & 0\\
    0 & -\sin θ & \cos θ & 0\\
    0 & \phantom{-}0 & 0 & 1\\
    \end{bmatrix}

    Has the same first column and row [1, 0, 0, 0] as the unit matrix. If you look at how matrices are multiplied, you’ll see that this means the final x coordinate won’t change.

    And if you look at the matrices for rotating around y and z, you’ll see the same. The y rotation matrix has the same second column and row as the unit matrix [ 0, 1, 0, 0 ], and the one for the z axis has the same third column and row as the unit matrix [ 0, 0, 1, 0 ]. This means that rotating around the z axis doesn’t change the z coordinate, and rotating around the y axis doesn’t change the y coordinate.

    Imagine putting a dice on a table. Now turn the dice clockwise or counter-clockwise without lifting the dice in any way. If you define the z axis to be the height above the table, you’re now rotating the dice around the Z axis. And since you’re not lifting it, the z coordinate remains the same.

    Example


    Let’s make a matrix for rotating the point [2, 4, 8] by 30 degrees around the x axis.
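    With θ = 30° we get cos θ ≈ 0.866 and sin θ ≈ 0.5, so plugging the numbers into the x axis matrix from above gives us :

    \begin{bmatrix} 1 & \phantom{-}0 & 0 & 0\\ 0 & \phantom{-}0.866 & 0.5 & 0\\ 0 & -0.5 & 0.866 & 0\\ 0 & \phantom{-}0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2\\ 4\\ 8\\ 1 \end{bmatrix} = \begin{bmatrix} 2\\ 0.866 * 4 + 0.5 * 8\\ -0.5 * 4 + 0.866 * 8\\ 1 \end{bmatrix} ≈ \begin{bmatrix} 2\\ 7.46\\ 4.93\\ 1 \end{bmatrix}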

    As you can see, the y and z coordinates have changed, but the x coordinate is the same. This is due to how matrix multiplication works.

    Other axes


    You might wonder; what if I want to rotate the object around a combination of two or three axes? Well, that’s a bit more complex, and I won’t go into the math here. But we’ll see how we can use glm to specify an exact axis of rotation below.

    Putting it all together


    Before we look at how to do these operations using code, we need to look at how to do it by hand. Or by online matrix calculators, in this case… But why? As I mentioned earlier, ordering is important here. Do things in the wrong order, and you get weird results.

    In the previous post, we looked at object space, world space, view/camera space and projection space. Let’s skip the last two for now and focus on the object and world space.

    Remember that object space is basically the model represented as a set of coordinates around the origin [0, 0, 0], and that world space is the position of the object in the game world. So if the object has moved 10 units to the right, it’ll have the position [10, 0, 0], which means we have to move it there. This is where the translation matrix comes in! The object could also have turned around ( rotated ) and grown ( scaled ). Since the object is defined in object space ( vectors around [0, 0, 0] ) and this will never change, we need to move/scale/rotate the object every time. So we need to multiply every coordinate with this matrix in order to place/scale/rotate it correctly.

    Luckily, we can just multiply the matrices together and reuse the combined matrix until the object moves. But this is also where we need to be careful about getting the ordering right. Let’s start by moving and scaling.

    Example – Wrong way


    Say we want to scale by 2 units in every direction and move 3 units in every direction. Remember that the scale matrix looks like this :

    \begin{bmatrix}
    sx & 0 & 0 & 0\\
    0 & sy & 0 & 0\\
    0 & 0 & sz & 0\\
    0 & 0 & 0 & 1
    \end{bmatrix}

    Filling in numbers :

    \begin{bmatrix}
    2 & 0 & 0 & 0\\
    0 & 2 & 0 & 0\\
    0 & 0 & 2 & 0\\
    0 & 0 & 0 & 1
    \end{bmatrix}

    And the translation matrix :

    \begin{bmatrix}
    1 & 0 & 0 & dx\\
    0 & 1 & 0 & dy\\
    0 & 0 & 1 & dz\\
    0 & 0 & 0 & 1\\
    \end{bmatrix}

    Filling in the numbers, we get :

    \begin{bmatrix}
    1 & 0 & 0 & 3\\
    0 & 1 & 0 & 3\\
    0 & 0 & 1 & 3\\
    0 & 0 & 0 & 1\\
    \end{bmatrix}

    Now let’s multiply them :
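    \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 3\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & 3\\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 & 6\\ 0 & 2 & 0 & 6\\ 0 & 0 & 2 & 6\\ 0 & 0 & 0 & 1 \end{bmatrix}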

    Let’s analyse this. Looking at the scale numbers, we see 2, 2, 2, as we expected. But when we look at the translation, we see 6, 6, 6! That’s wrong! We wanted 3, 3, 3, not 6, 6, 6!

    The reason why this happens is that we put the scale matrix first in the multiplication when we should have started with the translation matrix instead. So let’s reverse the order of the operations and try again :
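    \begin{bmatrix} 1 & 0 & 0 & 3\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & 3\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 & 3\\ 0 & 2 & 0 & 3\\ 0 & 0 & 2 & 3\\ 0 & 0 & 0 & 1 \end{bmatrix}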

    That’s more like it. We see that we move by 3 and scale by a factor of 2.

    When we add rotation, we can run into the same problem. Rotation matrices rotate things around the origin [0, 0, 0]. So if the object is first moved and then rotated, the rotation still happens around the origin. And since we have already moved the object away from the origin, it will orbit the origin ( much like a planet ) instead of rotating around its own center.

    The correct order


    In our example ( and in most cases ) the order we want things to happen in is scale -> rotate -> translate. You might think you should multiply the matrices in that order, but matrix multiplications take effect in the opposite order of how they are written. So the multiplication chain starts with the last thing you want to happen ( translate ) and ends with the first ( scale ) : translate * rotate * scale.

    Using glm to do matrix operations


    Luckily, we don’t have to do all of this ourselves. In fact, glm does nearly all the math for us, including rotation ( fortunately ). All of these functions take a glm 4×4 matrix, called mat4, which is basically just a 4×4 array representing a matrix.

    You can find the documentation here.

    glm::translate


    Takes a matrix, translates it and returns it.

    Parameters :

    • glm::mat4 original – the matrix you want to translate
    • glm::vec3 dist – the distance to move

    Return :

    The matrix original translated by dist, like we looked at earlier.
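    A quick example ( the variable names here are just for illustration ) :

    glm::mat4 original( 1.0 ); // the unit matrix
    glm::vec3 dist( 9.2, 1.2, -3.7 );
    glm::mat4 result = glm::translate( original, dist );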

    glm::scale


    Takes a matrix, scales it and returns it.

    Parameters :

    • glm::mat4 original – the matrix you want to scale
    • glm::vec3 scale – the factors to scale by

    Return :

    The matrix original scaled by scale, like we looked at earlier.
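    For example ( again with illustrative variable names ) :

    glm::mat4 original( 1.0 );
    glm::vec3 scale( 0.3, 1.8, 3.0 );
    glm::mat4 result = glm::scale( original, scale );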

    glm::rotate


    Takes a matrix, rotates it around an axis and returns it.

    Parameters :

    • glm::mat4 original – the matrix you want to rotate
    • double angle – the amount/angle you want to rotate by ( radians )
    • glm::vec3 axis – the axis to rotate by

    Return :

    The matrix original rotated by angle around the axis axis, like we looked at earlier.
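    For example ( illustrative names again; note that the angle is in radians ) :

    glm::mat4 original( 1.0 );
    glm::vec3 axis( 1.0, 0.0, 0.0 ); // rotate around the x axis
    glm::mat4 result = glm::rotate( original, 0.785f, axis ); // 0.785 radians ≈ 45°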

    Putting it all together


    Now that we have looked at the functions, we can easily put them all together.

    This is a simple class that shows how you can use all the operations we’ve looked at.
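    The full class is included with the source code linked below. As a rough sketch ( the class and member names here are my own, not necessarily the ones from the actual source ), it could look something like this :

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    class Transform
    {
        public:
        // Build the model matrix as translate * rotate * scale,
        // so the scale takes effect first and the translation last
        glm::mat4 GetModelMatrix() const
        {
            glm::mat4 model( 1.0 ); // start with the unit matrix
            model = glm::translate( model, position );
            model = glm::rotate( model, angle, axis );
            model = glm::scale( model, scale );
            return model;
        }

        glm::vec3 position = glm::vec3( 0.0f );
        glm::vec3 scale = glm::vec3( 1.0f );
        float angle = 0.0f; // rotation angle in radians
        glm::vec3 axis = glm::vec3( 0.0f, 0.0f, 1.0f ); // the axis to rotate around
    };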

    File notes


    You can find the source code for an application that lets you move/scale/rotate a cube here.

    Images with colored matrix/vector multiplications have been made using calcul.com

    Dice illustration has been made using Inkscape


    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me : olevegard@headerphile.com

    [OpenGL Part 4] 3D Basics

    Introduction


    So far, we’ve only been dealing with 2D objects, and we’ve had a look at shaders. But everything has been in two dimensions; now it’s time to make the step into the 3D world. In order to do this, there are a few things we need to do. The main thing is moving the object to the correct “space”. The spaces are an important, yet confusing, part of 3D rendering. Think of each space as a separate coordinate system, or a separate world. Any object has a coordinate in each of the spaces.

    Object space


    In this part we’ll be making a cube and rendering it. To make our cube we need 8 coordinates which make up the 8 corners of the cube :

    The center of the object is often [0, 0, 0], and each of the vectors is usually something close to zero. In our case, the cube has one corner that’s [-1, -1, -1] and another one that’s [1, 1, 1], and so on…

    blogpic_object_space

    So basically, these are the coordinates that describe how the model looks.

    World space


    Let’s look at the cube again. It needs to have a position that says where it is in the game world, so that it will appear in one specific spot. If you’ve ever used a map editing program, you can see the exact position of every object. These are the world space coordinates. When programming gameplay elements like collision detection, this is the coordinate system we’ll be using. The idea behind it is that every object has its own position in the world space, and that position is the same no matter how you look at it.

    blogpic_world_space

    This is an example of a world space that has a cube close to the center and a player to the left of it. The example is for 2D worlds for simplicity, but it would be exactly the same in 3D, only with an extra dimension.

    View space / camera space


    Whereas the world space location is universal and the same for everyone, the view/camera space is different. It basically tells where the objects are in relation to the player and the direction the player is looking. It is similar to pointing a camera at an object. The center of the image would have the position [0, 0, 0], and every other coordinate is defined around that. These are known as camera or view space coordinates.

    blogpic_view_space

    Compare the previous image with this. In the previous image, the cube ( [-1, -1] ) is to the left of and behind the player ( [-2, 0] ). So if you look at the world space from above, that’s how it looks. But if you look at it from the view space of the player, the player will be in the center, and the cube ( which is still at [-1, -1] in world space ) will be to the right. Note that the object hasn’t moved around in the world, and the player hasn’t moved either. All we did was look at it with the player as the center instead of the center of the world.

    Another thing about the camera space is that it’s relative to the direction the player or camera is facing. So imagine the player is looking along the x-axis ( towards the world space center. ) Then the player starts rotating right. Soon he’ll see the object. Since he’s rotating right, he’ll see the object moving to his left. Now imagine him stopping. What he can see is the world in his own view space. Another player at another location, looking at another point, would see the world in his own view space.

    This might be a bit confusing, but it’ll get clearer soon.

    Projection space


    Finally, we have the projection space. This is a little different: it describes the final position on the screen the vertex will have. Unlike the other spaces, this is always a 2D coordinate, because the screen is a 2D surface. You can look at it like the lens of the camera. The camera looks at a 3D world, and the lens enables it to create a 2D image. You can look at it like the 2D version of the view space. When the camera looks at an object, it sees the view space. But what ends up on the screen is in 2D, and that is what we refer to as the projection space.

    Just like cameras can have different lenses, there are different ways of converting camera space coordinates to projection space. We will look at this later, when we look at how to convert from space to space.

    An illustration of view and projection space


    Below is an illustration of the view and projection space. Hopefully it’ll help make things clearer :

    View space and projection space

    The big pyramid is the view space. It’s all that we can see. In this case it’s just three cubes.

    The 2d plane with the 3 cubes represented in 2d is the projection space. As you can see, it’s the exact same as the view space, only in 2d.

    Matrices


    In order to transform the vectors from one space to another, we use a matrix ( plural : matrices ). A matrix can be used to change an object in various ways, including moving, rotating and scaling. A matrix is a 2-dimensional mathematical structure, quite similar to a table :

    \begin{bmatrix} 1\quad0\quad0\quad0 \\0\quad1\quad0\quad0 \\0\quad0\quad1\quad0\\0\quad0\quad0\quad1\end{bmatrix}

    This is what’s called an identity matrix. You can look at it like a skeleton or an “empty” matrix. It won’t change the object at all. So when we initialize a matrix, this is what we initialize it to.

    If we had initialized it to just 0 for all values, it would have changed the object ( every vertex would collapse to [0, 0, 0]. ) We’ll look into the math involved for matrices in the next part. For now, just remember that an identity matrix is a default matrix that doesn’t change the object it’s used on.

    Instead we’ll look at how to work with matrices. And for that purpose, we use glm.

    glm


    In order to do graphics programming, we will eventually need to do more mathematics involving vectors and matrices. We really don’t want to do this manually, because there are lots of operations we’d have to implement ourselves. Instead, we’ll use a tried and tested library that does the mathematical operations for us. glm, or OpenGL Mathematics, is a library made for doing the maths for graphics programming. It’s widely used and does just about everything we need. It is also 100% platform independent, so we can use it on Linux, Windows and Mac.

    Installation


    The libraries we have been dealing with up until now have required both header files and library files. glm, however, only requires header files. This makes installation very easy, even on Windows.

    Linux + Mac ( the automatic way)


    Both Linux and Mac might have glm available from the package manager. If that’s the case, the process is the same as for SDL. Just open the terminal and install glm like you would any other program or package. If the package is not found, we need to “install” it ourselves.
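    On Debian/Ubuntu-based distros the package is most likely called libglm-dev, and with Homebrew on Mac it’s probably just glm ( the exact name can vary with your package manager ) :

    sudo apt-get install libglm-dev
    brew install glm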

    Windows + ( Linux and Mac the slightly harder way)


    If you’re on Windows ( or Linux / Mac and the first step didn’t work, ) we need to install the library ourselves. Fortunately this is relatively easy.

    Downloading

    The first step is to download glm. You can do that here. Scroll to the bottom and download the format you want ( .zip or .7z. ) If you have a tool for dealing with package files, you should have no problems extracting it. Windows has built-in support for .zip, so choose this if you’re unsure. If none of the options work, you can install WinRAR or 7-Zip.

    Installing

    Now extract the package anywhere you want and open the folder. You should find another folder named glm. In it there should be a lot of .hpp files ( think of these as your regular header ( .h ) files. )

    For Windows :
    Take the folder named glm ( the one containing the .hpp files ) and copy it to where you put the SDL2 header files, so that it now contains both the SDL2 header file folder and the glm header file folder. Once that’s done, you should be able to use it directly ( since we’ve already specified the folder with all our includes. )

    For Linux and Mac:
    Take the folder named glm ( the one containing the .hpp files ) and copy it to /usr/include/ so that you end up with a folder called /usr/include/glm/ that contains all the glm header files.

    Since this is a system directory, you won’t be able to put them here the regular way. But there are a few options.

    If your file browser has a root mode, you can use that ( just be careful! )
    If you can’t find it, you need to use the terminal ( after all, you are on Linux! )

    You can use the cp command to do this. Most likely you can do it like this :
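    sudo cp -r glm /usr/include/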

    What does this do?

    sudo is short for “super user do”. This is needed because it’s a system folder. sudo basically tells the operating system “I know what I’m doing”. Use it with caution!

    The cp is the terminal command for copying.

    The -r option is short for recursive. It makes the cp command also copy all the sub folders and their files ( without it, it’d only copy the files inside the glm folder and ignore all sub folders. )


    In order to make sure you got it right, run the command sudo ls /usr/include/glm. It should now list the .hpp files and folders just like in the folder we looked at earlier.

    ( Please tell me if this doesn’t work on Mac, I haven’t been able to test it there yet… )

    We can now include them in the same way as the SDL2 header files : #include <glm/vec4.hpp>. And since glm only uses header files, we don’t need to change our compile command!

    Using glm to do matrix operations


    Using the OpenGL Mathematics library ( glm ) is quite easy. There are just a few simple functions we need to do what we want.

    First of all, it’s just a mathematics library, so there’s no initialization code. That means we can jump straight to the mathematical functions.

    Matrices and vertices


    Fundamentally, vertices and matrices are very simple constructions in glm. They’re just arrays with one element for each value. So a 3d vector has 3 elements, a 4d vector has 4, and so on. And it’s similar for matrices.

    A 3×3 matrix has 9 elements, arranged in a 2d array like so : float matr33[3][3]. Similarly, a 4×4 matrix has 16 values and can look like this : float matr4[4][4]. glm uses float types instead of double, but this can be changed if you want to.

    Let’s have a look at the various functions we can use with the vectors in glm.

    Creating a vector


    The vector object in glm has several constructors, but we’re just gonna look at the simplest one :
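    glm::vec4 vec( value );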

    This will set all the values of the vector to value. So
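    glm::vec4 vec( 1.3 );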

    gives you the vector [1.3, 1.3, 1.3, 1.3]

    Creating an identity matrix


    When it comes to matrices, we will be dealing with several different types. First we’ll look at creating an identity matrix ( like the one we saw above. )

    The simplest type of matrix is the identity matrix ( as we saw above. ) There are two simple ways of making them :
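    glm::mat4 identityMatrix( 1.0 ); // passing 1.0 to the constructor gives an identity matrix
    glm::mat4 identityMatrix2 = glm::mat4( 1.0 ); // the same thing, written with an explicit assignment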

    Or for 3×3 matrices :
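    glm::mat3 identityMatrix( 1.0 );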

    Both of these produce an identity matrix, which you can look at as a default value for matrices. It can also be used for resetting a matrix.

    translation matrix


    In addition to the identity matrix, we’ll be looking at translation matrices. A translation matrix is used to move an object by a certain amount. Remember above, when talking about world space, we saw that each object needs its own position in the world? This is what the translation matrix is for. We use it to move a single object to the position it’ll have in the world space. Every object in your game world needs to be moved to a position in the world space, and to move it we use a translation matrix.

    In addition to translating ( or moving ) an object, we can also scale and rotate it. The combination of all of these operations on a single object is called the model matrix. We’ll be using the name model matrix, but we’ll be looking at rotating and scaling in a later post.

    glm::translate


    Here is how we use glm to create a translation matrix :
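    // The single-argument version ( this one is found in <glm/gtx/transform.hpp> ) :
    glm::mat4 glm::translate( glm::vec3 d );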

    Parameters :

    • glm::vec3 d – the distance to move

    The vec3 vector specifies the distance to move in each direction. So, for instance, [ 1, 0, -2 ] creates a matrix that can move an object

    • 1 unit in x direction
    • 0 units in y direction
    • -2 units in z direction

    If you specify the vector [ 0, 0, 0 ], you’ll end up with a matrix that doesn’t translate the object at all, nor change it in any way. So in effect, you’ll end up with just an identity matrix.

    Let’s look at a very simple example of how to create a translation matrix :
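    glm::vec3 dist( 1.0, 0.0, -2.0 );
    glm::mat4 translationMatrix = glm::translate( dist ); // moves 1 unit in x and -2 units in z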

    So how do we use it? Well that’s a bit more complicated so we’ll look at this later in the post.

    view matrix


    Now that we’ve placed the object in world space, we need to place it in the camera/view space. This is a bit more tricky, because we need to set both the position of the camera and where it is pointing.

    It also has what’s called an up vector. This is used to set which direction is up for the camera. We’ll just leave it at [0, -1, 0], which is the most common value. Since we won’t really use it, it’s not something you need to read about. But if you want to know more about it, check out the spoiler text.

    The up vector

    Think of it as how the camera itself is rotated. For instance, the camera could be turned up and down, or tilted to the side. Doing so would also change how the coordinate system works, which is logical. If you turn the camera upside down, positive x would be towards the left and negative towards the right!

    A possible use for this is if the player is hanging upside down. Then you could just change the up vector, which would rotate everything the player sees.


    Parameters :

    • glm::vec3 position – the position of the camera
    • glm::vec3 center – the direction the camera is facing ( first paragraph )
    • glm::vec3 up – the tilt of the camera ( second paragraph )

    Here’s the setup we’ll use :

    • position = [ 0, 0, -5 ]
      • x and y is at center, z is 5 units backwards
    • center = [ 0, 0, 0 ]
      • Looking straight ahead, at the center of the world
    • up = [ 0, -1, 0 ]
      • Upside-down, same as in SDL2

    And here’s the code for creating that matrix :
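    Assuming the function in question is glm::lookAt ( it takes exactly the three parameters listed above ) :

    glm::vec3 position( 0.0, 0.0, -5.0 );
    glm::vec3 center( 0.0, 0.0, 0.0 );
    glm::vec3 up( 0.0, -1.0, 0.0 );
    glm::mat4 viewMatrix = glm::lookAt( position, center, up );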

    When we’re rendering the scene, we’ll multiply all vertexes by this matrix. When we do that, things will be moved so that [ 0, 0, 0 ] is what the camera is looking at, like we saw above.

    projection matrix


    The view matrix dictates the position and orientation of the game camera ( what will end up on screen. ) But there is another matrix we need in order to do 3D : the projection matrix.

    Just like a camera can have many different lenses, a projection matrix can be set up in different ways.

    Parameters :

    • float const &fov – the field of view ( how far out to the sides the player can see )
    • float const &aspect – same as screen formats ( 16:9, 16:10, 3:4, etc… ) Changing this will stretch the image.
    • float const &near – the closest to the camera something can be. Anything closer will be cut off.
    • float const &far – the furthest away something can be. Anything further away will be cut off.

    The fov parameter is said to be specified in degrees, and that’s what we’ll use. But it seems some have issues with glm wanting radians instead. Radians are just an alternative to degrees; you can read more about them here. So if degrees don’t work, you can try specifying 3.14 * 0.25 = 0.785 for 45°.

    Tip : if you own Minecraft you can experiment with this by going to options and changing the fov there!

    The near and far arguments will cut off whatever is closer than near or further away than far. It doesn’t cut off whole vertexes, just the pixels that are not between near and far.

    So, even though there are a few parameters here, they are relatively easy to comprehend. We’ll look more into how the actual matrix looks and what different types of projection matrices we can make ( yes, there are others ) in a later post.

    Let’s take a look at a simple example
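    Something like this, assuming glm::perspective with a 45° field of view and a 16:9 aspect ratio :

    glm::mat4 projectionMatrix = glm::perspective( 45.0f, 16.0f / 9.0f, 0.1f, 100.0f );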

    Combining them


    We can combine all the matrices into one, so that we only have to multiply each of the vertexes by one matrix instead of all three. But when doing matrix operations it’s important to notice that the operations are not commutative. That means that the order in which you multiply them matters. This is in some cases very useful, but it can also lead to weird behavior.

    The matrices take effect from right to left: the thing you want to happen first should be the last part of your multiplication chain. Let’s assume we have three matrices. One for moving, one for scaling and one for rotating. If you wanted to scale, then rotate, then move, you’d do mat = move * rotate * scale.

    The same goes for the transition between spaces. The model matrix should take effect first and the projection last, so we start out with the projection, then multiply by view and then by model.

    I won’t go into the details of why ( it would take too long ), but it’s important to remember this for later.

    I use the name modelViewProjection because that’s the most common name for this matrix. It is also sometimes shortened to mvp.
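    So, combining the three matrices from above could look like this :

    glm::mat4 modelViewProjection = projectionMatrix * viewMatrix * modelMatrix;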

    Shaders and uniforms


    Now that we know the basics of the matrices, we can finally have a look at how to use them in order to move objects, do projection and get something 3D on the screen. In order to do this, we must pass our matrix on to the shader. And this is where the benefit of having just one comes in. We can now send just one matrix for each object we are rendering, which means we have to send less to the GPU and the rendering will be faster.

    Uniforms are global variables within the shader. A uniform keeps its value from rendering call to rendering call until you change it, and you can’t change it from inside your shader at all. Doing so would cause a compilation error. This is very practical, because if there is an issue due to a uniform, we know it’s being changed somewhere in our main source code.

    ID of a uniform


    In order to be able to change a uniform in a shader from the source code, we need something to refer to it by. So in OpenGL, your uniforms automatically get an ID. This is usually just the order in which you declare them. But this raises a different issue ; we declare a uniform in the shader, and now we need to change it from our source code. How do we get the ID? By using the function glGetUniformLocation. Here’s what it looks like :
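    GLint glGetUniformLocation( GLuint program, const GLchar *name );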

    Parameters :

    • GLuint program – The id of the shader program
    • const GLchar *name – The name of the variable in the shader

    So if we have a shader that looks like this :
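    // ( a hypothetical shader snippet; the uniform name is just an example )
    uniform mat4 mvp;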

    We can get the value like this :
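    GLint mvpLocation = glGetUniformLocation( shaderProgram, "mvp" ); // shaderProgram is the id of our shader program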

    Quite simple. And now that we have the ID, we can move on to the next step.

    Changing a uniform


    A uniform can be any type of variable: a bool, an int, a float. It can also be an array type like vectors and matrices, and even user-defined structs! No matter the type, there is a group of functions we use for setting it, glUniform*. We’ll go into more detail about the ones for single values and vectors in a later part. Instead, we’ll jump straight into the ones for setting matrices.

    glUniformMatrix*


    The function for setting a matrix in OpenGL is glUniformMatrix*. There are a lot of varieties of it, depending on the type ( whether the individual values are floats or doubles ) and size ( 4×4, 3×3, 2×3, 4×3, … ). To make this part shorter, we’ll only be focusing on the one we’ll actually be using, glUniformMatrix4fv :
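    void glUniformMatrix4fv( GLint location, GLsizei count, GLboolean transpose, const GLfloat *value );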

    Parameters :

    • GLint location – The location of the matrix ( the result of glGetUniformLocation )
    • GLsizei count – The number of matrices. In most cases this will be 1
    • GLboolean transpose – Whether OpenGL should transpose the matrix. See below
    • const GLfloat *value – The actual values of the matrix as an array

    Matrix transpose


    OpenGL expects the matrix in a specific way. But in some cases, we might have the matrix transposed ( or “rotated” ). So instead of :

    \begin{bmatrix} 1\quad3\quad5\quad7 \\2\quad4\quad6\quad8 \\3\quad4\quad5\quad6\\4\quad6\quad8\quad9\end{bmatrix}

    It might look like this :

    \begin{bmatrix} 1\quad2\quad3\quad4 \\3\quad4\quad4\quad6 \\5\quad6\quad5\quad8\\7\quad8\quad6\quad9\end{bmatrix}

    The flag tells OpenGL that it needs to transpose the matrix first. Note : some versions of OpenGL do not support this operation. In those cases the parameter must be set to false. This applies to OpenGL for mobile devices, OpenGL ES.

    And now for an example

    In the vertex shader
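    A minimal sketch ( the variable names are placeholders, not necessarily the ones from the actual source code ) :

    #version 330

    in vec3 position;
    uniform mat4 mvp; // set from the main source code

    void main()
    {
        // multiply every vertex by the model view projection matrix
        gl_Position = mvp * vec4( position, 1.0 );
    }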

    And in your main source code :
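    Again a sketch with placeholder names :

    glm::mat4 mvp = projectionMatrix * viewMatrix * modelMatrix;
    GLint mvpLocation = glGetUniformLocation( shaderProgram, "mvp" );
    glUniformMatrix4fv( mvpLocation, 1, GL_FALSE, glm::value_ptr( mvp ) ); // glm::value_ptr ( from <glm/gtc/type_ptr.hpp> ) gives us the raw float array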

    Quite simple, and now the matrix is set and we can ( finally ) render 3d!

    The results


    So now after all that work, let’s see what we ended up with….

    depth_fail

    What?! That’s not right, the colors are weird and the front is missing…

    The depth buffer


    Remember from my previous part, where I talked about the last step of the rendering pipeline, per-sample operations?* I mentioned the depth test and how it determines if something is obscured / invisible and should not be rendered. What you see above is the consequence of not enabling the depth test.

    ( * : It was in there, but I forgot to mention it also checks if something is covered by another object. Sorry about that! )

    Let’s take a close look at what’s happening, but this time we render one side of the cube at a time :

    fail_1_side

    This is the front side of the cube. So far it all looks good!


    Let’s draw the bottom…

    fail_2_side

    Here’s the back and the bottom, and here’s where it goes wrong. The front should cover the bottom. But here the bottom is covering the front. This is because we don’t have depth test enabled so OpenGL just draws the bottom on top of it.


    Let’s look at the next steps and see what happens

    fail_3_side

    Here we’ve added the next side. If you compare with the initial part, this is what gets covered up. But why just this?


    Let’s render the next triangle

    Fail half

    Here we’ve added half of the front. From this we can see that it is covering the bottom and right sides.


    Let’s render the next triangle ( the second half of the front )

    depth_whole

    It covers up everything. This is because it’s the last thing we drew, so it gets drawn last, on top of everything.


    And if we now draw the sides…

    depth_fail

    … we end up with what we saw earlier. The back gets drawn and covers everything. Then the top and left sides get drawn on top of that.

    Enabling depth test


    Now let’s look at this with the depth test enabled

    win

    Here we’ve drawn everything, including the front. It completely covers the rest of the cube. It might seem wrong, all we can see is a blue square! But if we just move it a little…

    win_moved

    .. we see that it actually IS a 3D object! Finally!

    The depth test


    So how does all of this work? It’s quite simple. You can tell OpenGL to create a buffer ( think of it as a 2d array ) that has one value per pixel that says something about how far that pixel is from the camera.

    Each time you draw something, OpenGL checks the value for that pixel in the buffer. This way, OpenGL can determine if what you’re trying to draw is closer to the camera than what’s already there. If it is, the new pixel will be drawn instead of the old one, and the distance value in the buffer will be updated with the value of the new pixel. That way, the buffer always contains the distance of the closest pixel that has been drawn so far.

    depth_test

    Here’s how it worked when we drew the front ( blue ) over the rest. For every pixel, it compares the previous ( left ) value with the current ( right ) value. In this case, the blue one is closer and is drawn over the yellow. This happens for every single pixel we try to draw. Luckily, OpenGL is pretty quick at this.

    How to enable it


    Enabling it is quite simple. There are two functions we need for that :
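    void glEnable( GLenum cap );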

    Parameters :

    • GLenum cap – the OpenGL capability we want to enable. In our case it’s GL_DEPTH_TEST.

    This basic function is used for turning OpenGL features on. You can see a full list of the possible values here.

    Setting the depth function


    We also need to set how the depth comparison works :
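    void glDepthFunc( GLenum func );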

    Parameters :

    • GLenum func – the function to use

    Here we tell OpenGL what function to use for depth testing. We will be using GL_LEQUAL; you can find more information about it and the others here.
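    In our case, that means the two calls look like this :

    glEnable( GL_DEPTH_TEST ); // turn the depth test on
    glDepthFunc( GL_LEQUAL ); // pass if the new fragment is closer ( or equally close )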

    Clearing the depth buffer


    Finally, we need to tell OpenGL to clear the depth buffer for us. This is so that we can start with a clean slate every time we render. Without it, the depth test could fail because of leftover values, making OpenGL not render something that should have been rendered. We’ll be doing this in our glClear call :
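    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );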

    The | character is a way of combining the two values so that we clear GL_DEPTH_BUFFER_BIT and GL_COLOR_BUFFER_BIT in one call.

    The source code


    For the source code, I’ve taken the liberty of organizing it a little. I made a helper class for shaders, one for rendering in general, and one for dealing with models and their model matrix. In addition, I included a class I use for input ( that uses SDL2. )

    Renderer


    In charge of most rendering functionality :

    • Creating and initializing windows
    • Setting up OpenGL options
    • Setting up view and projection matrices

    Shader


    A class that keeps track of Shaders.

    • Can keep track of one shader of each type ( vertex, geometry, tesselation, fragment. )
    • Represents a single, whole shader program
    • Does everything needed to create a single shader program
    • Also used for setting uniforms like the model view projection matrix

    Model


    A class that holds a single object. In our case this is the cube:

    • creates VAO and VBO for the object from a file
    • Keeps the model matrix, which contains the position / scale and rotation of the object
    • Keeps a reference to the Shader that the object uses
    • Has a Render() function so that it can set all the VAOs and VBOs and render itself

    EventHandler


    This is a class I wrote some time ago for keeping track of SDL events like quit, button presses, mouse moves, mouse position, etc… It is not directly related to OpenGL; we just use it to make our interaction with SDL a tiny bit easier.

    Math


    A very simple Math helper class. It simply takes an EventHandler and creates a vec3 of the movement based on the arrow keys and w and s. So if you press left, it’ll create a vector with a negative x value. This means that when we use glm::translate with the vector as the argument, we’ll get a matrix that moves the object left. It’s the same for every direction. Pressing w will move the object closer, s will move it away, “into the screen”.

    main.cpp


    Controls everything.

    • Initializes Renderer
    • Creates a Shader
    • Creates a Model
    • Checks for keyboard events and tells Model to update matrix accordingly ( move or reset )
    • Renders the Model by calling functions in Renderer

    As you can see, main.cpp doesn’t do any OpenGL itself. In fact, it doesn’t even include any OpenGL or SDL stuff. This is completely intentional; main.cpp should only control stuff.

    Since the code is quite long and too big to put in this post ( unless you really like scrolling! ) I’ve put it in the Github repo for this code.

    I’ve also created a zip file in case you don’t want to deal with git. You can find it here.

    Compiling


    Since we have the new .cpp file, EventHandler.cpp, we need to add it to our compilation call :

    For clang :

    clang++ main.cpp EventHandler.cpp -lSDL2 -lGL -lGLEW -std=c++11 -o Part4

    For gcc:

    g++ main.cpp EventHandler.cpp -lSDL2 -lGL -lGLEW -std=c++11 -o Part4


    And NOW we’ve covered everything we need to know in order to do basic 3d rendering. It has taken me a long time to write all of this and it is quite long. But I hope you enjoyed it and that it helps you understand 3d rendering. If not, feel free to ask, I’m happy to help if I can.


    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me : olevegard@headerphile.com

    [OpenGL Part 3 ] Shaders in OpenGL

    Shaders in OpenGL


    In the previous part, we looked very briefly at shaders. Shaders are small pieces of code that run on the GPU, and they enable us to render graphics in lots of fancy ways. But before we look closer at the shaders themselves, let’s have a look at the sequence they run in :

    The rendering pipeline


    OpenGL goes through several steps in order to draw something on the screen. This sequence of steps is known as the rendering pipeline. It looks like this :

    The grayish parts are programmable, and these are what I’ll be referring to as shaders. The ones with the dotted lines are optional, while the ones with full lines we have to program if we want to render something. At least that’s what the specification says, but some implementations might not require it. So, on some systems, you might be able to skip them, but that’s not guaranteed to work on all systems. And it’s much more fun writing the shaders ourselves anyways!

    This part will teach you a little bit about each of the steps, what they do and how they work together. Each one of these steps is quite involved, so I’ll most likely dedicate an entire post to each of them.

    Vertex Specification


    In the previous part, we set up VBOs and VAOs so that we could later use the VAO for rendering. That is the vertex specification stage of the rendering pipeline. More specifically, it's how OpenGL sets up the VBOs and VAOs when we tell it to. Since we already dealt with this stage in the previous part, and what OpenGL does behind the scenes isn't that relevant to us, we're just gonna skip to the next step.

    Vertex Shader


    The vertex shader is the first programmable step of OpenGL. This stage takes a single vertex and outputs a single vertex. The job of the vertex shader is basically to give every vertex the position it should have on screen. In the previous part, we were able to use the positions directly, because the position we gave in was the final position of the object. But if we wanted to move the object, we could have used the vertex shader to do that.

    Another point is that the screen is only 2D, so when we have a 3D object, we need a way of representing it as 2D on the screen. This is quite complicated and involves several steps to put all the vertexes in the correct positions. We will look at this in a later post; this post is just for getting an overview of all the shaders. The above image kinda shows this; the cube is a 2D drawing, but it looks 3D because of how the vertexes ( corners ) are positioned.

    Tesselation


    In the games we see today, a high level of detail is important. And in order to achieve a high level of detail, we need a high number of vertexes. Imagine you have a ball in your game. How do you draw that with a high level of detail? If you have too few vertexes, it'll look blocky and not round at all. You could just add millions of vertexes to make it look better. But a million vertexes would mean 4 bytes * 3 * 1 000 000 = 12 000 000 bytes, or 12 MB, of just vertexes. That's quite a lot, especially if your game has a lot of round objects. And more importantly, it takes time to render that much.

    The purpose of the tesselation shader is basically to add more detail to your object when needed. When we see something from a distance, we don't need a lot of detail. But when we zoom in, we'll be able to see more, so we need to render the object with more detail so that it doesn't look blocky when viewed up close.

    Geometry Shader


    The next step in the rendering pipeline is the geometry shader. The geometry shader gets input in the form of primitives. ( A primitive is basically either a triangle, a line or a point. ) With the geometry shader we can create new primitives. This means we can use it for things like spawning particles in a particle system, or to make fur, hair, grass, etc.


    Let's say we have a sphere. When the tesselation stage is done, we get the input to the geometry shader as tiny little triangles. Each one of these triangles is a tiny part of our sphere. With the geometry shader we can add fur to the sphere, and now we have a fuzzy little ball.


    Using a geometry shader is one of the most efficient ways to make hair/fur/grass, because it doesn't require any additional vertexes from us; everything is being done on the graphics card. That makes it really quick.

    The next three steps are fixed, meaning we can't implement them ourselves, so I'll only describe them briefly.

    Vertex Post-Processing


    This step does a lot of different operations on the vertexes. A lot of these prepare them for the next two steps: primitive assembly and rasterization.

    Primitive Assembly


    This is, as the title suggests, the point where our primitives get assembled. It receives a bunch of vertexes and puts them together into shapes like triangles. It also does some checks to see if a primitive is outside of the screen ( or invisible in any other way ). If it is, the primitive won't get passed on to the next step.

    Rasterization


    Now we have our final primitives, but they're just a bunch of shapes. This stage rasterizes the data. That means it takes the data and turns it into something that resembles pixels : fragments.

    As noted above, we don't get actual pixels from the rasterizer, but rather fragments. A fragment contains all the data OpenGL needs in order to render a pixel. There will be at least one fragment per pixel. There can be more, but not less.

    Fragment shader


    This is the final shader that we can implement ourselves. It receives its input in the form of fragments ( as described above ) and outputs a single fragment when it's done. At this stage, we basically just set the color of the fragment, though that can be rather complex. This is also the step where we'll put the texture on the object.

    But setting the color and/or textures also means setting the lighting, and this can get quite complex, which means there will be another part for it. For now, all you have to remember is that this stage is where we set the color ( including the alpha value ) of the fragment.

    Per-Sample Processing


    The final step before we get something on the screen is the per-sample processing step. In this step, OpenGL looks at every fragment and sees if it, for any reason, should not be rendered. This is done by running several tests. If any of them fail, the fragment might not be rendered. Some of these tests aren't enabled by default, so you need to enable or set them up yourself.

    Below is a short description of these tests; you can skip it if you want.

    Per Sample Processing details

    Ownership test


    If there is another window over our OpenGL window, the covered pixels are not visible to us, so there is no need to draw them on the screen.

    Scissor test


    You can specify a special rectangle on the screen. If the fragment is outside of this, it'll fail the test.

    Stencil test


    A stencil test takes a stencil, which is basically a black and white image, and uses it to determine if the fragment should be rendered. It works just like a stencil in real life.

    Imagine you take a sheet of paper and cut out a big 'H' in it. Then you put it over a different piece of paper and spray paint all over the H. When you remove the top paper ( the one with the H cut out, ) there will be an H on the bottom paper, the exact same shape as you cut out. This is how this test works too. You can create a bitmap / image that works as the top piece of paper. Everything this bitmap covers ( every black or every white pixel ) will then fail this test and not get rendered.

    Depth


    This is the test that checks if anything is covered up by something else. So if you have an object like a dice and something in front of it like a wall, the depth test is what makes sure the wall is drawn and not the dice.

    Finally, the blending happens. This is where the final alpha value of the fragment gets determined. OpenGL has several ways of calculating alpha values, so this needs to be its own step. It also relies on the alpha value set by the fragment shader, so this step in particular needs to be done after the fragment shader.


And that's all the steps of the rendering pipeline. Now we'll take a look at how we set them up in OpenGL. We will also expand on the previous example and make something a little bit fancier by creating our own geometry shader and fragment shader.

    Setting up the Shaders


There are a few calls needed for setting up the shaders, but it's actually a bit easier than VBOs and VAOs. The shaders consist of one main object, called the program, that collects all the shaders into one, like a VAO. The individual shaders are like the VBOs. They're created separately and in the end they're added to the program. After they've been added, we won't be dealing with them unless we are going to update them.

    First we'll look at setting up the individual shaders. These are the grey steps in the image at the top. The process for setting them up is more or less identical for all shaders ( except that we have to specify the type of shader in one of the steps. )

    glCreateShader


This is very similar to the other glGen* functions like glGenBuffers and glGenVertexArrays. But this one returns the id directly and only takes one parameter, so we can only make one shader at a time. The parameter is used to specify the shader type. This function is used for all shader types.
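Its signature looks like this :

GLuint glCreateShader( GLenum shaderType );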

    Parameters :

• GLenum shaderType - the type of shader to create ( see below )

    The shader type can be any of the following :

    • GL_VERTEX_SHADER - for creating a vertex shader
• GL_TESS_CONTROL_SHADER - the first step of the tessellation shader
• GL_TESS_EVALUATION_SHADER - the last step of the tessellation shader
    • GL_GEOMETRY_SHADER - for creating a geometry shader
    • GL_FRAGMENT_SHADER - for creating a fragment shader
• GL_COMPUTE_SHADER - a compute shader is not a "standard" shader ; it's just a piece of code that will run on the graphics card. It is not part of the rendering pipeline, so we won't be using it here

As you can see, there are 6 different types of shaders we can create using this function. We will be using the first 5, but the process for setting up each one of them is identical so it's not a lot of work.

    Loading the shader source code


The next step is to set the actual source code for the shaders. This is the .vert and .frag files from the previous part. The first step here is to load the actual shader. This simply involves reading a plain text file, but we need to write the function ourselves because OpenGL has no support for it :
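Here's a minimal sketch of such a function ( the name ReadFile is just what I chose to call it ) :

#include <fstream>
#include <sstream>
#include <string>

std::string ReadFile( const std::string &filename )
{
    // Open the file and read the entire contents into a string
    std::ifstream file( filename );
    std::stringstream buffer;
    buffer << file.rdbuf();
    return buffer.str();
}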

    This function just takes a filename and returns all of the text file as a std::string

    glShaderSource


    Now we have our std::string we need a way of sending it to OpenGL. This is what glShaderSource is for.
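The signature is :

void glShaderSource( GLuint shader, GLsizei count, const GLchar **string, const GLint *length );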

    Parameters :

    • GLuint shader - the id of the shader. We'll use the return value of glCreateShader
    • GLsizei count - the number of strings of data we want to use. We only have one file, so we'll use 1 here
    • const GLchar **string - the actual data we want to use
• const GLint *length - the length of each individual char*

This function might seem a little weird at first. The first argument is okay, it's just the id of the shader. We dealt with similar things when we set up the VBO and VAO. But what about the others? I'll describe what the other parameters do and how to use them below. We won't be using this functionality, but I do recommend reading about it, because then you'll know what the arguments are for. If you understand it, you'll know exactly how to use the function, which, in the end, will make you less likely to write bugs.

    glShaderSource details

As noted above, glShaderSource is made to be able to take in several pieces of data. This allows you to have your shader spread over several different files. Then you could load all of them into different std::strings, one per file. Then finally you could add all the data into the shader with one call. This is where the different parameters come in.

    count is just the number of different std::strings we have.

const GLchar** string is a bit more tricky to understand. A GLchar* ( note the single '*' ) is the same as char*, which is just a text string. But we have two asterisks ('*')! In C++, a pointer is a lot like an array. So you can look at it like an array of char*. This is what allows us to send in several different strings at once.

The final argument, const GLint *length, works in the same way. Just think about it as an int array, where each value is the number of characters in the string with the same index.

    Let's look at an example to illustrate this :
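Something like this ( the file names and the shaderId variable are made up for the sake of the example ) :

// Load two files that together make up one shader
std::string part1 = ReadFile( "lighting.glsl" );
std::string part2 = ReadFile( "main.glsl" );

// One entry per file, both for the strings and for their lengths
const GLchar* strings[ 2 ] = { part1.c_str(), part2.c_str() };
GLint lengths[ 2 ] = { ( GLint ) part1.size(), ( GLint ) part2.size() };

glShaderSource( shaderId, 2, strings, lengths );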

Note : this is just an illustration, it won't compile on its own. But hopefully it helps you understand this function and all its arguments. Having a good understanding of all the aspects of a function will make it a lot easier to debug.


    glCompileShader


Now that we have loaded our shader, it's time for OpenGL to compile it. This is done using this simple function :
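void glCompileShader( GLuint shader );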

    Parameters :

    • GLuint shader - the Id of the shader to compile, same as for glCreateShader and glShaderSource

As you can see, there isn't really much to this function. And after calling it, the shader is ready to go. But we first have to create our main shader program. So the final create + compile of a shader looks something like this :
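( A sketch, using the ReadFile function from earlier ; error checking omitted. )

GLuint shaderId = glCreateShader( GL_VERTEX_SHADER );

std::string str = ReadFile( "vert.glsl" );

char* src = const_cast<char*>( str.c_str() );
GLint size = ( GLint ) str.length();

glShaderSource( shaderId, 1, &src, &size );
glCompileShader( shaderId );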

The char* src = const_cast<char*>( str.c_str() ); part is just a way of converting the result of str.c_str() ( which is a const char*. ) Because OpenGL expects a non-const char*, we need to cast it using const_cast.

In glShaderSource( shaderId, 1, &src, &size ); we use &src to create a pointer to the char* that holds our source. This turns it into a double pointer, or a "pointer to a pointer", if you will. Similarly, OpenGL expects a pointer to an int for the size argument, so we pass in &size. In both of these cases the pointers are used to get array functionality for setting multiple sources ( as explained above. )

    The shader program


Now that we've created a shader, it's time to add it to our program. As mentioned above, the shader program is what combines all the shaders into one. Just like with VAOs, we can have several of them. So we could have one for particle effects, one for regular objects, one for reflective surfaces, one for the ground with grass, etc... Since the shader program combines all the individual shader objects, switching between them is easy. And setting them up is quite simple too!

    glCreateProgram


    This is very similar to the first function we looked at, glCreateShader. It simply creates an OpenGL shader program and returns the Id. We will use this program to connect our shaders and hook them up to the rendering pipeline.
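Using it is just a one-liner :

GLuint shaderProgram = glCreateProgram();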

That's all, now we have created a shader program and can use it in the next step.

    glAttachShader


    Now that the shader program has been created, we can attach our shaders to it. This is as simple as it can get :
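void glAttachShader( GLuint program, GLuint shader );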

    Parameters :

    • GLuint program - the Id of the shader program ( the one we created with glCreateProgram )
    • GLuint shader - the Id of the shader ( the one we created with glCreateShader )

It doesn't really matter at which point in time you call this function, as long as both the shader and the program have been created with their respective glCreate* functions. You can even do this before loading the shader source. All it does is attach the shader to the shader program using the ids. Though I find it more logical to attach the shader after it has been fully created and compiled. That way we won't be adding any shaders that failed to compile.

    glLinkProgram


    The final step of creating a shader program is to link it. This will inspect the shaders and optimize them before creating an executable. And finally the executable will be sent to the GPU.
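The signature is as simple as they come :

void glLinkProgram( GLuint program );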

    Parameters :

    • GLuint program - the Id of the shader program ( the one we created with glCreateProgram )

And we're done, the shader program has been created and uploaded to the GPU so we can use it in our OpenGL application.

    glUseProgram


Finally, now that our program has been created, we can start using it. This function is also very simple, it simply activates the program we pass in as the parameter. There can only be one active shader program at any time, so passing in a new id disables the old one.

    Parameters :

    • GLuint program - the Id of the shader program ( the one we created with glCreateProgram )

    Putting it all together


Below is a simple example of how to set up a shader program.
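A sketch of how it can look. CreateAndCompileShader is a hypothetical helper that wraps the create / load / compile steps from earlier, and all error checking is omitted :

// Create the main shader program
GLuint shaderProgram = glCreateProgram();

// Create, load and compile the individual shaders...
GLuint vertexShaderId = CreateAndCompileShader( "vert.glsl", GL_VERTEX_SHADER );
GLuint fragmentShaderId = CreateAndCompileShader( "frag.glsl", GL_FRAGMENT_SHADER );

// ...and attach them to the program
glAttachShader( shaderProgram, vertexShaderId );
glAttachShader( shaderProgram, fragmentShaderId );

// Link the program and start using it
glLinkProgram( shaderProgram );
glUseProgram( shaderProgram );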

And now, at last, we can have a look at setting up the individual shaders, starting with a look at the language they are written in.

    GLSL


Shaders are written in GLSL ( OpenGL Shading Language ), which is very similar to standard C, but with a few extra things built in and some other things removed. The most important addition for us right now is the storage qualifiers. These specify whether a variable is an input or an output. If the variable is an input, it also tells where the input comes from. The storage qualifier is placed before the type of the variable ( see example below )

• Attribute input values ( attribute )
  • Attribute values ( passed from a VBO )
  • Only for vertex shaders
• Shader input values ( in )
  • Input values passed from previous shaders
• Shader output values ( out )
  • Output values to pass to the next shader
• Custom input values ( uniform )
  • Input to the shader
  • Used for values that are not stored in a VBO
  • Can be any type ( float, int, bool, array )

    We won't be using attribute, only in / out

Vertex shader example


    Let's look at a simple vertex shader :
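Something along these lines ( a minimal sketch ; the version directive matches OpenGL 3.2 / GLSL 1.50 ) :

#version 150

in vec3 in_Position;
in vec4 in_Color;

out vec4 ex_Color;

void main()
{
    // Position the vertex and pass the color on to the fragment shader
    gl_Position = vec4( in_Position, 1.0 );
    ex_Color = in_Color;
}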

The top two variables, in vec3 in_Position; and in vec4 in_Color;, are shader input variables which we get from the VBO / VAO ( see below. )

The third variable, out vec4 ex_Color;, is our out variable. This is the variable we send to the fragment shader. We have to do this manually by setting it in our main() like so : ex_Color = in_Color;

    Fragment shader example


Now let's look at a simple fragment shader :
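Again, a minimal sketch ( the output variable name fragColor is just my choice ) :

#version 150

in vec4 ex_Color;

out vec4 fragColor;

void main()
{
    // Just use the color we received from the vertex shader
    fragColor = ex_Color;
}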

So, the way to pass VBO data to a shader is through an in value. All attributes ( like positions and colors ) must be passed to the vertex shader, and then from the vertex shader to the next shader, and so on. An attribute can only be passed from one shader to the next ; you can't pass it directly to the last shader, for example. The output values will automatically be passed through the shaders we haven't written ourselves.

    Geometry shader


The geometry shader is a little bit more complicated and more involved than the fragment shader and the vertex shader, so I won't explain it in this post. I will, however, show you a geometry shader example that you can experiment with. It's commented, so hopefully it should be easy to get an overview of what it does.

    Ordering


It is very important to get the ordering of the attribute variables right. Remember the attribute indexes ( positionAttributeIndex and colorAttributeIndex ) from when we set up the VBOs and VAOs.

The indexes we specified there dictate the order in which you must declare the attributes in the vertex shader. In our case, this will be :
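Assuming positionAttributeIndex = 0 and colorAttributeIndex = 1 :

in vec3 in_Position; // attribute index 0
in vec4 in_Color;    // attribute index 1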

Of course, we could change it so that positionAttributeIndex = 1 and colorAttributeIndex = 0. In that case we would have to declare in vec4 in_Color; first and then in vec3 in_Position;.

This is something that's very easy to miss and can be really frustrating to debug. Generally, OpenGL is quite low level, so mistakes like these are really easy to make and hard to debug if you don't know exactly what to look for.

    Some code


The major new piece of code today is a reworked Shader.h that can load any type of shader. I also added a geometry shader that'll give you an idea of what the geometry shader does. I didn't add a tessellation shader because that would require OpenGL version 4.0, which would mean that a lot of you would not be able to run it. Besides, I think there already is enough new stuff in this part. Well anyways, here's some code :

    Shader.h


The Shader.h has been rewritten. Most of it is described in this blog post, except for the getting of variables from the shader ( including the log. ) I'll get into that in another post.

    Vertex shader


I renamed the vertex shader to vert.glsl.

    Geometry shader


I added a geometry shader. It has a few bools you can change to show off what you can do. Keep in mind that when we render it normally, all we get is a square. The extra triangles are created by the geometry shader itself.

    Screenshot :
    simple geometry shader

Fragment shader


I renamed the fragment shader to frag.glsl. I also added functionality for setting a random color :

    Screenshot:
    simple fragment shader

    main.cpp


    I also made a few changes to our main file. This time we only render the triangles, not the lines. I also changed the coordinates a little. It still forms a square, but it's separated into four equally large triangles ( instead of two. ) This makes working on it in the geometry shader a lot easier.

    Screenshot:
    exploded

    Compilation


    We'll compile it just like last time :

    Using clang

    clang++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Using gcc/g++

    g++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Conclusion


    In this part we looked at shaders, what they do, how to create them and how to set them up. I intentionally didn't dive deeply into the shaders themselves, but instead I showed how to set them up. I know there has been a lot of very basic setup stuff in these parts, but I find it important to know how to set up OpenGL properly.

The end result we get on screen in this part is quite simple, but feel free to play around with the geometry shader. There are a few bool values you can toggle to get different output. Or you could just modify the code yourself and see what you end up with.

But in the next part we'll finally look at getting something 3D on the screen. When we do have something 3D on the screen, we can manipulate it ( rotate, move, stretch, etc.. ) in various ways quite easily. See you then!


    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me : olevegard@headerphile.com

    [OpenGL Part 2] Vertexes, VBOs and VAOs

    Introduction


    OpenGL is complicated. Whereas SDL2 is relatively small with a few objects and functions, OpenGL is huge with lots of different elements. Fortunately it’s also very well documented. And, as we saw in the previous part, it’s not hard to get something on the screen ( thanks to SDL2 )

    In this part, we’ll first take a look at vertexes before we look at how to draw a simple object.

    Drawing an object in 3D


    Now that we’ll be working in 3D, we need to do things a little differently. In SDL2 we only used the position and size of each object. Each object was basically just an image that we drew on the screen and moved around. We never told SDL2 anything about how it looked, how big it was, etc. SDL2 simply took a texture and put it on the screen.

But in OpenGL we’ll be rendering an exact shape so that we can view it from any angle, which would be almost impossible in SDL2. It also enables us to color it, apply textures and change the lighting in code. We do this by defining a mesh like you see above. It’s all just a bunch of points in 3D space defined by vectors. A vector in this context is just a simple mathematical unit that defines a position. We’ll be using 3D ones, so they’ll each have three values ( x, y, z. ) When we have these vectors we can tell OpenGL the exact shape of an object, and then we can draw it in 3D using OpenGL.

    Vertex vs vector


In OpenGL we use something called a vertex. A vertex is a lot like a vector in that it represents a single point. The difference is that a vector is just the position of a single point, while a vertex contains the vector of the point and can also hold other things at the same time, like the color of that point, and other things we’ll come to in a later part. So, in essence, a vertex contains everything we need to draw one of these points. And when we draw an object, like a dice, we need to give OpenGL one vertex for each point.

The dice above has 8 vertexes :

    • left, top, front
    • left, bottom, front
    • right, bottom, front
    • right, top, front
    • left, top, back
    • left, bottom, back
    • right, bottom, back
    • right, top, back

Each part of the vertex is usually referred to as an attribute. For instance, the vectors/positions are one attribute, the colors are another attribute, and so on…

    OpenGL programming method


In contrast to other APIs / libraries, OpenGL is not object oriented. There are really no objects at all, mostly because a lot of the vertex data is stored on the GPU. So instead you need to handle the models, textures, etc. on your own.

OpenGL does, however, have some notion of objects. But instead of being a concrete struct like the SDL_Texture we have in SDL2, an object is just an ID to a piece of data. The only way to refer to this data through OpenGL is by using IDs. This is mostly because the objects are stored on the GPU, and you want to keep them there without transferring/streaming them back and forth.

    So let’s take a look at two of the most important objects we’ll be using in OpenGL.

    VBO – Vertex Buffer Object


The VBO ( Vertex Buffer Object ) is one of the “objects” of OpenGL. It holds a single vertex attribute for all the vertexes of an object. Not the entire vertexes, but all the values of one attribute, like all the positions or all the colors. So you’ll end up with one VBO for positions, one VBO for colors, etc…

In order to create a VBO, we first need some data. So let’s take a collection of vectors and put them in a VBO. To keep things simple, we’ll just use a square. Our square has four positions, one for each corner. Let’s create a simple array containing all of these points.
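Something like this ( the exact coordinates are just an example ) :

GLfloat square[] =
{
    -0.5f,  0.5f, 0.5f, // left, top
    -0.5f, -0.5f, 0.5f, // left, bottom
     0.5f, -0.5f, 0.5f, // right, bottom
     0.5f,  0.5f, 0.5f  // right, top
};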

    That’s the simple part. Now we need to tell OpenGL to create the actual VBO for us. This requires a few steps so let’s look at them one at the time.

    glGenBuffers


    This function generates a VBO for us, so that we can store our vertex attribute into it. It also gives us back an ID for this buffer so that we can use it for referring to this VBO later.

Note : GLsizei is simply just a signed integer like int32_t, and GLuint is just an unsigned integer like uint32_t

    Parameters :

• GLsizei n – the number of buffers we want. One per attribute, so we’ll keep it at 1. But if we were going to add colors, we’d use 2.
• GLuint* buffers – this is where we get the IDs of our buffers back, as an array.

    So now, let’s generate our VBOs :
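const int countVBOs = 1;
GLuint vbo[ countVBOs ];
glGenBuffers( countVBOs, vbo );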

The second line creates an array for holding our IDs and the third line tells OpenGL to allocate countVBOs VBOs for us. Since arrays work a lot like pointers in C++, we can just pass in vbo, and OpenGL will automatically give us as many IDs as we ask for.

    Now we have our VBO and it has the ID stored in vbo[0]

    glBindBuffer


    This function is deceptively simple, so it’s important to understand it because it can lead to some confusion. And if you call it at the wrong time or don’t call it, your application will most likely crash!

    The function simply sets a buffer as the current buffer. We use it to tell OpenGL that this is the buffer we are working on now.

    Parameters :

• GLenum target – the type of buffer we want this to be. In our case, it’s GL_ARRAY_BUFFER
    • GLuint buffer – the ID of the buffer we want to bind / set as active

You might have noticed the new type, GLenum. This is just a huge enum that contains all the predefined flags in OpenGL. These flags are used by a lot of different functions for a lot of different things, so I’ll just explain them as they come.

    GL_ARRAY_BUFFER is the value we use for vertex data like positions and colors.

    Using it is really simple :
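Using the vbo array from above :

glBindBuffer( GL_ARRAY_BUFFER, vbo[ 0 ] );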

    glBufferData


    Now that we have bound the buffer, we can tell OpenGL to store this data for us.

    Now this might seem complicated, but it’s quite logical when you see what the parameters are for.

    Parameters :

• GLenum target – the type of buffer we want this to be. We’ll use the same as for glBindBuffer : GL_ARRAY_BUFFER
    • GLsizeiptr size – the size of the data in bytes.
    • const GLvoid* data – the data that should be stored
    • GLenum usage – how the data should be used. We will just use GL_STATIC_DRAW which means we won’t be modifying it after this, we’ll only be using it for rendering.

The second argument, GLsizeiptr size, might seem a bit weird. First of all, what is a GLsizeiptr? Think of it as a very big integer. It’s basically a special type used when you need to store huge numbers. But don’t worry too much about this, we’ll be using it as a standard unsigned int.

    The third argument, const GLvoid* data is a pointer to the data. A const GLvoid* ( or simply just void* ) is a pointer that can be pointing to anything. It can be floats, chars, ints, std::strings… Anything! So in reality, it doesn’t know anything about the data at all. This also means it doesn’t know the size either, which is why we need that second argument, GLsizeiptr size

    Finally, here is how we’ll use it :
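glBufferData( GL_ARRAY_BUFFER, sizeof( GLfloat ) * 12, square, GL_STATIC_DRAW );

( 12 because our square has 4 corners with 3 values each. )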

    sizeof(GLfloat) simply gives us the size of a single GLfloat. So we just multiply that by the number of individual GLfloats in our array, square.

    Here’s the entire code for setting up a VBO so that you can digest it all before moving on to the next part.
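// Generate the VBO
const int countVBOs = 1;
GLuint vbo[ countVBOs ];
glGenBuffers( countVBOs, vbo );

// Bind it and fill it with our data
glBindBuffer( GL_ARRAY_BUFFER, vbo[ 0 ] );
glBufferData( GL_ARRAY_BUFFER, sizeof( GLfloat ) * 12, square, GL_STATIC_DRAW );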

    Now we have created a VBO but how do we render it? And what if we have more than just one VBO for the same object? Enter VAO, Vertex Array Object

    VAO – vertex array object


A VBO represents a single vertex attribute ( like positions or colors ). A VAO is a lot like a VBO ; they’re used in the same way. The difference is that a VBO represents a single attribute, but a VAO can combine several attributes / VBOs so that we have all the vertex data in a single object. This is a lot simpler when it comes to rendering ; we can simply render the VAO, then move on to the next one without even thinking about the VBOs.

We still need a VBO for every attribute though, and we need to put them into the VAO one by one until we have a single object. The VBOs are only needed for creating or updating the VAOs. All other times we just use the VAOs.

    glGenVertexArrays


    Think of this as glGenBuffers, only for VAOs. It generates a VAO for us to use later.

    Here’s the signature :
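void glGenVertexArrays( GLsizei n, GLuint* arrays );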

    The parameters are the exact same as for glGenBuffers so I won’t be going into them in any more depth.

Here’s how we’ll use it :
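( mirroring what we did for the VBOs )

const int countVAOs = 1;
GLuint vao[ countVAOs ];
glGenVertexArrays( countVAOs, vao );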

    glBindVertexArray


    Just like glGenVertexArrays is the VAO equivalent of glGenBuffer, glBindVertexArray is the VAO equivalent of glBindBuffer. So this function sets the VAO as the active one. Note that these are not mutually exclusive, we can have both a VBO and a VAO active at the same time.

    Parameters :

    • GLuint array – the ID of the vertex array to bind.

As you can see, this signature only has one argument. Why? Well, in OpenGL there are several kinds of data we can store in a VBO, not just vertex data. But a VAO is more of a wrapper object for vertex data, so there is just one type.

    Usage :
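glBindVertexArray( vao[ 0 ] );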

    glVertexAttribPointer


Now this is where things get a little complicated. This method is what associates our vertex data from the currently bound VBO with the currently bound VAO. We use it to tell OpenGL where in the VAO the data from the current VBO should be stored.

    Parameters :

    • GLuint index – An ID we define that refers to this attribute. We’ll need this later so that we can refer to this vertex attribute
    • GLint size – the number of values per attribute ( 1 to 4). In our case it’s 3 since our attributes have 3 values (x, y and z)
    • GLenum type – the datatype the attributes are in. In our case it’s GL_FLOAT
    • GLboolean normalized – whether the data should be normalized ( more on this in a later part. ) For now we’ll use GL_FALSE
    • GLsizei stride – specifies an interval between vertex attributes. We don’t use that so we’ll just use 0 here
• const GLvoid * pointer – the starting point of the data to use. We don’t use this either, so we’ll just use 0 here as well.

As you can see, it’s really not as bad as it looks. The fourth argument, normalized, isn’t really important for us now. And the two last ones only deal with cases where we put several vertex attributes in the same array ( like if we put positions and colors in the same array. )

The important thing here is that it puts one type of vertex attribute data from a VBO into a VAO. It uses the currently active VAO and VBO, so we need to call glBindBuffer and glBindVertexArray first.

    Here’s how we’ll be using it :
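Assuming we use 0 for positionAttributeIndex :

glVertexAttribPointer( positionAttributeIndex, 3, GL_FLOAT, GL_FALSE, 0, 0 );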

    Note that if you haven’t called glBindBuffer() before calling this function, it won’t work properly and your application might crash.

    glEnableVertexAttribArray


After we’ve set up the VBOs and VAOs, we need to enable the attribute within the VAO because, by default, every vertex attribute array ( like our positions ) is disabled. This means we’ll have to enable every vertex attribute we create and assign with glVertexAttribPointer. In our case, we just need to call it once, since we are only enabling positions.

    Parameters :

    • GLuint index – The index of the vertex attribute array we want to enable.

    With all of that out of the way, we can look at an example of how to set up a VBO and VAO :
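Here’s a sketch ( note the somewhat odd ordering, more on that below ) :

// Generate the objects
glGenVertexArrays( countVAOs, vao );
glGenBuffers( countVBOs, vbo );

// Bind the VAO already here, then the VBO
glBindVertexArray( vao[ 0 ] );
glBindBuffer( GL_ARRAY_BUFFER, vbo[ 0 ] );

// Fill the bound VBO with our data
glBufferData( GL_ARRAY_BUFFER, sizeof( GLfloat ) * 12, square, GL_STATIC_DRAW );

// Hook the bound VBO up to the bound VAO and enable the attribute
glVertexAttribPointer( positionAttributeIndex, 3, GL_FLOAT, GL_FALSE, 0, 0 );
glEnableVertexAttribArray( positionAttributeIndex );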

    Hopefully this wasn’t too bad. It’s important that you understand what a VBO is, what a VAO is, what their relation is and how to use them. Knowing this will save you from a lot of confusion and frustration in the future.

I placed the binding of the VAO and VBO in an awkward order to demonstrate the ordering of these functions. The ordering doesn’t matter as long as you bind the VBO before using glBufferData, and call glBindVertexArray before you call glVertexAttribPointer. Take a look at the code below for a better way of ordering these functions : )

    A quick note about shaders


Before we can get anything on the screen, we’ll need a shader. Shaders are small programs that run on the actual GPU / graphics card. We have to define a vertex shader, which deals with things like moving/rotating/scaling objects. We also have a fragment shader, which deals with setting the correct colors.

I won’t be going any deeper into shaders than that this time. But we do need them, which means we also have to set them up properly. So I made a simple helper class that does all of that for us. I’ll post it below with the other code so you can copy it and get the example up and running. The next part will be about shaders and why we need them, so hopefully the code will make a bit more sense then.

    The code


The code consists of three pieces : the main .cpp file where most of the code is, the Shader.h which is where all of the shader related code is, and the shaders themselves : the vertex shader ( tutorial2.vert ) and the fragment shader ( tutorial2.frag )

I have added setting of colors to the code, along with an example of glEnableVertexAttribArray. I hope it gives you a good idea of how to use these functions. In the next part we’ll take a closer look at the shaders, how to set them up and how to write our own.

The code is taken from here, though I have changed it quite a lot.

    main.cpp


    Here is our main file :

    As you can see, it also sets color. It does this in the same way as it sets positions. I added it to further demonstrate how to bind the buffers correctly.

    Shader.h


    Here is the shader helper file. Don’t mind it too much, I’ll go into more detail about how it works the next time.

    tutorial2.vert


    This is our first shader, the vertex shader. Make sure you name it tutorial2.vert and put it along with the other files

    tutorial2.frag


    And finally, the fragment shader. Make sure you name it tutorial2.frag and put it along with the other files

    Compiling


    Using clang

    clang++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Using gcc/g++

    g++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Conclusion


    Finally we have something on the screen! The process is a bit tedious and not 3D yet. But we’ll be going into 3D territory soon. And that’s when things get really cool.

    I hope this tutorial has helped you understand VBOs and VAOs along with the concept of vertexes. My goal is to go through things thoroughly, giving you a good understanding of how things work. The better you know how things work, the easier it will be to write code.


    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me : olevegard@headerphile.com

    [OpenGL – Part 1] OpenGL using SDL2

    Introduction


In order to program in 3D, you need a 3D library. Sure, you could base your game on an already existing engine. But that’s not what this blog is about! Instead, we’ll use a graphics library. The most common ones are OpenGL and DirectX.

    Since DirectX is a Microsoft technology and only works under Windows, we will be using OpenGL. This means the applications we make will work on just about any operating system.

    Note : I recommend that you read at least the first few parts of my SDL2 tutorial before continuing. My SDL2 tutorial will explain the SDL2 elements like SDL_Window in more detail. The first few parts are really short and should give you a basic understanding of SDL2

    What is OpenGL


OpenGL is a specification, or an abstract API if you will. It is not an actual implementation, and it doesn’t do anything on its own. It just defines a lot of functions and data types that we can use in our program. Then it’s the job of the underlying implementation to actually do the work. This implementation is part of the graphics card driver. This means that the implementation varies from platform to platform : the Linux version is different from the Windows version. It’s also different based on the hardware, so an nVidia version is different from an ATI version.

    We really won’t be giving this too much thought, we’ll only use the functions and types defined by the OpenGL specification. But it’s useful to know exactly what OpenGL is.

    Old vs new


Back in the day, programming in OpenGL was tricky. Setting it up was a mess ; you had several different libraries to keep track of, like glu, glut and glew. I’m still not quite sure what all of them did. On top of that, the code itself was rather bad too. Really not intuitive, and not as flexible as the new version. But after version 3.0 a lot changed. Lots of code was deprecated and lots of new stuff was added. So now we can write very simple and concise OpenGL that’s also multi platform.

    GLEW


I briefly mentioned GLEW ( OpenGL Extension Wrangler Library ) above as one of the libraries that made OpenGL confusing. But that’s really not GLEW’s fault. GLEW is actually quite simple, it just lets us write OpenGL code in a simple, platform-independent way. We won’t be noticing it a lot, except for an init call, so there’s really no need to learn a lot about it. But it’s always nice to know what it’s there for.

    OpenGL and SDL2


SDL2 makes setting up OpenGL really easy. You can use SDL2 to create your window and hook up a rendering context ( I’ll explain what a rendering context is later. ) If we didn’t use SDL2 for this, we’d have to do it in different ways on different platforms. The code would get messy and really complicated. SDL2 lets us do all of this in a really simple way.

    Rendering context


A rendering context is a structure that keeps track of all of our resources, basically everything we want to put on the screen. It also keeps some state, like which version of OpenGL we are using, and some other stuff. We need a rendering context before we can do any OpenGL stuff. A rendering context is connected to a window ( like SDL_Window ). A context can be connected to one window or several windows, and a window can have several rendering contexts.

An SDL_Renderer is a kind of a rendering context, but SDL_Renderer only supports the SDL2 way of rendering, which is 2D. But now we want 3D, and that’s where OpenGL comes in. SDL2 even has its own rendering context object for this, SDL_GLContext. We’ll see how to create it later.

    Setting it up


    Now let’s try to set up a simple OpenGL application. It won’t be much different from the first SDL2 application we made, the point is just to set up OpenGL.

    Libraries and header files


    First of all, if you haven’t already, you should set up SDL2. You can do this by following my guide.

    Linux / Mac

    If you’re on Linux or Mac, you don’t have to set up anything else. All you need is an extra compilation flag which I’ll show you later.

    Windows

    If you’re on Windows things are a little trickier.

    1. Download the libraries, headers and binaries from the GLEW web page
    2. Put the “glew.h” header file in a folder named “GL” in the same directory as you put the “SDL” folder
    3. Put the “glew32d.lib” file in the directory you place “SDL.lib
    4. In the Visual Studio -> Project Properties -> Linker -> Input add glew32d.lib;opengl32.lib;
      • You also need SDL2.lib like in the guide, so your string should start with glew32d.lib;opengl32.lib;sdl2main.lib;sdl2.lib;
5. Put the .dll in your project folder

    That should be it. If you get the error 0xc000007b you’ve probably mixed up 32 / 64 bits lib or dll files.

    Creating the window


    The first part of the code should look very familiar to that of plain SDL2
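Something like this minimal sketch ( the title, position and size are arbitrary ) :

SDL_Init( SDL_INIT_VIDEO );

SDL_Window* window = SDL_CreateWindow( "OpenGL", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, SDL_WINDOW_OPENGL );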

    In fact, the only new thing here is the SDL_WINDOW_OPENGL which tells SDL2 that we will be using this window for OpenGL and not SDL2.

    Just like with plain SDL2, we end up with a SDL_Window. And now that we have created it, we just need to connect a rendering context to it.

    Setting the variables


    Before we create and connect the rendering context, we’ll set a few variables to tell SDL2 and OpenGL which version of OpenGL we want to use. To do this, we use the function SDL_GL_SetAttribute

    Parameters :

    • attr – the attribute we want to set.
    • value – the value we want to set it to

    For a list of all SDL_GLattrs, click here.

    Return

    0 on success, otherwise negative.

    So now let’s use it to set a few variables :
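A sketch matching the settings we discuss below :

// Use the core profile
SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE );

// Use version 3.2
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 2 );

// Turn on double-buffering
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );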

    Context profile mask


    Here we set SDL_GL_CONTEXT_PROFILE_MASK to SDL_GL_CONTEXT_PROFILE_CORE
This means that the old, deprecated code is disabled ; only the newer functionality can be used.

You can also use this to limit your application to OpenGL ES, which means your code will work on smartphones too. But it also means we’d have less functionality, so we won’t be doing that.

    Context version


Here we set it up so that we use version 3.2 of OpenGL. We could set the number higher to use a newer version, but then your graphics card might not support it. This means we won’t have access to all of OpenGL, but for now, 3.2 is sufficient for our needs.

    Double-buffering


We need to tell OpenGL we want double-buffering, which basically means that we draw to a hidden “screen” ( or buffer. ) When we are done drawing to it, we swap the buffer we drew on with the buffer on the screen so that it becomes visible. Then we start drawing on the buffer we just swapped out ( which is now invisible. ) This way, we never draw directly on the screen, making the game look a lot smoother.

    The buffer/screen we are drawing on is usually called the “back buffer” and the one on the screen is called the “front buffer”

    Connecting a rendering context


    Now that we’ve set up the properties, we need to connect our rendering context. Fortunately, SDL2 makes this really simple, all we need is the SDL_GL_CreateContext method :

    Parameters :

• window – the SDL_Window we want the rendering context to connect to.

    Return

A valid SDL_GLContext on success, otherwise NULL.
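Usage is a single line :

SDL_GLContext context = SDL_GL_CreateContext( window );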

    Initializing GLEW


After initializing SDL2, we need to initialize GLEW so that it can take care of our OpenGL calls. There are two steps to this :
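The first one is a single line :

glewExperimental = GL_TRUE;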

This tells GLEW that we want to use OpenGL 3.0 functionality and later.

Depending on your graphics card driver, some functions might not be available through the standard lookup mechanism. This means that GLEW can’t find them for us, and the application will crash. So there might be functions that exist, are valid and will work, but that aren’t normally available. glewExperimental tells GLEW that we want to use these functions as well.

    A side note : in my experience, this is needed even when using very basic OpenGL stuff, so it’s possible that some graphics card drivers report a lot of functions as experimental resulting in the need for glewExperimental = GL_TRUE
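And the second step is the init call itself :

glewInit();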


    As you probably guessed, this simply initializes GLEW so that it can take care of looking up functions for us. And that’s really all we need as far as GLEW goes.

    Drawing stuff


    Finally, let’s use OpenGL to draw something. I’ll just cover the very basics in this part, more interesting stuff next time!

    OpenGL and colors


For the most part, OpenGL uses float values for colors. So instead of 255 being “max color”, 1.0 is max color. 0.0 means no color and 0.5 means 50 % color ( the same as 255 / 2 ≈ 127 in SDL2. )

    glClearColor


    In order to clear the screen with a single color, we first need to set which colors to clear it with. For that, we can use glClearColor.

    Parameters :

    • red – the amount of red ( 0.0 – 1.0 ).
    • green – the amount of green ( 0.0 – 1.0 ).
    • blue – the amount of blue ( 0.0 – 1.0 ).
    • alpha – the amount of alpha ( 0.0 – 1.0 ).

    If you specify a value higher than 1.0, it’ll be clamped to 1.0 which means that any number higher than 1.0 will be changed to 1.0.

    You can think of this function as the same as

SDL_SetRenderDrawColor( renderer, r, g, b, a )

The parameters are a little different, but both set the color that will be used in the next step.

    glClear


In order to update / fill the screen with the color we set above using glClearColor(), we use glClear().

    Parameters :

• GLbitfield – basically an enum that tells OpenGL what we want to clear. We’ll use GL_COLOR_BUFFER_BIT, which means we want to clear the colors, resetting the screen to the color we set using glClearColor

    You can think of this function as the same as

SDL_RenderClear( renderer );

    SDL_GL_SwapWindow


This function swaps the back buffer ( where we are currently drawing ) with the front buffer ( the one currently on the screen. ) So you could say that this function does the actual double-buffering.

    Parameters :

• window – the SDL_Window we want to swap the buffers on

    You can think of this function as the same as

SDL_RenderPresent( renderer );

    Basically ; it pushes things onto the screen.

    Setting background color example.


    Setting the background color in OpenGL is just as simple as in SDL2.

    In SDL2, you can do something like this :
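A sketch, assuming an SDL_Renderer* named renderer and that we want a red background :

SDL_SetRenderDrawColor( renderer, 255, 0, 0, 255 );
SDL_RenderClear( renderer );
SDL_RenderPresent( renderer );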

    To do the same in OpenGL, you can do :
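Again assuming we want a red background, and using the window from earlier :

glClearColor( 1.0, 0.0, 0.0, 1.0 );
glClear( GL_COLOR_BUFFER_BIT );
SDL_GL_SwapWindow( window );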

    A small example


    Let’s put it all together and make a small example. This example uses the event system in SDL2, so if you’re unfamiliar with that, you should read up on it.

In order to compile on Linux / Mac, you can simply run

    clang++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    or

    g++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    In the application, you can press r, g, b to swap the color

    Conclusion


    Setting up OpenGL with SDL2 is easy! And now that we have it set up, we can do lots of fancy 3D stuff. I have been thinking about writing this for a long time, and I finally got around to it. I really hope you enjoy it and want to learn more about OpenGL. 3D is much more fun than 2D, and I promise things will get more interesting when we get the basics out of the way

    Code attribution


    The code in this post was based on the code from this post


    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me : olevegard@headerphile.com

    [SDL2 – Part 1b] Setting up Visual Studio for SDL2

    Setup Visual Studio for SDL2


    Finally, I’ve gotten around to making a quick guide for setting up Visual Studio for SDL2. This guide also includes a fix that makes it possible to use SDL2 with Visual Studio 2015

    In order to use SDL2 on Windows, you have to set up your IDE to use it. Here’s the guide for how to do that using Visual Studio. The steps are generally the same for all versions of Visual Studio, but there is an issue with Visual Studio 2015

    Visual Studio 2015


They changed a lot in the 2015 version of Visual Studio. This change means that you get a linker error when you try to build an SDL2 project.

    It took me a little trial and error to fix this, but I ended up building the SDLmain from source. You can find it here.

    1. Getting the libs

    You can find the files you need here. For VisualStudio you need to download :

    SDL2-devel-2.x.x-VC.zip (Visual C++ 32/64-bit)

    This includes both the .lib and the .h files.

Or, as mentioned above, if you’re using Visual Studio 2015, you need a .lib file built with Visual Studio 2015. You can either do this yourself, or download the ones I compiled.

    Placing the includes/libs


    Now take all the .h files in include and move them into a folder named SDL2. You can put this folder anywhere you want as long as the folder containing all the .h files is called SDL2. The reason for this is that we use #include <SDL2/SDL.h>

    Do the same for the .lib files. The name of the directory you put them in is irrelevant in this case, just put them somewhere you remember. ( You might have to put other .libs in here at a later point in time )

2. Setting up libs


    Start up Visual Studio, create a new project and add / write a .cpp ( for instance you can use the main.cpp in the first part of the tutorial. )

Now we need to set up Visual Studio so it knows where to find the header files we placed in the step above.

    Right click on the project and click “Properties”


    VS Install 1

    Select C/C++, select “Additional include directories” and click “Edit”


    VS Install 2

Click “New Line”, then navigate to the folder containing the SDL2 folder and click “Select Folder”


    VS Install 3

    You should now see something like this :

    VS Install 4

    Click OK. Now we’re done with the header files, time for the lib files.

Under “Linker”, select “Additional Library Directories”


    VS Install 5

Do the same thing you did for the header files, but this time navigate to the folder containing the .lib files.

    Navigate to “Input” and enter “SDL2main.lib;SDL2.lib;” in front of the others


    VS Install 6

3. Copying .dll files


    The .dll files are needed to run SDL2 applications. When it comes to placing them you have two options :

    In the project directory

    This is the same folder as your .exe file. This means you have to copy them every time you create a new project, which can be a little annoying and easy to forget

    In your Windows system directories

    When Windows looks for dll files, it’ll look in a few standard directories in addition to the directory the .exe file is in. Putting it in one of these means the dll will always be there, and you don’t have to worry about copying.

    The directories are as follows :

    • In x86 this directory is C:/Windows/system32/
    • In x64 this directory is C:/Windows/SysWOW64/ though you might have to place them in System32/ as well.

4. Setting the correct subsystem


You’ll probably also have to set the correct subsystem. Go to Linker -> System and set SubSystem to Console ( /SUBSYSTEM:CONSOLE )

    VS Install 7

    Adding other libs


Now that we have this set up, we can add other SDL2 libs like SDL2_image, SDL2_ttf, etc.. All you have to do is download the Visual Studio libs like before and copy the header files and lib files to the same folders as above. You also need to add the name of the new .lib file to “Input” under “Linker”. And finally you need to copy the new dlls as mentioned above.

    SDL2_image

    You can find the libs here ( download the one with VC in it. ) Add

    SDL2_image.lib

    to Linker / Input

    SDL2_ttf

    You can find the libs here ( download the one with VC in it. ) Add

    SDL2_ttf.lib

    to Linker / Input

    SDL2_net

    You can find the libs here ( download the one with VC in it. ) Add

    SDL2_net.lib

    to Linker / Input


    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me : olevegard@headerphile.com