
[OpenGL part 5] Matrix operations ( translate, scale, rotate )

Introduction to Matrix Operations

In this part, we’ll be looking at basic matrix math: how to create the various transformation matrices we use for rotating, moving and scaling. If you really want, you can skip to the second part, where we look at some code and let glm do the math for us. But I really recommend learning the basics of matrix operations first.


Matrices are basically small tables of numbers. They are used in graphics programming for transforming vectors and objects: we use them to move, rotate and scale things. The math behind them is relatively simple, though some of it can be hard to wrap your mind around at first. And there are several mistakes that are easy to make and very hard to debug, so it’s very useful to know a bit about matrices and how they work.

The basics

In all simplicity, a matrix is just a table of numbers, like the one you see below. With it comes a lot of simple operations we can use to transform objects ( like we saw with the model/view/projection matrices in the last part. ) But even though the idea is simple and the operations are basic ( usually just addition and multiplication ), they can quickly become confusing.

\begin{bmatrix} 1\quad0\quad0\quad0 \\0\quad1\quad0\quad0 \\0\quad0\quad1\quad0\\0\quad0\quad0\quad1\end{bmatrix}

The unit matrix

What you see above is what we call the “unit matrix” ( also known as the identity matrix. ) You can look at it as the base matrix: the idea behind it is that anything you multiply it with will remain the same ( we’ll look at this soon. ) That makes the unit matrix the natural starting point, and it also shows up in a few different, more complex matrix expressions.

Matrix – vector multiplication

The simplest operation we’ll be looking at is multiplying a matrix with a vector. This is quite straightforward, though there will be a lot of numbers to keep track of, so read through it a few times and get comfortable with it before proceeding. The formula for multiplying a 3×3 matrix with a 3d vector is as follows :

Let’s isolate just the top row ( all rows are multiplied in the same way )

We can multiply 4×4 matrices and vertices like this :

Step-by-step guide

  1. Look at first row
    1. Multiply first matrix value on that row with first vector value (x)
    2. Multiply second matrix value on that row with second vector value (y)
    3. Multiply third matrix value on that row with third vector value (z)
    4. Continue until there are no more numbers on that row of the matrix
    5. Add all the numbers together
  2. Repeat for next row until no more rows
  3. Done!
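As a sketch, the recipe above can be written out in C++ ( the Vec4/Mat4 type names and the multiply function are just illustrative, not from any library ) :

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>; // row-major : mat[row][col]

// Multiply a 4x4 matrix with a 4d vector, following the steps above :
// for every row, multiply each matrix value with the matching vector
// value, and add all the products together.
Vec4 multiply(const Mat4& mat, const Vec4& vec)
{
    Vec4 result{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            result[row] += mat[row][col] * vec[col];
    return result;
}
```

Note how each row of the matrix produces exactly one value of the resulting vector.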

You might have noticed that this requires the number of values in each row of the matrix to be the same as the number of values in the vector. If the matrix has 3 values in each row, the vector needs to have 3 values as well. This isn’t really an issue for us, so we won’t be looking at other cases here.

The unit matrix

As mentioned earlier, the unit matrix won’t change anything we multiply by it. We can see this by doing a multiplication with placeholder values for x, y, z and w

So now that we’ve learned how to do multiplication, let’s test it out and see if it’s really true! Let’s try to multiply it by the vector 1.03, -4.2, 9.81, 13 :

As you can see, we end up with the same as we began with, 1.03, -4.2, 9.81, 13.

Matrix – matrix multiplication

Multiplying a matrix with a vector was quite easy. Now let’s make things a tiny bit more difficult and multiply a matrix with a different matrix. This is one of the most important basic operations we need to do. It plays a huge role in moving/scaling/rotating objects. And, as we’ll see in a later part, it’s a very important part of lighting.

Matrix multiplication depends a bit on the sizes of the two matrices, but we’ll simplify things and say that we’ll always be working with square matrices ( 2×2, 3×3, 4×4 ). Firstly, let’s look at the generic formula :

Now this seems a bit complicated, so let’s look at how to calculate just the first number :

[Image: first cell of the matrix multiplication]

As you can see, for the first part of the multiplication we use the first numbers of both matrices ( A11 and B11 ). But for the second number, we use the next number in the same row of matrix A ( A12 ) and the next number in the same column of matrix B ( B21 ). This pattern repeats for the next number on that row like this :

[Image: second cell of the matrix multiplication]

Now we move to the next row and repeat the process :

[Image: third cell of the matrix multiplication]

And finally, the last cell :

[Image: fourth cell of the matrix multiplication]

This can also be extended to a 3×3 matrix like this

This is a lot of numbers, but if you look closely, you’ll see it’s a lot like the previous one, only with an extra number in each row and column. So in this case, for the first number, all we did was add a13 * b31 to the original operation ( which was a11 * b11 + a12 * b21 ). For the second number on the top row we added a13 * b32 to the original operation. The third number on the top row is new, but it follows the same pattern :

For all the numbers on the same row from matrix A

a11, a12, a13

and all the numbers in the same column from matrix B

b13, b23, b33

multiply each pair in the same position and add them together.

a11 * b13 + a12 * b23 + a13 * b33

The unit matrix again

Now let’s try multiplying a matrix with the unit matrix again. We should get the same result as we started with, but let’s see…

Success! This product is exactly the same as we started with!
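The same row-times-column pattern can be sketched in C++ for 3×3 matrices ( the Mat3 type and multiply function are just illustrative names ) :

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>; // row-major : mat[row][col]

// c[row][col] pairs the row from matrix A with the column from matrix B,
// multiplies each pair in the same position and adds the products together,
// exactly like the pattern shown above.
Mat3 multiply(const Mat3& a, const Mat3& b)
{
    Mat3 c{};
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            for (int k = 0; k < 3; ++k)
                c[row][col] += a[row][k] * b[k][col];
    return c;
}
```

Multiplying any matrix with the unit matrix in this function gives back the original matrix, just like in the example above.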

Ordering matters

A very important part of matrix operations is the ordering. With plain numbers, the order of multiplication does not matter: 123 * 321 gives the exact same result as 321 * 123. But when it comes to matrices, this is not true. Let’s look at a very simple example :

But if we flip the ordering…

…we end up with something completely different.

Because of this, it is important to keep track of the ordering, otherwise, you’ll end up spending hours debugging!
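A tiny 2×2 demonstration of this ( the matrices A and B are just example values I picked to make the difference visible ) :

```cpp
#include <array>

using Mat2 = std::array<std::array<double, 2>, 2>; // row-major

// Same multiplication pattern as before, just for 2x2 matrices.
Mat2 multiply(const Mat2& a, const Mat2& b)
{
    Mat2 c{};
    for (int row = 0; row < 2; ++row)
        for (int col = 0; col < 2; ++col)
            for (int k = 0; k < 2; ++k)
                c[row][col] += a[row][k] * b[k][col];
    return c;
}

// Two small matrices that give different products depending on order :
const Mat2 A{{{1, 2}, {3, 4}}};
const Mat2 B{{{0, 1}, {1, 0}}}; // swaps columns when multiplied from the right
// A * B = [[2, 1], [4, 3]]   but   B * A = [[3, 4], [1, 2]]
```

So A * B and B * A are genuinely different matrices, which is why the ordering matters so much.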

Using matrices to change vertexes

Now that we’ve learned the basic math of matrices, it’s time to learn how to use them. We’ll be using them to move, scale and rotate objects. We do this by multiplying each vertex by a matrix, which gives us back a new vertex that has been moved/scaled/rotated. After doing this to all the vertexes of the object, the entire object has been moved/scaled/rotated. Creating these matrices is quite easy, though there will be a few numbers to keep track of. Let’s look at the operations one by one.

Moving (translating)

The matrix we use for moving an object is quite simple. It is defined like this :

\begin{bmatrix} 1 & 0 & 0 & dx \\ 0 & 1 & 0 & dy \\ 0 & 0 & 1 & dz \\ 0 & 0 & 0 & 1 \end{bmatrix}

Where dx is movement in the x direction, dy is movement in the y direction and dz is movement in the z direction. So, for example, this matrix :

\begin{bmatrix} 1 & 0 & 0 & 9.2 \\ 0 & 1 & 0 & 1.2 \\ 0 & 0 & 1 & -3.7 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Will move the object 9.2 in the x direction, 1.2 in the y direction and -3.7 in the z direction.

Now this might look a bit familiar. Let’s compare it to the unit matrix :

\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

That’s right. It’s the same except the dx, dy and dz part. This will come into effect when we do the actual translation.

Since we are using a 4×4 matrix, it is easier to use a 4d vector. But that raises a new question : what about the last value? The x, y and z are, of course, the position. But there is a final number we haven’t cared about yet. As it turns out, this has to be 1, and we’ll find out why now.

Let’s look at an example. Say we have the vector [11, 1.5, -43]. First we need to add the last component, 1, so we end up with :


Now for the translation matrix. Let’s use the one from above, which will move the object 9.2 in the x direction, 1.2 in the y direction and -3.7 in the z direction.

\begin{bmatrix} 1 & 0 & 0 & 9.2 \\ 0 & 1 & 0 & 1.2 \\ 0 & 0 & 1 & -3.7 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Finally, we can try the translation. Translating an object is simply multiplying the vertex with the translation matrix :

This might seem like a bit of an overcomplication. Why not just add the numbers? Just adding would be practical if all we ever did was move the object. But we can also do other things, like scaling and rotating, and by using a matrix we can combine all of these into a single operation. So let’s look at the next operation, scaling.

Making things bigger or smaller (scaling)

The second operation we’ll look at is how to make objects larger or smaller. This is quite similar to translating objects. For scaling we have the base matrix:

\begin{bmatrix} sx & 0 & 0 & 0 \\ 0 & sy & 0 & 0 \\ 0 & 0 & sz & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

And just like with translation matrices, we multiply our vector with this matrix to get the scaled vector back.

Here sx, sy, sz are the scale factors, which are the numbers we need to multiply with in order to get the result :

  • If you don’t want to scale at all, you set the scale factor to 1.
  • If you want to double the size, you use a scale factor of 2, for tripling you use 3, etc…
  • If you want to make it 50% larger, you use a scale factor of 1.5, for 25% larger the scale factor is 1.25, etc…
  • If you want to halve it, you use a scale factor of 0.5, to make it 75% smaller the scale factor is 0.25, etc…

Let’s first look at an example:

Say we have the vertex [2.1, 3.4, -9.5] and we want to scale it like the following :

  • Make it 70% smaller in x direction
    • Scale factor becomes 1.0 - 0.7 = 0.3
  • Make it 80% larger in y direction
    • Scale factor becomes 1.8
  • Triple the size in the z direction
    • Scale factor becomes 3.0

This gives us the scale factors [0.3, 1.8, 3.0] and the vertex [2.1, 3.4, -9.5]. Let’s plug these into the matrix operation :

This gives us the new vertex [0.63, 6.12, -28.5]… which tells us that the vertex has been moved :

  • Closer to the center in the x direction ( because the object gets smaller in the x direction )
  • A little further away from the center in the y direction ( the object gets larger in the y direction )
  • A lot further away from the center in the z direction ( the object gets a lot larger in the z direction )

And if we apply this to all the vertices in an object, we find that the center of the object remains the same. So we’re not actually moving the object, we’re just moving the individual vertices closer to or further away from the center.


    Rotating

    Now this is where things get a little complicated. We need to transform each vertex using numbers calculated with sin and cos. The formula for calculating the rotated x and y is as follows :

    x2 = cos β * x1 − sin β * y1
    y2 = sin β * x1 + cos β * y1

    I won’t go into details about why this formula works, but you can read about it here.
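The 2d formula above can be sketched directly in C++ ( the function name rotate2d is just illustrative ) :

```cpp
#include <cmath>
#include <utility>

// Rotate the 2d point (x1, y1) by beta radians around the origin,
// using the formula above.
std::pair<double, double> rotate2d(double x1, double y1, double beta)
{
    return { std::cos(beta) * x1 - std::sin(beta) * y1,
             std::sin(beta) * x1 + std::cos(beta) * y1 };
}
```

For example, rotating the point (1, 0) by 90° ( π/2 radians ) moves it to (0, 1), a quarter turn counter-clockwise.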

    Specifying axis

    In order to rotate a 3d object, we need an axis to rotate it around. Take a look at the dice below :

    It is laid out like the following :

    Now imagine we want to rotate it so that we see other numbers. In order to do this, we need an axis to rotate it around. Imagine we stick a toothpick through the dice from 5 to 6 like the following :

    Now we can rotate the dice 90° down and we end up with something like this :

    [Note: If anyone has any tips or can in any way help me improve these illustrations, it’d be much appreciated]

    The math

    When it comes to the actual math, it’s a bit more complicated. I won’t be explaining where we get the matrices for rotation, but if you’re interested, you can read more about it here.

    Like with translating and scaling, we use a matrix to do the rotation. But the matrix itself is a bit more complex, and it’s a little different depending on which axis you rotate around :

    For X axis

    \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos θ & \sin θ & 0 \\ 0 & -\sin θ & \cos θ & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    For Y axis

    \begin{bmatrix} \cos θ & 0 & -\sin θ & 0 \\ 0 & 1 & 0 & 0 \\ \sin θ & 0 & \cos θ & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    For Z axis

    \begin{bmatrix} \cos θ & -\sin θ & 0 & 0 \\ \sin θ & \cos θ & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    Why are they so different

    The reason why they are different becomes clearer if we compare each of them with the unit matrix

    \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    You’ll see that the formula for rotating around the x axis :

    \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos θ & \sin θ & 0 \\ 0 & -\sin θ & \cos θ & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    has the same first column and row [1, 0, 0, 0] as the unit matrix. If you look at how matrices are multiplied, you’ll see that this means the final x coordinate won’t change.

    And if you look at the matrices for rotating around y and z, you’ll see the same. The y rotation matrix has the same second column and row as the unit matrix [ 0, 1, 0, 0 ], and the one for the z axis has the same third column and row as the unit matrix [ 0, 0, 1, 0 ]. This means that rotating around the z axis doesn’t change the z coordinate, and rotating around the y axis doesn’t change the y coordinate.

    Imagine putting a dice on a table. Now turn the dice clockwise or counter-clockwise without lifting the dice in any way. If you define the z axis to be the height above the table, you’re now rotating the dice around the Z axis. And since you’re not lifting it, the z coordinate remains the same.


    Let’s make a matrix for rotating the point [2, 4, 8] by 30 degrees around the x axis.

    As you can see, the y and z coordinates have changed, but the x coordinate is the same. This is due to how matrix multiplication works.
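A sketch of the x-axis rotation from above, written out directly ( the function name rotateX is just illustrative ) :

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Rotate around the x axis using the matrix above :
// x stays the same, y and z get mixed together by cos and sin.
Vec3 rotateX(const Vec3& v, double theta)
{
    return { v[0],
             std::cos(theta) * v[1] + std::sin(theta) * v[2],
            -std::sin(theta) * v[1] + std::cos(theta) * v[2] };
}
```

For the point [2, 4, 8] and θ = 30° ( π/6 radians ), x stays exactly 2 while y becomes roughly 7.46 and z roughly 4.93.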

    Other axes

    You might wonder : what if I want to rotate the object around a combination of two or three axes? Well, that’s a bit more complex, and I won’t go into the math here. But we’ll see how we can use glm to specify an exact axis of rotation below.

    Putting it all together

    Before we look at how to do these operations using code, we need to look at how to do them by hand. Or by online matrix calculators in this case… But why? As I mentioned earlier, ordering is important here. Do things in the wrong order, and you get weird results.

    In the previous post, we looked at object space, world space, view/camera space and projection space. Let’s skip the last two for now and focus on the object and world space.

    Remember that object space is basically the model represented as a set of coordinates around the origin [0, 0, 0], and that world space is the position of the object in the game world. So if the object has moved 10 units to the right, it’ll have the position [10, 0, 0], which means we have to move it there. This is where the translation matrix comes in! The object could also have turned around ( rotated ) and grown ( scaled ). Since the object is defined in object space ( vectors around [0, 0, 0] ) and this will never change, we need to move/scale/rotate the object every time. So we need to multiply every coordinate with this matrix in order to place/scale/rotate it correctly.

    Luckily, we can just multiply the matrices together and reuse the result until the object moves. But this is also where we need to be careful about getting the ordering right. Let’s start by moving and scaling.

    Example – Wrong way

    Say we want to scale by a factor of 2 in every direction and move 3 units in every direction. Remember that the scale matrix looks like this :

    \begin{bmatrix} sx & 0 & 0 & 0 \\ 0 & sy & 0 & 0 \\ 0 & 0 & sz & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    Filling in numbers :

    \begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    And translation matrix

    \begin{bmatrix} 1 & 0 & 0 & dx \\ 0 & 1 & 0 & dy \\ 0 & 0 & 1 & dz \\ 0 & 0 & 0 & 1 \end{bmatrix}

    Filling in the numbers, we get :

    \begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{bmatrix}

    Now let’s multiply them :

    Let’s analyse this. Looking at the scale numbers, we see 2, 2, 2 as we expected. But when we look at the translation, we see 6, 6, 6! That’s wrong! We wanted 3, 3, 3, not 6, 6, 6!

    The reason this happens is that we put the scale matrix first, when we should have started with the translation. So let’s reverse the order of the operations and try again

    That’s more like it. We see that we move by 3 and scale by a factor of 2.

    When we add rotation, we can run into the same kind of problem. Rotation always happens around the origin ( [0, 0, 0] ). So if the object is first moved and then rotated, it won’t spin in place : since it has already been moved away from the origin, it’ll orbit the origin ( much like a planet ) instead.

    The correct order

    In our example ( and in most cases ) we want the operations applied in the order scale, then rotate, then translate. But matrix operations are written in the opposite order of how they are applied. So in the combined matrix you start with the last thing you want to happen ( translate ) and end with the first ( scale ).
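The wrong-way/right-way example above can be checked with a small matrix–matrix multiplication sketch ( S and T are the scale and translation matrices from the example; all the names are just illustrative ) :

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>; // row-major

// Same matrix-matrix multiplication pattern as earlier, for 4x4 matrices.
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 c{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                c[row][col] += a[row][k] * b[k][col];
    return c;
}

// Uniform scale by 2 and translation by (3, 3, 3), as in the example.
const Mat4 S{{ {2, 0, 0, 0}, {0, 2, 0, 0}, {0, 0, 2, 0}, {0, 0, 0, 1} }};
const Mat4 T{{ {1, 0, 0, 3}, {0, 1, 0, 3}, {0, 0, 1, 3}, {0, 0, 0, 1} }};
// multiply(S, T) ends up with 6 in the translation column ( the wrong way ),
// multiply(T, S) keeps the translation at 3 ( the right way ).
```

So scale-first scales the translation along with everything else, which is exactly the 6, 6, 6 problem from above.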

    Using glm to do matrix operations

    Luckily, we don’t have to do all of this ourselves. In fact, glm does nearly all the math for us, including rotation ( fortunately ). All of these functions take a glm 4×4 matrix, called mat4, which is basically just a 4×4 array representing a matrix.

    You can find the documentation here.


    Takes a matrix, translates it and returns it.

    Parameters :

    • glm::mat4 original – the matrix you want to translate
    • glm::vec3 dist – the distance to move

    Return :

    The original matrix original translated by dist, like we looked at earlier


    Takes a matrix, scales it and returns it

    Parameters :

    • glm::mat4 original – the matrix you want to scale
    • glm::vec3 scale – the factors to scale by

    Return :

    The original matrix original scaled by scale, like we looked at earlier


    Takes a matrix and rotates it around an axis and returns it

    Parameters :

    • glm::mat4 original – the matrix you want to rotate
    • double angle – the amount/angle you want to rotate by ( radians )
    • glm::vec3 axis – the axis to rotate by

    Return :

    The original matrix original rotated by angle around axis, like we looked at earlier

    Putting it all together

    Now that we have looked at the functions, we can easily put them all together.

    This is a simple class that shows how you can use all the operations we’ve looked at.

    File notes

    You can find the source code for an application that lets you move/scale/rotate a cube here.

    Images with colored matrix/vector multiplications have been made using

    Dice illustration has been made using Inkscape

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [OpenGL Part 4] 3D Basics


    So far we’ve only been dealing with 2D objects, and we’ve also had a look at shaders. But everything has been in two dimensions; now it’s time to make the step into the 3D world. In order to do this, there are a few things we need to do. The main thing is moving the object to the correct “space”. The spaces are an important, yet confusing, part of 3D rendering. Think of each of them as a separate coordinate system or a separate world. Any object has a coordinate in all of the spaces.

    Object space

    In this part we’ll be making a cube and rendering it. To make our cube we need 8 coordinates which make up the 8 corners of the cube :

    The center of the object is often [0, 0, 0], and each of the vectors is usually something close to zero. In our case, the cube has one corner that’s [-1, -1, -1] and another one that’s [1, 1, 1] and so on…


    So basically, these are the coordinates that describe how the model looks.

    World space

    Let’s look at the cube again. It needs to have a position that says where it is in the game world, so that it will appear in one specific spot. If you’ve ever used a map editing program, you can see the exact position of every object; these are the world space coordinates. When programming gameplay elements like collision detection, this is the coordinate system we’ll be using. The idea behind it is that every object has its own position in world space, and that position is the same no matter how you look at it.


    This is an example of a world space that has a cube close to the center and a player to the left of it. The example is for 2D worlds for simplicity, but it would be exactly the same in 3D, only with an extra dimension.

    View space / camera space

    Whereas the world space location is universal and the same for everyone, the view/camera space is different. It basically tells where the objects are in relation to the player and where the player is looking. It is similar to pointing a camera at an object : the center of the image has the position [0, 0, 0], and every other coordinate is defined around that. These are known as camera or view space coordinates.


    Compare the previous image with this one. In the previous image, the cube ( [-1, -1] ) is to the left of and behind the player ( [-2, 0] ). That’s how the world space looks from above. But if you look at it from the view space of the player, the player is in the center, and the cube ( which is still at [-1, -1] in world space ) is to the right. Note that neither the object nor the player has moved in the world. All we did was look at it with the player as the center instead of the center of the world.

    Another thing about camera space is that it’s relative to the direction the player or camera is facing. So imagine the player is looking along the x axis ( towards the world space center ), and then starts rotating right. Soon he’ll see the object, and since he’s rotating right, he’ll see the object moving to his left. Now imagine him stopping : what he sees is the world in his own view space. Another player, at another location, looking at another point, would see the world in his own view space.

    This might be a bit confusing, but it’ll get clearer soon.

    Projection space

    Finally, we have the projection space. This one is a little different : it describes the final position on the screen that the vertex will have. Unlike the other spaces, this is always a 2D coordinate, because the screen is a 2D surface. You can look at it like the lens of the camera : the camera looks at a 3D world, and the lens enables it to create a 2D image. In other words, it’s the 2d version of the view space. When the camera looks at an object, it sees the view space, but what ends up on the screen is in 2d, and that is what we refer to as the projection space.

    Just like cameras can have different lenses, there are different ways of converting camera space coordinates to projection space. We will look at this later, when we look at how to convert from space to space.

    An illustration of view and projection space

    Below is an illustration of the view and projection space. Hopefully it’ll help make things clearer :

    View space and projection space

    The big pyramid is the view space. It’s all that we can see. In this case it’s just three cubes.

    The 2d plane with the 3 cubes represented in 2d is the projection space. As you can see, it shows the same scene as the view space, only in 2d.


    In order to transform the vectors from one space to another, we use a matrix ( plural : matrices ). A matrix can be used to change an object in various ways, including moving, rotating and scaling. A matrix is a 2 dimensional mathematical structure, quite similar to a table :

    \begin{bmatrix} 1\quad0\quad0\quad0 \\0\quad1\quad0\quad0 \\0\quad0\quad1\quad0\\0\quad0\quad0\quad1\end{bmatrix}

    This is what’s called an identity matrix. You can look at it like a skeleton or an “empty” matrix : it won’t change the object at all. So when we initialize a matrix, this is what we initialize it to.

    If we had initialized it to just 0 for all values, it would have changed the object. We’ll look into the math involved for matrices in the next part. For now, just remember that an identity matrix is a default matrix that doesn’t change the object it’s used on.

    Instead we’ll look at how to work with matrices. And for that purpose, we use glm.


    In order to do graphics programming, we will eventually need to do more mathematics involving vectors and matrices. We really don’t want to do this manually, because there are lots of operations we’d have to implement ourselves. Instead, we’ll use a tried and tested library that does the mathematical operations for us : glm, or OpenGL Mathematics, a library made for doing the math of graphics programming. It’s widely used and does just about everything we need. It is also 100% platform independent, so we can use it on Linux, Windows and Mac.


    The libraries we have been dealing with up until now have required both header files and library files. glm, however, only requires header files. This makes installation very easy, even on Windows.

    Linux + Mac ( the automatic way)

    Both Linux and Mac might have glm available from the package manager. If that’s the case, the process is the same as for SDL : just open the terminal and install glm like you would any other program or package. If the package is not found, we need to “install” it ourselves.

    Windows + ( Linux and Mac the slightly harder way)

    If you’re on Windows ( or Linux / Mac and the first step didn’t work, ) we need to install the library ourselves. Fortunately this is relatively easy.


    The first step is to download glm. You can do that here. Scroll to the bottom and download the format you want ( .zip or .7z ). If you have a tool for dealing with package files, you should have no problems extracting it. Windows has built-in support for .zip, so choose this if you’re unsure. If none of the options work, you can install WinRAR or 7-Zip.


    Now extract the package anywhere you want and open the folder. You should find another folder named glm. In it there should be a lot of .hpp files ( think of these as your regular header ( .h ) files. )

    For Windows :
    Take the folder named glm ( the one containing the .hpp files ) and copy it to where you put the SDL2 header files, so that the location now contains both the SDL2 header file folder and the glm header file folder. Once that’s done, you should be able to use it directly ( since we’ve already specified the folder with all our includes. )

    For Linux and Mac:
    Take the folder named glm ( the one containing the .hpp files ) and copy it to /usr/include/ so that you end up with a folder called /usr/include/glm/ that contains all the glm header files.

    Since this is a system directory, you won’t be able to put them here the regular way. But there are a few options.

    If your file browser has a root mode, you can use that ( just be careful! )
    If you can’t find it, you need to use the terminal ( after all, you are on Linux! )

    You can use the cp command to do this :

    Most likely you can do it like this
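Assuming your terminal is currently in the folder that contains the extracted glm folder, the command would look like this ( a sketch; adjust the path to wherever your glm folder actually is ) :

```shell
# Copy the glm folder ( the one with the .hpp files ) into /usr/include/,
# including all its sub folders ( -r ) :
sudo cp -r glm /usr/include/
```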

    What does this do?

    sudo is short for “Super User DO”. This is needed because it’s a system folder; sudo basically tells the operating system “I know what I’m doing”. Use it with caution!

    The cp is the terminal command for copying.

    The -r option is short for recursive; it makes the cp command also copy all the sub folders and their files ( without it, it’d only copy the files directly inside the glm folder and ignore all sub folders ).


    In order to make sure you got it right, run the command sudo ls /usr/include/glm. It should now list the .hpp files, just like in the folder we looked at earlier.

    ( Please tell me if this doesn’t work on Mac, I haven’t been able to test it there yet… )

    We can now include them in the same way as the SDL2 header files : #include <glm/vec4.hpp>. And since glm only uses header files, we don’t need to change our compile command!

    Using glm to do matrix operations

    Using the OpenGL Mathematics library ( glm ) is quite easy. There are just a few simple functions we need to do what we want.

    First of all, it’s just a mathematics library, so there’s no initialization code. That means we can jump straight to the mathematical functions.

    Matrices and vertices

    Fundamentally, vertices and matrices are very simple constructions in glm. They’re just arrays with one element for each value. So a 3d vector has 3 elements, a 4d vector has 4, and so on. And similarly for matrices.

    A 3×3 matrix has 9 elements, arranged in a 2d array like so : float matr33[3][3]. Similarly, a 4×4 matrix has 16 values and can look like this : float matr4[4][4]. glm uses float types instead of double, but this can be changed if you want to.

    Let’s have a look at the various functions we can use with the vectors in glm

    Creating a vector

    The vector object in glm has several constructors, but we’re just gonna look at the simplest one :

    This will set all the values of the vector to value. So

    gives you the vector [1.3, 1.3, 1.3, 1.3]

    Creating an identity matrix

    When it comes to matrices, we will be dealing with several different types. The simplest is the identity matrix ( like the one we saw above ). There are two simple ways of making one :

    Or for 3×3 matrices :

    Both of these produce an identity matrix, which you can look at as a default value for matrices. It can also be used for resetting a matrix.

    translation matrix

    In addition to the identity matrix, we’ll be looking at translation matrices. A translation matrix is used to move an object by a certain amount. Remember, above, when talking about world space, we saw that each object needs its own position in the world? This is what the translation matrix is for : we use it to move a single object to the position it’ll have in world space. Every object in your game world needs to be moved to a position in world space, and to move it we use a translation matrix.

    In addition to translating ( or moving ) an object, we can also scale and rotate it. The combination of all the operations that work on a single object is called the model matrix. We’ll be using the name model matrix, but we’ll be looking at rotating and scaling in a later post.


    Here is how we use glm to create a translation matrix :

    Parameters :

    • glm::vec3 d – the distance to move

    The vec3 vector specifies the distance to move in each direction. So, for instance, [ 1, 0, -2 ] creates a matrix that can move an object

    • 1 unit in x direction
    • 0 units in y direction
    • -2 units in z direction

    If you specify the vector [ 0, 0, 0 ], you’ll end up with a matrix that doesn’t translate the object at all, nor change it in any way. So in effect, you’ll end up with just an identity matrix.

    Let’s look at a very simple example on how to create a translation matrix :

    So how do we use it? Well that’s a bit more complicated so we’ll look at this later in the post.

    view matrix

    Now that we’ve placed the object in world space, we need to place it in the camera/view space. This is a bit more tricky because we need to set both position and where the camera is pointing.

It also has what’s called an up vector. This is used to set which direction is up for the camera. We’ll just leave it at [ 0, -1, 0 ], which gives us the same flipped y axis as SDL2. Since we won’t use it, it’s not something you need to read up on. But if you want to know more about it, check out the spoiler text

    The up vector

Think of it as how the camera itself is rotated. For instance, the camera could be turned up and down, or tilted to the side. Doing so would also change how the coordinate system works, which is logical. If you turn the camera upside down, positive x would be towards the left and negative towards the right!

    A possible use for this is if the player is hanging upside down. Then you could just change the up vector, which would rotate everything the player sees.


    Parameters :

    • glm::vec3 position – the position of the camera
    • glm::vec3 center – the point the camera is looking at ( first paragraph )
    • glm::vec3 up – the tilt of the camera ( second paragraph )

    Here’s the setup we’ll use :

    • position = [ 0, 0, -5 ]
      • x and y are at the center, z is 5 units backwards
    • center = [ 0, 0, 0 ]
      • Looking straight ahead
    • up = [ 0, -1, 0 ]
      • Upside-down y axis, same as in SDL2

    And here’s the code for creating that matrix :
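A sketch of the call, assuming glm is available ( the arguments follow the setup listed above ) :

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 view = glm::lookAt(
    glm::vec3( 0.0f,  0.0f, -5.0f ), // position of the camera
    glm::vec3( 0.0f,  0.0f,  0.0f ), // the point the camera looks at
    glm::vec3( 0.0f, -1.0f,  0.0f )  // the up vector
);
```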

When we’re rendering the scene, we’ll multiply all vertexes by this matrix. When we do that, everything is moved into view space, and [ 0, 0, 0 ] is the point the camera is looking at.

    projection matrix

    The view matrix dictates the position and orientation of the game camera ( what will end up on screen. ) But there is another matrix we need in order to do 3d, the projection matrix.

Just like a camera can have many different lenses, a projection matrix can be set up in different ways.

    Parameters :

    • float const &fov – the field of view ( how far out to the sides the player can see )
    • float const &aspect – same as screen formats ( 16:9, 16:10, 3:4, etc… ) Changing this will stretch the image.
    • float const &near – the closest to camera something can be. Anything closer will be cut off.
    • float const &far – the furthest away something can be. Anything further away will be cut off.

The fov parameter is said to be specified in degrees, and that’s what we’ll use. But it seems some have issues with glm wanting radians instead. Radians are just an alternative to degrees. You can read more about them here. So if degrees don't work, you can try specifying 3.14 * 0.25 ≈ 0.785 for 45º.
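If you need the degrees-to-radians conversion yourself, it's a one-liner ( glm also provides glm::radians for this ) :

```cpp
#include <cmath>

// Convert a field of view given in degrees to radians
float toRadians(float degrees)
{
    const float pi = 3.14159265f;
    return degrees * pi / 180.0f;
}
```

toRadians( 45.0f ) gives roughly 0.785, a quarter of pi.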

    Tip : if you own Minecraft you can experiment with this by going to options and changing the fov there!

The near and far arguments cut off whatever is closer than near or further away than far. It doesn’t cut off whole vertexes, just the pixels that are not between near and far

    So, even though there are a few parameters here, they are relatively easy to comprehend. We’ll look more into how the actual matrix looks and what different types of projection matrices we can make ( yes, there are others ) in a later post.

    Let’s take a look at a simple example
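Here's a hand-rolled sketch of the matrix glm::perspective builds, using the standard OpenGL perspective formula ( note that glm stores matrices column-major, so its indices are flipped compared to this row-major sketch ) :

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;

// The standard OpenGL perspective matrix, written row-major here
Mat4 makePerspective(float fovRadians, float aspect, float zNear, float zFar)
{
    float f = 1.0f / std::tan(fovRadians / 2.0f);

    Mat4 m{};
    m[0][0] = f / aspect;                             // scale x by fov and aspect
    m[1][1] = f;                                      // scale y by fov
    m[2][2] = (zFar + zNear) / (zNear - zFar);        // map z into clip space
    m[2][3] = (2.0f * zFar * zNear) / (zNear - zFar);
    m[3][2] = -1.0f;                                  // sets up the perspective divide
    return m;
}
```

With glm you'd simply write something like glm::perspective( 45.0f, 16.0f / 9.0f, 0.1f, 100.0f ).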

    Combining them

    We can combine all the matrices into one, so that we only have to multiply each of the vertexes by one matrix instead of all three. But when doing matrix operations it’s important to notice that the operations are not commutative. That means that the order in which you multiply them matters. This is in some cases very useful, but it can also lead to weird behavior.

With the column vectors we're using, the order is right to left : the matrix closest to the vector is applied first. So the thing you want to happen first should be the last part of your multiplication chain. Let’s assume we have three matrices. One for moving, one for scaling and one for rotating. If you wanted to scale, then rotate, then move, you’d do mat = move * rotate * scale.

The transition between spaces follows the same rule. The model matrix has to be applied first, then view, then projection, so we multiply projection * view * model.

I won’t go into the details of why ( it would take too long ) but it’s important to remember this for later.
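A small sketch demonstrating that the multiplication order matters ( all helpers are written out by hand here, just for this example ) :

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

Mat4 identity()
{
    Mat4 m{};
    for (int i = 0; i < 4; ++i)
        m[i][i] = 1.0f;
    return m;
}

// Uniform scale on x, y and z
Mat4 makeScale(float s)
{
    Mat4 m = identity();
    m[0][0] = m[1][1] = m[2][2] = s;
    return m;
}

// Translation distances in the last column
Mat4 makeTranslation(float dx, float dy, float dz)
{
    Mat4 m = identity();
    m[0][3] = dx;
    m[1][3] = dy;
    m[2][3] = dz;
    return m;
}

// Matrix * matrix
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Matrix * vector
Vec4 transform(const Mat4& m, const Vec4& v)
{
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}
```

Starting from the point [ 1, 0, 0 ], move * scale first doubles it and then moves it 3 units ( x becomes 5 ), while scale * move first moves it and then doubles everything ( x becomes 8 ). The matrix closest to the vector wins the race.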

    I use the name modelViewProjection because that’s the most common name for this matrix. It is also sometimes shortened to mvp

    Shaders and uniforms

Now that we know the basics of the matrices, we can finally have a look at how to use them to move objects, do projection and get something 3d on the screen. To do this, we must pass our matrix on to the shader. And this is where the benefit of having just one comes in : we only have to send one matrix for each object we are rendering, which means we send less to the GPU and the rendering will be faster.

Uniforms are global variables within the shader. A uniform keeps the same value for every vertex and fragment in a rendering call, and you can’t change it from inside your shader at all. Doing so would cause a compilation error. This is very practical, because if there is an issue with a uniform, we know it’s being changed somewhere in our main source code.

    ID of a uniform

In order to change a uniform in a shader from the source code, we need something to refer to it by. In OpenGL, your uniforms automatically get an ID. This is usually just the order in which you declare them. But this raises a different issue : we declare a uniform in the shader, and now we need to change it from our source code. How do we get the ID? By using the function glGetUniformLocation. Here’s what it looks like :

    Parameters :

    • GLuint program – The id of the shader program
    • const GLchar *name – The name of the variable in the shader

    So if we have a shader that looks like this :

    We can get the value like this :
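A sketch of how the two fit together, assuming a uniform named mvp and a program id called programId ( both names are just examples ) :

```cpp
// In the shader :
//     uniform mat4 mvp;

// In the main source code, after the program has been linked :
GLint mvpLocation = glGetUniformLocation( programId, "mvp" );
```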

    Quite simple. And now that we have the ID, we can move on to the next step

    Changing a uniform

A uniform can be any type of variable : a bool, an int, a float. It can also be an array type like vectors and matrices, and even user-defined structs! No matter the type, there is a group of functions we use for setting it : glUniform*. We’ll go into more detail about the ones for single values and vectors in a later part. Instead we’ll jump straight into the ones for setting matrices.


The function for setting a matrix in OpenGL is glUniformMatrix*. There are a lot of varieties of this depending on the type ( whether the individual values are floats or doubles ) and size ( 4×4, 3×3, 2×3, 4×3, … ). To make this part shorter, we’ll only be focusing on the one we’ll actually be using : glUniformMatrix4fv

    Parameters :

    • GLint location – The location of the matrix ( the result of glGetUniformLocation )
    • GLsizei count – The number of matrices. In most cases this will be 1
    • GLboolean transpose – Whether OpenGL should transpose the matrix. See below
    • const GLfloat *value – The actual values of the matrix as an array

    Matrix transpose

OpenGL expects the matrix in a specific way. But in some cases, we might have the matrix transposed ( or “rotated” ). So instead of :

    \begin{bmatrix} 1\quad3\quad5\quad7 \\2\quad4\quad6\quad8 \\3\quad4\quad5\quad6\\4\quad6\quad8\quad9\end{bmatrix}

    It might look like this :

    \begin{bmatrix} 1\quad2\quad3\quad4 \\3\quad4\quad4\quad6 \\5\quad6\quad5\quad8\\7\quad8\quad6\quad9\end{bmatrix}

The flag tells OpenGL that it needs to transpose the matrix first. Note : Some versions of OpenGL do not support this operation. In those cases the parameter must be set to false. This applies to OpenGL for mobile devices, OpenGL ES

    And now for an example

    In the vertex shader

    And in your main source code :
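A sketch of the full round trip, assuming the names mvpLocation and mvp from the explanation above, and glm's value_ptr helper ( from glm/gtc/type_ptr.hpp ) to get at the raw float array :

```cpp
#include <glm/gtc/type_ptr.hpp> // for glm::value_ptr

// mvp is our combined glm::mat4, mvpLocation comes
// from glGetUniformLocation as shown earlier
glUniformMatrix4fv( mvpLocation,           // which uniform to set
                    1,                     // one matrix
                    GL_FALSE,              // no transposing
                    glm::value_ptr( mvp )  // pointer to the 16 floats
);
```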

    Quite simple, and now the matrix is set and we can ( finally ) render 3d!

    The results

    So now after all that work, let’s see what we ended up with….


    What?! That’s not right, the colors are weird and the front is missing…

    The depth buffer

Remember from my previous part where I talked about the last step of the rendering pipeline, per-sample operations?* I mentioned the depth test and how it determines if something is obscured / invisible and should not be rendered. What you see above is the consequence of not enabling the depth test.

    ( * : It was in there, but I forgot to mention it also checks if something is covered by another object. Sorry about that! )

    Let’s take a close look at what’s happening, but this time we render one side of the cube at a time :


    This is the front side of the cube. So far it all looks good!

    Let’s draw the bottom…


    Here’s the back and the bottom, and here’s where it goes wrong. The front should cover the bottom. But here the bottom is covering the front. This is because we don’t have depth test enabled so OpenGL just draws the bottom on top of it.

    Let’s look at the next steps and see what happens


Here we’ve added the next side. If you compare with the initial image, this is what gets covered up. But why just this?

    Let’s render the next triangle


    Here we’ve added half of the front. From this we can see that it is covering the bottom and right sides.

    Let’s render the next triangle ( the second half of the front )


    It covers up everything. This is because it’s the last thing we drew, so it gets drawn last, on top of everything.

    And if we now draw the sides…


… we end up with what we saw earlier. The back gets drawn and covers everything. Then the top and left sides get drawn on top of that.

    Enabling depth test

    Now let’s look at this with the depth test enabled


Here we’ve drawn everything, including the front. It completely covers the cube. It might seem wrong ; all we can see is a blue square! But if we just move it a little…


    .. we see that it actually IS a 3D object! Finally!

    The depth test

    So how does all of this work? It’s quite simple. You can tell OpenGL to create a buffer ( think of it as a 2d array ) that has one value per pixel that says something about how far that pixel is from the camera.

Each time you draw something, OpenGL checks the value for that pixel in the buffer. This way, OpenGL can determine if what you’re trying to draw is closer to the camera than what’s already there. If it is, the new pixel will be drawn instead of the old one, and the distance value in the buffer will be updated. That way the buffer always contains the distance of the closest pixel that has been drawn so far.


Here’s how it worked when we drew the front ( blue ) over the rest. For every pixel it compares the previous value ( left ) with the current value ( right ). In this case, the blue one is closer and is drawn over the yellow. This happens for every single pixel we try to draw. Luckily, OpenGL is pretty quick at this.

    How to enable it

    Enabling it is quite simple. There are two functions we need for that :

    Parameters :

    • GLenum cap – the OpenGL capability we want to enable. In our case it’s GL_DEPTH_TEST.

    This basic function is used for turning OpenGL features on. You can see a full list of the possible values here.

    Setting the depth function

We also need to set how the depth comparison works :

    Parameters :

    • GLenum func – the function to use

    Here we tell OpenGL what function to use for depth testing. We will be using GL_LEQUAL, you can find more information about it and the others here.

    Clearing the depth buffer

Finally, we need to tell OpenGL to clear the depth buffer for us. This is so that we can start with a clean slate every time we render. Without it, the depth test could fail because of leftover values, making OpenGL not render something that should have been rendered. We’ll be doing this in our glClear function :

    The | character is a way of combining the two values so that we clear GL_DEPTH_BUFFER_BIT and GL_COLOR_BUFFER_BIT in one call.
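Put together, the depth-test setup described above looks something like this :

```cpp
// Once, during initialization :
glEnable( GL_DEPTH_TEST ); // turn the depth test on
glDepthFunc( GL_LEQUAL );  // pass if the new pixel is at least as close

// Every frame, before rendering :
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
```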

    The source code

    For the source code, I’ve taken the liberty of organizing it a little. I made a helper class for shaders, one for rendering in general, and one for dealing with models and their model matrix. In addition, I included a class I use for input ( that uses SDL2. )


    In charge of most rendering functionality :

    • Creating and initializing windows
    • Setting up OpenGL options
    • Setting up view and projection matrices


    A class that keeps track of Shaders.

    • Can keep track of one shader of each type ( vertex, geometry, tesselation, fragment. )
    • Represents a single, whole shader program
    • Does everything needed to create a single shader program
    • Also used for setting uniforms like the model view projection matrix


    A class that holds a single object. In our case this is the cube:

    • creates VAO and VBO for the object from a file
    • Keeps the model matrix, which contains the position, scale and rotation of the object
    • Keeps a reference to the Shader that the object uses
    • Has a Render() function so that it can set all the VAOs and VBOs and render itself


This is a class I wrote some time ago for keeping track of SDL events like quit, button presses, mouse movement, mouse position, etc… It is not directly related to OpenGL ; we just use it to make our interaction with SDL a tiny bit easier.


A very simple math helper class. It simply takes an EventHandler and creates a vec3 of the movement based on the arrow keys plus w and s. So if you press left, it’ll create a vector with a negative x value. This means that when we use glm::translate with the vector as the argument, we’ll get a matrix that moves the object left. It’ll be the same for every direction. Pressing w will move the object closer, s will move it away, “into the screen”.


    Controls everything.

    • Initializes Renderer
    • Creates a Shader
    • Creates a Model
    • Checks for keyboard events and tells Model to update matrix accordingly ( move or reset )
    • Renders the Model by calling functions in Renderer

    As you can see, main.cpp doesn’t do anything to OpenGL. In fact, it doesn’t even include any OpenGL or SDL stuff. This is completely intentional. main.cpp should only control stuff.

    Since the code is quite long and too big to put in this post ( unless you really like scrolling! ) I’ve put it in the Github repo for this code.

    I’ve also created a zip file in case you don’t want to deal with git. You can find it here.


    Since we have the new .cpp file, EventHandler.cpp, we need to add it to our compilation call :

    For clang :

    clang++ main.cpp EventHandler.cpp -lSDL2 -lGL -lGLEW -std=c++11 -o Part4

    For gcc:

    g++ main.cpp EventHandler.cpp -lSDL2 -lGL -lGLEW -std=c++11 -o Part4

    And NOW we’ve covered everything we need to know in order to do basic 3d rendering. It has taken me a long time to write all of this and it is quite long. But I hope you enjoyed it and that it helps you understand 3d rendering. If not, feel free to ask, I’m happy to help if I can.

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [OpenGL Part 3 ] Shaders in OpenGL

    Shaders in OpenGL

    In the previous part we looked very briefly at shaders. Shaders are small pieces of code that run on the GPU and they enable us to render graphics in lots of fancy ways. But before we look closer at the shaders, let’s have a look at the sequence of them :

    The rendering pipeline

    OpenGL goes through several steps in order to draw something on the screen. This sequence of steps is known as the rendering pipeline. It looks like this :

The gray-ish parts are programmable, and these are what I’ll be referring to as shaders. The ones with dotted lines are optional, while the ones with solid lines we have to program if we want to render something. At least that’s what the specification says, but some implementations might not require it. So on some systems you might be able to skip them, but that’s not guaranteed to work everywhere. And it’s much more fun writing the shaders ourselves anyway!

This part will teach you a little bit about each of the steps, what they do and how they work together. Each one of these steps is quite involved, so I’ll most likely dedicate an entire post to each of them.

    Vertex Specification

In the previous part we set up VBOs and VAOs so that we later could use the VAO for rendering. That is the vertex specification stage of the rendering pipeline. More specifically, it's how OpenGL sets up the VBOs and VAOs when we tell it to. Since we already dealt with this stage in the previous part, and what OpenGL does behind the scenes isn't that relevant to us, we're just gonna skip to the next step.

    Vertex Shader

    The vertex shader is the first programmable step of OpenGL. This stage takes a single vertex and outputs a single vertex. The job of the vertex shader is basically to give every vertex the position they should have on screen. In the previous part we were able to use them directly because the position we gave in was the final position of the object. But if we wanted to move it, we could have used the vertex shader to do that.

Another point is that the screen is only 2D, so when we have a 3D object we need a way of representing it in 2D on the screen. This is a quite complicated process involving several steps to put all the vertexes in the correct position. We will look at this in a later post ; this post is just for getting an overview of all the shaders. The above image kinda shows this : the cube is a 2D drawing, but it looks 3D because of how the vertexes ( corners ) are positioned.


Tesselation Shader

In the games we see today, a high level of detail is important. And in order to achieve a high level of detail, we need a high number of vertexes. Imagine you have a ball in your game. How do you draw that with a high level of detail? If you have too few vertexes, it'll look blocky and not round at all. You could just add millions of vertexes to make it look better. But a million vertexes would mean 4 bytes * 3 * 1 000 000 = 12 000 000 bytes, or 12 MB, of just vertexes. That's quite a lot, especially if your game has a lot of round objects. And more importantly, it takes time to render that much.

The purpose of the tesselation shader is basically to add more detail to your object when needed. When we see something from a distance, we don't need a lot of detail. But when we zoom in, we'll be able to see more, so we need to render the object with more detail so that it doesn't look blocky when viewed up close.

    Geometry Shader

The next step in the rendering pipeline is the geometry shader. The geometry shader gets its input in the form of primitives. ( A primitive is basically either a triangle, a line or a point. ) With the geometry shader we can create new primitives. This means we can use it for things like spawning particles in a particle system, or to make fur, hair, grass, etc.


Let's say we have a sphere. When the tesselation stage is done, we get the input in the geometry shader as tiny little triangles. Each one of these triangles is a tiny part of our sphere. With the geometry shader we can add fur to the sphere, and now we have a fuzzy little ball.

Using a geometry shader is one of the most efficient ways to make hair/fur/grass, because it doesn't require any additional vertexes from us ; everything is being done on the graphics card. That makes it really quick.

    The next three steps are fixed, so we can't implement them ourselves, so I'll only describe them briefly.

    Vertex Post-Processing

This step does a lot of different operations on the vertexes. Many of these prepare them for the next two steps : primitive assembly and rasterization.

    Primitive Assembly

This is, as the title suggests, the point where our primitives get assembled. It receives a bunch of vertexes and puts them together into shapes like triangles. It also does some checks to see if a primitive is off screen ( or invisible in any other way ). If it is, the primitive won't get passed on to the next step.


Rasterization

Now we have our final primitives, but they're just a bunch of shapes. This stage rasterizes the data. That means it takes the data and turns it into something that resembles pixels : fragments.

As noted above, we don't get actual pixels from the rasterizer, but rather fragments. A fragment contains all the data OpenGL needs in order to render a pixel. There will be at least one fragment per pixel. There can be more, but not less.

    Fragment shader

This is the final shader that we can implement ourselves. It receives its input in the form of fragments ( as described above ) and outputs a single fragment when it's done. At this stage, we basically just set the color of the fragment, though that can be rather complex. This is also the step where we'll put the texture on the object.

But setting the color and/or textures also means setting the lighting, and this can get quite complex, which means there will be another part for it. For now though, all you have to remember is that this stage is where we set the color ( including the alpha value ) of the fragment.

    Per-Sample Processing

The final step before we get something on the screen is the per-sample processing step. In this step OpenGL looks at every fragment and sees if it, for any reason, should not be rendered. This is done by running several tests. If any of them fail, the fragment might not be rendered. Some of these tests aren't enabled by default, so you need to enable or set them up yourself.

    Below is a short description of these tests, you can skip it if you want.

    Per Sample Processing details

    Ownership test

If there is another window over our OpenGL window, those pixels are not visible to us, so there is no need to draw them on the screen.

    Scissor test

    You can specify a special rectangle on the screen. If the fragment is outside of this, it'll fail the test.

    Stencil test

    A stencil test takes a stencil, which is basically a black and white image, and uses it to determine if the fragment should be rendered. It works just like a stencil in real life.

Imagine you take a sheet of paper and cut out a big 'H' in it. Then you put it over a different piece of paper and spray paint all over the H. When you remove the top paper ( the one with the H cut out ), there will be an H on the bottom paper, the exact same shape as you cut out. This is how this test works too. You can create a bitmap / image that works as the top piece of paper. Everything this bitmap covers ( every black or every white pixel ) will then fail the test and not get rendered.


Depth test

This is the test that checks whether a fragment is actually visible or covered up by something else. So if you have an object like a dice and something in front of it like a wall, the depth test is what makes sure the wall is drawn and not the dice.

Blending

Finally, the blending happens. This is where the final color of the fragment gets determined. OpenGL has several ways of combining colors based on alpha values, so this needs to be its own step. It also relies on the alpha value set by the fragment shader, so this step in particular needs to be done after the fragment shader


    And that's all the steps of the rendering pipeline. Now we'll take a look at how we set them up in OpenGL. We will also expand on the previous example and make something a little bit fancier by creating our own geometry shader and fragment shader

    Setting up the Shaders

There are a few calls needed for setting up the shaders, but it's actually a bit easier than VBOs and VAOs. The shaders consist of one main object, called the program, that collects all the shaders into one, like a VAO. The individual shaders are like the VBOs : they're created separately and in the end they're added to the program. After they've been added, we won't be dealing with them unless we are going to update them.

    First we'll look at setting up the individual shaders. These are the grey steps in the image at the top. The process for setting them up is more or less identical for all shaders ( except that we have to specify the type of shader in one of the steps. )


This is very similar to the other glGen* functions like glGenBuffers and glGenVertexArrays. But this one returns the id directly and takes a single parameter, so we can only make one at a time. The parameter is used to specify the type of shader. This function is used for all shader types.

    Parameters :

    • GLenum shaderType - the type of shader to create ( see below )

    The shader type can be any of the following :

    • GL_VERTEX_SHADER - for creating a vertex shader
    • GL_TESS_CONTROL_SHADER - the first step of the tesselation shader
    • GL_TESS_EVALUATION_SHADER - the last step of the tesselation shader
    • GL_GEOMETRY_SHADER - for creating a geometry shader
    • GL_FRAGMENT_SHADER - for creating a fragment shader
    • GL_COMPUTE_SHADER - a compute shader is not a "standard" shader, it's just for setting a piece of code that will run on the graphics card. It is not part of the rendering pipeline so we won't be using it here

As you can see, there are 6 different types of shaders we can create using this function. We will be using the first 5, and the process for setting up each one of them is identical, so it's not a lot of work.

    Loading the shader source code

The next step is to set the actual source code for the shaders. This is the .vert and .frag files from the previous part. The first step here is to load the actual shader. This simply involves reading a text file, but we need to write the function ourselves because OpenGL has no support for it :

    This function just takes a filename and returns all of the text file as a std::string
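A minimal sketch of such a loader ( the function name ReadFile is just an example ) :

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Read an entire text file ( e.g. our shader source ) into a string.
// Returns an empty string if the file couldn't be opened.
std::string ReadFile(const std::string& filename)
{
    std::ifstream file(filename);
    if (!file.is_open())
        return "";

    std::stringstream buffer;
    buffer << file.rdbuf(); // slurp the whole file
    return buffer.str();
}
```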


Now that we have our std::string, we need a way of sending it to OpenGL. This is what glShaderSource is for.

    Parameters :

    • GLuint shader - the id of the shader. We'll use the return value of glCreateShader
    • GLsizei count - the number of strings of data we want to use. We only have one file, so we'll use 1 here
    • const GLchar **string - the actual data we want to use
    • const GLint *length - the length of each individual char*

This function might seem a little weird at first. The first argument is okay ; it's just the id of the shader. We dealt with similar things when we set up the VBO and VAO. But what about the others? I'll describe what the other parameters do and how to use them below. We won't be using most of this, but I do recommend reading it, because then you'll know what the arguments are for. And knowing exactly how a function works will, in the end, make you less likely to write bugs.

    glShaderSource details

As noted above, glShaderSource is made to be able to take in several pieces of data. This allows you to have your shader spread across several different files. You could load all of them into different std::strings, one per file, and then add all the data to the shader with one call. This is where the different parameters come in.

    count is just the number of different std::strings we have.

const GLchar** string is a bit more tricky to understand. A GLchar* ( note the single '*' ) is the same as char*, which is just a text string. But we have two asterisks ( '*' )! In C++, a pointer is a lot like an array, so you can look at it as an array of char*. This is what allows us to send in several different strings at once.

The final argument, const GLint *length, works in the same way. Just think of it as an int array, where each value is the number of characters in the string with the same index.

    Let's look at an example to illustrate this :

Note : this is pseudocode, it won't compile. But hopefully it helps you understand this function and all its arguments. Having a good understanding of all the aspects of a function will make it a lot easier to debug.
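Since the real call needs an OpenGL context, here's a compilable stand-in ( the function and type aliases are made up for this sketch ) that mimics how count, string and length fit together :

```cpp
#include <string>

using GLchar  = char; // stand-ins for the OpenGL typedefs
using GLint   = int;
using GLsizei = int;

// A fake glShaderSource that just concatenates the pieces, to show
// how the count / string / length arguments relate to each other.
std::string fakeShaderSource(GLsizei count, const GLchar** string, const GLint* length)
{
    std::string result;
    for (GLsizei i = 0; i < count; ++i)
        result.append(string[i], length[i]); // length[i] chars from string[i]
    return result;
}
```

The real glShaderSource also accepts NULL for length, in which case each string is assumed to be null-terminated.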



Now that we have loaded our shader, it's time for OpenGL to compile it. This is done using this simple function :

    Parameters :

    • GLuint shader - the Id of the shader to compile, same as for glCreateShader and glShaderSource

As you can see, there isn't really much to this function. After calling it, the shader is ready to go. But we first have to create our main shader program. So the final create + compile shader looks something like this :
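A sketch, using a hypothetical ReadFile helper for loading the source and the variable names explained below :

```cpp
// Create the shader object ( a vertex shader in this case )
GLuint shaderId = glCreateShader( GL_VERTEX_SHADER );

// Load the source from file ( our own helper function )
std::string str = ReadFile( "shader.vert" );
char* src = const_cast<char*>( str.c_str() );
GLint size = str.length();

// Hand the source to OpenGL and compile it
glShaderSource( shaderId, 1, &src, &size );
glCompileShader( shaderId );
```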

The char* src = const_cast<char*>( str.c_str() ); part is just a way of converting the result of str.c_str() ( which is a const char* ). OpenGL expects a non-const char*, so we need to cast it using const_cast.

In glShaderSource( shaderId, 1, &src, &size ); we use &src to create a pointer to the char* that holds our source. This turns it into a double pointer, or a "pointer to a pointer", if you will. Similarly, OpenGL expects a pointer to an int for the size argument, so we pass in &size. In both cases the pointers are used to get array functionality for setting multiple sources ( as explained above. )

    The shader program

Now that we've created a shader, it's time to add it to our program. As mentioned above, the shader program is what combines all the shaders into one. Just like with VAOs, we can have several of them. So we could have one for particle effects, one for regular objects, one for reflective surfaces, one for the ground with grass, etc... Since the shader program combines all the individual shader objects, switching between them is easy. And setting them up is quite simple too!


    This is very similar to the first function we looked at, glCreateShader. It simply creates an OpenGL shader program and returns the Id. We will use this program to connect our shaders and hook them up to the rendering pipeline.

That's all! Now we have created a shader program and can use it in the next step.


    Now that the shader program has been created, we can attach our shaders to it. This is as simple as it can get :

    Parameters :

    • GLuint program - the Id of the shader program ( the one we created with glCreateProgram )
    • GLuint shader - the Id of the shader ( the one we created with glCreateShader )

It doesn't really matter at which point in time you call this function, as long as both the shader and the program have been created with glCreate*. You can even do this before loading the source. All it does is attach the shader to the shader program using the ids. Though I find it more logical to attach the shader after it has been fully created ; that way we won't be adding any shaders that failed to compile.


    The final step of creating a shader program is to link it. This will inspect the shaders and optimize them before creating an executable. And finally the executable will be sent to the GPU.

    Parameters :

    • GLuint program - the Id of the shader program ( the one we created with glCreateProgram )

And we're done! The shader program has been created and uploaded to the GPU, so we can use it in our OpenGL application.


Finally, now that our program has been created, we can start using it. This function is also very simple : it simply activates the program we pass in as the parameter. There can only be one active shader program at any time, so passing in a new id disables the old one.

    Parameters :

    • GLuint program - the Id of the shader program ( the one we created with glCreateProgram )

    Putting it all together

Below is a simple, fully working example of how to set up a shader program.
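A condensed sketch of the whole sequence ( source loading and compile/link error checking left out for brevity ) :

```cpp
// Create and compile the individual shaders
GLuint vertexShader = glCreateShader( GL_VERTEX_SHADER );
GLuint fragmentShader = glCreateShader( GL_FRAGMENT_SHADER );
// ...load their sources with glShaderSource
// and compile them with glCompileShader...

// Create the program and attach the shaders
GLuint program = glCreateProgram();
glAttachShader( program, vertexShader );
glAttachShader( program, fragmentShader );

// Link the program and activate it
glLinkProgram( program );
glUseProgram( program );
```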

And now, at last, we can have a look at setting up the individual shaders, starting with the language they are written in.


    Shaders are written in GLSL ( OpenGL Shading Language ), which is very similar to standard C, but with a few extra built-in features and some other things removed. The most important addition for us right now is the storage qualifiers. These specify whether a value is an input or an output, and where the value comes from. The storage qualifier is placed before the type of the value ( see example below )

    • Attribute input values ( attribute )
      • Attribute values ( passed from a VBO )
      • Only for vertex shaders
    • Input values ( in )
      • Input values passed from previous shaders
    • Output values ( out )
      • Output values to pass to the next shader
    • Custom input values ( uniform )
      • Input to the shader
      • Used for values that are not stored in a VBO
      • Can be any type ( float, int, bool, array )

    We won't be using attribute, only in / out

    Vertex shader example

    Let's look at a simple vertex shader :
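    The original listing isn't embedded here, but based on the variables described below it looks roughly like this ( the #version line and the gl_Position line are assumptions ) :

```glsl
#version 150

// Attribute input values, passed in from the VBO / VAO
in vec3 in_Position;
in vec4 in_Color;

// Output value, passed on to the fragment shader
out vec4 ex_Color;

void main()
{
    // Set the final position of this vertex
    gl_Position = vec4(in_Position, 1.0);

    // Pass the color on to the next shader
    ex_Color = in_Color;
}
```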

    The top two variables, in vec3 in_Position; and in vec4 in_Color;, are shader input variables which we get from the VBO / VAO ( see below. )

    The third variable, out vec4 ex_Color;, is our output variable. This is the variable we send to the fragment shader. We have to do this manually by setting it in our main() like so : ex_Color = in_Color;

    Fragment shader example

    Now let's look at a simple fragment shader :
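    Again, the original listing isn't embedded here, but a minimal fragment shader matching the vertex shader above could look like this ( the output variable name is an assumption ) :

```glsl
#version 150

// Color passed on from the vertex shader
in vec4 ex_Color;

// The final color of this fragment ( pixel )
out vec4 fragmentColor;

void main()
{
    fragmentColor = ex_Color;
}
```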

    So, the way to pass in VBO data is through an in value. All attributes ( like positions and colors ) must be passed to the vertex shader and then from the vertex shader to the next shader and so on. An attribute can only be passed from one shader to the next, you can't pass it directly to the last shader for example. The output values will automatically be passed through the shaders we haven't written ourselves.

    Geometry shader

    The geometry shader is a little bit more complicated and more involved than the fragment shader and the vertex shader, so I won't explain it in this post. I will show you a geometry shader example that you can experiment with. It's commented, so hopefully it should be easy to get an overview of what it does.
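    As a taste of the structure, here is a minimal pass-through geometry shader ( a sketch, not the full commented example with the toggleable bools ) :

```glsl
#version 150

// We take triangles in and emit triangles back out
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Per-vertex colors from the vertex shader ( one per input vertex )
in vec4 ex_Color[];

// Color passed on to the fragment shader
out vec4 gs_Color;

void main()
{
    // Emit the input triangle unchanged
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        gs_Color = ex_Color[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

    Note that with a geometry shader in place, the fragment shader would read the geometry shader's output ( here gs_Color ) instead of reading ex_Color directly.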


    It is very important to get the ordering of the attribute variables right. Remember this part :

    The indexes we specify here ( positionAttributeIndex and colorAttributeIndex ) dictate the order you must declare the attributes in the vertex shader. In our case, this will be :
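    With positionAttributeIndex = 0 and colorAttributeIndex = 1, the declarations would be :

```glsl
// positionAttributeIndex = 0, so in_Position must be declared first
in vec3 in_Position;

// colorAttributeIndex = 1, so in_Color comes second
in vec4 in_Color;
```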

    Of course, we could change it so that

    positionAttributeIndex = 1 and colorAttributeIndex = 0.

    In this case we would have to declare

    in vec4 in_Color; first and then in vec3 in_Position;.

    This is something that's very easy to miss and can be really frustrating to debug. Generally, OpenGL is quite low level, so mistakes like these are really easy to do and hard to debug if you don't know exactly what to look for.

    Some code

    The major new part of code today is a reworked Shader.h that can load any type of shader. I also added a geometry shader that'll give you an idea of what the geometry shader does. I didn't add a tessellation shader because that would require OpenGL version 4.0, which would mean that a lot of you would not be able to run it. Besides, I think there already is enough new stuff in this part. Well anyways, here's some code :


    The Shader.h has been rewritten. Most of it should be described in the blog post, except for the getting of variables from the shader ( including the log. ) I'll get into that in another post.

    Vertex shader

    I renamed the vertex shader to vert.glsl.

    Geometry shader

    I added a geometry shader. It has a few bools you can change to show off what you can do. Keep in mind that when we render it normally, all we get is a square. The extra triangles are created by the geometry shader itself.

    Screenshot :
    simple geometry shader

    Fragment shader

    I renamed the fragment shader to frag.glsl. I also added functionality for setting a random color :

    simple fragment shader


    I also made a few changes to our main file. This time we only render the triangles, not the lines. I also changed the coordinates a little. It still forms a square, but it's separated into four equally large triangles ( instead of two. ) This makes working on it in the geometry shader a lot easier.



    We'll compile it just like last time :

    Using clang

    clang++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Using gcc/g++

    g++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test


    In this part we looked at shaders, what they do, how to create them and how to set them up. I intentionally didn't dive deeply into the shaders themselves, but instead I showed how to set them up. I know there has been a lot of very basic setup stuff in these parts, but I find it important to know how to set up OpenGL properly.

    The end result we get on screen in this part is quite simple, but feel free to play around with the geometry shader. There are a few bool values you can toggle to get different output. Or you could just modify the code yourself and see what you end up with.

    But in the next part we'll finally look at getting something 3D on the screen. When we do have something 3D on the screen, we can manipulate it ( rotate, move, stretch, etc.. ) in various ways quite easily. See you then!

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [OpenGL Part 2] Vertexes, VBOs and VAOs


    OpenGL is complicated. Whereas SDL2 is relatively small with a few objects and functions, OpenGL is huge with lots of different elements. Fortunately it’s also very well documented. And, as we saw in the previous part, it’s not hard to get something on the screen ( thanks to SDL2 )

    In this part, we’ll first take a look at vertexes before we look at how to draw a simple object.

    Drawing an object in 3D

    Now that we’ll be working in 3D, we need to do things a little differently. In SDL2 we only used the position and size of each object. Each object was basically just an image that we drew on the screen and moved around. We never told SDL2 anything about how it looked, how big it was, etc. SDL2 simply took a texture and put it on the screen.

    But in OpenGL we'll be rendering an exact shape so that we can view it from any angle, which would be almost impossible in SDL2. It also enables us to color it, apply textures and change the lighting in code. We do this by defining a mesh like you see above. It's all just a bunch of points in 3D space defined by vectors. A vector in this context is just a simple mathematical unit that defines a position. We'll be using 3D ones, so they'll each have three values ( x, y, z ) When we have these vectors we can tell OpenGL the exact shape of an object, and then we can draw it in 3D using OpenGL

    Vertex vs vector

    In OpenGL we use something called a vertex. A vertex is a lot like a vector in that it represents a single point. The difference between a vertex and a vector is that a vector is just the position of a single point. But a vertex contains the vector of the point, and it can also hold other things at the same time, like the color of that point, and other things we'll come to in a later part. So, in essence, a vertex contains everything we need to draw one of these points. And when we draw an object, like a dice, we need to give OpenGL one vertex for each point.

    The dice above has 8 vertexes :

    • left, top, front
    • left, bottom, front
    • right, bottom, front
    • right, top, front
    • left, top, back
    • left, bottom, back
    • right, bottom, back
    • right, top, back

    Each part of the vertex is usually referred to as an attribute. For instance, the vectors/positions are one attribute, the colors are another attribute, and so on…

    OpenGL programming method

    In contrast to other APIs / libraries, OpenGL is not object oriented. There are really no objects at all, mostly because a lot of the vertex data is stored on the GPU. So instead you need to handle the models, textures, etc. on your own.

    OpenGL does, however, have some notion of objects. But instead of a concrete struct like the SDL_Texture we have in SDL2, an object is just an ID referring to a piece of data. The only way to refer to this data through OpenGL is by using these IDs. This is mostly because the objects are stored on the GPU, and you want to keep them there without transferring/streaming them back and forth.

    So let’s take a look at two of the most important objects we’ll be using in OpenGL.

    VBO – Vertex Buffer Object

    The VBO ( Vertex Buffer Object ) is one of the "objects" of OpenGL. It holds a single vertex attribute for all the vertexes of an object. Not all the vertex data, but all the data of one type, like all the positions or all the colors. So you'll end up with one VBO for positions, one VBO for colors, etc…

    In order to create a VBO, we first need some data. So let's take a collection of vectors and put them in a VBO. To keep things simple, we'll just use a square. Our square has four positions, one for each corner. Let's create a simple array containing all of these points.
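    The original array isn't embedded here ; a sketch could look like this ( the exact coordinates are an assumption, any square will do ) :

```cpp
#include <cstdint>

// GLfloat is just a typedef for float ; defined here so the
// snippet is self-contained without the OpenGL headers.
using GLfloat = float;

// 4 corners, 3 values ( x, y, z ) per corner = 12 floats in total
const uint32_t points = 4;
const uint32_t floatsPerPoint = 3;

const GLfloat square[points * floatsPerPoint] = {
    -0.5f,  0.5f,  0.5f, // left,  top
    -0.5f, -0.5f,  0.5f, // left,  bottom
     0.5f, -0.5f,  0.5f, // right, bottom
     0.5f,  0.5f,  0.5f, // right, top
};
```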

    That’s the simple part. Now we need to tell OpenGL to create the actual VBO for us. This requires a few steps, so let’s look at them one at a time.


    This function generates a VBO for us, so that we can store our vertex attribute into it. It also gives us back an ID for this buffer so that we can use it for referring to this VBO later.

    Note : GLsizei is simply a signed integer like int32_t, and GLuint is an unsigned integer like uint32_t

    Parameters :

    • GLsizei n – the number of buffers we want. One per attribute, so we’ll keep it at 1. But if we were going to add colors, we’d use 2.
    • GLuint* buffers – this is where we get the IDs of our buffers back, as an array.

    So now, let’s generate our VBOs :
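    The listing isn't embedded here, but following the description below, it could be sketched like this :

```cpp
// How many VBOs we want ; one per attribute
const uint32_t countVBOs = 1;

// An array for holding the ids we get back
GLuint vbo[countVBOs];

// Ask OpenGL to generate countVBOs buffers and store their ids in vbo
glGenBuffers(countVBOs, vbo);
```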

    The second line creates an array for holding our IDs, and the third line tells OpenGL to allocate countVBOs VBOs for us. Since arrays work a lot like pointers in C++, we can just pass in vbo, and OpenGL will automatically give us as many IDs as we ask for.

    Now we have our VBO and it has the ID stored in vbo[0]


    This function is deceptively simple, so it’s important to understand it because it can lead to some confusion. And if you call it at the wrong time or don’t call it, your application will most likely crash!

    The function simply sets a buffer as the current buffer. We use it to tell OpenGL that this is the buffer we are working on now.

    Parameters :

    • GLenum target – the type of buffer we want this to be. In our case, it’s GL_ARRAY_BUFFER
    • GLuint buffer – the ID of the buffer we want to bind / set as active

    You might have noticed the new type, GLenum. This is just a huge enum that contains all the predefined flags in OpenGL. These flags are used by a lot of different functions for a lot of different things, so I’ll just explain them as they come.

    GL_ARRAY_BUFFER is the value we use for vertex data like positions and colors.

    Using it is really simple :
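    A one-line sketch, assuming the vbo array we generated above :

```cpp
// Set our newly generated VBO as the active GL_ARRAY_BUFFER
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
```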


    Now that we have bound the buffer, we can tell OpenGL to store this data for us.

    Now this might seem complicated, but it’s quite logical when you see what the parameters are for.

    Parameters :

    • GLenum target – the type of buffer we want this to be. We’ll use the same as for glBindBuffer : GL_ARRAY_BUFFER
    • GLsizeiptr size – the size of the data in bytes.
    • const GLvoid* data – the data that should be stored
    • GLenum usage – how the data should be used. We will just use GL_STATIC_DRAW which means we won’t be modifying it after this, we’ll only be using it for rendering.

    The second argument, GLsizeiptr size, might seem a bit weird. First of all, what is a GLsizeiptr? Think of it as a very big integer. It’s basically a special type used when you need to store huge numbers. But don’t worry too much about this ; we’ll be using it as a standard unsigned int.

    The third argument, const GLvoid* data is a pointer to the data. A const GLvoid* ( or simply just void* ) is a pointer that can be pointing to anything. It can be floats, chars, ints, std::strings… Anything! So in reality, it doesn’t know anything about the data at all. This also means it doesn’t know the size either, which is why we need that second argument, GLsizeiptr size

    Finally, here is how we’ll use it :
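    A sketch, assuming the square array and the bound vbo from earlier :

```cpp
// Upload the 12 floats of our square into the currently bound VBO.
// sizeof(GLfloat) * 12 gives the total size in bytes.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 12, square, GL_STATIC_DRAW);
```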

    sizeof(GLfloat) simply gives us the size of a single GLfloat. So we just multiply that by the number of individual GLfloats in our array, square.

    Here’s the entire code for setting up a VBO so that you can digest it all before moving on to the next part.

    Now we have created a VBO but how do we render it? And what if we have more than just one VBO for the same object? Enter VAO, Vertex Array Object

    VAO – Vertex Array Object

    A VBO represents a single vertex attribute ( like positions or colors ). A VAO is a lot like a VBO ; they’re used in the same way. The difference is that a VBO represents a single attribute, but a VAO can combine several attributes / VBOs so that we have all the vertex data in a single object. This is a lot simpler when it comes to rendering ; we can simply render the VAO, then move on to the next one without even thinking about the VBOs

    We still need a VBO for every attribute though, and we need to put them into the VAO one by one until we have a single object. The VBOs are only needed for creating or updating the VAOs. All other times we just use the VAOs


    Think of this as glGenBuffers, only for VAOs. It generates a VAO for us to use later.

    Here’s the signature :
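    The signature from the OpenGL API :

```cpp
void glGenVertexArrays(GLsizei n, GLuint* arrays);
```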

    The parameters are the exact same as for glGenBuffers so I won’t be going into them in any more depth.

    Here’s how we’ll use it
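    A sketch :

```cpp
// One VAO is enough ; it will hold all our vertex attributes
GLuint vao[1];
glGenVertexArrays(1, vao);
```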


    Just like glGenVertexArrays is the VAO equivalent of glGenBuffers, glBindVertexArray is the VAO equivalent of glBindBuffer. So this function sets the VAO as the active one. Note that these are not mutually exclusive ; we can have both a VBO and a VAO active at the same time.

    Parameters :

    • GLuint array – the ID of the vertex array to bind.

    As you can see, this signature only has one argument. Why? Well, in OpenGL there are several kinds of data we can store in a buffer, not just vertex data. But a VAO is more of a wrapper object for vertex data, so there is just one type.

    Usage :
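    A sketch, assuming the vao array we generated above :

```cpp
// Set our VAO as the active vertex array object
glBindVertexArray(vao[0]);
```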


    Now this is where things get a little complicated. This method is what associates our vertex data from the currently selected VBO with the current VAO. We use it to tell OpenGL where in the VAO the data from the current VBO should be stored.

    Parameters :

    • GLuint index – An ID we define that refers to this attribute. We’ll need this later so that we can refer to this vertex attribute
    • GLint size – the number of values per attribute ( 1 to 4). In our case it’s 3 since our attributes have 3 values (x, y and z)
    • GLenum type – the datatype the attributes are in. In our case it’s GL_FLOAT
    • GLboolean normalized – whether the data should be normalized ( more on this in a later part. ) For now we’ll use GL_FALSE
    • GLsizei stride – specifies an interval between vertex attributes. We don’t use that so we’ll just use 0 here
    • const GLvoid * pointer – the starting point of the data to use. We don’t use this either, so we’ll just use 0 here as well.

    As you can see, it’s really not as bad as it looks. The fourth argument, normalized, isn’t really important for us now. And the two last ones only deal with cases where we put several vertex attributes ( like positions and colors ) in the same array.

    The important thing here is that it puts one type of vertex attribute data from a VBO into a VAO. It uses the currently active VAO and VBO, so we need to call glBindBuffer and glBindVertexArray first.

    Here’s how we’ll be using it :
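    A sketch, assuming positionAttributeIndex is a GLuint set to 0 ( as discussed in the shader part ) :

```cpp
// Attribute index, 3 values per vertex, floats, not normalized,
// tightly packed ( stride 0 ), starting at offset 0
glVertexAttribPointer(positionAttributeIndex, 3, GL_FLOAT, GL_FALSE, 0, 0);
```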

    Note that if you haven’t called glBindBuffer() before calling this function, it won’t work properly and your application might crash.


    After we’ve set up the VBOs and VAOs, we need to enable the attribute within the VAO because, by default, every vertex attribute array ( like our positions ) is disabled. This means we’ll have to enable every vertex attribute we create and assign with glVertexAttribPointer. In our case, we just need to call it once, since we are only enabling positions.

    Parameters :

    • GLuint index – The index of the vertex attribute array we want to enable.

    With all of that out of the way, we can look at an example of how to set up a VBO and VAO :
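    The original example isn't embedded here ; a sketch combining the calls above could look like this ( the square array is assumed from earlier ; the VAO is deliberately bound late here, see the note below ) :

```cpp
// Generate the ids
GLuint vbo[1];
GLuint vao[1];
glGenBuffers(1, vbo);
glGenVertexArrays(1, vao);

// Bind the VBO and upload the vertex data into it
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 12, square, GL_STATIC_DRAW);

// Bind the VAO, then associate the bound VBO with attribute 0 and enable it
glBindVertexArray(vao[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
```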

    Hopefully this wasn’t too bad. It’s important that you understand what a VBO is, what a VAO is, what their relation is and how to use them. Knowing this will save you from a lot of confusion and frustration in the future.

    I placed the binding of the VAO and VBO in an awkward order to demonstrate the ordering of these functions. The ordering doesn’t matter as long as you bind the VBO before using glBufferData and glBindVertexArray before you call glVertexAttribPointer. Take a look in the code below for a better way of ordering these functions : )

    A quick note about shaders

    Before we can get anything on the screen, we’ll need a shader. Shaders are small programs that run on the actual GPU / graphics card. We have to define a vertex shader, which deals with things like moving/rotating/scaling objects. We also have a fragment shader, which deals with setting the correct colors.

    I won’t be going any deeper into shaders than that this time. But we do need them, which means we also have to set them up properly. So I made a simple helper class that does all of that for us. I’ll post it below with the other code so you can copy it and get the example up and running. The next part will be about shaders and why we need them, so hopefully the code will make a bit more sense then.

    The code

    The code consists of three pieces ; the main .cpp file where most of the code is, Shader.h which is where all of the shader related code is, and the shaders themselves ; the vertex shader ( tutorial2.vert ) and the fragment shader ( tutorial2.frag )

    I have added setting of colors to the code, along with an example of glEnableVertexAttribArray. I hope it gives you a good idea of how to use these functions. In the next part we’ll take a closer look at the shaders, how to set them up and how to write our own.

    The code is taken from here, though I have changed it quite a lot.


    Here is our main file :

    As you can see, it also sets color. It does this in the same way as it sets positions. I added it to further demonstrate how to bind the buffers correctly.


    Here is the shader helper file. Don’t mind it too much, I’ll go into more detail about how it works the next time.


    This is our first shader, the vertex shader. Make sure you name it tutorial2.vert and put it along with the other files


    And finally, the fragment shader. Make sure you name it tutorial2.frag and put it along with the other files


    Using clang

    clang++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Using gcc/g++

    g++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test


    Finally we have something on the screen! The process is a bit tedious and not 3D yet. But we’ll be going into 3D territory soon. And that’s when things get really cool.

    I hope this tutorial has helped you understand VBOs and VAOs along with the concept of vertexes. My goal is to go through things thoroughly, giving you a good understanding of how things work. The better you know how things work, the easier it will be to write code.

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [OpenGL – Part 1] OpenGL using SDL2


    In order to program in 3D, you need a 3D library. Sure, you could base your game on an already existing engine, but that’s not what this blog is about! Instead, we’ll use a graphics library. The most common ones are OpenGL and DirectX.

    Since DirectX is a Microsoft technology and only works under Windows, we will be using OpenGL. This means the applications we make will work on just about any operating system.

    Note : I recommend that you read at least the first few parts of my SDL2 tutorial before continuing. My SDL2 tutorial will explain the SDL2 elements like SDL_Window in more detail. The first few parts are really short and should give you a basic understanding of SDL2

    What is OpenGL

    OpenGL is a specification, or an abstract API if you will. It is not an actual implementation. It doesn’t do anything on its own ; it just defines a lot of functions and data types that we can use in our program. Then it’s the job of the underlying implementation to actually do the work. This implementation is part of the graphics card driver. This means that the implementation varies from platform to platform ; the Linux version is different from the Windows version. It’s also different based on the hardware, so an nVidia version is different from an ATI version.

    We really won’t be giving this too much thought, we’ll only use the functions and types defined by the OpenGL specification. But it’s useful to know exactly what OpenGL is.

    Old vs new

    Back in the day, programming in OpenGL was tricky. Setting it up was a mess ; you had several different libraries to keep track of like glu, glut and glew. I’m still not quite sure what all of them did. On top of that, the code itself was rather bad too. Really not intuitive and not as flexible as the new version. But after version 3.0 a lot changed. Lots of code was deprecated and lots of new stuff was added. So now we can write very simple and concise OpenGL that’s also multi-platform.


    I briefly mentioned GLEW ( OpenGL Extension Wrangler Library ) above as one of the libraries that made OpenGL confusing. But that’s really not GLEW’s fault. GLEW is actually quite simple ; it just lets us write OpenGL code in a simple, platform-independent way. We won’t be noticing it a lot, except for an init call, so there’s really no need to learn a lot about it. But it’s always nice to know what it’s there for.

    OpenGL and SDL2

    SDL2 makes setting up OpenGL really easy. You can use SDL2 to create your window and hook up a rendering context ( I’ll explain what a rendering context is later. ) If we didn’t do this through SDL2, we’d have to do it in different ways on different platforms. The code would get messy and really complicated. SDL2 lets us do all of this in a really simple way.

    Rendering context

    A rendering context is a structure that keeps track of all of our resources, basically everything we want to put on the screen. It also keeps some state, like which version of OpenGL we are using. We need a rendering context before we can do any OpenGL stuff. A rendering context is connected to a window ( like an SDL_Window ) ; a context can be connected to one or several windows, and a window can have several rendering contexts.

    An SDL_Renderer is a kind of a rendering context, but SDL_Renderer only supports the SDL2 way of rendering, which is 2d. But now we want 3d, and it’s here that OpenGL comes in. SDL2 even has its own rendering context object, SDL_GLContext. We’ll see how to create it later.

    Setting it up

    Now let’s try to set up a simple OpenGL application. It won’t be much different from the first SDL2 application we made, the point is just to set up OpenGL.

    Libraries and header files

    First of all, if you haven’t already, you should set up SDL2. You can do this by following my guide.

    Linux / Mac

    If you’re on Linux or Mac, you don’t have to set up anything else. All you need is an extra compilation flag which I’ll show you later.


    If you’re on Windows things are a little trickier.

    1. Download the libraries, headers and binaries from the GLEW web page
    2. Put the “glew.h” header file in a folder named “GL” in the same directory as you put the “SDL” folder
    3. Put the “glew32d.lib” file in the directory where you placed “SDL2.lib”
    4. In the Visual Studio -> Project Properties -> Linker -> Input add glew32d.lib;opengl32.lib;
      • You also need SDL2.lib like in the guide, so your string should start with glew32d.lib;opengl32.lib;sdl2main.lib;sdl2.lib;
    5. Put the .dll in your project folder

    That should be it. If you get the error 0xc000007b you’ve probably mixed up 32 / 64 bits lib or dll files.

    Creating the window

    The first part of the code should look very familiar to that of plain SDL2
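    The listing isn't embedded here ; a sketch could look like this ( the window title and size are assumptions, and error checking is omitted ) :

```cpp
// Initialize SDL2 itself
SDL_Init(SDL_INIT_VIDEO);

// Create the window. The only new thing compared to plain SDL2
// is the SDL_WINDOW_OPENGL flag.
SDL_Window* window = SDL_CreateWindow("OpenGL Tutorial",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        512, 512, SDL_WINDOW_OPENGL);
```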

    In fact, the only new thing here is the SDL_WINDOW_OPENGL which tells SDL2 that we will be using this window for OpenGL and not SDL2.

    Just like with plain SDL2, we end up with a SDL_Window. And now that we have created it, we just need to connect a rendering context to it.

    Setting the variables

    Before we create and connect the rendering context, we’ll set a few variables to tell SDL2 and OpenGL which version of OpenGL we want to use. To do this, we use the function SDL_GL_SetAttribute

    Parameters :

    • attr – the attribute we want to set.
    • value – the value we want to set it to

    For a list of all SDL_GLattrs, click here.


    Returns 0 on success, otherwise a negative value.

    So now let’s use it to set a few variables :
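    The listing isn't embedded here ; based on the descriptions below, the calls could be sketched like this :

```cpp
// Disable the old, deprecated functionality ( core profile only )
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

// Ask for version 3.2 of OpenGL
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);

// Turn on double buffering
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
```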

    Context profile mask

    This means that the old, deprecated code is disabled ; only the newer versions can be used.

    You can also use this to limit your application to OpenGL ES, which means your code will work on smart phones too. But it also means we’ll have less functionality, so we won’t be doing that.

    Context version

    This sets it up so that we use version 3.2 of OpenGL. We could set the numbers higher to use a newer version, but your graphics card might not support that. Using 3.2 means we won’t have access to all of OpenGL, but for now it is sufficient for our needs.


    We need to tell OpenGL we want double-buffering, which basically means that we draw to a hidden “screen” ( or buffer. ) When we are done drawing to it, we swap the buffer we drew on with the buffer on the screen so that it becomes visible. Then we start drawing on the buffer we just swapped out ( which is now invisible. ) This way, we never draw directly on the screen, making the game look a lot smoother.

    The buffer/screen we are drawing on is usually called the “back buffer” and the one on the screen is called the “front buffer”

    Connecting a rendering context

    Now that we’ve set up the properties, we need to connect our rendering context. Fortunately, SDL2 makes this really simple, all we need is the SDL_GL_CreateContext method :
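    A one-line sketch, assuming the window we created earlier :

```cpp
// Create the rendering context and connect it to our window
SDL_GLContext context = SDL_GL_CreateContext(window);
```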

    Parameters :

    • window – the SDL_Window we want the rendering context to connect to.


    Returns a valid SDL_GLContext on success, otherwise NULL.

    Initializing GLEW

    After initializing SDL2, we need to initialize GLEW so that it can take care of our OpenGL calls. There are two steps to this :

    This tells OpenGL that we want to use OpenGL 3.0 stuff and later.

    Depending on your graphics card driver, some functions might not be available through the standard lookup mechanism. This means that GLEW can’t find them for us, and the application will crash. So there might be functions that exist, are valid and will work, but that aren’t normally available. Setting glewExperimental tells GLEW that we want to use these functions as well.

    A side note : in my experience, this is needed even when using very basic OpenGL stuff, so it’s possible that some graphics card drivers report a lot of functions as experimental resulting in the need for glewExperimental = GL_TRUE

    As you probably guessed, this simply initializes GLEW so that it can take care of looking up functions for us. And that’s really all we need as far as GLEW goes.
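    Putting the two steps together, the GLEW setup can be sketched like this :

```cpp
// Also look up functions the driver reports as experimental.
// This must be set before the call to glewInit.
glewExperimental = GL_TRUE;

// Initialize GLEW so it can look up the OpenGL functions for us
glewInit();
```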

    Drawing stuff

    Finally, let’s use OpenGL to draw something. I’ll just cover the very basics in this part, more interesting stuff next time!

    OpenGL and colors

    For the most part, OpenGL uses float values for colors. So instead of 255 being “max color”, 1.0 is max color. 0.0 means no color, and 0.5 means 50 % color ( the same as 255 / 2 = 127 in SDL2. )


    In order to clear the screen with a single color, we first need to set which colors to clear it with. For that, we can use glClearColor.

    Parameters :

    • red – the amount of red ( 0.0 – 1.0 ).
    • green – the amount of green ( 0.0 – 1.0 ).
    • blue – the amount of blue ( 0.0 – 1.0 ).
    • alpha – the amount of alpha ( 0.0 – 1.0 ).

    If you specify a value higher than 1.0, it’ll be clamped ( changed ) to 1.0.

    You can think of this function as the same as

    SDL_SetRenderDrawColor(&renderer, r, g, b, a)

    The parameters are a little different, but both sets the color that will be used in the next step.


    In order to update / fill the screen with the color we set above using glClearColor(), we use glClear()

    Parameters :

    • GLbitfield mask – basically an enum that tells OpenGL what we want to clear. We’ll use GL_COLOR_BUFFER_BIT, which means we want to clear the colors, resetting the screen to the color we set using glClearColor

    You can think of this function as the same as

    SDL_RenderClear( &renderer )


    This function swaps the back buffer ( where we are currently drawing ) with the front buffer ( the one currently on the screen. ) So you could say that this function does the actual double-buffering.

    Parameters :

    • window – the SDL_Window we want to swap the buffers on

    You can think of this function as the same as

    SDL_RenderPresent( &renderer )

    Basically ; it pushes things onto the screen.

    Setting background color example.

    Setting the background color in OpenGL is just as simple as in SDL2.

    In SDL2, you can do something like this :
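    A sketch of the SDL2 version, assuming an SDL_Renderer* named renderer :

```cpp
// Clear the screen with red ( 255, 0, 0 ) and show it
SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
SDL_RenderClear(renderer);
SDL_RenderPresent(renderer);
```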

    To do the same in OpenGL, you can do :
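    And a sketch of the OpenGL equivalent, assuming the window we created earlier :

```cpp
// Clear the screen with red ( 1.0, 0.0, 0.0 ) and show it
glClearColor(1.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
SDL_GL_SwapWindow(window);
```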

    A small example

    Let’s put it all together and make a small example. This example uses the event system in SDL2, so if you’re unfamiliar with that, you should read up on it.

    In order to compile on Linux / Mac, you can simply run

    clang++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    Using gcc/g++
    g++ main.cpp -lGL -lGLEW -lSDL2 -std=c++11 -o Test

    In the application, you can press r, g, b to swap the color


    Setting up OpenGL with SDL2 is easy! And now that we have it set up, we can do lots of fancy 3D stuff. I have been thinking about writing this for a long time, and I finally got around to it. I really hope you enjoy it and want to learn more about OpenGL. 3D is much more fun than 2D, and I promise things will get more interesting when we get the basics out of the way

    Code attribution

    The code in this post was based on the code from this post

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [SDL2 – Part 1b] Setting up Visual Studio for SDL2

    Setup Visual Studio for SDL2

    Finally, I’ve gotten around to making a quick guide for setting up Visual Studio for SDL2. This guide also includes a fix that makes it possible to use SDL2 with Visual Studio 2015

    In order to use SDL2 on Windows, you have to set up your IDE to use it. Here’s the guide for how to do that using Visual Studio. The steps are generally the same for all versions of Visual Studio, but there is an issue with Visual Studio 2015

    Visual Studio 2015

    They changed a lot in the 2015 version of Visual Studio. This change means that you get a linker error when you try to build an SDL2 project.

    It took me a little trial and error to fix this, but I ended up building the SDLmain from source. You can find it here.

    1. Getting the libs

    You can find the files you need here. For VisualStudio you need to download : (Visual C++ 32/64-bit)

    This includes both the .lib and the .h files.

    Or, as mentioned above, if you’re using Visual Studio 2015, you need a .lib file built with Visual Studio 2015. You can either build this yourself, or download the ones I compiled.

    Placing the includes/libs

    Now take all the .h files in include and move them into a folder named SDL2. You can put this folder anywhere you want as long as the folder containing all the .h files is called SDL2. The reason for this is that we use #include <SDL2/SDL.h>

    Do the same for the .lib files. The name of the directory you put them in is irrelevant in this case, just put them somewhere you remember. ( You might have to put other .libs in here at a later point in time )

    2. Setting up libs

    Start up Visual Studio, create a new project and add / write a .cpp ( for instance you can use the main.cpp in the first part of the tutorial. )

    Now we need to set up Visual Studio so it knows where to find the header files we placed in the step above.

    Right click on the project and click “Properties”

    VS Install 1

    Select C/C++, select “Additional include directories” and click “Edit”

    VS Install 2

    Click “New Line”, then navigate to the folder containing the SDL2 folder and click “Select Folder”

    VS Install 3

    You should now see something like this :

    VS Install 4

    Click OK. Now we’re done with the header files, time for the lib files.

    Under “Linker”, select “Additional Library Directories”

    VS Install 5

    Do the same thing you did for the header files, but this time navigate to the folder containing the .lib files.

    Navigate to “Input” and enter “SDL2main.lib;SDL2.lib;” in front of the others

    VS Install 6

    3. Copying .dll files

    The .dll files are needed to run SDL2 applications. When it comes to placing them you have two options :

    In the project directory

    This is the same folder as your .exe file. This means you have to copy them every time you create a new project, which can be a little annoying and easy to forget

    In your Windows system directories

    When Windows looks for dll files, it’ll look in a few standard directories in addition to the directory the .exe file is in. Putting it in one of these means the dll will always be there, and you don’t have to worry about copying.

    The directories are as follows :

    • In x86 this directory is C:/Windows/system32/
    • In x64 this directory is C:/Windows/SysWOW64/ though you might have to place them in System32/ as well.

    4. Setting the correct subsystem

    You’ll probably also have to set the correct subsystem. Go to Linker -> System and set SubSystem to Console (/SUBSYSTEM:CONSOLE).

    VS Install 7

    Adding other libs

    Now that we have this set up, we can add other SDL2 libs like SDL2_image, SDL2_ttf, etc. All you have to do is download the Visual Studio libs like before and copy the header files and lib files to the same folders as above. You also need to add the name of the new .lib file to “Input” under “Linker”, and finally you need to copy the new dlls as mentioned above.


    You can find the libs here ( download the one with VC in it. ) Add


    to Linker / Input



    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [SDL2 – Part 13] Multiplayer – TCP

    Continuing where we left off…

    Last time we left off after having taken a look at UDP connections. But, as we saw, UDP connections have several issues :

    • No delivery guarantee
    • Packages might not arrive in the same order they were sent

    So how do we solve these issues? By creating an actual connection. A TCP connection that is.

    TCP connections

    As mentioned in the previous post, UDP connections aren’t connections at all. They’re basically just two parts sending data back and forth between them. TCP connections, however, are proper connections. This means we know the following :

    • Both parts try to keep the connection alive
    • Both parts have received everything the other part has sent
    • Every package has been received in order

    These points require different mechanisms, so let’s look at them one by one.

    How TCP works

    In order to have a connection, both parts have to agree on it. This is different from UDP, where we just send the data without bothering with an actual connection. So how do we set up the TCP connection? By using a technique called the three-way handshake.

    The three-way handshake

    The process of establishing a TCP connection requires three steps:

    • Part 1 contacts part 2 and asks to start a connection
    • Part 2 replies and either says OK, or declines
      • If part 2 declines, the process stops here
    • Part 1 replies back, to confirm that it has received part 2’s reply
    • Now the actual connection is started and we can send data

    The third step might seem kinda unnecessary, but remember that any packages can get lost. And so if part 2 doesn’t get the answer from part 1, part 2 doesn’t know whether part 1 is still there. Part 1 might have lost the connection or wished to stop it altogether. We can compare this to starting a phone call :

    1. Part 1 calls part 2
    2. Part 2 picks up and says hello
    3. Part 1 says hello back

    After the last step, the conversation is started. All of these steps are necessary for us to know there is an actual conversation. Skipping one of them might make you think something is wrong. For instance, if part 2 doesn’t say ‘hello’, you might think there is something wrong with the connection.

    Dealing with the UDP issues

    The major flaw of UDP is that it doesn’t deal with loss of packages. Another issue is that we don’t know if we’re getting the correct package at the correct time. Potentially, we could get every package in the wrong order, which could lead to everything from glitchy gameplay to crashes. So let’s look at the mechanisms that make TCP connections able to give their guarantees :

    ACK numbers

    In TCP every package gets a unique number. This is basically just a counter so the first package gets the number 0, next one 1 and so on…

    This number is then sent back from the recipient to confirm that “okay, so now I have every package up to this package” ( I will refer to the confirmation as ACK numbers. ) So when we get an ACK for the last package we sent, we know that the other part has received everything we have sent so far. This means we don’t have to worry about any of the packages having been lost.

    But what if the receiver misses a package in the middle? For example, what if the receiver gets packages 1, 2 and 4, but not 3? The receiver will look at the package numbers and think “huh… I’m missing a package here.” and will only send ACK 2 back. At this point, your application might get packages 1 and 2, but not 4, since it’s out of order.

    Flow of TCP

    Let’s look at an example to see how the TCP might handle loss of packages.

    1. Sender sends package 1,2,3,4
    2. Receiver receives package 1
      • Your application receives package 1
      • Send ACK 1 all good so far!
    3. Receiver receives package 2 and 4
      • Uh oh! Where’s package 3?
    4. Sender receives ACK 2
    5. Sender waits a while before resending package 3 and 4
      • It sends package 4, which the receiver already has, because it only got ACK2
    6. Receiver receives package 3
      • Now your application will get the last two packages ( 3 and 4 )
      • Receiver sends ACK 4
    7. Sender receives ACK 4
    8. Receiver receives package 3, again!
      • The original package 3 was just delayed, so we get it again.
      • Since we already have this package, it’ll get discarded

    Now the receiver has all 4 packages and all is fine and dandy. But if you study the example, you see that in our case the sender had to resend 2 packages ( 3 and 4 ), while the receiver in reality only needed one ( 3 ). In addition, we got package 3 twice. And even though we got package 4 early, it wasn’t handed to our application before we got package 3, because TCP delivers everything in order. This is one of the major drawbacks of TCP ; there can be lots of delays and re-sending of packages.
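    The buffering and in-order delivery described above can be sketched in plain C++. This is only a toy model for illustration ( the class and method names are made up ), not how TCP is actually implemented :

```cpp
#include <map>
#include <vector>

// Toy model of TCP's in-order delivery : out-of-order packages are buffered,
// and only a contiguous run of packages is handed to the application.
class InOrderReceiver
{
public:
    // Called when a package arrives ; returns the packages that can now
    // be delivered to the application.
    std::vector< int > Receive( int packageNumber )
    {
        buffered[ packageNumber ] = true;
        std::vector< int > delivered;

        // Deliver everything up to the first missing package
        while ( buffered.count( nextExpected ) )
        {
            delivered.push_back( nextExpected );
            buffered.erase( nextExpected );
            ++nextExpected;
        }

        return delivered;
    }

    // The highest ACK we can send : everything below nextExpected has arrived.
    int LastAck() const { return nextExpected - 1; }

private:
    std::map< int, bool > buffered;
    int nextExpected = 1;
};
```

    If packages 1, 2 and 4 arrive, only 1 and 2 are delivered and the ACK stays at 2. Once package 3 shows up, both 3 and 4 are delivered at once, just like in the flow above.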

    TCP send rate

    A final point about TCP performance is how it regulates package sending. TCP can send several packages at a time. The basic idea is that every time all packages are delivered successfully and in time, it’ll send more packages the next time.

    It might only send 2 packages the first time. But if it gets ACKs for both of those within a given time, it might send 4 the next time. Then maybe 8, and it keeps increasing until it doesn’t get an ACK for all packages in time. When that happens, it’ll send fewer packages the next time. Let’s look at a simple example :

    1. Send 2 packages
      • Receive ACK for both packages
    2. Send 4 packages
      • Receive ACK for all packages
    3. Send 8 packages
      • Receive ACK for only 5 packages
      • We’re missing 3 packages! Maybe this was too many? Try sending fewer…
    4. Send 6 packages

    As you can see, TCP will try its best to keep sending the maximum number of packages without having to resend anything.
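    The numbers in the example above can be mirrored by a toy function. Real TCP congestion control ( slow start, congestion avoidance, etc. ) is far more sophisticated ; this is only a sketch of the idea, with a back-off rule I picked to match the example :

```cpp
// Toy model of the send-rate idea : grow the window while everything is
// ACKed in time, back off when ACKs are missing. Purely illustrative.
int NextWindowSize( int packagesSent, int packagesAcked )
{
    if ( packagesAcked == packagesSent )
        return packagesSent * 2; // Everything arrived in time : try sending more

    // Some ACKs are missing : back off to slightly above what succeeded
    return packagesAcked + 1;
}
```

    Feeding it the example gives 2 → 4 → 8, and after only 5 of the 8 ACKs arrive, it drops back to 6.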

    Complexity of TCP

    Although TCP is quite old ( more than 30 years, ) it’s really complicated. There are a lot of different mechanisms involved to deal with anything that might happen. Not only does it handle the cases we’ve seen, it also, as we saw, needs to control the send rate for optimal performance.

    I have purposely simplified it, because it’s nice to have a basic understanding of how TCP works. It might help you choose whether you should use UDP or TCP.

    The two different parts of TCP connections

    Since TCP connections are actual connections, there needs to be a central part ( server ) that the others ( clients ) connect to. I’ll briefly discuss servers and clients, and what their roles are when it comes to TCP.


    The server is the part that all the clients connect to. The server will always be listening for new connections and accepting them as they come. The server will accept connections from any client, so we don’t specify IPs on the server side ( more on this later. ) We will only specify the port to listen to.


    The client tries to connect to the server. It needs to know both the IP and port of the server before it can try to connect to the server. The server doesn’t know anything about the client until it tries to connect to the server.

    When the server accepts the client, the connection is established. At this point in time, the server also has a connection to that client specifically. We’ll see this later.

    Note that these are just different types of connection, not necessarily different computers. A computer can have any number of server and / or client connections.

    Time for some code

    So now that all the technical information about TCP is out of the way, we can start setting up a TCP connection of our own. As you might expect, the source code for TCP is a bit more involved than the UDP one.

    This part relies on code from my previous post. If something is unclear, you can go back and read that part if you want more information. I have also added links to the documentation in the headers.


    Just like with UDP, we need to ask SDL_net to get the correct representation of the address ( because of different endianness. ) The function is the same ( but it is used in a different way for servers, so do read on. )
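    For reference, the signature as found in the SDL_net headers :

```c
int SDLNet_ResolveHost( IPaddress *address, const char *host, Uint16 port );
```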

    Parameters :

    • IPaddress* address – a pointer to an allocated / created IPaddress that will be filled in
    • const char* host – the IP address to send to
    • Uint16 port – the port number to send to

    Return value

    The int value 0 on success, otherwise -1. In the latter case, the IP in the IPaddress will be INADDR_NONE. This can happen if the address is invalid or leads to nowhere.

    But there is a slight difference in how it’s used. Since TCP connections are actual connections it has both a server and a client part :


    The server part needs to be listening for IPs that are trying to connect. So we’re not really resolving a host this time. We’re just preparing the IPaddress for the next step.

    So what we do is that we simply use null as the IP. This tells SDL_net that we don’t want to use this IPaddress to connect to a host, but rather just listen for other connections. SDL_net does this by setting the IP to INADDR_NONE. This comes into play in the next step ( SDLNet_TCP_Open)


    For clients, this function is used more or less exactly like in the previous part ; it’ll prepare the IPaddress with the information we supply.

    Of course, the port on both the server and client side has to be the same.

    Note : no connection has been initiated yet ; we’ve just asked SDL_net to prepare the port and IP address for us.


    This is a new type. It represents a TCP connection. We’ll use it just like UDPSocket but of course this time it represents a TCP connection instead of a UDP connection.


    Now that we have an IPaddress correctly set up, we can try to connect to a host. This is done by the function SDLNet_TCP_Open.

    Here is the function signature.
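    From the SDL_net headers :

```c
TCPsocket SDLNet_TCP_Open( IPaddress *ip );
```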

    Parameters :

    • IPaddress *ip – an IPaddress containing the IP and port we want to connect to. We’ll use the one we got from SDLNet_ResolveHost.


    Return :

    • For clients : a TCPsocket to the server, which can be used for sending and receiving data
    • For servers : a TCPsocket used for listening for new clients trying to connect

    This function will try to open a connection. But just like with SDLNet_ResolveHost, there are two different cases here


    Above we saw that if we call SDLNet_ResolveHost with null as the IP, SDL_net will set the IP of the IPaddress to INADDR_NONE. This means we will be listening for connections, rather than trying to connect. This is because, as a server, we don’t actively try to connect to another host ( we just accept connections ), so we don’t know about any IP address yet.

    What this function does in this case, is that it tries to open the port for listening.


    For clients, this works much like for UDP : we try to connect to the server with the given IP and port

    At this point, the client is connected to the server, and now they can communicate. This is a little different from how it works in UDP, so let’s start by looking at how the communication can be done in TCP.

    A quick example

    Before we jump into the next part, let’s have a quick look at an example of how to use these two functions. These two functions are the initialization part of the TCP code. Since these steps are slightly different from client to server, I’ll cover them separately.


    Simply set up the IP address and use it to open a port for listening :
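    A minimal sketch of the server-side setup ( the port number 2000 is just an example, and error handling is kept short ) :

```cpp
// Server side sketch ; assumes SDL_net has been initialized.
IPaddress ip;

// Passing NULL as the host means we want to listen, not connect
if ( SDLNet_ResolveHost( &ip, NULL, 2000 ) == -1 )
    std::cout << "SDLNet_ResolveHost failed : " << SDLNet_GetError() << '\n';

// Opens port 2000 for listening
TCPsocket serverSocket = SDLNet_TCP_Open( &ip );

if ( serverSocket == NULL )
    std::cout << "SDLNet_TCP_Open failed : " << SDLNet_GetError() << '\n';
```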


    Simply set up the IP address and try to connect to the server with that IPaddress :
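    A minimal sketch of the client-side setup ( the IP and port are just example values ) :

```cpp
// Client side sketch ; assumes SDL_net has been initialized.
IPaddress ip;

// Resolve the server's address ( here : localhost, port 2000 )
if ( SDLNet_ResolveHost( &ip, "127.0.0.1", 2000 ) == -1 )
    std::cout << "SDLNet_ResolveHost failed : " << SDLNet_GetError() << '\n';

// Try to connect to the server
TCPsocket clientSocket = SDLNet_TCP_Open( &ip );

if ( clientSocket == NULL )
    std::cout << "SDLNet_TCP_Open failed : " << SDLNet_GetError() << '\n';
```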

    The job of the client

    The clients are the parts you’ll be dealing with the most. A client communicates with other clients. This is more or less just like in UDP, but there are some differences.


    Sending data using TCP is done using a slightly different function from that of UDP :
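    From the SDL_net headers :

```c
int SDLNet_TCP_Send( TCPsocket sock, const void *data, int len );
```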

    Parameters :

    • TCPsocket sock – the TCPsocket on which to send the data
    • const void *data – the data to send
    • int len – the length of the data ( in bytes )

    This function is quite straightforward. The only thing to note is the void*. The type void* is widely used in C, but not so much in C++. It’s basically a pointer to anything, so the data can be just about any form of data. This requires a bit of low-level C “hacking” to get right.


    The number of bytes that were sent. If this is less than the size of the data we tried to send ( the len parameter, ) an error has occurred. This error could be the client disconnecting or a network error.

    Using this function correctly is tricky, in a similar way to UDP. Let’s look at a possible way to implement it :
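    A possible wrapper ( the function name is made up for illustration ) :

```cpp
// Sends a string over the socket ; returns false on error or disconnect.
bool SendMessage( TCPsocket socket, const std::string &message )
{
    // + 1 so the terminating null byte is sent too
    int length = static_cast< int >( message.size() ) + 1;

    int bytesSent = SDLNet_TCP_Send( socket, message.c_str(), length );

    // Fewer bytes than requested means a disconnect or a network error
    return bytesSent == length;
}
```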


    Receiving data using TCP is also done using a slightly different function from that of UDP :
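    From the SDL_net headers :

```c
int SDLNet_TCP_Recv( TCPsocket sock, void *data, int maxlen );
```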

    Parameters :

    • TCPsocket sock – the TCPsocket to receive the data from
    • void *data – the buffer the received data is written into
    • int maxlen – the maximum number of bytes to receive


    Return :

    The number of bytes received. If this is 0 or less, an error has occurred or the connection was closed.

    And since this is C ( and not C++ ) we need to allocate a decently sized buffer in advance ( this is the void *data part. ) It can hold up to maxlen bytes. Setting up the buffer involves a little C-style trickery.

    Let’s look at an example :
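    A possible wrapper ( the buffer size and function name are made up for illustration ) :

```cpp
// Receives whatever is waiting on the socket, as a string.
std::string ReadMessage( TCPsocket socket )
{
    char buffer[ 512 ]; // Allocated in advance ; maxlen == sizeof( buffer )

    // Blocks until something arrives ( at most sizeof( buffer ) bytes )
    int bytesReceived = SDLNet_TCP_Recv( socket, buffer, sizeof( buffer ) );

    if ( bytesReceived <= 0 )
        return ""; // Error or disconnect

    return std::string( buffer, bytesReceived );
}
```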

    The job of the server

    So now we have a TCPsocket that listens to the port we specified, and we can try to accept new connections. For now, we’ll try to accept connections right out of the blue. But later we’ll look at how to check for clients trying to connect. Anyways ; here is the method we need:
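    From the SDL_net headers :

```c
TCPsocket SDLNet_TCP_Accept( TCPsocket server );
```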


    This is essentially the accept part of the three-way handshake. The client has tried to connect to us, and we need to accept it before the connection is established. This function does exactly what you might expect : it accepts an incoming TCP connection, informs the client and thus establishes the connection.

    Parameters :

    • TCPsocket server – the TCPsocket we use for listening for new connections. This is the TCPsocket we created using SDLNet_TCP_Open.

    Return :

    A different TCPsocket. This TCPsocket represents a connection to a specific client. If it’s valid, it means a connection has been established. If it’s null, no connection was established. This can mean that there was an error, but it can also mean that no clients were trying to connect.

    This function might lead to some confusion as there are two TCPsockets, but remember :

    The first one ( the parameter we supply ) is the server TCPsocket. This is not connected to any client ; we just need it to be able to listen for new connections. We create this TCPsocket by calling SDLNet_TCP_Open.

    The second TCPsocket is for a specific client. We create this TCPsocket by calling SDLNet_TCP_Accept. When it’s created, it can be used exactly like the TCPsockets created on the client side. ( As I talked about in the client part of SDLNet_TCP_Open. )

    Dealing with SDLNet_TCP_Recv

    There is a major issue with the receive function. It blocks. This means the function waits until it has received something. Actually, according to the documentation, it’ll wait til it has received exactly maxlen bytes and then set those in the void* data. But from what I’ve found, this isn’t 100% true.

    What I have found, is that the function will block. But only until it has received something ( at most maxlen bytes. ) So, in other words, it waits til it has received something, no matter how little or much it is. But even though this is better than waiting for maxlen bytes, the fact that it blocks is still an issue we’ll need to solve.

    SDLNet_TCP_Recv will also join together messages if it can. So say client 1 sends




    in two separate messages, SDLnet can join them together so that what client 2 gets is


    in one message.

    This can ( and probably will ) happen if buffer size is large enough.

    Or, if the buffer size is too small one call might only get part of the data. So if client 1 sends :


    But if client 2 has the buffer size set to 6, it’ll get


    The first time client 2 calls SDLNet_TCP_Recv. And


    The second time it calls SDLNet_TCP_Recv

    That means there are two issues to fix : the fact that it blocks and the fact that we might not receive everything with one call to SDLNet_TCP_Recv.


    To solve this, we can check whether something has happened on a collection of TCPsockets ; this includes someone connecting, disconnecting or data arriving.

    We can use a SDLNet_SocketSet to solve this. Think of it as simply a set of sockets. We’ll be using it for storing and checking TCPsockets to see if there is any activity. A SDLNet_SocketSet can contain any number of TCPSockets. Those can be both server and client connections.


    This is a really simple function for adding a socket to a SDLNet_SocketSet. It also exists for UDP, but we’ll be using the TCP version, of course.
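    From the SDL_net headers ( the set itself is created first with SDLNet_AllocSocketSet ) :

```c
// Create a set with room for a given number of sockets
SDLNet_SocketSet SDLNet_AllocSocketSet( int maxsockets );

// Add a TCP socket to the set
int SDLNet_TCP_AddSocket( SDLNet_SocketSet set, TCPsocket sock );
```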

    Parameters :

    • SDLNet_SocketSet set – the SDLNet_SocketSet we want to add the TCPsocket to
    • TCPsocket sock – the TCPsocket we want to add to the SDLNet_SocketSet

    Return :

    The number of TCPsockets in the SDLNet_SocketSet on success. Or -1 on failure.


    Now that we’ve added sockets to the SDLNet_SocketSet, we can use the SDLNet_CheckSockets function to check for activity. “Activity” in this case basically means that something has happened. This can either mean we have received data, that someone has disconnected or that there is an error.
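    From the SDL_net headers :

```c
int SDLNet_CheckSockets( SDLNet_SocketSet set, Uint32 timeout );
```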

    Parameters :

    • SDLNet_SocketSet *set – the SDLNet_SocketSet we want to check for activity
    • Uint32 timeout – a variable stating how long ( in milliseconds ) we want to wait for activity. We can wait anything between 0 milliseconds and… well anything up to 49 days.

    Return :

    The number of TCPsockets in the SDLNet_SocketSet with activity on success. Or -1 if either the SDLNet_SocketSet is empty or there was an error.


    After we’ve called SDLNet_CheckSockets, we can use this function to check whether a particular TCPsocket has been marked as active. This function should be called on a socket in a SDLNet_SocketSet after SDLNet_CheckSockets has been called on the SocketSet that holds that TCPsocket.
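    SDLNet_SocketReady is actually a macro, but it can be used as if it had this signature :

```c
int SDLNet_SocketReady( TCPsocket sock );
```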

    Parameters :

    • TCPsocket sock – the TCPsocket we want to check for activity

    Return :

    Non-zero if the TCPsocket has activity, 0 otherwise.

    In other words ; we use SDLNet_CheckSockets to see if any of the TCPsockets in a SDLNet_SocketSet have any activity. If so, we can call SDLNet_SocketReady on each of the TCPsockets in that SDLNet_SocketSet to see if that TCPsocket in particular has any activity.


    Now let’s look at how you could implement an update function that checks for activity. It’ll be different for server and client connections, since client connections check for incoming messages and disconnections, while on the server side we simply check for clients trying to connect.

    Client side example

    As I mentioned above, on the client side we need to check for disconnections and incoming messages. Here is a way to do that :
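    A sketch of how this could look ; the function name, buffer size and the 0 ms timeout are my own choices :

```cpp
// Poll a connected client socket for disconnects and incoming data.
void CheckClientActivity( SDLNet_SocketSet socketSet, TCPsocket socket )
{
    // Timeout of 0 : just poll, don't block
    if ( SDLNet_CheckSockets( socketSet, 0 ) <= 0 )
        return; // No activity ( or an error )

    if ( SDLNet_SocketReady( socket ) )
    {
        char buffer[ 512 ];
        int bytesReceived = SDLNet_TCP_Recv( socket, buffer, sizeof( buffer ) );

        if ( bytesReceived <= 0 )
            std::cout << "Disconnected, or an error occurred!\n";
        else
            std::cout << "Received : " << std::string( buffer, bytesReceived ) << '\n';
    }
}
```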

    A problem that arises here is that calling SDLNet_CheckSockets sets the TCPsocket back to “inactive”, even if there are several messages waiting to be read.

    So when you have called ReadMessage(), you have no way of knowing if it has any more data. Calling it again, would mean calling SDLNet_TCP_Recv again which could block until the other client sent more data.

    This is an issue lots of tutorials that I’ve seen have. But there is a solution that doesn’t block ; we just need to call SDLNet_CheckSockets again. So just add this to the bottom of the previous function :

    Server side example

    On the server side, we need to check for clients trying to connect. This is fortunately a little bit simpler than what we had to do on the client side. Here is the code :
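    A sketch of the server-side check ; the function name is made up for illustration :

```cpp
// Poll the listening socket ; accept and store any client that's connecting.
void CheckForNewConnections( SDLNet_SocketSet socketSet, TCPsocket serverSocket )
{
    if ( SDLNet_CheckSockets( socketSet, 0 ) <= 0 )
        return; // Nothing happened

    // Activity on the server socket means a client is trying to connect
    if ( SDLNet_SocketReady( serverSocket ) )
    {
        TCPsocket client = SDLNet_TCP_Accept( serverSocket );

        if ( client != NULL )
        {
            // Add the new client to the set so we can poll it for messages later
            SDLNet_TCP_AddSocket( socketSet, client );
        }
    }
}
```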

    I think that’s all for now. You can find a working implementation



    Setting up a TCP connection using SDL_Net is quite tricky. Lots of tutorials out there just briefly discuss the topic without going into much detail about the different functions. Hopefully this post has helped you get a better view of the different parts of SDL_net ( I sure did writing it! ) I might also post a third networking post about an even better way of doing network communication using both UDP and TCP for maximum performance.

    I’m also really glad to finally have finished and published a new post. I know it’s been a long time since last time, but I’ve been a bit busy at work and haven’t really had the time or energy. But I feel some of my energy is back. And getting positive feedback is always amazing ; it helps me keep going. So thanks to everyone who’s commented! : )

    (Semi) Final code :

    Working implementation of TCP connections ( NOTE : work in progress! )

    Github page

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [SDL2 Part – 12 ] Multiplayer


    A huge aspect of game programming is multiplayer. A huge share of today’s popular video games support playing over the Internet in some way. But this introduces several new challenges, both in the communication over the Internet itself, and in what data to send, how to react to data and how to keep everything synchronized. This is going to be a sub-series in itself, starting off with how to do the actual communication.


    When you’re sending something over the Internet, it gets split into quite small packages, each usually a few kilobytes or smaller. There are actually several types of packages, which get wrapped in each other. I won’t get into the details of this ; what we need to know is simply that these package types exist and that they get wrapped in each other.

    One of the basic packet types is the IP ( Internet Protocol ) packet. You can look at this as the package that leaves our computer and is sent out on the Internet. Every package that is sent across the Internet is of this type.

    IP package

    We won’t go into the details of each of these fields, the image is there just to show you how it looks. I might go into the details on a later point in time, but that is a big topic and we don’t really need to know all of that in order to do multiplayer.

    The package is then sent out on the Internet and will eventually find its way to its destination PC. This is similar to sending a letter. You put the letter inside an envelope and write the destination address on it. Then the mail carrier will take care of the rest.

    Actually, that isn’t entirely true because there is another step before the package leaves your computer; we need a protocol.


    Above we saw an example of an IP package.

    So now that we know about packages, the next topic is protocol. A protocol is basically a set of rules about how the two parts should communicate. Most important to us, these rules dictate how to detect package loss and how to deal with them.

    Why do we need protocols?

    Most of the time when you send something, everything goes fine and the recipient gets the data packages they need. But things can go wrong. Let’s take an example.

    Say we have sent our letter to someone, but on its way, the letter gets lost. How can we know? The mail carrier doesn’t know that the letter is lost, the recipient doesn’t know you’ve sent the letter, and you yourself are assuming the letter arrived at its destination without any issues.

    The exact same thing can happen on the Internet ; packages can get lost, and no one will know. So how can the problem be avoided? This is where protocols come in!

    There are two protocols we’ll cover : TCP and UDP. This time, I’ll only cover UDP. I’ll cover the other one, TCP, in a later post.


    The most basic of the protocols we’ll look at is UDP ( User Datagram Protocol. ) Actually, UDP is so basic that there are no rules about how to deal with package loss. You have to handle that yourself.

    Another issue is that there is no guarantee that the packages will be received in order. So you could get packages 1, 2, 3 as 3, 2, 1 or 3, 1, 2 or 2, 3, 1, etc… And that’s if you get all the packages at all. Needless to say, using UDP can cause lots of problems. But it’s simple, so we’ll start with it.

    UDP is generally used for performance or simplicity reasons :

    • Video streaming
      • If a package is lost, this is just a tiny piece of the stream data and you might not even notice it
    • In cases where you just want to send a state to the server
      • Like using ping where you just get an echo back
      • If you don’t get a message back, you can retry again and again, or report an error
    • Games
      • For reducing lag
      • This means they have their own way of dealing with package loss

    Addressing on the internet

    On the Internet, there are two units used for addressing, the IP address and the port number. We need both of these to communicate over the Internet.

    IP adress

    The IP address is used to address a computer. Every unit on the Internet has an IP address that refers to that unit. You can look at it as the address of a house : when you send a letter to someone, you write the address of their house on it. When you send a packet to a computer, you send it to that computer’s IP address.

    Port numbers

    Port numbers are used to distinguish between connections. Each connection has a separate port number tied to it. If we didn’t have port numbers, all data would go into one large buffer and you’d have no idea which of the data was yours.

    So if an IP address refers to a house, a port could be looked at as a name.

    When you set up a connection, you need both of these : an IP address and a port number. Actually, you need two of both, since the receiver needs to know who sent the package so that it knows who to reply to. So all in all, we need two IPs ( our IP and the destination IP ) and two port numbers ( our port number and the destination port number. )

    So basically, you need :

    • Your own IP address and port number
    • The recipient’s IP address and port number

    Setting up the connection

    Actually, UDP is not really a connection ; it’s just two parts sending data back and forth. But both parts still need to know the IP and port number of each other. And it is common, and more practical, to think about it as an actual connection.

    There are two roles in a UDP “connection”.

    • A client
      • Tries to connect to a server
    • A server
      • Waits for a connection from a client

    So the procedure for a “connection” will be something like this

    • Server with ip waits for connection, listening to port 123
    • Client sends a packet to, port number 123
    • Server stores IP and port number for client
    • Server and client now knows both port number and IP of each other
    • Now the server and client can send data back and forth


    The final part we need to cover about connections is sockets. A socket is a combination of IP addresses and ports that is unique on every PC for every connection. It consists of the following :

    • IP and port of the client side of the connection
    • IP and port of the remote part of the connection
      • This part is mostly used for TCP connections, we won’t use it
    • Type of connection ( UDP, TCP, etc… )

    We’ll be using sockets as a unit to keep track of a connection. You can compare sockets to the sockets you plug your electrical devices into. Following that analogy, the wire would be the connection ( network cable. ) So, in essence, it’s what connects your application to the Internet.

    I realize this might be a lot of information and hard to wrap your head around, but it’ll get clearer when we put it to use.


    Now that we know a tiny bit about UDP connections, let’s try to set up one ourselves. For that purpose, we need the SDL_net library. It is capable of setting up and maintaining both UDP and TCP connections. Since UDP connections are way simpler, we’ll only cover that for now.

    Networking is, just like text rendering and .png loading, a separate part of SDL called SDL_net. We install it the exact same way as before :


    Installing SDL2_net is done exactly like SDL2_image. Just replace SDL2_image with SDL2_net

    Here’s the short version :


    For Linux you need to install the SDL2_net package ( the actual package name might be different in different distributions. )

    The linker flag is -lSDL2_net

    The process is more or less identical to that of setting up SDL2_image.

    If you can’t find SDL2_net in any repositories and it’s not installed by default, you might have to compile it yourself. For more information, see my blog post about setting up SDL2.


    Similar to setting up SDL2 base.

    The difference is that you have to download the development files for SDL2_net

    And similarly add SDL2_net.lib to the library flags ( where you previously added SDL2_image.lib )

    And with that, it should work.


    See the first part of my tutorial. Just install SDL2_net instead of SDL2

    Using SDL_net to set up a connection

    Setting up a connection with SDL_net is a bit more complicated than what we’ve previously seen. This is because there are a few steps, the code will be very C ( not C++ ) and there are some buffers ( raw arrays ) we need to keep track of.

    We’ll be cutting out all GUI because we simply don’t need it. It will make our code shorter and it’ll be easier to display the results.

    Structures of SDL_net

    SDL_net contains two parts we need for our UDP connection. Let’s start with the simplest, IPAddress.


    A simple struct with the following fields :

    • Uint32 host – IPv4 address
    • Uint16 port – protocol port

    It is used for keeping IP and port number together. Some functions take this as one of their parameters.


    A pointer to an internal socket structure. Since it is a pointer, it can be NULL, in which case there is no socket and we can’t send data back and forth.


    Our data packet. Contains the data we are sending back and forth along with some other information.

    • int channel
      • The src/dst channel of the packet
      • We won’t be using this
    • Uint8 *data
      • The packet data we’re sending
      • Can be of any length
    • int len
      • The length of the packet data
      • Used to find the end of the data in the data pointer
    • int maxlen
      • The max size of the data buffer
      • Always as large or larger than len
      • Only used for packet creation on the sender’s side
    • int status
      • Packet status after sending
      • Number of data sent
      • -1 on failure
    • IPaddress address
      • The source/destination address of a packet
      • For received packets this is the IP / port of the remote part.
      • For sent packets this is the IP / port to send to.

    The various fields of a UDP packet are set by the various functions used for sending and receiving data. It might seem confusing right now, but it’ll get clearer once we get into the actual code.

    Functions of SDL_net


    This function is just like SDL_Init and TTF_Init ; it initializes SDL_net.


    This function is used for creating a socket which we will use later to send data.

    Parameters :

    • Uint16 port – the port we want to use. If you use 0, SDL_Net will assign a port for you.

    Return value :

    A valid UDPsocket, NULL on error. Remember that UDPsocket is a pointer.

    As we saw earlier, UDP isn’t actually a connection. All we are doing is sending data back and forth. And all we need to do that is a socket. Now that we’ve opened this socket, we can start dealing with packages.
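    As a minimal sketch ( assuming SDL2_net is installed ; port 123 is just a placeholder value ), opening a UDP socket could look like this :

    ```cpp
    #include <SDL2/SDL_net.h>
    #include <iostream>

    int main()
    {
        if ( SDLNet_Init() == -1 )
        {
            std::cout << "SDLNet_Init failed : " << SDLNet_GetError() << '\n';
            return 1;
        }

        // Open a UDP socket on port 123 ( pass 0 to let SDL_net pick a free port )
        UDPsocket udpSocket = SDLNet_UDP_Open( 123 );

        if ( udpSocket == nullptr )
        {
            std::cout << "SDLNet_UDP_Open failed : " << SDLNet_GetError() << '\n';
            return 1;
        }

        SDLNet_UDP_Close( udpSocket );
        SDLNet_Quit();
    }
    ```
    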


    As stated before, we need an IP address and port number in order to send data. The problem is that there are several ways to represent IP addresses and port numbers. The difference between them is the order in which they are stored in binary. These orders are referred to as little endian and big endian. I won’t dive more into this, but you can read about it here.

    The issue is that different systems use different endianness. So we need a uniform way of setting the IP address and port number. This is where SDLNet_ResolveHost comes in. It sets the values of an IPaddress for us so we don’t have to think about endianness at all.

    Parameters :

    • IPaddress* address – a pointer to an IPaddress. Needs to be allocated / created in advance. ( In our case, we’ll use a variable and not a pointer so we don’t have to worry about this. )
    • const char* host – the IP address to send to
    • Uint16 port – the port number to send to

    Return value :

    0 on success, otherwise -1, in which case the host of the IPaddress will be set to INADDR_NONE. This can happen if the address is invalid or leads to nowhere.
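    A short sketch of how this might be used ( 127.0.0.1 and port 123 are placeholder values ) :

    ```cpp
    #include <SDL2/SDL_net.h>
    #include <iostream>

    int main()
    {
        SDLNet_Init();

        IPaddress remoteAddress;

        // Fills in remoteAddress with the IP and port, in the correct byte order
        if ( SDLNet_ResolveHost( &remoteAddress, "127.0.0.1", 123 ) == -1 )
            std::cout << "SDLNet_ResolveHost failed : " << SDLNet_GetError() << '\n';

        SDLNet_Quit();
    }
    ```
    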


    Allocates a UDPpacket and returns a pointer to it.

    Parameters :

    • int size – size of the packet in bytes. 0 is invalid.

    Return value :

    A valid pointer to UDPpacket, NULL on error ( such as out of memory )

    The size of the packet determines how much data we get every time. It’ll never be more than this size, but it can be less. You can also expect that some packets get merged or split up into different segments. This is something we’ll need to handle.

    After allocating space for a packet, we can finally fill that packet up with something. Which is kinda the point of this ordeal.
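    Here’s a sketch of allocating a packet and filling it with a short message ( the 512-byte size is an arbitrary choice ) :

    ```cpp
    #include <SDL2/SDL_net.h>
    #include <cstring>
    #include <iostream>

    int main()
    {
        SDLNet_Init();

        UDPpacket* packet = SDLNet_AllocPacket( 512 );

        if ( packet == nullptr )
        {
            std::cout << "SDLNet_AllocPacket failed : " << SDLNet_GetError() << '\n';
            return 1;
        }

        // Copy our message into the packet's data buffer and set its length
        const char* message = "Hello!";
        std::memcpy( packet->data, message, std::strlen( message ) + 1 );
        packet->len = std::strlen( message ) + 1;

        SDLNet_FreePacket( packet );
        SDLNet_Quit();
    }
    ```
    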


    Sends a UDPpacket

    Parameters :

    • UDPsocket sock – Our socket to send data from ( the one we created with SDLNet_UDP_Open )
    • int channel – We’ll completely ignore this parameter and just set it to -1 ( all channels )
    • UDPpacket* packet – the data we want to send ( finally! )

    Return value :

    The number of destinations the packet was sent to. In our case, this will be 1, but it could be more. Because of this, 0 is returned on error. Anything higher than 0 means at least partial success ( since we were able to send to at least one destination. )

    In our case, the function should always return 1 but I find it better to just check for 0.
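    Putting the pieces together, sending could be sketched like this ( the socket, address and packet are assumed to be set up as shown earlier ; SendPacket is my own helper name ) :

    ```cpp
    #include <SDL2/SDL_net.h>
    #include <iostream>

    // Send a ready-made packet to the given address through the given socket
    bool SendPacket( UDPsocket udpSocket, UDPpacket* packet, IPaddress remoteAddress )
    {
        // Tell SDL_net where this packet should go
        packet->address = remoteAddress;

        // -1 means "all channels" ; we ignore channels entirely
        if ( SDLNet_UDP_Send( udpSocket, -1, packet ) == 0 )
        {
            std::cout << "SDLNet_UDP_Send failed : " << SDLNet_GetError() << '\n';
            return false;
        }

        return true;
    }
    ```
    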


    Now that we know how to send data, we also need to know how to receive it.

    Parameters :

    • UDPsocket sock – Our socket to receive data from ( the one we created with SDLNet_UDP_Open )
    • UDPpacket* packet – the data we received

    Return value :

    The int value 1 when a packet is received, 0 when no packets were received, and -1 on errors.
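    Since SDLNet_UDP_Recv doesn’t block, a typical pattern is to poll it every frame. A sketch ( the socket and packet are assumed to exist already ) :

    ```cpp
    #include <SDL2/SDL_net.h>
    #include <iostream>

    // Poll the socket and print every packet that has arrived.
    // SDLNet_UDP_Recv returns immediately, so this is safe to call every frame.
    void CheckForData( UDPsocket udpSocket, UDPpacket* packet )
    {
        while ( SDLNet_UDP_Recv( udpSocket, packet ) == 1 )
        {
            // packet->data is a raw byte buffer ; here we assume it holds a C string
            std::cout << "Received : " << reinterpret_cast<char*>( packet->data ) << '\n';
        }
    }
    ```
    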


    To make it simpler to use SDL_Net, I’ve made a helper class that takes care of everything. You’ll find an example at how to use it below.


    To compile it on Linux or Mac, simply run :

    clang++ UDPExample.cpp -std=c++11 -lSDL2_net -o UDPTest

    Starting the example

    To use the example, you need two instances of the application. So start up two instances of it.

    You’ll be asked to enter local IP. This is the IP of the computer you are sitting on. You can use 127.0.0.1, which simply means “this computer”. You can do this in both instances. You’ll also be asked to enter a local port and remote port. These need to be opposite on the two instances ; the local port of the first one needs to be the remote port of the other. This is because we need to know where to send data to and where to listen for data.

    Instance 1

    Instance 2

    ( notice the difference in local and remote port on the two instances. )

    Using the example

    After inserting IP and port data, you’ll be presented with a simple menu :

    Nothing will happen before you do one of the three options. And if your message doesn’t show up on the other instance, make sure you’ve entered ‘2’

    And now you should be able to set up connections. Feel free to use the UDPConnection struct as you like.

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [ SDL2 – Part 11 ] Text styling

    Text styles using SDL2_ttf

    In the last post we looked at how to render text. Now let’s take this a step further and change the appearance of the font. There are two ways you can change how the font looks : font style and font outline.

    Font styles

    SDL TTF allows you to set the style of the font. It supports the following font styles

    • bold
    • italic
    • underlined
    • strikethrough

    You can combine these in any way you like. We’ll start off with just setting a single font style at a time, and then move on to see how we can apply several of them at once.

    Font styles

    Setting font styles in TTF is easy ; it just requires a single function, which lets you set one or more font styles. Let’s start off by looking at how to set just one font style.

    Setting the font style

    We can set the font style using the following function

    Arguments :

    •  TTF_Font *font – the font to set the style on
    • int style       – the style to set on the font

    As you can see, the style parameter is an int and not an enum. I’ll get back to why that is later, but for now let’s look at the possible values for style, these are all self-explanatory so I won’t be adding a description.


    Any text you render after setting this font style will have the new effect, but it won’t change any text you have written with a different style. So when you set the style to TTF_STYLE_BOLD, all text you render from that point and until you set a different style will be bold. And as long as you pass any of the above values to the function, the font will only have the one last style you set.

    Let’s do a simple example

    Any text rendered at this point will be normal with no font styles

    Any text rendered at this point will be bold

    Any text rendered at this point will be in italics, but not bold

    Any text rendered at this point will be normal with no font styles

    Any text rendered at this point will be underlined

    As you can see, this is pretty straight forwards. So let’s make things a little bit trickier by setting multiple font styles at once. To do this, we must first look a bit at the binary number system

    Binary numbers

    In order to learn about how to combine these flags, we need to look at binary numbers first of all. If you don’t already know about binary numbers, you should take a look at the above link. It’s not crucial, but it is highly recommended to know a little about them. I might create a blog post about them at some point. For now, I’ll just talk a tiny bit about the binary number system. But as I said, I highly recommend understanding it fully to the point where you can convert back and forth between binary and decimal numbers

    The binary number system

    On a daily basis, we use the decimal number system. The binary number system is just a different way of representing numbers. Any number can be converted between number systems. So you can convert binary numbers to decimal numbers ( and the other way around ).

    A computer stores numbers as individual bits ( 0‘s and 1‘s ). They correspond to on / off or true / false.

    Let’s take a look at an 8 bit binary number ( 1 byte )

    1010 0101

    As you can see, it has 8 digits. So that’s eight different flags. Each of these flags has two possible values : 0 / 1 or false / true. So that’s 8 bools for the price of a single byte!

    Bitwise operations

    So how do we use these 8 booleans? As you know, we have the following boolean operations in C++:

    • and ( && )
    • or ( || )

    These work on an entire variable. An int, for instance, will be false if its value is 0, otherwise it’s true.

    But there are similar operations that do this on every bit of a variable. These are called bitwise operations, simply because they operate on the individual bits. To do a bitwise operation, we need two variables of equal size ( same number of digits ), for instance two bytes. The result of a bitwise operation is a third variable of the same size. So if we do a bitwise operation between two bytes, we get a third byte back as a result.

    Let’s create two bytes, we’ll use these for a few examples

    Byte 1 : 0101 0011 ( 64 + 16 + 2 + 1 = 83 )
    Byte 2 : 0110 0010 ( 64 + 32 + 2 = 98 )

    We’ll be referring to each digit as a position. So the digit in the first position is 0 in both our bytes. In the second position it’s 1 in both bytes, and in the third it’s 0 in the first byte and 1 in the second byte.

    Bitwise OR

    A bitwise OR operation means we look at each position and check if either of the digits is 1. If so, we set the digit in that position to 1. If no digit in that position is 1, we set it to 0.

    The operator for bitwise OR in C++ is | ( just one |, not two )

    Here is a simple example of bitwise OR between two bytes

    0101 0011
    0110 0010
    0111 0011

    Bitwise AND

    In a bitwise AND operation, we look at each position and see if both of them are 1. If so, we set the digit in that position to 1, otherwise we set it to 0. So in OR we set it to 1 if any of the two is 1, here we only set it to 1 if both are 1.

    The operator for bitwise AND in C++ is & ( just one &, not two )

    Here’s a simple example :

    0101 0011
    0110 0010
    0100 0010

    Bitwise XOR

    XOR or exclusive OR is slightly less known than OR and AND. In an XOR operation, we check if the two values are different. So this is equivalent to != in C++.

    • ( true  != false ) = true
    • ( true  != true  ) = false
    • ( false != true  ) = true
    • ( false != false ) = false

    Simply put, an XOR operation is true if the two parts are different. So in a bitwise XOR operation, we look at each position and see if the two digits are different. If so we set the digit at that position to 1, otherwise we set it to 0.

    The operator for bitwise XOR in C++ is ^

    Here is an example :

    0101 0011
    0110 0010
    0011 0001

    Bitwise NOT

    We also have a bitwise version of the NOT operation. This is done using the ~ operator in C++. If we used !, we would get the result of NOT on the entire variable, not on the individual bits, which is what we want. This operation only takes a single element and flips all its bits ( turns 1‘s into 0‘s and 0‘s into 1‘s. ) Let’s test it on our two bytes :

    The operator for bitwise NOT in C++ is ~

    Byte 1 :

    NOT 0101 0011
    =   1010 1100

    Byte 2 :

    NOT 0110 0010
    =   1001 1101
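    The four operations can be verified directly in C++. This little program reproduces all the examples above ( the binary values are shown in the comments ; hex literals are used so it compiles as C++11 ) :

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main()
    {
        uint8_t byte1 = 0x53; // 0101 0011 ( 83 )
        uint8_t byte2 = 0x62; // 0110 0010 ( 98 )

        assert( ( byte1 | byte2 ) == 0x73 );              // OR  : 0111 0011
        assert( ( byte1 & byte2 ) == 0x42 );              // AND : 0100 0010
        assert( ( byte1 ^ byte2 ) == 0x31 );              // XOR : 0011 0001
        assert( static_cast<uint8_t>( ~byte1 ) == 0xAC ); // NOT : 1010 1100
        assert( static_cast<uint8_t>( ~byte2 ) == 0x9D ); // NOT : 1001 1101
    }
    ```
    
    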

    Setting and checking individual bits

    So now that we know how to do bitwise operations, we need a way of checking and setting the individual bits. This is done simply by using OR, AND and XOR. Before we take a look at how to do this, let’s define a few values to check.

    Remember that the different font styles are ints? This is because they are used to perform bitwise operations to set and unset different bits. Here they are again, this time with their values. For simplicity, I’ll only list the last four bits ( the others are always 0 ). The values are in decimal with the binary representation in parentheses.

    • TTF_STYLE_NORMAL = 0 ( 0000 )
    • TTF_STYLE_BOLD = 1 ( 0001 )
    • TTF_STYLE_ITALIC = 2 ( 0010 )
    • TTF_STYLE_UNDERLINE = 4 ( 0100 )
    • TTF_STYLE_STRIKETHROUGH = 8 ( 1000 )

    As you can see, they all have only one ( or zero ) bit set. This means we can use AND, OR or XOR on just one bit.

    Setting a bit

    To set a bit ( without affecting any other bit ) we use the OR operation. So say that we have four bits set to 0, 0000, and we want to set the bit for bold on it ( 0001 ). In other words, we want the result 0001. What we do is take our original 4 bits ( 0000 ) and set them to the original 4 bits ( 0000 ) OR‘ed with the bitmask for bold ( 0001 ) :

    0000 ( our empty mask )
    OR 0001 ( value of TTF_STYLE_BOLD )
    = 0001

    Simple as that! This works for any of the other flags in the same way. They all will end up setting one bit.

    Note that this will not change any other bits. If we try to set the italics font style on the above variable we get :

    0001 ( TTF_STYLE_BOLD set )
    OR 0010 ( value of TTF_STYLE_ITALIC )
    = 0011 ( TTF_STYLE_BOLD and TTF_STYLE_ITALIC set )

    Let’s make a simple function that adds a style to a mask.
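    A minimal sketch of such a function ( AddStyle is my own name for it ; the constants are the TTF_STYLE_ values listed earlier, normally pulled in from SDL_ttf.h ) :

    ```cpp
    #include <cassert>

    // The TTF_STYLE_ values from the list above ( normally defined by SDL_ttf.h )
    const int TTF_STYLE_NORMAL = 0; // 0000
    const int TTF_STYLE_BOLD   = 1; // 0001
    const int TTF_STYLE_ITALIC = 2; // 0010

    // Sets the given style bit without touching the other bits
    int AddStyle( int mask, int style )
    {
        return mask | style;
    }

    int main()
    {
        int mask = TTF_STYLE_NORMAL;               // 0000
        mask = AddStyle( mask, TTF_STYLE_BOLD );   // 0001
        mask = AddStyle( mask, TTF_STYLE_ITALIC ); // 0011
        assert( mask == ( TTF_STYLE_BOLD | TTF_STYLE_ITALIC ) );
    }
    ```
    
    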

    Unsetting a bit

    Sometimes we want to unset a bit. Say for instance we want to remove the italics from the font above. How do we do that without affecting the other values? This is a bit more complex, because it requires two operations. What we are trying to do is the following :

    Say we have a bitmask ( 0000 0101 ) and we want to unset the bit for bold text, but leave the rest unchanged. So we need to be able to go :

    From 0101 to 0100

    To do this, we need to use an AND operation. This is because we can’t turn off a bit using OR, and XOR would only flip it back and forth between 0 and 1.

    But we can’t use AND with the flag we want to unset alone, because that would keep that flag at 1 and change every other bit to 0!

    0000 0101
    0000 0001
    0000 0001

    This is the opposite of what we want! Wait? Opposite you say? Well how about we use the NOT operator here to get the opposite result? This works perfectly, because NOT 0000 0001 is 1111 1110. And AND‘ing a bit with 1 won’t change it. So we get :

    0000 0101
    1111 1110
    0000 0100

    Success! Only the bit we were trying to unset has changed. So let’s make a function that does this for us :
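    A sketch of that function ( RemoveStyle is my own name ; the constants are the values listed earlier ) :

    ```cpp
    #include <cassert>

    // The TTF_STYLE_ values from the list above ( normally defined by SDL_ttf.h )
    const int TTF_STYLE_BOLD      = 1; // 0001
    const int TTF_STYLE_UNDERLINE = 4; // 0100

    // Unsets the given style bit without touching the other bits
    int RemoveStyle( int mask, int style )
    {
        return mask & ~style;
    }

    int main()
    {
        int mask = TTF_STYLE_BOLD | TTF_STYLE_UNDERLINE; // 0101
        mask = RemoveStyle( mask, TTF_STYLE_BOLD );      // 0100
        assert( mask == TTF_STYLE_UNDERLINE );
    }
    ```
    
    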

    Checking a bit

    To check a bit, we also need to use the bitwise AND operation. And since we are only checking and not setting, we don’t have to store the value anywhere which means we don’t have to worry about changing anything.

    To check a bitmask, simply do an AND operation with the value you want to check for ( in this case, any of the TTF_STYLE_.... values ). So, to check if a text is bold, we do an AND between our mask and TTF_STYLE_BOLD :

    0011 ( our bit mask, TTF_STYLE_BOLD and TTF_STYLE_ITALIC set )
    AND 0001 ( TTF_STYLE_BOLD )
    = 0001

    As you can see, we only check the bit that’s set in the value we check against ( TTF_STYLE_BOLD ) ; the others will be 0 no matter what our mask is. The result, 0001, is not 0, and thus this evaluates to true, and we now know that the font is bold.

    If our mask didn’t have the bold bit set ( only the italic one ), our mask would be 0010. An AND between 0010 and 0001 is 0 ( they have no bit set to 1 in common ), aka false.

    So let’s create a function for that too!
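    A sketch of the checking function ( HasStyle is my own name ; the constants are the values listed earlier ) :

    ```cpp
    #include <cassert>

    // The TTF_STYLE_ values from the list above ( normally defined by SDL_ttf.h )
    const int TTF_STYLE_BOLD   = 1; // 0001
    const int TTF_STYLE_ITALIC = 2; // 0010

    // True if the given style bit is set in the mask
    bool HasStyle( int mask, int style )
    {
        return ( mask & style ) != 0;
    }

    int main()
    {
        int mask = TTF_STYLE_BOLD | TTF_STYLE_ITALIC;            // 0011
        assert(  HasStyle( mask, TTF_STYLE_BOLD ) );             // 0011 & 0001 != 0
        assert( !HasStyle( TTF_STYLE_ITALIC, TTF_STYLE_BOLD ) ); // 0010 & 0001 == 0
    }
    ```
    
    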


    With a little knowledge about binary numbers and bitwise operations, we can easily set, add, remove and check various font styles in SDL_TTF.

    Since it does involve a little low level code, I made a simple class that does the operations for us in a more intuitive way. I strongly suggest using this as opposed to a “raw” TTF_Font*

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :

    [ SDL2 – Part 10 ] Text rendering

    Rendering text

    In the previous parts, we’ve looked at how to render rectangles and images, both with and without transparency. Now it’s time to look at how we can render text.

    Rendering text is tricky. You’ll want to be able to render any font, in any size and preferably every possible character. Luckily, with the SDL_ttf library, this is easy.


    SDL2_ttf, just like SDL2_image, is an additional library for SDL2. It can use just about every font, and you can set the size and text styling too!

    What’s TTF?

    TTF, or TrueType Font, is a font format developed by Apple and Microsoft in the late 80’s. TrueType fonts offer a high degree of control over how the font looks. The internals of TTF fonts and how they work isn’t important here. The important part is that they’re easy to use and will look really nice ( even scaled up. ) And they’re also widely used, so finding fonts shouldn’t be a problem.

    SDL2 TTF?

    As with SDL2_image, SDL2_ttf is an additional library for SDL2 that deals with rendering text and makes it very easy. It is based on libfreetype, a library for writing text using TTF fonts. However, it’s not very practical to use. SDL2_TTF makes using it a lot easier. But if you do want to use it yourself, you can take a look at their tutorial.

    Setting up SDL2_TTF

    Setting up SDL2_ttf requires a tiny bit more work than SDL2_image, but don’t be scared, it’s still very easy. First we need to install the ttf library.


    Installing SDL2_ttf is done exactly like SDL2_image. Just replace SDL2_image with SDL2_ttf

    Here’s the short version :


    For Linux you need to install the SDL2_ttf package ( the actual package name might be different in different distributions. )

    The linker flag is -lSDL2_ttf

    The process is more or less identical to that of setting up SDL2_image.

    If you can’t find SDL2_ttf in any repositories and it’s not installed by default, you might have to compile it yourself. For more information, see my blog post about setting up SDL2.


    Similar to setting up SDL2 base.

    The difference is that you have to download the development files for SDL2_ttf

    And similarly add SDL2_ttf.lib to the library flags ( where you previously added SDL2_image.lib )

    And with that, it should work.


    See the first part of my tutorial. Just install SDL2_ttf instead of SDL2


    Unlike SDL2_image, SDL2_ttf does need to be initialized. Why? Because libfreetype, the library that SDL2_ttf builds upon, needs to be initialized, so naturally SDL2_ttf needs to be initialized too.

    Initializing SDL2_ttf requires a single function, TTF_Init() :

    Just like SDL_Init(Uint32 flags) this function returns -1 on error, but unlike SDL_Init(Uint32 flags), this method does not have any flags.

    Since this function can fail and return -1, we should print an error if this happens. This means our routine for initializing SDL2_ttf will be similar to that of SDL2, just with the two functions above :
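    A sketch of what that routine might look like :

    ```cpp
    #include <SDL2/SDL_ttf.h>
    #include <iostream>

    bool InitSDLttf()
    {
        if ( TTF_Init() == -1 )
        {
            std::cout << "TTF_Init failed : " << TTF_GetError() << '\n';
            return false;
        }

        return true;
    }
    ```
    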


    The basic object for SDL_TTF is TTF_Font. A TTF_Font basically holds information about a font, like the font itself and data about styling and size. The exact internals of TTF_Font are only known to the library using it, so I won’t go into depth about it.

    The only thing you need to remember about TTF_Fonts is that they hold all information about the font that SDL_TTF needs to render it, and that they need to be loaded and unloaded ( we’ll look at this later. )

    Loading fonts

    This is the central structure of SDL2_ttf. It holds the font itself, the size and some other style information ( I’ll go into this in the next part ). So, in order for us to use a TTF_Font we need to load it. This is done using a load function :

    Arguments :

    • const char *file – the path to the .ttf file
    • int ptsize – the size of the font

    Return value :

    A pointer to the created TTF_Font

    The function returns a NULL pointer if it can’t find the file, or if there is another error ( like SDL2_ttf not being initialized. ) So this too should be handled by printing the error using SDL_GetError(), just like when initializing SDL2_ttf.
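    Loading a font might be sketched like this ( the file name and size below are placeholders ) :

    ```cpp
    #include <SDL2/SDL_ttf.h>
    #include <iostream>

    TTF_Font* LoadFont( const char* file, int size )
    {
        TTF_Font* font = TTF_OpenFont( file, size );

        if ( font == nullptr )
            std::cout << "TTF_OpenFont failed : " << TTF_GetError() << '\n';

        return font;
    }

    // Usage : TTF_Font* font = LoadFont( "myfont.ttf", 16 );
    ```
    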

    Cleaning up fonts

    Just like with SDL_Texture* and SDL_Surface*, we need to clean up our fonts when we’re done. This is just as easy for TTF_Font as for SDL_Texture* and SDL_Surface*. We simply call a function that does it for us :

    Rendering text

    There are three functions you can use to render text, depending on what you want. Let’s start with the first one :


    This function is used for quick and simple rendering of a text, using a specific font and a font color. The background of this is transparent. Here’s the signature:

    Arguments :

    •  TTF_Font *font – the font to use
    • const char *text – the text to render
    • SDL_Color fg –  the color to use for the text

    Return value :

    A SDL_Surface with the rendered text

    The function returns a NULL pointer if it can’t find the file, or if there is another error ( like SDL2_ttf not being initialized. ) So this too should be handled by printing the error using SDL_GetError(), just like when initializing SDL2_ttf.

    The result will look something like this :
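    As a sketch, rendering a string and turning it into a texture we can draw ( the font and renderer are assumed to be created already ; RenderTextSolid is my own helper name ) :

    ```cpp
    #include <SDL2/SDL.h>
    #include <SDL2/SDL_ttf.h>

    SDL_Texture* RenderTextSolid( SDL_Renderer* renderer, TTF_Font* font, const char* text )
    {
        SDL_Color white = { 255, 255, 255, 255 };

        // Render the text to a surface with a transparent background...
        SDL_Surface* surface = TTF_RenderText_Solid( font, text, white );

        if ( surface == nullptr )
            return nullptr;

        // ...and convert it into a texture we can give to SDL_RenderCopy
        SDL_Texture* texture = SDL_CreateTextureFromSurface( renderer, surface );
        SDL_FreeSurface( surface );

        return texture;
    }
    ```
    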


    The next function is very similar to the previous one

    Arguments :

    •  TTF_Font *font – the font to use
    • const char *text – the text to render
    • SDL_Color fg –  the color to use for the text

    Return value :

    A SDL_Surface with the rendered text

    The function returns a NULL pointer if it can’t find the file, or if there is another error ( like SDL2_ttf not being initialized. ) So this too should be handled by printing the error using SDL_GetError(), just like when initializing SDL2_ttf.

    As you can see, both the arguments and the return value are the same for TTF_RenderText_Solid and TTF_RenderText_Blended. So what’s the difference between them? TTF_RenderText_Solid is very quick, but TTF_RenderText_Blended produces a better result. In our game, we won’t be updating our text surfaces all that often, and there’s not a lot of them either, so TTF_RenderText_Blended is a good choice.

    Here’s what TTF_RenderText_Blended looks like :

    And here’s a comparison between TTF_RenderText_Solid and TTF_RenderText_Blended :

    The difference is not huge, but in the actual game it will be more clear. And the difference might also vary from font to font.


    This function is a bit different from the other two. It will render the text with a specified background color.

    Arguments :

    •  TTF_Font *font – the font to use
    • const char *text – the text to render
    • SDL_Color fg –  the color to use for the text
    • SDL_Color bg –  the color to use for the background

    Return value :

    A SDL_Surface with the rendered text

    The function returns a NULL pointer if it can’t find the file, or if there is another error ( like SDL2_ttf not being initialized. ) So this too should be handled by printing the error using SDL_GetError(), just like when initializing SDL2_ttf.

    So it’s almost the same as the other two, just with a fourth argument for the background color. The return value is also the same as for the other two. The difference is that you get a surface with a background color. The background color ( bg ) will fill the entire rect around the text. The text will be rendered on top of it with the specified foreground color ( fg ).

    The result will look something like this :

    An example

    Below is a simple example that should run and compile out of the box. For compilation details, look below.

    Compilation notes

    Running it is just as simple as with SDL2_image. That means compilation on Windows is already set up from when you installed SDL2_ttf.

    Linux / Mac

    If you are compiling from the command line, you have to add -lSDL2_ttf to the compile string like so :

    clang++ main.cpp -std=c++11 -o Game -lSDL2 -lSDL2_image -lSDL2_ttf

    If you want to run it, you simply do :

    ./Game


    Updated game code

    I have done a bit of cleaning up in the game code. I’ve added a new Texture class for text, cleaned up includes, removed ( and added ) comments, improved the delta calculation, and more. Everything should be explained in the comments, but, of course, if you have any questions of any kind, just comment or contact me, I’ll be happy to help.

    You can find the code here.


    Text rendering can be hard, but SDL2_ttf makes it quite easy. Just load your TTF_Font and you can easily render text to an SDL_Surface.

    Feel free to comment if you have anything to say or ask questions if anything is unclear. I always appreciate getting comments.

    You can also email me :