When working with gaming libraries, one usually has to drop into system-specific APIs (the Win32 SDK on Windows, Xlib et al. on *nix) to access video-related functionality. This functionality includes initializing the video, setting the best video mode and loading bitmapped images, among other things. SDL, however, encapsulates all of this within its video sub-system. The functions that provide access to it are SDL_Init() and SDL_SetVideoMode().
SDL_Init() initializes the sub-system passed to it as a parameter. To initialize video, the parameter would be SDL_INIT_VIDEO. To elucidate, SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) would initialize video as well as audio.
Once the video is initialized, the next obvious step is setting the best video mode. To achieve this end, SDL provides a function called SDL_SetVideoMode(). It sets up a video mode with the specified width, height and bits per pixel (i.e. depth); in short, it lets you set up the required resolution. The parameters are width, height, bpp (bits-per-pixel) and flags. The first three take integer values and represent the width, height and depth of the screen respectively.
The fourth parameter needs some consideration. The flags parameter defines the properties of the screen surface. There are eleven such flags in all; the most important and commonly used are:
SDL_SWSURFACE: This instructs SDL to create the surface in system memory. In other words, the rendering area is handled by a software renderer, which is useful on cards where hardware acceleration is unavailable.
SDL_HWSURFACE: To create the surface in hardware memory, i.e. the memory of the graphics card, use this as the flag value. In other words, use this value to take advantage of hardware acceleration. It is the alternative to SDL_SWSURFACE: the two choose where the surface lives, so only one of them applies at a time.
SDL_ANYFORMAT: When the requested depth is unsupported on the target machine, SDL normally emulates it with a shadow surface. To prevent this, pass SDL_ANYFORMAT as the flag value: SDL is then instructed to use the video surface at whatever depth is available, even if the required depth is not.
SDL_DOUBLEBUF: This flag enables hardware double buffering, and only works together with SDL_HWSURFACE. Otherwise, when the flipping function is called, only an ordinary update of the surface takes place.
SDL_OPENGL: This flag creates an OpenGL rendering context. It is useful when SDL is used in conjunction with OpenGL.
SDL_FULLSCREEN: Passing this as the flag value switches the display to full screen. If SDL is unable to set the requested resolution, it will use the next available higher resolution, with the display window centered on a black background.
SDL_NOFRAME: To show the window without decoration (no title bar or frame), use this as the value. Setting SDL_FULLSCREEN sets this flag automatically.
All the above flag values correspond to the flags member of the SDL_Surface structure. The SDL_SetVideoMode() function returns a pointer to an SDL_Surface. Now let's see how to use it in a program.
int main(int argc, char* argv[])
{
    SDL_Surface *screen;
    /* The following code does the initialization for Audio and Video */
    if (SDL_Init(SDL_INIT_AUDIO | SDL_INIT_VIDEO) < 0) {
        /* If initialization is unsuccessful, then quit */
        fprintf(stderr, "Unable to initialize SDL: %s\n", SDL_GetError());
        exit(1);
    }
    screen = SDL_SetVideoMode(640, 480, 8, SDL_SWSURFACE);
    return 0;
}
If you recall, most of the above code covers the same ground as I discussed earlier in this article. There are two parts I want to focus on. First, a pointer to the structure SDL_Surface is declared; it comes into the picture when the video mode is set. Then the sub-systems are initialized using SDL_Init(), and if initialization fails the application exits.
As I said before, the function that sets the video mode returns a pointer to the initialized SDL_Surface structure. The above code sets a resolution of 640x480 at 8-bit depth. It also selects software rendering, i.e. the surface is created in system memory and not in the graphics card's memory. Now that the video mode has been set, we can move to the next section, which will cover loading a bitmap onto the returned surface.