Video Game Design/Programming/Reality Simulation

From Wikibooks, open books for an open world

Realism (Reality Simulation)


Light and shadows


Lighting and texturing


Light maps


Bump maps


Normal maps


Parallax mapping

Adding details

Having the game include small, even repetitive, details will provide a richer experience. A gust of wind, a dust mote, flies over a trashcan, or a leaf floating in the wind can be as powerful as more complex effects like dynamic lighting.


Water

Water is probably the most difficult effect to reproduce in a game. It includes reflections, transparencies and distortions, plus high-detail features such as waves and foam, and as a fluid it behaves like both a solid and a liquid. Depending on the level of detail one is aiming for, this becomes not only a difficult task but one that consumes a lot of computational power if done in real time.


Sound

For this part of our engine we are going to be using the OpenAL API. Why? For the simple reasons outlined in Choosing an API: it is open-source, cross-platform and powerful while still remaining relatively easy to use. So let's get started...

All objects that can emit sound in our game world have a position (with the exception of background music). Each sound also has some sort of trigger event associated with it, so when the player does something to activate the trigger the sound will start to play. Simple, hey! So how are we actually going to implement this?

Well, OpenAL works on the concept of having a source (the sound that plays) and a listener (the person listening). These two objects can be placed anywhere in our 3D environment and given certain properties, such as which attenuation model to use, what speed the source is travelling at, and so on, but we'll get onto that later. You can also have many sources (which makes sense), but only ONE listener (which also makes sense). When you add a sound for OpenAL to play you first have to do three things: create a buffer and load your audio data into it, create a source, and then associate the source with the buffer so that OpenAL knows which audio to play from which source. Taking all that into account, we are going to encapsulate each source in a C++ struct. The struct so far, which we will call newSource, will hold the source's positional information as sourcePos[3], plus a sourceID and a bufferID so that we can uniquely address each source.
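A minimal sketch of that struct, using the field names from the text; the OpenAL calls shown in the comment (alGenBuffers, alGenSources, alSourcei) are the standard way to create the buffer and source and tie them together:

```cpp
#include <cstdint>

// One sound-emitting object in the world. In a real build, sourceID and
// bufferID would be filled in by OpenAL:
//   alGenBuffers(1, &bufferID);               // create the audio buffer
//   alGenSources(1, &sourceID);               // create the source
//   alSourcei(sourceID, AL_BUFFER, bufferID); // which buffer this source plays
struct newSource {
    float    sourcePos[3]; // position of the source in the 3D world
    uint32_t sourceID;     // OpenAL source handle (ALuint in real code)
    uint32_t bufferID;     // OpenAL buffer handle holding the audio data
};
```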

Something else we need to take into consideration is that, since OpenAL very kindly attenuates sound for us based on distance, we need to make the sound start playing when the player reaches the 'outer bounds' of the source (the point beyond which you can no longer hear the sound play). So we'll add an activateDistance value to our struct as well.

Additionally, we need to take into account that sound data cannot be loaded instantaneously from the hard drive, since hard drives are pretty slow in comparison to RAM. So we'll add a preloadDistance value to our struct as well: when the player moves within that distance the sound is loaded into the buffer, and when the player moves within the activateDistance the sound starts to play. Cool, hey!
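As a sketch of that check, assuming the newSource fields described above, a squared-distance comparison against the player's position avoids a square root per source per frame (the SoundState enum and the classify function are illustrative names, not OpenAL API):

```cpp
// Illustrative states a source can be in, based on player distance.
enum class SoundState { Culled, Preloaded, Playing };

// Squared distance between two points; avoids a sqrt per source per frame.
float distanceSquared(const float a[3], const float b[3]) {
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

// Decide what a source should be doing given the player's position.
// preloadDistance > activateDistance: we load before we need to play.
SoundState classify(const float playerPos[3], const float sourcePos[3],
                    float preloadDistance, float activateDistance) {
    float d2 = distanceSquared(playerPos, sourcePos);
    if (d2 <= activateDistance * activateDistance) return SoundState::Playing;
    if (d2 <= preloadDistance * preloadDistance)   return SoundState::Preloaded;
    return SoundState::Culled;
}
```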

And finally, since we are most probably going to have more than one source (it would be a pretty boring game if we did not), we are going to shove our structs into a C++ vector (if you do not know what that is, it is just an array with more functionality) which we will call pipeline. We also need some functionality to remove 'dead' sources from the pipeline and free up memory, but we'll get onto this later on.
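A minimal sketch of the pipeline and its culling, assuming a hypothetical dead flag on the struct to mark sources the player has left behind; the standard erase–remove idiom does the actual removal:

```cpp
#include <vector>
#include <algorithm>

// Simplified version of the struct from earlier, with a hypothetical
// 'dead' flag marking sources that have left their outer bounds.
struct newSource {
    float        sourcePos[3];
    unsigned int sourceID;
    unsigned int bufferID;
    bool         dead = false;
};

std::vector<newSource> pipeline; // all currently live sources

// Remove every source flagged as dead. In a real build we would also call
// alDeleteSources / alDeleteBuffers here to free the OpenAL handles.
void cullDeadSources(std::vector<newSource>& p) {
    p.erase(std::remove_if(p.begin(), p.end(),
                           [](const newSource& s) { return s.dead; }),
            p.end());
}
```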

The first diagram illustrates how all this fits together.

The second illustrates an 'in-game' view of how preloadDistance, activateDistance and sourcePos fit into the picture.

So, to outline the process:

  • When the player moves within the outer red sphere a new newSource struct is created, the sound is loaded into the buffer and pushed onto the pipeline.
  • When the player moves within the yellow sphere the sound starts playing and as the player moves closer towards the inner white sphere the sound will get louder until it reaches maximum volume at the white sphere.
  • Going in reverse, as the player moves away from the white sphere the sound decreases in volume until the player moves outside the yellow sphere, at which point the sound switches off but remains in the pipeline.
  • When the player exits the red sphere the source is removed from the pipeline and destroyed (culled) so that we do not take up unnecessary memory.
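The four steps above can be sketched as a per-frame state update for one source. The State names and the update function are illustrative, not OpenAL API; the comments mark where the real alSourcePlay / alSourceStop / alDeleteSources calls would go:

```cpp
// Illustrative states: Outside the red sphere, Loaded between red and
// yellow, Playing inside the yellow sphere.
enum class State { Outside, Loaded, Playing };

// Advance one source given its current state and its distance to the player.
// preloadDist is the red sphere's radius, activateDist the yellow one's.
State update(State current, float dist, float preloadDist, float activateDist) {
    State next;
    if (dist > preloadDist)       next = State::Outside; // outside red: cull
    else if (dist > activateDist) next = State::Loaded;  // loaded but silent
    else                          next = State::Playing; // inside yellow: play

    if (next != current) {
        // On each transition the OpenAL calls would go here, e.g.
        //   entering Playing: alSourcePlay(sourceID);
        //   leaving Playing:  alSourceStop(sourceID);
        //   entering Outside: alDeleteSources / alDeleteBuffers (cull).
        // Inside the yellow sphere, OpenAL's attenuation raises the volume
        // toward maximum as the player nears the inner white sphere.
    }
    return next;
}
```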