By MA SNART
See! 3d's not that hard after all...umm...no, really!
Almost every video game you can buy today is presented in 'Incredible NEW!' 3D graphics. However, the basics of 3D graphics were with us long before the computer was invented; only recently has processing speed become fast enough to draw the graphics in real time. In this series of articles I will show you the basics of how 3D is done in games, the concepts involved, and the limitations that have yet to be overcome.
First off, there are currently two distinctly different ways for game developers to implement 3D: the extremely popular polygonal way...and the voxel way. The polygonal way is obviously used the most. Because 3D polygon objects in games don't take up a lot of PC memory, objects can have very few polygons and as such take less time to RENDER. Getting realistic results depends more on the 2D textures that cover the objects. These 2D textures are easier for artists to make because they are similar in design to the 2D tiles they have made for years. There are also many ways to optimize these 3D engines, such as BSP-tree techniques or 3D accelerator cards. However, with many developers depending on large texture-maps and intricate mesh designs, you need quite a bit of memory to run some of the latest games...Also, with the large burden of texture-mapping related techniques, today's 3D game engines are still limited in how many polygons can be viewed...And the reliance on polygons ultimately limits the ability to render organic forms...

With voxels, on the other hand, organic forms are very easy to render. A 'scene' can also be much more complex than a comparable polygon environment, and rendering such a scene isn't any more of a time-consuming burden than rendering a simple object [ideally]. However, a voxel model takes up a LOT of memory; in fact this is a primary reason why this form of 3D hasn't been implemented as much. But it has been used [in a limited form] as a 'landscape' renderer for some games, and as '3D sprites' in such games as SHADOW WARRIOR and TOTAL ANNIHILATION. And an upcoming game called OUTCAST will use voxels in ways yet unseen in games...[sorry, I'm a bit biased, I prefer voxels to polygons :)] Anyhow, the concepts that follow are used by both rendering techniques.
Remember this: two points define a line [or vector], three points define a plane, and three or more points on the same plane define a polygon. A VERTEX is one of a number of points that define a polygon [or, in voxel graphics, it could just be considered a voxel]. Also, any number of polygons can share the same vertex [this is important]. A vector is a point that is used to move [or translate and rotate] vertices [and points] from one place to another. That is 'basically' what they are used for. Example: [on a tile-engine] to get the player from point A [say X=10, Y=10] to point B [say X=11, Y=10] you find the difference of point B from point A [B's X=11 minus A's X=10, which equals 1...and B's Y=10 minus A's Y=10, equaling 0]. This difference is the VECTOR. To use the VECTOR you add it to the current point to transform it into the desired point [point A [X=10,Y=10] plus the vector [X=1,Y=0] equals point B [X=11,Y=10]]. This may seem overly obvious [and simple] but it is a very important concept to understand.
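The point/vector example above can be sketched in a few lines of code [a minimal sketch; the function names are mine, not from the article]:

```python
# A vector is the difference between two points; translating a point
# means adding a vector to it.

def vector_between(a, b):
    """Return the vector that moves point a onto point b."""
    return (b[0] - a[0], b[1] - a[1])

def translate(point, vec):
    """Move a point by adding a vector to it."""
    return (point[0] + vec[0], point[1] + vec[1])

point_a = (10, 10)
point_b = (11, 10)

v = vector_between(point_a, point_b)   # (1, 0)
print(translate(point_a, v))           # back at point B: (11, 10)
```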
The OBJECT space is simply an area that contains a unique object, an object being a 3D model of a unique game element [like the unique model for the player character, a particular monster, or even the level itself]. Every point [be it a vertex of a polygon or a single voxel] of the object is measured, just like a vector, from the OBJECT space reference point [X=0, Y=0, Z=0]. None of these points HAVE to be located at 0,0,0; in fact the whole object doesn't HAVE to be anywhere near the reference point [the models in QUAKE are located above it, appearing to 'stand' on it].
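As a sketch of the idea [the cube and its coordinates are my own illustration], an object is just a list of points measured from the object-space origin:

```python
# An "object" is a list of vertices relative to the OBJECT space
# reference point (0,0,0). Nothing has to sit AT the origin; this cube
# "stands on" it, like the QUAKE models mentioned above.

cube = [
    (-1, 0, -1), (1, 0, -1), (1, 0, 1), (-1, 0, 1),   # bottom face, resting on y=0
    (-1, 2, -1), (1, 2, -1), (1, 2, 1), (-1, 2, 1),   # top face
]

# Every coordinate is a measurement from the reference point:
print(min(y for (x, y, z) in cube))   # 0 -- the model stands on the origin
```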
The WORLD space is the area where ENTITIES reside. Everything in WORLD space is [just like OBJECT space] a measurement from the WORLD reference point [also 0,0,0]. An ENTITY is actually what players control and interact with during a game. Each ENTITY has variables attached to it for things like location, facing vector, movement vector, rotation angles for the X/Y/Z axes, and a POINTER to whatever object represents it [think of OBJECTS as sprites for your game. ENTITIES are then the specific monsters, creatures and effects that inhabit the world. Even if two monsters use the same set of sprites they would be considered different]. With that understanding, here is the first step of RENDERing: [in order to view any OBJECT that an ENTITY represents] the object [actually a copy of it residing in OBJECT space] must be transformed into WORLD space by using the attributes of the particular ENTITY. [Just remember that ENTITIES have world coordinates, just like, in a tile-engine, the player has map coordinates.]
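Here is a hedged sketch of that first step [all names are mine; rotation is left out since the article covers that math next time, so the transform is just translation]:

```python
# An ENTITY carries world-space attributes plus a pointer to a shared
# OBJECT (its model). Transforming the object to WORLD space means
# applying the entity's attributes to a copy of the model's points.

monster_model = [(0, 0, 0), (0, 2, 0), (1, 1, 0)]   # OBJECT-space vertices

class Entity:
    def __init__(self, location, model):
        self.location = location   # WORLD coordinates
        self.model = model         # pointer to the shared object

    def to_world_space(self):
        lx, ly, lz = self.location
        return [(x + lx, y + ly, z + lz) for (x, y, z) in self.model]

# Two entities can share one model yet stand in different places:
ogre_a = Entity((10, 0, 5), monster_model)
ogre_b = Entity((-3, 0, 8), monster_model)
print(ogre_a.to_world_space()[0])   # (10, 0, 5)
print(ogre_b.to_world_space()[0])   # (-3, 0, 8)
```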
Even though the camera is in essence just another ENTITY [it uses WORLD coordinates], in order for it to be used [for you to 'see' the world] the WORLD space must be transformed into CAMERA space. This is the second step of rendering [after the OBJECTS are transformed to WORLD space]. In CAMERA space [just like OBJECT and WORLD space] every point of every polygon and voxel is just a measurement from the CAMERA reference point [again 0,0,0]. Once this step is done, the third and final step can be performed...PROJECTION.
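These two steps can be sketched under some simplifying assumptions of mine [the camera looks straight down +Z with no rotation, so moving into CAMERA space is just subtracting the camera's position; `d` is an assumed screen distance]:

```python
# CAMERA space: measure every point from the camera's reference point.
# PROJECTION: collapse a 3D camera-space point onto the 2D screen by
# scaling x and y by d/z, so farther points shrink toward the center.

def to_camera_space(point, camera_pos):
    return tuple(p - c for p, c in zip(point, camera_pos))

def project(point, d=256):
    """Perspective-project a camera-space point to screen coordinates."""
    x, y, z = point
    return (x * d / z, y * d / z)

world_point = (4, 2, 20)
cam = to_camera_space(world_point, camera_pos=(0, 0, 4))   # (4, 2, 16)
print(project(cam))   # (64.0, 32.0)
```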
SUB-STEP 1: This step takes all the listed [CAMERA space transformed] polygons and removes the ones that wouldn't be visible, like those that are behind the camera or that are considered too far away. It is also at this point that polygons that are in direct contact with the camera, or are only partially visible, are 'clipped'. In 'clipping' the engine takes the regular full-size polygon and 'removes' the unwanted portion. This 'visible' polygon list is then sent to SUB-STEP 2 [the 'Z-buffer'].
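The visibility test [not the clipping, which works edge-by-edge and is omitted here] can be sketched like so, assuming camera space looks down +Z and near/far limits of my own choosing:

```python
# Discard polygons entirely behind the camera (z <= NEAR) or entirely
# beyond a far limit (z >= FAR). Partially visible polygons would be
# 'clipped' by a real engine; here they simply pass the test.

NEAR, FAR = 1.0, 100.0

def is_visible(polygon):
    zs = [z for (x, y, z) in polygon]
    if max(zs) <= NEAR:   # entirely behind the camera / near plane
        return False
    if min(zs) >= FAR:    # entirely too far away
        return False
    return True

polys = [
    [(0, 0, -5), (1, 0, -5), (0, 1, -5)],      # behind the camera
    [(0, 0, 10), (1, 0, 10), (0, 1, 10)],      # visible
    [(0, 0, 500), (1, 0, 500), (0, 1, 500)],   # beyond the far limit
]
print([is_visible(p) for p in polys])   # [False, True, False]
```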
SUB-STEP 2: This is commonly called a z-buffer, though what's described here is really a depth sort [the 'painter's algorithm'; a true z-buffer compares depths per pixel instead]. [NOTE: by doing some pre-runtime calculations and other tricks this step may not be needed.] Basically what happens is that the polygon list that ends up here gets sorted into a list that re-orders the polygons from those farthest away from the CAMERA to those nearest to the CAMERA [along the Z dimension]. This is done so that, when they are drawn, those farthest away come first, followed, in order, by those that are closer [the close ones are usually larger and need to be drawn over portions of those farther away]. At last, this new list of polygons is ready for PROJECTION.
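The depth sort described above can be sketched with a built-in sort [the use of average vertex Z as each polygon's depth is one common, crude choice and an assumption of mine]:

```python
# Order the visible polygons from farthest to nearest, so the near ones
# are drawn last, over the far ones (the painter's algorithm).

def average_z(polygon):
    return sum(z for (x, y, z) in polygon) / len(polygon)

polygons = [
    [(0, 0, 5), (1, 0, 5), (0, 1, 5)],      # near
    [(0, 0, 50), (1, 0, 50), (0, 1, 50)],   # far
    [(0, 0, 20), (1, 0, 20), (0, 1, 20)],   # middle
]

draw_order = sorted(polygons, key=average_z, reverse=True)
print([average_z(p) for p in draw_order])   # [50.0, 20.0, 5.0]
```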
DRAWING THE POLYGONS
And when it's done, it can start all over again, transforming the OBJECT space to WORLD space and so on for frame number two... As you can imagine, 3D is a very math-intensive format. A lot has to happen between frames in order to 'see' what is happening in the game 'world'. However, the math isn't all that complicated once broken down into its basic concepts: rotation and translation. Next time I'll cover these core math concepts in depth.