The scenegraph on the left illustrates a typical 3D scene: in this case, the Sun, with Mars and Earth orbiting the Sun and the Moon orbiting the Earth. Several Internal Nodes represent the pivot points for the planets' positions. The Earth and Mars pivots are translated away from the Sun pivot Node, and the Moon's pivot is translated away from the Earth's pivot Node. The Sun, planets and Moon are each children of their respective pivot Nodes. These Leaf Nodes have no translation applied to them and are thus centred on the pivot points.
Each Leaf Node contains the geometry, textures and RenderStates which encapsulate the model of its object. The scene is animated by applying a rotation over time using a Controller. The Sun's pivot Node is rotated, which via propagation automatically causes the Earth and Mars pivots to rotate around the Sun, so the Earth, Mars and Moon all rotate around the Sun. The Earth's pivot is also rotated over time, causing the Moon to additionally rotate around the Earth. Each Leaf Node then has its own rotation cycle, causing it to spin around its pivot point to represent day/night cycles.
Simply through organising the scene graph we have produced a (quite primitive ;-) ) solar-system-style model. We would also like some advanced lighting effects to be applied to each element of the scene. This is achieved through a Shader. The Shader is placed on the Sun pivot as a RenderState and is automatically propagated down the scenegraph and applied to all other members of the scene (it has no effect on the pivot points as they contain no geometry). If we added a new planet (e.g. Saturn) to the Sun pivot, it would automatically inherit this Shader. If we wished to add an item that was not affected by any of the transforms or visual effects, we would simply add a new branch to the root node (e.g. adding a rocket ship). The whole scene is drawn by simply passing the root node of the scene to the renderer, along with a camera positioned somewhere within the scene.
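As a sketch of how such a scene might be assembled with jME2's scenegraph API (the node names, radii, translations and rotation speeds here are purely illustrative, not Homura's actual demo code):

```java
import com.jme.math.FastMath;
import com.jme.math.Quaternion;
import com.jme.math.Vector3f;
import com.jme.scene.Controller;
import com.jme.scene.Node;
import com.jme.scene.Spatial;
import com.jme.scene.shape.Sphere;

public class SolarSystemScene {

    /** A simple Controller that spins its target around the Y axis. */
    static class SpinController extends Controller {
        private final Spatial target;
        private final float speed; // radians per second
        private float angle;

        SpinController(Spatial target, float speed) {
            this.target = target;
            this.speed = speed;
        }

        @Override
        public void update(float time) {
            angle = (angle + speed * time) % FastMath.TWO_PI;
            Quaternion rot = new Quaternion();
            rot.fromAngleAxis(angle, Vector3f.UNIT_Y);
            target.setLocalRotation(rot);
        }
    }

    public static Node buildScene() {
        // Pivot (Internal) Nodes: these carry the transforms.
        Node sunPivot = new Node("sunPivot");
        Node earthPivot = new Node("earthPivot");
        Node marsPivot = new Node("marsPivot");
        Node moonPivot = new Node("moonPivot");

        // Planet pivots are translated away from the Sun pivot;
        // the Moon pivot is translated away from the Earth pivot.
        earthPivot.setLocalTranslation(new Vector3f(30f, 0f, 0f));
        marsPivot.setLocalTranslation(new Vector3f(45f, 0f, 0f));
        moonPivot.setLocalTranslation(new Vector3f(5f, 0f, 0f));

        // Leaf Nodes: the geometry, centred on its pivot point.
        sunPivot.attachChild(new Sphere("sun", 32, 32, 8f));
        earthPivot.attachChild(new Sphere("earth", 32, 32, 2f));
        marsPivot.attachChild(new Sphere("mars", 32, 32, 1.5f));
        moonPivot.attachChild(new Sphere("moon", 32, 32, 0.5f));

        // Hierarchy: rotating sunPivot propagates to every child.
        sunPivot.attachChild(earthPivot);
        sunPivot.attachChild(marsPivot);
        earthPivot.attachChild(moonPivot);

        // Controllers animate the orbits over time.
        sunPivot.addController(new SpinController(sunPivot, 0.2f));
        earthPivot.addController(new SpinController(earthPivot, 1.0f));
        return sunPivot;
    }
}
```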
As mentioned previously, Homura utilises jMonkeyEngine 2 (jME2), a Java-based OpenGL scenegraph API, for its 3D and 2D graphics rendering.
A scenegraph is a generic data structure which hierarchically organises the logical and spatial elements of a rendered scene. The scenegraph forms a tree-like structure: a Node can have multiple child Nodes attached to it, but each Node can only be attached to one parent Node. This results in the classification of two types of Node:
- Internal Node: These nodes are used to organise the scene. An Internal Node has children, which themselves can be either Internal or Leaf Nodes.
- Leaf Node: These typically represent the data that is to be processed by the rendering system. Leaf Nodes have no sub-nodes attached to them and typically encapsulate the geometry of the scene.
In Homura / jME's scenegraph API, these two node types share a common base class called Spatial. An Internal Node is represented by the class Node, and Leaf Nodes are represented by the class Geometry.
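A minimal sketch of how these classes relate in jME2 (the node names and sizes are illustrative; Box is one of jME2's TriMesh subclasses, and TriMesh extends Geometry):

```java
import com.jme.math.Vector3f;
import com.jme.scene.Node;
import com.jme.scene.Spatial;
import com.jme.scene.shape.Box;

public class SpatialDemo {
    public static void main(String[] args) {
        Node internal = new Node("pivot");           // Internal Node
        Spatial leaf = new Box("crate",              // Leaf Node: Box is a
                new Vector3f(0f, 0f, 0f), 1f, 1f, 1f); // TriMesh, i.e. a Geometry
        internal.attachChild(leaf);

        // Both share the Spatial base class, so common operations
        // (transforms, Controllers, RenderStates) apply to either.
        internal.setLocalScale(2f);
        leaf.setLocalTranslation(new Vector3f(0f, 1f, 0f));
    }
}
```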
Spatial encapsulates the data shared by both Internal and Leaf Nodes. There are four main types of data encapsulated by a Spatial:
- Transforms: These define the three types of transformation which can be applied to a Spatial: Translation, Rotation and Scaling. In the node hierarchy these transforms are propagated to all children, so rotational changes to a parent node will also occur for all children. Therefore, two types of transformation are stored, Local and World, relating to the context in which the Spatial is applied. Local transformations are the characteristics applied to the Spatial itself (e.g. a rotation of 90 degrees). World transformations represent these characteristics dependent upon the Spatial's placement within the scenegraph: if another Spatial with a rotation of 90 degrees applied to it is the parent of this Spatial, then the world rotation would be 180 degrees (see the sketch after this list).
- Bounding Volumes: These define the volumes which minimally encapsulate the Node. For Leaf Nodes this is the volume which encapsulates the vertices of the geometry. For Internal Nodes, this is the volume which encapsulates the volumes of all child nodes.
- RenderStates: These define how the geometry will be displayed and are also propagated across any child nodes. RenderStates are described in detail later on this page.
- Controllers: These define alterations to the Spatials during the execution of the application. Controllers typically encapsulate aspects such as model animation, physics calculations, RenderState changes, etc. A Controller object itself is not propagated to child nodes. However, the Controller may alter characteristics of the Spatial such as transformations and RenderStates, and thus may also affect the characteristics of all child nodes.
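The local/world distinction can be seen directly in a small sketch, assuming jME2's Quaternion and updateGeometricState APIs (the node names are illustrative):

```java
import com.jme.math.FastMath;
import com.jme.math.Quaternion;
import com.jme.math.Vector3f;
import com.jme.scene.Node;

public class TransformDemo {
    public static void main(String[] args) {
        Node parent = new Node("parent");
        Node child = new Node("child");
        parent.attachChild(child);

        // Give parent and child a local rotation of 90 degrees about Y each.
        Quaternion ninety = new Quaternion();
        ninety.fromAngleAxis(FastMath.HALF_PI, Vector3f.UNIT_Y);
        parent.setLocalRotation(ninety);
        child.setLocalRotation(new Quaternion(ninety));

        // Recompute world transforms from the hierarchy.
        parent.updateGeometricState(0f, true);

        // The child's local rotation is still 90 degrees, but its world
        // rotation combines the parent's, i.e. 180 degrees overall.
        System.out.println("local: " + child.getLocalRotation());
        System.out.println("world: " + child.getWorldRotation());
    }
}
```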
The Node class represents an Internal Node. This class is responsible for maintaining a list of child nodes (Internal or Leaf). This list is of arbitrary length (bounded only by processing and memory requirements). The Bounding Volumes of all children are merged together to form the entire Bounding Volume for this node. Node provides the methods for manipulating the scenegraph that are common to such data structures, such as insertion, removal, access and modification. There are various helpers to return sub-sets of the child list, determined by class type, name, index, etc. The Node also provides methods to assist in picking and collision detection.
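For example, the basic child-list operations look like this in jME2 (the sphere parameters are illustrative):

```java
import com.jme.scene.Node;
import com.jme.scene.Spatial;
import com.jme.scene.shape.Sphere;

public class NodeOps {
    public static void main(String[] args) {
        Node root = new Node("root");
        root.attachChild(new Sphere("earth", 16, 16, 1f));
        root.attachChild(new Sphere("mars", 16, 16, 0.8f));

        System.out.println(root.getQuantity());  // 2 children
        Spatial earth = root.getChild("earth");  // access by name
        Spatial first = root.getChild(0);        // access by index
        root.detachChild(earth);                 // removal
        System.out.println(root.getQuantity());  // 1 child
    }
}
```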
The Geometry class represents a Leaf Node. This class encapsulates renderable data such as the Colour Buffer; Normal, Tangent and Bi-Normal Buffers; Texture Buffer (for per-texture, per-vertex mappings); and VBO support; and includes various methods to aid the manipulation of these buffers, along with common operations associated with maintaining geometric data. There are various pre-existing Geometry implementations within the framework, which cover a wide range of usage scenarios, such as Line, Point, Curve, TriMesh, QuadMesh and Text. TriMesh is the most utilised of these sub-types and is typically used to represent 3D models loaded from external assets/data files, as well as a number of primitive objects including Sphere, Box, Torus, Teapot (the famous Utah Teapot), Quad, Disk, Cylinder and many more.
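A few of these primitives, constructed as a sketch (the constructor arguments shown, which control tessellation and size, are illustrative values):

```java
import com.jme.math.Vector3f;
import com.jme.scene.Node;
import com.jme.scene.shape.Box;
import com.jme.scene.shape.Sphere;
import com.jme.scene.shape.Teapot;
import com.jme.scene.shape.Torus;

public class Primitives {
    public static Node buildPrimitives() {
        Node holder = new Node("primitives");
        // Each shape below is a TriMesh subclass.
        holder.attachChild(new Sphere("sphere", 32, 32, 2f));     // zSamples, radialSamples, radius
        holder.attachChild(new Box("box", new Vector3f(5f, 0f, 0f), 1f, 1f, 1f));
        holder.attachChild(new Torus("torus", 32, 32, 0.5f, 2f)); // samples, inner/outer radii
        holder.attachChild(new Teapot("teapot"));                 // the Utah Teapot
        return holder;
    }
}
```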
Whilst the scenegraph organises the data associated with the rendering, it does not actually perform the rendering. This is handled by the Renderer object, which is created when a Homura game is launched. As mentioned above, the Renderer is associated with various parameters that control aspects of the output such as colour depth, anti-aliasing and screen resolution. The Renderer provides the mechanism to render a scene to the created display (Window, Canvas, etc.).
This is where LWJGL is utilised within the architecture, providing a concrete implementation of the renderer, which in turn uses OpenGL. The renderer is implemented as a pluggable system, meaning that you could provide your own custom renderer implementation (e.g. software-based, or one using Sun's OpenGL implementation, JOGL).
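Obtaining the LWJGL-backed renderer through jME2's DisplaySystem might look like the following sketch (the window parameters are illustrative; Homura's launcher supplies its own values):

```java
import com.jme.renderer.Renderer;
import com.jme.system.DisplaySystem;

public class DisplaySetup {
    public static Renderer createRenderer() {
        // Ask for the LWJGL-backed display system; other implementations
        // can be selected by name in the same way.
        DisplaySystem display = DisplaySystem.getDisplaySystem("LWJGL");
        // width, height, colour depth (bpp), refresh rate, fullscreen
        display.createWindow(1024, 768, 32, 60, false);
        return display.getRenderer();
    }
}
```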
The top-most node of a scenegraph (the root node) is sent to the Renderer to display. The Renderer has an associated Camera object, which defines the perspective from which the scene is rendered. The Camera encapsulates the view inside a View Frustum, which bounds the elements that are visible within the scene based on the camera's characteristics. The Renderer determines the parts of the scenegraph which are inside the Frustum and culls those which are not in view. The geometry which is inside the Frustum is then placed into a set of RenderBuckets (Transparent, Opaque and Orthographic). The Transparent and Opaque buckets are sorted to determine which objects are in front of others from the current viewpoint. The buckets are then processed by the GPU via OpenGL calls, producing the final scene on the display.
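As a sketch, a spatial can hint which bucket its geometry should be queued into, and a frame is then drawn by passing the root node to the Renderer (the method and the glassPane parameter here are illustrative, not part of Homura's API):

```java
import com.jme.renderer.Renderer;
import com.jme.scene.Node;
import com.jme.scene.Spatial;

public class RenderPass {
    public static void drawFrame(Renderer renderer, Node rootNode, Spatial glassPane) {
        // Queue translucent geometry into the Transparent bucket so it is
        // sorted and drawn after the opaque geometry.
        glassPane.setRenderQueueMode(Renderer.QUEUE_TRANSPARENT);

        renderer.clearBuffers();
        renderer.draw(rootNode);       // cull, bucket, sort, then issue GL calls
        renderer.displayBackBuffer();  // swap to present the finished frame
    }
}
```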
So how do you actually use a scenegraph? Figure 5 illustrates a common method for arranging both a 3D and a 2D scene.
Figure 5: A Typical Homura Scenegraph