Once we have a solution, we are faced with mapping it to a 2D perspective image. Having the solution at each point in only 26 directions is clearly not adequate for making a picture of roughly one million pixels, each corresponding to a distinct eye direction. Hence we perform a final integration, in this case to achieve a high-resolution angular solution at a particular point. We integrate through the volume from the eye in the traditional volume ray tracing fashion, as in [7], evaluating the brightness S along a ray parameterised by t.
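The final integration can be sketched as a standard emission-absorption ray march. The extinction coefficient `sigma`, the constant `source` term, and the fixed step size are illustrative assumptions, not values from the original; the paper's actual integrand interpolates the precomputed directional solution.

```python
import numpy as np

def march_ray(density, origin, direction, step=1.0, n_steps=256,
              sigma=0.5, source=1.0):
    """Accumulate brightness S along a ray through a density volume.

    Emission-absorption sketch: at each step the local source term is
    attenuated by the transparency accumulated so far.  `density` is a
    3D numpy array; `sigma` and `source` are illustrative constants.
    """
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    S, transparency = 0.0, 1.0
    for _ in range(n_steps):
        ijk = np.floor(pos).astype(int)
        if np.any(ijk < 0) or np.any(ijk >= density.shape):
            break                                    # ray has left the volume
        rho = density[tuple(ijk)]
        alpha = 1.0 - np.exp(-sigma * rho * step)    # opacity of this segment
        S += transparency * alpha * source           # attenuated emission
        transparency *= 1.0 - alpha
        if transparency < 1e-4:                      # early ray termination
            break
        pos += d * step
    return S
```

For a homogeneous volume this reduces to the analytic result S = source * (1 - exp(-sigma * L)) over a path of length L, which makes the sketch easy to sanity-check.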

Note that this is the intensity at each point in the direction of the eye; in effect this performs yet another scattering step. The solution used for this stage was calculated in parallel by computing the values to interpolate and summing the series (3) (to conserve memory this was actually done a slice at a time).

Since it is an approximation to the complete solution at all points in space, we can obtain an orthographic projection of the scene by simply taking a slice out of the volume. This proved useful for quick previews while modelling and debugging.
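The preview trick amounts to indexing the stored volume rather than tracing any rays. A minimal sketch, assuming the solution is held as a 3D array of brightness values (the function name and defaults are hypothetical):

```python
import numpy as np

def orthographic_slice(solution, axis=2, index=None):
    """Cheap orthographic preview: pull one slice out of the volume
    solution perpendicular to `axis`, defaulting to the middle slice.

    No integration is performed; this is only useful as a quick
    look at the stored solution while modelling and debugging.
    """
    if index is None:
        index = solution.shape[axis] // 2
    return np.take(solution, index, axis=axis)
```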

After the first stage of the algorithm we have computed a *view
independent* solution which may be saved. The second ray traced stage
extracts a *view dependent* solution.

The solution also provides lighting for the ground plane: clouds will cast shadows on the ground correctly. In addition, the solution at a point on the ground can be stochastically sampled in many directions and, coupled with ray casts to determine local occluding features, can account for diffuse shadows cast by objects such as trees, small mountains, etc.
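The stochastic sampling step can be sketched as a Monte Carlo estimate over the upper hemisphere. Both callbacks are hypothetical stand-ins: `sample_solution` interpolates the stored volume solution in a given direction, and `occluded` ray-casts against local geometry (trees, small mountains) to test for obstruction.

```python
import numpy as np

def ground_irradiance(point, sample_solution, occluded, n_samples=64, rng=None):
    """Estimate diffuse illumination at a ground point by sampling
    many directions over the hemisphere about the up axis (0, 0, 1).

    Directions are cosine-weighted, so the estimator is a plain
    average of the unoccluded samples.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        # cosine-weighted hemisphere direction (Malley's method)
        u1, u2 = rng.random(), rng.random()
        r, phi = np.sqrt(u1), 2.0 * np.pi * u2
        d = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
        if not occluded(point, d):      # local ray cast against obstacles
            total += sample_solution(point, d)
    return total / n_samples
```

Cosine weighting is a design choice here: it folds the Lambertian cosine factor into the sampling distribution, so occluded directions simply contribute zero.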

Thu Nov 17 10:01:16 EST 1994