While trying to port my Offline Maps application to Windows Presentation Foundation, I finally started experimenting with WPF.

The ultimate goal is to have a user interface designed with WPF and to view the tiles in a WPF 3D viewport with the possibility to pan, tilt, rotate and zoom, or, simply put, to travel over the tiles in a three-dimensional view.

But before all that, I’ll first have to learn WPF. And that is what this series of posts will be all about.

Today, I’ll talk to you about the camera in WPF 3D. To be more specific: the PerspectiveCamera.

**Creation of a 3D scene**

I will not be rehashing how to create a 3D scene in WPF. You can find some very good tutorials about that on the internet. One of them is this Windows Presentation Foundation (WPF) 3D Tutorial, which teaches you the basics of meshes, triangulation and normals.

**Looking at your scene**

The same tutorial also shows you how to look at your scene using a perspective camera, and gives you the chance to experiment with various viewpoints and angles. It does, however, not fully explain the interaction between the FieldOfView and Position properties of the camera. That’s the gap I’ll try to fill with this tutorial.

**The interaction between FieldOfView and Position**

What are the field of view and the position of a camera? Well, according to the MSDN documentation, the field of view of a perspective camera is…

… the horizontal bounds of the camera’s projection between the camera’s position and the image plane.

and the Position property…

… returns the location of the camera, not the LookDirection on which the camera’s projection is centered.

So, if we visualize this in a picture, we get:

Now, let’s say we have a cube with sides of 20 units and we are looking at it with a perspective camera with the following properties:

FieldOfView = 90 degrees

Position = 10 units away from the cube

We have a setup similar to this:

The green line represents the projection plane on which our scene is drawn by WPF. You can see that the projection of the back plane of the cube takes up one third of the projected width of the front, while the front of the cube takes up our complete field of view width. This is exactly the same as what we see in the application (which is of course not that surprising):
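This is not WPF code, but we can sanity-check the geometry with a few lines of Python, assuming the standard perspective relation that the visible width at distance d from the camera is 2 · d · tan(FOV/2), and that projected sizes shrink with distance by similar triangles:

```python
import math

def visible_width(distance, fov_deg):
    """Width of the scene that fits in view at a given distance from the
    camera, for a horizontal field of view of fov_deg degrees."""
    return 2 * distance * math.tan(math.radians(fov_deg) / 2)

cube_side = 20.0
front = 10.0               # camera is 10 units from the front face
back = front + cube_side   # the back face is another 20 units away

print(visible_width(front, 90))  # ~20: the front face exactly fills the view
print(front / back)              # ~0.333: the back face projects at 1/3 of the front
```

With a 90 degree field of view the half-angle is 45 degrees, whose tangent is 1, which is why the numbers work out so neatly here.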

Now, if we step further back from our cube and set the camera’s position 20 units away from it, we get the following projection:

There are a few things to notice here:

First, the front of our cube no longer takes up our complete field of view: we have some space between the sides of our cube and the intersection of our field of view with the projection plane.

Second, the projection of the back plane of the cube is no longer one third of the front side’s width, but one half.

Again, unsurprisingly, this matches what we see in our application:
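Again assuming the visible width at distance d is 2 · d · tan(FOV/2), a quick Python check confirms both observations:

```python
import math

def visible_width(distance, fov_deg):
    """Width of the scene that fits in view at a given distance from the
    camera, for a horizontal field of view of fov_deg degrees."""
    return 2 * distance * math.tan(math.radians(fov_deg) / 2)

cube_side = 20.0
front = 20.0               # camera moved back to 20 units from the front face
back = front + cube_side   # the back face is 40 units away

gap = visible_width(front, 90) - cube_side
print(gap)           # ~20 units of total slack, so ~10 on either side of the cube
print(front / back)  # 0.5: the back face now projects at 1/2 of the front
```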

Now, if we stay at our current position but narrow our field of view, we get the following projection:

Notice how the space at the sides of our cube has disappeared. With the field of view at 53 degrees and our position 20 units from the cube, the borders of our field of view again touch the sides of our cube. Also notice that the ratio of the back plane to the front plane of our cube didn’t change by changing the field of view alone (that is, compared with our previous example).

Our application again confirms our expectations.
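That 53 degree figure can be derived rather than read off the picture: the view cone just touches the cube when its half-angle sees half the cube’s width at the camera’s distance. A short Python sketch:

```python
import math

cube_side = 20.0
front = 20.0  # distance from the camera to the front face

# The view cone just touches the cube when its half-angle sees half the
# cube's width (10 units) at the camera distance (20 units):
fov = 2 * math.degrees(math.atan((cube_side / 2) / front))
print(fov)  # ~53.13 degrees, matching the 53 degrees used above

# The back-to-front ratio depends only on the distances, not on the FOV:
print(front / (front + cube_side))  # still 0.5
```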

**Conclusion**

In conclusion we can state that:

Changing the position of our camera with respect to the subject we are looking at will change the ratios of horizontal lines at different depths (Z-values).

Changing the field of view will not change those ratios, but will make the subject as a whole appear bigger or smaller.
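The two rules can be condensed into a small Python sketch (the helper names here are mine, not part of WPF) showing that the back-to-front ratio depends only on the camera distance, while the field of view only scales the picture as a whole:

```python
import math

def projected_fraction(camera_to_front, depth):
    """Fraction of the front face's projected width taken up by a
    horizontal line lying 'depth' units behind it (similar triangles)."""
    return camera_to_front / (camera_to_front + depth)

def visible_width(distance, fov_deg):
    """Width of the scene that fits in view at a given distance."""
    return 2 * distance * math.tan(math.radians(fov_deg) / 2)

# Moving the camera changes the depth ratios...
print(projected_fraction(10, 20))  # 1/3 at 10 units from a 20-unit-deep cube
print(projected_fraction(20, 20))  # 1/2 at 20 units

# ...while changing only the field of view scales the whole picture:
print(visible_width(20, 90) / visible_width(20, 53.13))  # roughly 2x larger
```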
