Surveillance of monitored areas with multi-camera systems has become increasingly common. Newly developed sensors offer opportunities to exploit novel features. A conventional camera provides information about colours, object shapes, and micro-structures; a thermal camera adds further information in darkness; and a camera with a depth sensor can determine the motion and position of an object in space even when conventional cameras are unusable. How can we register the corresponding elements in images from different cameras? There are numerous approaches to this problem. One of the most widely used is registration based on motion. With this method it is not necessary to search for salient features in the images to register the related objects, since such features would differ because of the different properties of the cameras. Motion-based registration is therefore easier and faster, but it raises other problems: shadows and shiny specular surfaces disturb motion detection. This paper describes how to register the corresponding elements in a multi-camera system and how to find a homography between the image planes in real time, so that a moving object can be registered across the images of different cameras based on depth information.
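To make the homography step concrete, the following is a minimal sketch (not the paper's implementation) of estimating a 3×3 homography between two image planes from point correspondences using the standard Direct Linear Transform, and of mapping points through it. The function and variable names are illustrative; in practice the correspondences would come from the motion- or depth-based registration discussed above.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src (DLT algorithm).

    src, dst: (N, 2) arrays of corresponding image points, N >= 4.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the
        # nine entries of H (written as a flattened vector h).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the null vector of the equation matrix: the right singular
    # vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    ph = np.hstack([pts, np.ones((pts.shape[0], 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

With at least four well-spread correspondences the homography is determined up to scale; a real-time system would typically refresh these correspondences as the tracked object moves.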