Image-guided surgery offers great advantages to surgeons by enabling tools to be tracked in 3D space and navigation based on a virtual model of the patient. In robot-assisted procedures, both the inherent accuracy of the system components and the quality of the registration procedures are critical to delivering high-precision treatment. One of the major barriers to more technology-integrated procedures is that alterations in the operating room environment can fundamentally change the performance of the system, decreasing its accuracy and thereby posing significant danger to the patient. From a control point of view, surgical events may include motion of the robot, motion of the camera, or motion of the patient. This paper describes a new concept for handling these events: tracking and automatically compensating for abrupt changes that may affect the accuracy of a robot-integrated interventional system. Our solution uses all information available at a given time, including the intra-operative tracker's internal base frame, to distinguish between the different surgical events. The concept was developed and tested on the neurosurgical robot system at Johns Hopkins University. Initial experiments on data recorded from simulated scenarios showed that the algorithm correctly identified each case.
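To make the event-discrimination idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of how robot, camera, and patient motion can be told apart from tracked marker poses. It assumes two rigid markers, one on the robot and one on the patient, whose 4x4 homogeneous poses are reported in the optical tracker's (camera's) base frame at successive time steps: the relative robot-to-patient pose is invariant to camera motion, so a change in the apparent poses without a change in the relative pose indicates that the camera itself moved. All function names, the two-marker setup, and the translation-only motion test are illustrative assumptions.

```python
import numpy as np

def pose_delta(T_prev, T_curr):
    """Translation magnitude (in meters) between two 4x4 homogeneous poses.
    A full implementation would also compare rotations; translation alone
    keeps this sketch short."""
    return np.linalg.norm(T_curr[:3, 3] - T_prev[:3, 3])

def classify_event(robot_prev, robot_curr, patient_prev, patient_curr, tol=1e-3):
    """Classify which component moved between two time steps, given the
    robot and patient marker poses expressed in the tracker's base frame."""
    d_robot = pose_delta(robot_prev, robot_curr)
    d_patient = pose_delta(patient_prev, patient_curr)
    # The robot-to-patient relative pose cancels any common camera motion.
    rel_prev = np.linalg.inv(robot_prev) @ patient_prev
    rel_curr = np.linalg.inv(robot_curr) @ patient_curr
    d_rel = pose_delta(rel_prev, rel_curr)
    if d_robot < tol and d_patient < tol:
        return "no_motion"
    if d_rel < tol:
        # Both apparent poses shifted, but their relative pose is unchanged:
        # the tracker (camera) base frame itself has moved.
        return "camera_motion"
    if d_robot >= tol and d_patient < tol:
        return "robot_motion"
    if d_patient >= tol and d_robot < tol:
        return "patient_motion"
    return "combined_motion"
```

In this scheme the tracker's internal base frame plays the role described in the text: it is the common reference that lets a coherent shift of all markers be attributed to camera motion rather than to simultaneous robot and patient motion.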