Automatic annotation of video recordings is challenging for several reasons: the number of database video instances to be annotated is huge, tedious manual labeling sessions are required, multi-modal annotation demands precise information about space, time, and context, and the different labeling options require explicit agreements among annotators. Crowd-sourcing with quality assurance by experts can help here. We have developed a dedicated tool: individual experts can annotate videos over the Internet, their work can be merged and filtered, the annotated material can be evaluated by machine learning methods, and automated annotation can begin once a predefined confidence level is reached. A relatively small number of manually labeled instances can thus efficiently bootstrap the machine annotation procedure. We present the new mathematical concepts and algorithms for semi-supervised induction, together with the corresponding manual annotation tool, which features special visualization methods for crowd-sourced users. A notable feature is that the annotation tool is usable by people unfamiliar with machine learning; for example, they can start and manage a complex bootstrapping process.
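The confidence-gated bootstrapping mentioned above can be illustrated with a minimal self-training sketch. This is not the paper's algorithm: the nearest-centroid classifier on 1-D features, the inverse-distance confidence score, and the 0.9 threshold are all illustrative assumptions; only the overall scheme (train on a small labeled seed, promote machine labels that exceed a confidence level, retrain) follows the text.

```python
# Minimal self-training ("bootstrapping") sketch. Hypothetical stand-in
# for the paper's method: a nearest-centroid classifier on 1-D features.

def centroids(labeled):
    # Mean feature value per class label.
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    # Return (label, confidence); confidence from inverse-distance weights.
    dists = {y: abs(x - c) for y, c in cents.items()}
    label = min(dists, key=dists.get)
    total = sum(1.0 / (d + 1e-9) for d in dists.values())
    return label, (1.0 / (dists[label] + 1e-9)) / total

def self_train(labeled, unlabeled, threshold=0.9, max_rounds=10):
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(max_rounds):
        cents = centroids(labeled)
        accepted, rest = [], []
        for x in pool:
            y, conf = predict(cents, x)
            (accepted if conf >= threshold else rest).append((x, y))
        if not accepted:          # nothing confident enough: stop
            break
        labeled += accepted       # promote confident machine labels
        pool = [x for x, _ in rest]
    return centroids(labeled), pool   # final model, still-unlabeled items

# Example: two clusters near 0 and 10, seeded by one manual label each.
seed = [(0.0, "a"), (10.0, "b")]
model, remaining = self_train(seed, [0.5, 1.0, 9.0, 9.5, 5.2])
```

In this toy run, the points near the cluster centers are absorbed over a few rounds, while the ambiguous point 5.2 never reaches the confidence threshold and stays unlabeled, mirroring the role of the predefined confidence level in the text.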