Students use blackboard-style lecture videos to learn new topics. However, current video navigation practices do not adequately support the goals students typically have when seeking information in a video. This calls for more intuitive tools that help students find information in blackboard-style lecture videos faster.

  • Project NoteVideo analyzes and identifies conceptual ‘objects’ in a blackboard-style lecture video, composes a summary image of the video from them, and uses that image as an in-scene navigation interface for the user to interact with.
  • This allows users to jump directly to the video frame where an object first appears and is discussed, instead of navigating the video linearly through time.



We render blackboard-style lecture videos into an interactive image in which students can click directly on an object to start playback from the moment that object appears.



The system works by analysing the video and extracting visual objects and elements with a video processor. All extracted objects are saved as JSON and image sets, then fed to the interface player, which is built on web technologies.
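As a rough illustration of this pipeline, each extracted object could be stored as a small record that the interface player loads. The field names below are illustrative assumptions, not the project's actual schema:

```python
import json

# Hypothetical record for one extracted video object.
# All field names here are assumptions for illustration only.
video_object = {
    "id": "obj_012",
    "image": "objects/obj_012.png",  # cropped image of the drawn element
    "bbox": [120, 80, 310, 150],     # x, y, width, height in the frame
    "start_time": 93.4,              # seconds into the video when drawing began
}

# The interface player would load a list of such records as its index.
index = json.dumps([video_object], indent=2)
print(index)
```

Clicking an object in the composed image would then look up its `start_time` and seek the video player to that timestamp.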

Element extraction happens when the video processor detects a drawing event by the video's author, using frame differencing. The timestamp at which the drawing starts is tagged to the extracted object.
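A minimal sketch of this frame-differencing idea, assuming a drawing event begins when the number of changed pixels between consecutive frames crosses a threshold (the function name and threshold values are illustrative, not the authors' implementation):

```python
import numpy as np

def detect_drawing_events(frames, fps=30.0, threshold=10, min_changed_pixels=50):
    """Return timestamps (in seconds) at which drawing appears to start.

    frames: list of grayscale frames as 2-D uint8 numpy arrays.
    A 'drawing event' is assumed to start when the count of pixels that
    changed by more than `threshold` crosses `min_changed_pixels`.
    """
    events = []
    drawing = False
    for i in range(1, len(frames)):
        # Count pixels that changed noticeably since the previous frame.
        diff = np.abs(frames[i].astype(int) - frames[i - 1].astype(int))
        changed = np.count_nonzero(diff > threshold)
        if changed >= min_changed_pixels and not drawing:
            drawing = True
            events.append(i / fps)  # tag the moment the drawing starts
        elif changed < min_changed_pixels:
            drawing = False
    return events

# Synthetic example: three blank frames, then "ink" appears from frame 3 on.
h, w = 40, 60
frames = [np.zeros((h, w), dtype=np.uint8) for _ in range(6)]
for i in range(3, 6):
    frames[i] = frames[i - 1].copy()
    frames[i][10:20, 5 * i:5 * i + 10] = 255  # new stroke each frame

events = detect_drawing_events(frames, fps=30.0)
print(events)
```

Here the continuous strokes in frames 3–5 register as a single drawing event tagged at frame 3's timestamp; a real processor would additionally crop the changed region to produce the object's image.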



We evaluated our system and found that using NoteVideo for typical information-finding tasks in a blackboard-style lecture video outperforms both transcript-based and scrubber-based interfaces.


Future Direction

Deploy the system at scale, and add features such as collaborative comment tagging and a toolkit for building interactive videos with embedded assessments.









Research Team
PI (Faculty): Prof. Shengdong Zhao
Members: Toni-Jan Keith Monserrat, Kevin McGee, Anshul Vikram Pandey