Error Calibration

Functions:

sample_error_frames(confidences, bodyparts, ...)

Randomly sample frames, enriching for those with low-confidence keypoint detections.

load_sampled_frames(sample_keys, video_dir, ...)

Load sampled frames from a directory of videos.

save_annotations(project_dir, annotations, ...)

Save calibration annotations to a csv file.

save_params(project_dir, estimator)

Save config parameters learned via calibration.

noise_calibration(project_dir, coordinates, ...)

Perform manual annotation to calibrate the relationship between keypoint error and neural network confidence.

keypoint_moseq.calibration.sample_error_frames(confidences, bodyparts, use_bodyparts, num_bins=10, num_samples=100, conf_pseudocount=0.001)[source]

Randomly sample frames, enriching for those with low-confidence keypoint detections.

Parameters:
  • confidences (dict) – Keypoint detection confidences for a collection of recordings

  • bodyparts (list) – Label for each keypoint represented in confidences

  • use_bodyparts (list) – Ordered subset of keypoint labels to be used for modeling

  • num_bins (int, default=10) – Number of bins to use for enriching low-confidence keypoint detections. Confidence values for all used keypoints are divided into log-spaced bins and an equal number of instances are sampled from each bin.

  • num_samples (int, default=100) – Total number of frames to sample

  • conf_pseudocount (float, default=1e-3) – Pseudocount used to augment keypoint confidences.

Returns:

sample_keys – List of sampled frames as tuples with format (key, frame_number, bodypart)

Return type:

list of tuples
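
A minimal usage sketch follows; the recording name, bodypart labels, and confidence values are hypothetical, synthetic stand-ins for real pose-tracking output:

    # Hypothetical data: one recording with T=1000 frames and K=4 keypoints
    import numpy as np
    from keypoint_moseq.calibration import sample_error_frames

    bodyparts = ["nose", "left_ear", "right_ear", "tail_base"]  # labels for all K keypoints
    use_bodyparts = ["nose", "tail_base"]                       # ordered subset used for modeling
    confidences = {"mouse1_session1": np.random.uniform(0.01, 1.0, (1000, 4))}

    sample_keys = sample_error_frames(
        confidences, bodyparts, use_bodyparts, num_bins=10, num_samples=100)

    # Each element is a tuple of the form (recording_key, frame_number, bodypart)
    print(sample_keys[0])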

keypoint_moseq.calibration.load_sampled_frames(sample_keys, video_dir, video_frame_indexes, video_extension=None)[source]

Load sampled frames from a directory of videos.

Parameters:
  • sample_keys (list of tuples) – List of sampled frames as tuples with format (key, frame_number, bodypart)

  • video_dir (str) – Path to directory containing videos

  • video_frame_indexes (dict) – Dictionary mapping recording names to arrays of video frame indexes. This is useful when the original keypoint coordinates used for modeling corresponded to a subset of frames from each video (i.e. if videos were trimmed or coordinates were downsampled).

  • video_extension (str, default=None) – Preferred video extension (passed to keypoint_moseq.util.find_matching_videos())

Returns:

sample_keys – Dictionary mapping elements from sample_keys to the corresponding video frames.

Return type:

dict
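
A brief sketch of loading the sampled frames; the directory path, video name, and sample key are placeholders:

    import numpy as np
    from keypoint_moseq.calibration import load_sampled_frames

    video_dir = "/path/to/videos"  # should contain e.g. mouse1_session1.mp4

    # If every video frame was used for modeling, the index map is simply 0..T-1
    video_frame_indexes = {"mouse1_session1": np.arange(1000)}

    # A sample key as returned by sample_error_frames (hypothetical values)
    sample_keys = [("mouse1_session1", 250, "nose")]

    frames = load_sampled_frames(
        sample_keys, video_dir, video_frame_indexes, video_extension=".mp4")

    # `frames` maps each (key, frame_number, bodypart) tuple to the loaded video frame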

keypoint_moseq.calibration.save_annotations(project_dir, annotations, video_frame_indexes)[source]

Save calibration annotations to a csv file.

Parameters:
  • project_dir (str) – Save annotations to {project_dir}/error_annotations.csv

  • annotations (dict) – Dictionary mapping sample keys to annotated keypoint coordinates. (See keypoint_moseq.calibration.sample_error_frames() for format of sample keys)

  • video_frame_indexes (dict) – Dictionary mapping recording names to arrays of video frame indexes. This is useful when the original keypoint coordinates used for modeling corresponded to a subset of frames from each video (i.e. if videos were trimmed or coordinates were downsampled).
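
A minimal sketch, assuming each annotation value is an (x, y) coordinate pair; the project path, recording name, and coordinates are placeholders:

    import numpy as np
    from keypoint_moseq.calibration import save_annotations

    project_dir = "/path/to/project"

    # Map each sample key (key, frame_number, bodypart) to the annotated location
    # (the (x, y) format is an assumption for illustration)
    annotations = {("mouse1_session1", 250, "nose"): (412.0, 187.5)}

    video_frame_indexes = {"mouse1_session1": np.arange(1000)}

    # Writes {project_dir}/error_annotations.csv
    save_annotations(project_dir, annotations, video_frame_indexes)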

keypoint_moseq.calibration.save_params(project_dir, estimator)[source]

Save config parameters learned via calibration.

Parameters:
  • project_dir (str) – Save parameters to {project_dir}/config.yml

  • estimator (dict) – Dictionary containing calibration parameters with keys:
    • conf_threshold (float) – confidence threshold for outlier detection

    • slope (float) – slope of the error vs confidence regression line

    • intercept (float) – intercept of the error vs confidence regression line
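
A short sketch of the expected estimator dictionary; the numeric values are placeholders, not the output of a real calibration:

    from keypoint_moseq.calibration import save_params

    estimator = {
        "conf_threshold": 0.5,  # confidence threshold for outlier detection
        "slope": -0.5,          # slope of the error vs confidence regression line
        "intercept": 1.0,       # intercept of the regression line
    }

    # Writes the learned parameters into {project_dir}/config.yml
    save_params("/path/to/project", estimator)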

keypoint_moseq.calibration.noise_calibration(project_dir, coordinates, confidences, *, bodyparts, use_bodyparts, video_dir, video_extension=None, conf_pseudocount=0.001, video_frame_indexes=None, **kwargs)[source]

Perform manual annotation to calibrate the relationship between keypoint error and neural network confidence.

This function creates a widget for interactive annotation in Jupyter Lab. Users mark correct keypoint locations for a sequence of frames, and a regression line is fit to the log(confidence), log(error) pairs obtained through annotation. The regression coefficients are used during modeling to set a prior on the noise level for each keypoint on each frame.
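
The sketch below illustrates the idea of the fit conceptually; it is not the library's internal implementation, and all values are hypothetical:

    import numpy as np

    confidences = np.array([0.2, 0.5, 0.8, 0.95])  # detection confidences at annotated frames
    errors = np.array([25.0, 10.0, 4.0, 2.0])      # pixel distance from detection to annotation

    conf_pseudocount = 1e-3
    x = np.log(confidences + conf_pseudocount)
    y = np.log(errors)

    # slope and intercept analogous to those saved to config.yml by save_params
    slope, intercept = np.polyfit(x, y, 1)
    print(slope, intercept)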

Follow these steps to use the widget:
  • Run the cell below. A widget should appear with a video frame. The yellow marker denotes the automatically detected location of the bodypart.

  • Annotate each frame with the correct location of the labeled bodypart:
    • Left click to specify the correct location - an “X” should appear.

    • Use the prev/next buttons to annotate additional frames.

    • Click and drag the bottom-right shaded corner of the widget to adjust the image size.

    • Use the toolbar to the left of the figure to pan and zoom.

  • It is suggested to annotate at least 50 frames; progress is tracked by the ‘annotations’ counter. The counter includes saved annotations from previous sessions if you’ve run this widget on this project before.

  • Annotations will be automatically saved once you’ve completed at least 20 annotations. Each new annotation after that triggers an auto-save of all your work. The message at the top of the widget will indicate when your annotations are being saved.

Parameters:
  • project_dir (str) – Project directory. Must contain a config.yml file.

  • coordinates (dict) – Keypoint coordinates for a collection of recordings. Values must be numpy arrays of shape (T,K,2) where K is the number of keypoints. Keys can be any unique str, but must start with the name of a video file in video_dir.

  • confidences (dict) – Nonnegative confidence values for the keypoints in coordinates as numpy arrays of shape (T,K).

  • bodyparts (list) – Label for each keypoint represented in coordinates

  • use_bodyparts (list) – Ordered subset of keypoint labels to be used for modeling

  • video_dir (str) – Path to directory containing videos. Each video should correspond to a key in coordinates. The key must contain the video name as a prefix.

  • video_extension (str, default=None) – Preferred video extension (used in keypoint_moseq.util.find_matching_videos())

  • conf_pseudocount (float, default=0.001) – Pseudocount added to confidence values to avoid log(0) errors.

  • video_frame_indexes (dict, default=None) – Dictionary mapping recording names to arrays of video frame indexes. This is useful when the original keypoint coordinates used for modeling corresponded to a subset of frames from each video (i.e. if videos were trimmed or coordinates were downsampled).
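
A hypothetical end-to-end call; the paths, recording key, and arrays are placeholders, and in practice the coordinates and confidences would come from your pose-tracking output:

    import numpy as np
    from keypoint_moseq.calibration import noise_calibration

    project_dir = "/path/to/project"  # must contain config.yml
    video_dir = "/path/to/videos"     # must contain mouse1_session1.mp4 (or similar)

    # Keys must start with the name of a video in video_dir
    coordinates = {"mouse1_session1": np.random.rand(1000, 4, 2) * 500}          # (T, K, 2)
    confidences = {"mouse1_session1": np.random.uniform(0.01, 1.0, (1000, 4))}   # (T, K)

    noise_calibration(
        project_dir, coordinates, confidences,
        bodyparts=["nose", "left_ear", "right_ear", "tail_base"],
        use_bodyparts=["nose", "tail_base"],
        video_dir=video_dir,
        video_extension=".mp4")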