opticalFlow#
Executive Summary#
The module reads in an image message and an attitude message. When two image messages have provided image pointers at separate times, their attitudes (BN at the time of each image) are identified and saved. The first image is processed to find “good features” using the Shi-Tomasi corner detector, while the second image is used with optical flow in order to match the feature pairs between images.
A message is then written containing the new and old features, the number of features, and the attitudes and times at which they were taken.
Descriptions of the core algorithms can be found in the OpenCV documentation: GoodFeatures and OpticalFlow.
Message Connection Descriptions#
The following table lists all the module input and output messages. The module msg connection is set by the user from Python. The msg type contains a link to the message structure definition, while the description provides information on what this message is used for.
| Msg Variable Name | Msg Type | Description |
| --- | --- | --- |
| keyPointsMsg | PairedKeyPointsMsgPayload | Output key point message |
| imageInMsg | CameraImageMsgPayload | Input camera message containing images |
| attitudeMsg | NavAttMsgPayload | Input spacecraft body to inertial attitude |
| ephemerisMsg | EphemerisMsgPayload | Input target body to inertial ephemeris |
Module Description#
The logic of the module flows as follows: each time updateState is called and messages are processed, the module checks whether a new image has arrived. If no image was processed previously, it finds features in the image and stores this data as the old image. When a second image comes through the module, it becomes the new image of a pair, and a flag is raised indicating that both an old and a new image are present. Optical flow can then be used to track the features between the two images. Once that is done, all of the data is written to a message containing the measurement times, the attitudes when the images were taken, the features, the camera ID, and the number of features matched.
A mask is applied to the images used for Shi-Tomasi feature detection in order to constrain the feature detection to the disk and avoid the sharp edges at the limb. This is done using the following steps:

1. First the distance transform of the input image is computed, providing the distance of each pixel to the dark background
2. Second the image is thresholded
3. Third the image is eroded by a masking value (eroding away N pixels from the edge of the limb)
4. Finally the mask is converted to the correct type
The module outputs a message containing the key point pairs between two sets of images and the relevant information for each image.
User Guide#
This section outlines the steps needed to set up the OpticalFlow module in Python.
Import the opticalFlow class:
from Basilisk.fswAlgorithms import opticalFlow
Create an instantiation of the module:
module = opticalFlow.OpticalFlow()
Define general parameters. The minTimeBetweenPairs parameter forces a minimum number of seconds between the two images of a pair used for feature tracking. For example:
module.modelTag = "opticalFlow"
module.minTimeBetweenPairs = 5
Define parameters for the goodFeaturesToTrack method:
module.maxNumberFeatures = 100
module.qualityLevel = 0.1
module.minumumFeatureDistance = 5
module.blockSize = 10
Define parameters specific to the optical flow:
module.criteriaMaxCount = 10
module.criteriaEpsilon = 0.01
module.flowSearchSize = 10
module.flowMaxLevel = 2
Connect the appropriate messages:
module.imageInMsg.subscribeTo(imageInMsg)
module.attitudeMsg.subscribeTo(attInMsg)
module.ephemerisMsg.subscribeTo(ephemInMsg)
Class OpticalFlow#
-
class OpticalFlow : public SysModel#
Feature tracking module.
Public Functions
-
void updateState(uint64_t currentSimNanos)#
This method reads the images and follows this high-level logic:
- If no image has been saved from previous time steps and an image is present, save it and make a mask
- If an image was previously saved and a second image arrives, perform the feature matching
- Return the features and clear the memory of both previous images
- Parameters:
currentSimNanos – The clock time at which the function was called (nanoseconds)
- Returns:
void
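The branching above can be mirrored in a small, self-contained state machine. This is a plain-Python sketch, not the Basilisk API; the class name and callback parameters are hypothetical stand-ins for the module's internal OpenCV calls:

```python
class PairTracker:
    """Toy version of the updateState flow: store the first image,
    match features when the second arrives, then clear both."""

    def __init__(self, min_time_between_pairs=1.0):
        self.min_dt = min_time_between_pairs
        self.old_image = None
        self.old_time = None

    def update(self, image, time_s, find_features, track_features):
        if self.old_image is None:
            # First image of a pair: save it (the module also builds the mask here)
            self.old_image, self.old_time = image, time_s
            return None
        if time_s - self.old_time < self.min_dt:
            # Enforce the minimum separation between the two images of a pair
            return None
        old_feats = find_features(self.old_image)
        new_feats = track_features(self.old_image, image, old_feats)
        result = {"times": (self.old_time, time_s),
                  "old_features": old_feats,
                  "new_features": new_feats,
                  "number_matched": len(new_feats)}
        # Clear the memory of both images, ready for the next pair
        self.old_image = self.old_time = None
        return result

# Dummy callbacks stand in for Shi-Tomasi detection and optical flow
tracker = PairTracker(min_time_between_pairs=5.0)
find = lambda img: [(1, 1), (2, 2)]
track = lambda old, new, feats: [(x + 1, y) for x, y in feats]

assert tracker.update("img0", 0.0, find, track) is None  # first image saved
assert tracker.update("img1", 2.0, find, track) is None  # too soon, skipped
out = tracker.update("img2", 6.0, find, track)           # pair complete
```

Only a completed pair produces an output, matching the module's behavior of writing the key-point message once both images of a pair have been processed.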
-
void reset(uint64_t currentSimNanos)#
This method performs a complete reset of the module. Local module variables that retain time varying states between function calls are reset to their default values.
- Parameters:
currentSimNanos – The clock time at which the function was called (nanoseconds)
- Returns:
void
-
void makeMask(cv::Mat const &inputBWImage, cv::Mat &mask) const#
This method reads a black-and-white image and makes a mask in order to remove the limb of the target. First, the distance transform of the input image is computed to get the distance of each pixel from the dark (zero) background. Second, the image is thresholded. Third, the image is eroded by a masking value (eroding away N pixels from the edge of the limb). Finally, the mask is converted to the correct type.
- Parameters:
inputBWImage – cv::Mat of the input image
mask – cv::Mat of the output mask (binary black and white image)
- Returns:
void
Public Members
-
std::string directoryName = ""#
Directory name for module to read an image directly.
-
std::string imageFileExtension = ".png"#
File extension used when reading images directly from a directory.
-
Message<PairedKeyPointsMsgPayload> keyPointsMsg#
The name of the output message containing key points.
-
ReadFunctor<CameraImageMsgPayload> imageInMsg#
The name of the camera output message containing images.
-
ReadFunctor<NavAttMsgPayload> attitudeMsg#
The name of the input attitude information.
-
ReadFunctor<EphemerisMsgPayload> ephemerisMsg#
The name of the input central target ephemeris data.
-
uint64_t sensorTimeTag#
[ns] Current time tag for sensor output
-
double minTimeBetweenPairs = 1#
[s] Minimum time between pairs of images
-
bool slidingWindowImages = false#
[bool] If true, process images as a sliding window: the new image of a pair is reused as the old image of the next pair
-
int32_t maxNumberFeatures = 100#
OpenCV specific arguments needed for Shi-Tomasi “good features to track”
-
int32_t criteriaMaxCount = 10#
OpenCV specific arguments needed for OpticalFlow
-
int32_t thresholdMask = 20#
OpenCV specific arguments needed for masking