I would think it would be something analogous to a list of cv::KeyPoint objects.

I'll add this to the next release. Note, however, that each keypoint consumes a fair amount of memory (the descriptor alone is 32 bytes), so converting a list of keypoints to Python objects could make the camera run out of memory. This might not be very usable until our next camera is released.

You could do this without accessing individual keypoints by using the deltas of the kptmatch objects' centers (cx2 - cx1, cy2 - cy1).
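As a rough illustration of that idea, here is a minimal Python sketch. The `Match` tuple and its `cx`/`cy` fields are stand-ins for the camera's kptmatch center values, not the actual API:

```python
# Minimal sketch: estimate frame-to-frame motion from the change in
# match centers. Match stands in for a kptmatch object; only the
# center coordinates (cx, cy) are used.
from collections import namedtuple

Match = namedtuple("Match", ["cx", "cy"])

def center_delta(prev_match, curr_match):
    """Return (dx, dy): how far the matched region moved between frames."""
    return (curr_match.cx - prev_match.cx, curr_match.cy - prev_match.cy)

# Example: the match center moved 5 px right and 2 px down.
prev = Match(cx=100, cy=60)
curr = Match(cx=105, cy=62)
dx, dy = center_delta(prev, curr)  # -> (5, 2)
```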

I also tried this before using optical flow, and it worked very well (I drew a map of my movement through the room). I'm attaching a couple of (very rough) scripts I used: one for the camera, the other for the host side, which uses pygame to draw the map.

In my case I'm travelling down a hallway with a fair number of wall features plus occasional obstacles. I'm not doing localization/mapping, so the displacement information isn't as useful to me as position relative to the side walls or obstacles. I currently have my MV 7 pointing forward, and I'm using find_displacement with two small rectangular areas on the left and right. I subtract their absolute values to find differential forward motion minus common spin:
diff = math.fabs(delta_x_l) - math.fabs(delta_x_r)
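Here is a small runnable sketch of that centering signal. The function name and the sample delta values are hypothetical; the math is just the line above:

```python
import math

def centering_error(delta_x_l, delta_x_r):
    """Compare x-displacement magnitudes from the left and right ROIs.

    Forward motion down a centered hallway flows both walls outward at
    similar speeds, while spin moves both in a common direction, so the
    difference of magnitudes highlights left/right asymmetry: a nonzero
    result suggests the camera is drifting toward one wall.
    """
    return math.fabs(delta_x_l) - math.fabs(delta_x_r)

# Hypothetical readings: the left wall appears to flow faster than the
# right, suggesting the camera is closer to the left wall.
print(centering_error(-6.0, 3.5))  # 2.5
```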

This works fairly well for centering movement down a hallway until there is a junction or obstacle. I suspect I could get more useful information with further field segmentation, but with keypoint matches I can go directly to more established SfM (structure-from-motion) algorithms.

There could be some reduction in the data: I don't need the full keypoint info, just the matched point locations in the image. Two lists of point locations, where the same index in each list represents a match, would work well.
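To make the proposed format concrete, here is a sketch with made-up coordinates showing how little data is needed and how displacements fall out of it:

```python
# Sketch of the proposed compact match format: two parallel lists of
# (x, y) image coordinates, where index i in both lists is one match.
pts_prev = [(12, 40), (88, 17), (50, 50)]
pts_curr = [(15, 41), (90, 15), (54, 52)]

# Per-match displacement vectors, recovered without any keypoint objects.
deltas = [(x2 - x1, y2 - y1)
          for (x1, y1), (x2, y2) in zip(pts_prev, pts_curr)]
print(deltas)  # [(3, 1), (2, -2), (4, 2)]
```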

On a somewhat related note, most of what I'm doing will be memory intensive, so if the next design iteration offers an option for a larger RAM capacity at a higher price, I would be interested in that.