I have no idea how to open a pull request against a submodule, since qt-creator is not administered by OpenMV.
For now I've listed all my modifications here, and I'll figure out how to handle the submodule pull request later.

There might be some other modifications I've missed.
Anyway, I'm running OpenMV IDE 2.0.0 with full functionality, using an OpenMV3 R2 (I believe it's an STM32F765).

Let me know when you'll be on Kickstarter; I just purchased your WiFi shield as well.

Questions:
A. What is the newest firmware version? My build of OpenMV came with firmware 2.9.0, but the IDE keeps prompting me to upgrade to a newer version. I think I'm already running the newest OpenMV firmware...
B. I'm expecting a lot more functionality, like contour finding, dilate, etc. Have you implemented these? If so, could you point me to the documentation?

1. We already support dilate and erode (https://docs.openmv.io/library/omv.imag ... age.dilate); they have been there for a long time. We support find_blobs() for blob finding. We do not support contour finding. The reason is that much of our stack was developed for the M4, which has extremely limited memory: contours require storing a list of points per color region, whereas find_blobs() returns a fixed-size struct. Unless you need the shape of the blob, you don't need contour finding. Even then, with CNNs available, using the old OpenCV techniques isn't the best use of time. For example, after find_blobs() locates a color blob, you can run a CNN on that region to determine what's going on under it.
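The memory argument above can be sketched in plain Python. This is not OpenMV's C implementation, just an illustration: blob statistics fold every pixel into a fixed-size summary, while a contour has to store one point per boundary pixel, so its memory grows with the region.

```python
# Pure-Python sketch (not OpenMV's actual C code) of the memory trade-off
# between find_blobs()-style stats and contour extraction.

def blob_stats(mask):
    """Accumulate blob statistics in O(1) memory per blob:
    bounding box, pixel count, and centroid."""
    x0 = y0 = float("inf")
    x1 = y1 = -1
    pixels = sx = sy = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                x0, y0 = min(x0, x), min(y0, y)
                x1, y1 = max(x1, x), max(y1, y)
                pixels += 1
                sx += x
                sy += y
    return {
        "rect": (x0, y0, x1 - x0 + 1, y1 - y0 + 1),
        "pixels": pixels,
        "cx": sx / pixels,
        "cy": sy / pixels,
    }

def contour(mask):
    """Collect boundary points: memory grows with the blob's perimeter."""
    pts = []
    h, w = len(mask), len(mask[0])
    for y in range(h):
        for x in range(w):
            if mask[y][x] and any(
                nx < 0 or ny < 0 or nx >= w or ny >= h or not mask[ny][nx]
                for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            ):
                pts.append((x, y))
    return pts

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(blob_stats(mask))    # fixed-size summary: rect=(1, 1, 3, 3), 9 pixels, centroid (2.0, 2.0)
print(len(contour(mask)))  # 8 boundary points; count grows with the region's perimeter
```

On a 320x240 image a single large region's contour can be thousands of points, while its blob summary is still the same handful of numbers, which is why the fixed-size struct fits the M4's memory budget.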

...

As for adding contours... the return on investment isn't there for me. If you'd like the feature, you can always edit the C code. Any major new feature work will go into ML/CNN support. Moving forward, Ibrahim and I want to avoid trying to "put OpenCV on a microcontroller"; instead, we just want to support deep learning on it and be able to run models trained on a desktop CPU.

The reason for this is to avoid endless feature creep that we become the bottleneck for. Please understand that when we started the OpenMV Cam project, the goal was just to create a programmable color-tracking sensor. In particular, if you compare our product to the CMUcam1/2/3/4 and the Pixy (CMUcam5), we use the exact same type of blob-detection technique; we were just trying to make the sensor more useful. It was never our goal to be OpenCV. However, because we had spare firmware space, we added a lot more features in response to customer demands. We're trying to scale that back now. It's not possible for two guys working in their free time to add new features endlessly. Deep learning support is the best way we can scale functionality without being the limiter on it.

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
  (commit or discard the untracked or modified content in submodules)

	modified:   qt-creator (new commits, modified content)

Untracked files:
  (use "git add <file>..." to include in what will be committed)
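For the "modified: qt-creator (new commits, modified content)" state above, the usual submodule workflow is: commit inside the submodule, then record the submodule's new commit in the superproject. The sketch below reproduces that state in a throwaway directory; the repo names and paths are made up for illustration, and for a real pull request you would push the submodule branch to your own fork of qt-creator first.

```shell
# Hypothetical demo in a temp directory (names/paths are placeholders):
# how submodule changes get committed and recorded in the superproject.
set -eu
T=$(mktemp -d); cd "$T"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q sub                                   # stands in for qt-creator upstream
git -C sub commit -q --allow-empty -m "init"
git init -q super                                 # stands in for the superproject
git -C super -c protocol.file.allow=always submodule add -q "$T/sub" qt-creator
git -C super commit -q -m "add qt-creator submodule"

# Local edits inside the submodule -> "modified content" in git status.
echo change > super/qt-creator/file.txt
git -C super/qt-creator add file.txt
git -C super/qt-creator commit -q -m "my modification"   # -> "new commits"

# Record the submodule's new commit in the superproject. For a real PR,
# push the submodule branch to a fork of qt-creator before this step.
git -C super add qt-creator
git -C super commit -q -m "bump qt-creator submodule pointer"
git -C super status --short                       # prints nothing: tree is clean
```

The key point is that the superproject only tracks a single commit hash for the submodule, so `git add qt-creator` in the superproject is what turns "new commits" into a committed pointer update.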