Algorithm for changing hair color

I have two images: a face and a mask. The mask contains only 0s and 1s, and it selects the hair. I want to change the hair color and paste it back into the image.

I've already tried changing H in the HSV representation of the image, but I have a lot of trouble with black hair. If I change only H, then black remains black. If I try to change S or V, the image becomes slightly unnatural, because the hair gets too bright and stands out.

I also tried histogram matching, but there was the same problem: the hair and the face didn't match.

Is there an algorithm to do this? I believe it must be relatively simple, but I can't figure out how to do it.

See also questions close to this topic

In the book The Pragmatic Programmer, the authors discuss an algorithm that may be useful for a sports reporter. The algorithm would pull interesting sports facts from a wide data set of baseball statistics, so that the reporter can discuss only the most interesting facts on air.

This is a design I want to employ in a Java project using a SQL database. What would that design look like? Also, if there is a name for this "field of study", please provide it, as finding information about such a design is difficult.

Following are two algorithms I have written to determine whether two numbers have the same digits and the same length. They're self-explanatory, but if anything is not clear, please comment and I will add more details.

My question is this: is there a better way to do this check? Better in any aspect: efficiency (time and/or space), code readability, corner-case handling, etc. For example, the first approach risks integer overflow, but could be acceptable as long as the numbers are not large, because its time complexity is linear.
I appreciate the feedback.
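Since the two original algorithms are not reproduced here, the following is only a common point of comparison, not the poster's code: a digit-count histogram check in C. If the per-digit counts of both numbers cancel out, the numbers have the same digits with the same multiplicities, and therefore the same length. It runs in time linear in the number of digits, uses O(1) extra space, and cannot overflow, since it never builds a rearranged number:

```c
#include <stdbool.h>

/* Return true if a and b consist of exactly the same multiset of
   decimal digits (which also implies the same length). */
bool same_digits(unsigned long a, unsigned long b)
{
    int count[10] = {0};
    if (a == 0 && b == 0) return true;    /* both are the single digit 0 */
    for (; a > 0; a /= 10) count[a % 10]++;
    for (; b > 0; b /= 10) count[b % 10]--;
    for (int d = 0; d < 10; ++d)
        if (count[d] != 0) return false;  /* some digit count differs */
    return true;
}
```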

I'm trying to figure out algorithms for playing Gomoku (a 5-by-5 version of tic-tac-toe) against the computer.
I found that the most commonly used algorithms are minimax (or alpha-beta), but these are too hard for me to handle, so I decided to use the following code, which is quite easy to understand but time-consuming.
It shows how the computer makes a reasonable choice.

//------------------------------------------------------------
// computer_move() checks all legal moves in the current
// position. For each of them it calls dfs_search() to get
// the move's score, and finally it returns the move with
// the best score.
//------------------------------------------------------------
int computer_move()
{
    int best_move = -1;     // best move so far (-1 = none found yet)
    int best_score = -100;  // best score so far
    int score;              // current score
    int i;

    for (i = 0; i < 16; ++i) {
        if (pos[i] == EMPTY) {           // if a legal move can be made
            pos[i] = COMPUTER;           // make the move
            score = -dfs_search(HUMAN);  // score it from the opponent's view
            pos[i] = EMPTY;              // take the move back
            if (score > best_score) {
                best_score = score;
                best_move = i;
            }
        }
    }
    printf("Computer's move: %d\n", best_move);
    return best_move;                    // return the best move found
}
//------------------------------------------------------------
// dfs_search() takes the side to move, finds all legal moves
// for that side, and recursively calls itself for each move.
// It returns a score for the position.
// The recursion follows each variation until the game's end
// is reached in that variation, i.e. until check_result()
// returns a value other than CONTINUE.
// Note that this is feasible for tic-tac-toe because it is a
// small deterministic game. For games like chess or checkers
// we can't follow every variation to the game's end, so the
// search has to be cut off at some depth.
//------------------------------------------------------------
int dfs_search(int player)
{
    int best_score = -100;
    int score;
    int result;
    int i;

    result = check_result(player);
    if (result != CONTINUE)
        return result;                   // game over: return the result

    for (i = 0; i < 16; ++i) {
        if (pos[i] == EMPTY) {
            pos[i] = player;
            score = -dfs_search(CHANGE_PLAYER(player));
            pos[i] = EMPTY;
            if (score > best_score)
                best_score = score;
        }
    }
    return best_score;                   // return the best score found
}

For a 3-by-3 board, it works pretty well. For 4-by-4, however, it takes too long to place the next stone. Since the long run time comes from the first three or four decisions, I thought that making the computer search for the best points only around the human's last move could be a solution.
After the first three or four decisions, the full algorithm above will work well for the few remaining points. What do you think about this? Please give me some advice on modifying the current algorithm.
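The "search only near the existing stones" idea could be sketched like this (the function names and the empty-board fallback are my own choices; the board is assumed to be the same 4x4 row-major pos[] array as above, with EMPTY defined as 0 for this sketch):

```c
#define N 4
#define EMPTY 0

/* Return 1 if any square within Chebyshev distance `radius` of square
   `sq` is occupied (the square itself is excluded). */
int near_occupied(const int *pos, int sq, int radius)
{
    int r = sq / N, c = sq % N;
    for (int dr = -radius; dr <= radius; ++dr)
        for (int dc = -radius; dc <= radius; ++dc) {
            int rr = r + dr, cc = c + dc;
            if (rr < 0 || rr >= N || cc < 0 || cc >= N) continue;
            if ((dr || dc) && pos[rr * N + cc] != EMPTY) return 1;
        }
    return 0;
}

/* Fill moves[] with empty squares near an occupied one and return the
   count.  On an empty board, fall back to every empty square so the
   first move is still legal. */
int candidate_moves(const int *pos, int *moves, int radius)
{
    int n = 0;
    for (int i = 0; i < N * N; ++i)
        if (pos[i] == EMPTY && near_occupied(pos, i, radius))
            moves[n++] = i;
    if (n == 0)
        for (int i = 0; i < N * N; ++i)
            if (pos[i] == EMPTY) moves[n++] = i;
    return n;
}
```

computer_move() and dfs_search() would then loop over moves[0..n-1] instead of all 16 squares; with radius 1 or 2 this prunes most of the branching in the early moves, at the cost of possibly overlooking a distant winning square.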

I use HOG feature extraction on images to detect the horizon line. The default cell size for HOG feature extraction is 8x8. When I use 16x16, I get better accuracy in identifying the horizon line, but in some images 8x8 works better. In general, though, 16x16 is more successful. Why does 8x8 work better in some images while 16x16 is more successful overall? What is the effect of cell size in HOG feature extraction?

I have an OpenCL kernel that I am writing to fit Gaussian functions to a star field, to attempt to accurately find the position and size of each star. To do this, I iteratively step the inputs to the Gaussian function, compute the residual of the function, and compare that against the running minimum residual. If the residual is smaller, the latest iteration is recorded as the best fit, which is a basic and standard way to go about this problem.

However, the issue arises with the speed of the execution as soon as I start changing the value of the running minimum. Without the check and assignment, the code runs very fast, allowing a gaussian to be solved for each of ~1,500 stars in just under a second. However, as soon as I add the check and assignment, the code slows down so much that I had to stop it after a minute of no indication of progress.

I know that the if statement itself is not slowing down the program: with the assignment commented out but the if statement kept, the code runs just as fast as code without the if statement at all. It seems that the assignment itself is somehow causing a slowdown of over an order of magnitude.

I am very confused by this problem, and I am chalking it up to some elusive quirk of OpenCL/parallel processing/memory/cache/optimization/etc. that I am not seeing or not understanding.

Quick Edit: I have also tried using the min and select functions, and they are experiencing the same slowdown as the assignment.

I am relatively new to computer vision, and I am currently doing a learning project on shape detection. I have a fixed region of interest (ROI) in all the images where the object is most likely present, and I have to compare shapes to decide whether the objects present in two input images are the same or not. There are slight translational, scale, and illumination changes.

I am trying to compare the shape of the object between two input images and trying to provide an output value describing their similarity. If the similarity is above a certain threshold, I can tell that the same object is present in both input images.

I have tried contours, but they do not give reliable results (thresholding either gives too many details or misses some vital ones) and don't generalize well to all images.

I am thinking of using global shape descriptors like HOG, or local shape descriptors like SIFT, SURF, or ORB. Can anyone guide me toward the best solution for my problem? Am I going in the right direction?

Also, I have trouble understanding the feature vectors these descriptors produce. How do I compare the 1D HOG feature vectors of the two input images to measure similarity? What is the best way to compare global and local feature vectors?

I don't understand how the distance measures used for comparing feature vectors work. I want to understand the intuitive meaning of how distances compare feature vectors and histograms, and how to use them to compare 1D HOG feature vectors.