I'm searching for a cost function that combines sensitivity and specificity, because I want to give more weight to sensitivity (sensitivity is more important to me than specificity). After searching I found this:

Final_Cost = ( (Cb/Cg)/( 1+(Cb/Cg) ) )*Bg + ( 1/( 1+(Cb/Cg) ) )*Gb

Cb is the misclassification cost of a positive and Cg is the misclassification cost of a negative. Bg is the number of false positives detected and Gb is the number of false negatives detected. We should specify Cb/Cg. Is this a good function for calculating cost? Are there any better functions?
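Under the definitions above, the proposed cost is just a convex combination of the two error counts, with the weight set by the ratio Cb/Cg. A minimal sketch (the function name and argument names are my own):

```python
def final_cost(bg, gb, cost_ratio):
    """Weighted misclassification cost.

    bg         -- number of false positives (Bg in the question)
    gb         -- number of false negatives (Gb in the question)
    cost_ratio -- Cb/Cg, the relative cost of the two error types
    """
    # Weight on false positives: (Cb/Cg) / (1 + Cb/Cg);
    # the false-negative weight is the complement, 1 / (1 + Cb/Cg).
    w = cost_ratio / (1 + cost_ratio)
    return w * bg + (1 - w) * gb
```

For example, with cost_ratio = 3 the false-positive count gets weight 0.75 and the false-negative count gets weight 0.25, so `final_cost(10, 5, 3)` evaluates to 0.75*10 + 0.25*5 = 8.75.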

2 Answers

Sensitivity and specificity are in backwards time order (i.e., they condition on the outcome rather than the prediction, using reverse conditioning Prob(X|Y)). Hence they are not relevant to decision making. I recommend developing a well-calibrated direct probability model and then using standard decision theory.

(1) The ratio of FP to FN costs is the standard way of defining a cost function. It is built into some packages: C50 and rpart (or party), I think.
(2) It is rare that I see a reasonable use of cost functions in the machine learning field. Most use the F1 score or similar metrics. If you are working in this field, I'd spend some time finding out what the expectations are. They might not be reasonable, but you should at least know what they are.
(3) I would think about the problem at hand before trying to develop a well-calibrated model as suggested above, unless your model is likelihood based. Most machine learning algorithms don't naturally produce well-calibrated results (they need costs defined upfront), and imagining that the output you're getting is a probability is misleading. Though much smarter people, referenced below, seem to think calibrated models are feasible. So you might want to ignore my comments.
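As a concrete illustration of point (1), here is a sketch of tuning a decision threshold to minimize the question's cost for a chosen Cb/Cg ratio, rather than relying on calibrated probabilities. The toy data and all names are my own assumptions:

```python
def best_threshold(scores, labels, cost_ratio):
    """Pick the score threshold minimizing the weighted cost.

    scores     -- classifier scores (higher = more likely positive)
    labels     -- true labels, 1 for positive, 0 for negative
    cost_ratio -- Cb/Cg, relative weight of false positives vs false negatives
    """
    w_fp = cost_ratio / (1 + cost_ratio)   # weight on false positives
    w_fn = 1 / (1 + cost_ratio)            # weight on false negatives
    best_t, best_cost = None, float("inf")
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        cost = w_fp * fp + w_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```

On perfectly separated toy data, e.g. `best_threshold([0.1, 0.4, 0.6, 0.9], [0, 0, 1, 1], 1.0)`, the search returns the threshold 0.6 with cost 0. Note this optimizes a training-set quantity, so in practice the threshold should be chosen on held-out data.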