ROCout=roc(varargin)

ROC - Receiver Operating Characteristics.
The ROC graphs are a useful technique for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making.
You can use this function only if you have a binary classifier.

The input is an Nx2 matrix: in the first column you put your test values (e.g. blood glucose level); in the second column you put only 1 or 0 (e.g. 1 if the subject is diabetic; 0 if he/she is healthy).

By itself (without arguments) roc will run a demo.

The function computes and plots the classical ROC curve and curves for Sensitivity, Specificity and Efficiency (see the screenshot).

The Equal Error Rate (EER) is the point on the ROC curve at which the probabilities of misclassifying a positive or a negative sample are equal. This point is obtained by intersecting the ROC curve with the anti-diagonal of the unit square.
Anyway, in the results, it should be the "cost-effective" cut-off point.
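To make the intersection concrete, here is a small Python sketch (not from roc itself; the ROC points are made up for illustration). On a discrete curve the crossing of the anti-diagonal TPR = 1 - FPR is approximated by the point where FPR and the miss rate 1 - TPR are closest:

```python
import numpy as np

# Hypothetical ROC points (FPR and TPR both ascending); not the demo data.
fpr = np.array([0.0, 0.1, 0.3, 0.6, 1.0])
tpr = np.array([0.0, 0.5, 0.8, 0.9, 1.0])

# The EER lies where the ROC curve crosses the anti-diagonal TPR = 1 - FPR,
# i.e. where the false-alarm rate FPR equals the miss rate 1 - TPR.
# Discrete approximation: take the point minimizing |FPR - (1 - TPR)|.
i = np.argmin(np.abs(fpr - (1 - tpr)))
eer = (fpr[i] + (1 - tpr[i])) / 2
```

With these sample points the closest crossing is at (FPR, TPR) = (0.3, 0.8), giving an EER of 0.25.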

As I previously wrote, the main paper you have to read is Hanley JA, McNeil BJ. The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve. Radiology. 1982 Apr;143(1):29-36.
Now I think it is quite impossible to find a paper describing each Bayesian parameter, so you could email me privately and I will try to help you.

Hi,
Great code. Thanks. Some questions:
1) Is there any document explaining the output - what each result means and how it is calculated?
2) We found that a negative value in the data causes an error. Is that true? Will adding a constant to bring all data above zero affect the results?
Thanks.

By definition, the efficiency is the fraction of subjects that are correctly classified.
TRACE(M) is the sum of the elements on the main diagonal; in our case it is the sum of true positives and negatives.
SUM(M(:)) is the sum of the elements of the matrix and so it is the number of studied subjects.
TRACE(M)/SUM(M(:)) is the efficiency.
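The same computation in a short Python sketch (the confusion matrix values are made up for illustration):

```python
import numpy as np

# Hypothetical 2x2 confusion matrix M (rows: true class, cols: predicted):
# [[TP, FN],
#  [FP, TN]]
M = np.array([[40, 10],
              [ 5, 45]])

# trace(M) = TP + TN = correctly classified subjects
# M.sum()  = total number of studied subjects
efficiency = np.trace(M) / M.sum()
```

Here 85 of 100 subjects are correctly classified, so the efficiency is 0.85.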

I ran the same problem again on Matlab 7.8 (R2009a) and it was perfect. I was using Matlab 7.4 before. I fixed the error mentioned by Segun Oshin and ran some examples with Matlab 7.4 and it was ok. However, when I ran the example above there was an error in 7.4 but not in Matlab 7.8. Thanks
Jorge

This error shows that something doesn't work in rocdata, and so x is not in your workspace. Now I have changed and uploaded a new version of roc so that, if you call roc without arguments, it will run the demo by itself. If you don't want to wait for the FEX update, simply change the default.value in the code in this way

Dear Benjamin, I think that Pythagoras doesn't care whether you acknowledge him or not :-).
For the standard error I used an equation described in: Hanley JA, McNeil BJ. The meaning and use of the area under a Receiver Operating Characteristic (ROC) curve. Radiology. 1982;143(1):29-36.

Please cite me only if you use my whole function: if you only took pieces of code, you can decide whether to cite me or not.

Lastly, I prefer to use quantiles rather than a fixed step size because real data are usually not equally spaced.
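The difference between the two threshold choices can be seen in a small Python sketch (the data and the number of thresholds are made up for illustration): quantile-based thresholds follow the density of the data, whereas a fixed step spends thresholds on empty regions of the range.

```python
import numpy as np

# Hypothetical, unevenly spaced test values (as real data usually are).
x = np.array([1.0, 1.1, 1.2, 5.0, 5.1, 20.0, 40.0, 41.0])

n = 5  # desired number of thresholds

# Quantile-based thresholds adapt to where the data actually lie...
thr_quantile = np.quantile(x, np.linspace(0, 1, n))

# ...while an equally spaced grid over [min, max] ignores the density.
thr_fixed = np.linspace(x.min(), x.max(), n)
```

Both vectors have n entries and share the same endpoints, but the quantile thresholds cluster where the observations cluster.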

I was also going to suggest adding a varargin to delineate a step size, e.g.:

% add a varargin, here called step, which gives the distance between the
% thresholds to be calculated
if nargin < 2
    % default: use the unique (sorted) values of the test variable
    step = unique(x(:,1));
elseif isscalar(step) % a fixed step size is being requested
    step = (min(x(:,1)):step:max(x(:,1)))';
end
% later in Giuseppe's code just do labels = step

Also, Giuseppe, I implemented your standard error and Pythagoras into my code, which generated data that will probably be used in an upcoming paper. Do you mind being acknowledged, or are there any actual articles to cite? Your call. And lastly, I have a GUI that is pretty beta, but works.

Dear Jay, thank you for your comment. I don't fully agree with you. If you look at the code:
1) all vectors are preallocated;
2) true and false positives and negatives are computed using logical indexing.
So the computations are very fast.
Anyway, I took your suggestion and now you can choose whether to use all unique values or 3<=N<all unique values as thresholds. I have just uploaded the file.
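For readers unfamiliar with logical indexing, here is a Python sketch of the same idea (the values, labels and threshold are made up for illustration, not the demo data): boolean masks replace explicit loops, so each threshold costs only a handful of vectorized comparisons.

```python
import numpy as np

# Hypothetical test values and 0/1 labels (1 = diseased).
values = np.array([3.0, 7.0, 8.0, 2.0, 9.0, 4.0])
labels = np.array([0,   1,   1,   0,   1,   0  ])

thr = 5.0  # one threshold; in roc this is repeated for each threshold

# Logical indexing: combine boolean masks instead of looping over subjects.
pred_pos = values >= thr
TP = np.sum(pred_pos  & (labels == 1))   # predicted positive, truly positive
FP = np.sum(pred_pos  & (labels == 0))   # predicted positive, truly negative
FN = np.sum(~pred_pos & (labels == 1))   # predicted negative, truly positive
TN = np.sum(~pred_pos & (labels == 0))   # predicted negative, truly negative
```

With this toy data the threshold separates the classes perfectly: TP=3, TN=3, FP=FN=0.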

However, you use each element of the data as a threshold and you calculate the FPR, TPR, etc. For a large vector this means a very long runtime and many points on the curve (which makes your program impractical). To avoid this, I suggest you let the user choose the number of thresholds.

Perhaps you are right: I uploaded the file in July to fix the bug reported by Segun Oshin; clearly something in the upload went wrong. I have just re-uploaded the file.
If you are using my roc dataset, you will see that the 0's and 1's are not in the same proportion. If you invert the 0's and 1's the curve is slightly different and so the SE is quite different.

Thanks for answering all of my questions, I really do appreciate it. I still have an issue with your answer for number 3.

First, I was using an inverted data set when I stated the answer should be 151 not 150 (previous post). Second, using the download available on this page right now, running roc(x) gives a cutoff of 153. As you state, the correct answer is in fact 152. Therefore, I am not sure if you changed something and didn't update, since the cutoff value is still determined by taking the row of the minimum distance from xroc,yroc and grabbing that row's value from 'labels', hence the wrong answer of 153. (lines 184-186)

To confirm this, run the download from here and see what cutoff you get. Maybe it is something on my end?

As for the standard error calculation (#4), I was playing around and found that if I inverted the 1's and 0's before running, I would get a different Serror for the AUC, which I assumed should be the same regardless of whether they were inverted. The Serror of the sample data is 0.02713, and if I invert the observations, it becomes 0.0364. This is probably trivial.

I'll try to answer the questions by Benjamin.
1) The SE of the area is calculated using this equation from Hanley JA, McNeil BJ. Radiology 1982 143 29-36.
2) I have no plans to implement a GUI. Anyway, this function is under the GPL license, so you can modify and redistribute it without any problem, provided you cite it correctly.
3) I took into account that there are 2 more points in the xroc and yroc arrays than in the labels array. If you look deeper in the code (line 138):
table=[labels'; yroc(2:end-1)'; 1-xroc(2:end-1)';]';
As you can see, the displayed xroc and yroc points go from 2 to end-1 (and so the points 0,0 and 1,1 are excluded). Anyway, using the demo dataset the cut-off point is 152 (that is the closest to green line)...
4) The standard error of the area is a function of the area and points used to draw the ROC curve: if you have two ROC curves, the first with 10 points and the second with 100 points the first will have a greater SE than the second. hbar and ubar are used to correctly compute the false and true positives and negatives. Their values don't influence the SE computation.
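For reference, here is a Python sketch of the Hanley & McNeil (1982) standard error formula as I read it from the paper, with their approximations Q1 = A/(2-A) and Q2 = 2A^2/(1+A); the sample sizes used below are made up for illustration:

```python
import math

def auc_se(A, n_pos, n_neg):
    """Standard error of the AUC (Hanley & McNeil, Radiology 1982),
    for area A with n_pos positive and n_neg negative subjects."""
    Q1 = A / (2 - A)            # approx. P(two positives both rank above a negative)
    Q2 = 2 * A**2 / (1 + A)     # approx. P(one positive ranks above two negatives)
    var = (A * (1 - A)
           + (n_pos - 1) * (Q1 - A**2)
           + (n_neg - 1) * (Q2 - A**2)) / (n_pos * n_neg)
    return math.sqrt(var)
```

Note that the formula is not symmetric in n_pos and n_neg (Q1 and Q2 weight them differently), so swapping the roles of the two classes changes the SE even for the same area, e.g. auc_se(0.8, 10, 100) differs from auc_se(0.8, 100, 10). This may be related to the different SE values reported above after inverting the 0's and 1's.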

Also, when hbar>ubar, I think the values in the standard error calculations should be changed. Otherwise, you can get two different standard error values from the same area under the curve, depending on whether the healthy average is higher than the disease average.

Sorry to keep bugging you here, but this is the best way I can see to make suggestions. As you can tell, I have been digging into this lately.

I think there may be an issue within the code, but I could be wrong. When you create xroc and yroc using
xroc=flipud([1; 1-a(:,2); 0]) , the additional two rows are not also added to labels. For instance, in your example data, this yields 72 paired points for the ROC curve (# rows in xroc or yroc) but only 70 thresholds (# rows in labels). This causes issues when reporting the threshold value, since it is determined by a row reference back to labels (in the example, the threshold by your math should be 151, not 150, using 'labels' and 'a').

If this doesn't make sense, or I am wrong, please let me know. It's really not a big deal with large datasets with many points on the curve, but it becomes an issue with smaller sets where points are farther apart.

Hi Giuseppe.
I have had a look at the new release today and I think it is still not perfectly correct. I have validated the scripts against the example data of Hanley and McNeil's 1982 paper, "The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve", which seems to be the basis for the calculations (such as the approximation of Q_1 and Q_2) anyway. In my opinion the problem is that when integrating over the ROC curve with the trapezoidal rule to compute the AUC, the data point (sensitivity=1, specificity=0) is not included. Consequently the AUC value (and all AUC-dependent measures) differ slightly from the example in the mentioned article (which becomes more severe for non-continuous tests with only a few cut-off points).
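The effect of the missing corner point can be illustrated with a short Python sketch (the interior ROC points are made up; the trapezoidal rule is written out explicitly):

```python
import numpy as np

# Hypothetical interior ROC points (FPR, TPR), without the corner points.
fpr = np.array([0.1, 0.3, 0.6])
tpr = np.array([0.4, 0.7, 0.9])

def trapz(y, x):
    # trapezoidal rule: sum of interval width times mean of the endpoints
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))

# Without the (0,0) and (1,1) corners the integral does not cover the
# whole [0, 1] FPR range, so the AUC is biased low.
auc_truncated = trapz(tpr, fpr)

# Padding both corner points integrates over the full unit interval.
fpr_full = np.concatenate(([0.0], fpr, [1.0]))
tpr_full = np.concatenate(([0.0], tpr, [1.0]))
auc = trapz(tpr_full, fpr_full)
```

With these toy points the truncated integral gives 0.35 while the full curve gives 0.75; with many cut-off points the gap shrinks, which matches the remark that the bias is worst for non-continuous tests with few cut-off points.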

Agree with cabrego, this algorithm does not work correctly. Depending on the input data, it generates ROC curves with specificity and sensitivity backward. I believe this is because elements that fall below a cutoff value (I in the code) are called "true positives" when they should be "false positives". The convention is that higher values of a test are abnormal (positive).

I confirmed that other software (online ROC calculator, ROCR in R, STATA) does not behave this way with the same input data and all others produce correct results.