Goals

Conference papers are important to computer scientists. Research evaluation is important to universities and policy makers.
This initiative is sponsored by
GII (Group of Italian Professors of Computer Engineering),
GRIN (Group of Italian Professors of Computer Science), and
SCIE (Spanish Computer-Science Society).
The goal of this initiative is to develop a unified rating of computer science conferences.
The process is organized in two stages.

Stage 1: a joint committee of GII, GRIN and SCIE members
generates the rating by using an automatic algorithm
based on well-known, existing international classifications. This automatically-generated rating
is updated periodically, usually every two years. Please use the menu above to navigate the
previous editions of the rating.

Stage 2: each of the three societies (GII, GRIN and SCIE) may independently
submit the automatically-generated rating to the respective
communities, in order to revise and correct it.

This site reports the result of Stage 1 of the process.

The Stage 1 GII-GRIN-SCIE Joint Committee

Nino Mazzeo (GII President)

Rita Cucchiara (GII)

Giansalvatore Mecca (GII)

Stefano Paraboschi (GII)

Enrico Vicario (GII)

Paolo Ciancarini (GRIN President)

Carlo Blundo (GRIN)

Alessandro Mei (GRIN)

Pierangela Samarati (GRIN)

Davide Sangiorgi (GRIN)

Antonio Bahamonde (SCIE President)

Inmaculada García (SCIE)

Ernest Teniente (SCIE)

Francisco Tirado (SCIE)

Antonio Vallecillo (SCIE)

Disclaimer

We realize that using bibliometric indicators may introduce distortions in the evaluation of scientific papers.
We also know that the source rankings may have flaws and contain errors. It is therefore unavoidable that the
unified rating that we publish in turn contains errors or omissions. Our goal was to limit these errors to the minimum,
by leveraging all of the indicators that were available at the sources, and by combining them in such a way to reduce
distortions. We expect that in the majority of cases our algorithm classifies conferences in a way that reflects quite
closely the consideration of that conference within the international scientific community. There might be cases in which
this is not true, and these may be handled in Stage 2 using public feedback from the community.

Changelog

March 3rd, 2017 - second preliminary version of the 2017 update, to incorporate a number of changes in Microsoft Academic;

February 23rd, 2017 - first preliminary version of the 2017 update to the joint GII-GRIN-SCIE rating;

March 1st, 2015 - added references to a collection of comments to this proposal sent to the Joint Committee by GII and GRIN members,
and to the response to these comments prepared by the Committee;

January 24, 2015 - new version of the rating (Jan-24), changed to address comments made in the joint meeting of November 7, 2014;
more specifically:

to make tier 3 (B, B- conferences) larger, the thresholds for H-like indexes in MAS and SHINE were increased;

based on comments from colleagues, a few of the three-classes-to-one translation rules have been fixed to remove
inconsistencies wrt the translation algorithm described below;

based on comments from colleagues, a few entity-resolution errors have been fixed (by merging records with different
names/acronyms that represent the same event).

October 30, 2014 - first version of the rating (Oct-16)

The Rating Algorithm

The Sources

Recently three rankings/ratings of computer science conferences have emerged:

The CORE 2017 Conference Rating -
Australians have a long-standing experience in ranking publication venues; CORE (the Computing Research and Education Association
of Australia) has been developing its own rating of computer science conferences since 2008, first on its own, and then as part
of the ERA research evaluation initiative. Despite the fact that the ERA rating effort has been discontinued, CORE has decided
to keep its initiative, and is now regularly updating the conference rating. The rating inherits from previous
versions, and uses a mix of bibliometric indexes and peer-review by a committee of experts to classify conferences in the
A+ (top tier), A, B, C tiers (plus a number of Australasian and local conferences);

Microsoft Academic -
Microsoft Academic is part of the Microsoft Knowledge API, and represents the Microsoft counterpart to the popular Google Scholar;
it inherits from the original Microsoft Academic Search
and provides an API
to get bibliometric indicators about computer-science conferences and papers;

LiveSHINE -
LiveSHINE is the successor to the SHINE Google-Scholar-Based Conference Ranking.
The original SHINE (Simple H-Index Estimator) was a collection of software tools conceived to calculate the H-Index of computer science
conferences, based on Google Scholar data. LiveSHINE is now based on a plug-in for the Google Chrome browser.
The plug-in allows administrators to browse the LiveSHINE conference database, and at the same time, triggers queries to Google Scholar to
progressively update citation numbers.

We adopt these as the base data sources for our algorithm. The three represent a good mix of bibliometric
and non-bibliometric approaches, and are backed by prominent international organizations and by large,
authoritative data sources.

The Rating Algorithm

Our unified rating brings together the three sources listed above and unifies them according to
an automatic algorithm initially adopted by GII and GRIN for their 2015 rating
(see here for the original description of the
algorithm). We summarize the algorithm below.

We refer to the following set of classes, in decreasing order: A++, A+, A, A-, B, B-, C.
Our purpose is to classify conferences within four main tiers, as follows:

Tier   Class               Description
1      A++, A+             top notch conferences
2      A, A-               very high-quality events
3      B, B-               events of good quality
-      Work in progress    work in progress
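As a minimal sketch, the class-to-tier structure above can be captured by a lookup table (the helper below is ours, for illustration only; class C and unrated events fall outside the three tiers):

```python
# Tiers of the unified rating; classes within a tier are in decreasing order.
# Hypothetical helper, not code used by the committee.
TIER_OF_CLASS = {
    "A++": 1, "A+": 1,   # top notch conferences
    "A": 2, "A-": 2,     # very high-quality events
    "B": 3, "B-": 3,     # events of good quality
}

def tier(conference_class):
    """Return the tier of a class, or None for classes outside the three tiers."""
    return TIER_OF_CLASS.get(conference_class)
```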

Loading the Sources

Data at the sources were downloaded on May 30th, 2017.
The collected data were used as follows:

The CORE 2017 Conference Rating was downloaded as-is by selecting "CORE 2017" as the only source of ratings
(i.e., we discarded all previous ratings); in addition, conferences with rank "Australasian" or "L" (local) were removed;
the distribution of tiers is as follows:

CORE class   Mapped to   Conferences
A+           A++         63
A            A           221
B            B           426
C            C           794

While the CORE Conference Rating comes as a set of classified venues, LiveSHINE and Microsoft Academic
simply report a number of citation-based bibliometric indicators about conferences, especially the
H-Index of the conference.

H-Indexes are usually considered robust indicators. However, they suffer from a dimensionality issue:
conferences with a very high number of published papers may have high H-Indexes regardless of the actual
quality of those papers. The opposite may happen for small conferences that publish fewer papers.

To reduce these distortions, using data available in LiveSHINE and Microsoft Academic, we computed
a secondary indicator, called "average citations",
obtained by dividing the total number of citations received by papers of the conference
by the total number of published papers.
This is, in essence, a lifetime impact factor (IF) for the conference.
IF-like indicators are based on the average, and are therefore sensitive to the presence of outliers.
This suggests that they should not be used as primary ranking indicators.
However, they may help to correct distortions due to the dimensional nature of the H-Index.
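As a minimal sketch (the function names are ours, not the committee's), the two indicators can be computed from a list of per-paper citation counts:

```python
def h_index(citations):
    """H-index: the largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(counts, start=1):
        if cites >= position:
            h = position
        else:
            break
    return h

def avg_citations(citations):
    """IF-like indicator: total citations divided by number of published papers."""
    return sum(citations) / len(citations) if citations else 0.0
```

For example, a venue with 200 papers of 10 citations each and a venue with 10 papers of 50 citations each both reach an H-index of 10, while their average citations differ by a factor of five -- exactly the distortion described above.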

To do this, we assigned a class to each Microsoft Academic and LiveSHINE conference using the following algorithm.
In the following, we refer to the conference average
citations as the "IF-like indicator".

to start, each Microsoft Academic/LiveSHINE conference receives two different class values:

a class wrt the H-index: conferences are sorted in decreasing order of H-index, and classes are assigned
by rank as follows:

Ranks                Class
1 to 50              A++
51 to 75             A+
76 to 200            A
201 to 250           A-
251 to 575           B
576 to 650           B-
rest of the items    C

a class wrt the IF-like indicator, with the following thresholds:

Value                Class
25 or more           A++
23 to 25 (excl.)     A+
18 to 23 (excl.)     A
16 to 18 (excl.)     A-
12 to 16 (excl.)     B
10 to 12 (excl.)     B-
7 to 10 (excl.)      C
rest of the items    D
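The two class assignments can be sketched as simple lookup functions (thresholds copied from the tables above; the code itself is an illustration, not the committee's implementation):

```python
def class_from_rank(rank):
    """Class based on position in the list sorted by decreasing H-index."""
    if rank <= 50:
        return "A++"
    if rank <= 75:
        return "A+"
    if rank <= 200:
        return "A"
    if rank <= 250:
        return "A-"
    if rank <= 575:
        return "B"
    if rank <= 650:
        return "B-"
    return "C"

def class_from_if(value):
    """Class based on the IF-like indicator (average citations per paper).
    Each interval includes its lower bound and excludes its upper bound."""
    if value >= 25:
        return "A++"
    if value >= 23:
        return "A+"
    if value >= 18:
        return "A"
    if value >= 16:
        return "A-"
    if value >= 12:
        return "B"
    if value >= 10:
        return "B-"
    if value >= 7:
        return "C"
    return "D"
```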

at the end of this process, each conference in Microsoft Academic/LiveSHINE has two classes; we need to assign
a final class. To do this, we take as the primary class the one based on the H-index, and use the second,
based on the IF-like indicator, to correct it, according to the following rules:

Primary Class   Secondary Class       Final Class
A++             B, B-, C, or D        A+
A+              B-, C, or D           A
A               C or D                A-
A-              D                     B
A               A++                   A+
A-              A++, A+, or A         A
B               A++, A+, or A         A-
B-              A++, A+, A, or A-     B
C               A++, A+, A, or A-     B-
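The correction rules can be encoded directly from the table; a sketch (the rule list and helper name are ours):

```python
# (primary class, set of secondary classes, final class), copied from the table.
CORRECTIONS = [
    ("A++", {"B", "B-", "C", "D"},     "A+"),
    ("A+",  {"B-", "C", "D"},          "A"),
    ("A",   {"C", "D"},                "A-"),
    ("A-",  {"D"},                     "B"),
    ("A",   {"A++"},                   "A+"),
    ("A-",  {"A++", "A+", "A"},        "A"),
    ("B",   {"A++", "A+", "A"},        "A-"),
    ("B-",  {"A++", "A+", "A", "A-"},  "B"),
    ("C",   {"A++", "A+", "A", "A-"},  "B-"),
]

def final_class(primary, secondary):
    """Apply the correction rules; if no rule matches, keep the primary class."""
    for prim, secondaries, final in CORRECTIONS:
        if primary == prim and secondary in secondaries:
            return final
    return primary
```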

Integration

Classified venues in the three base data sources are integrated in order to bring together all available classes for a
single conference. After this step, each conference in the integrated rating receives from one to three classifications,
depending on the number of sources it appears in. When the same conference was ranked multiple times by
a single source, the highest rating was taken.

Based on the collected ratings, a final class was assigned to each conference. The rules we used to do this follow
the principles described below (in the following we assign integer scores to the classes as follows:
A++=7, A+=6, A=5, A-=4, B=3, B-=2, C=1):

for conferences with three ratings: (a) a majority rule is followed: when at least two sources assign at least class X, then:
if the third assigns X-1, then the final class is X; if the third assigns X-2 or X-3, then the final class is X-1;
if the third assigns X-4 or lower, the final class is X-2; (b) then, if necessary, this assignment is corrected
using a numerical rule: we assign an integer score to each conference by giving scores to classes, and then taking the sum;
the numerical rule states that conferences with higher numerical scores cannot have a class that is lower than the one of a
conference with a lower score;

for conferences with only two ratings: these conferences cannot be ranked higher than A; to assign the score, we assign a third
"virtual" class, equal to the minimum one among those available; then, we follow the rules above;

for conferences with only one rating: these conferences are all considered not classifiable based on the data at our disposal.

This gives rise to the class-assignment rules reported below.
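One plausible reading of the majority rule can be sketched as follows (the numerical consistency rule of step (b) is omitted, and the helper names are ours; the class X is taken as the second-highest rating, so that at least two sources assign at least X):

```python
SCORE = {"A++": 7, "A+": 6, "A": 5, "A-": 4, "B": 3, "B-": 2, "C": 1}
CLASS = {v: k for k, v in SCORE.items()}

def majority_class(ratings):
    """Majority rule for three ratings: start from the second-highest class X,
    then let the lowest rating correct X downward when it lags far behind."""
    scores = sorted(SCORE[r] for r in ratings)  # ascending: [lowest, X, highest]
    lowest, x = scores[0], scores[1]
    if x - lowest <= 1:        # third source assigns X or X-1
        final = x
    elif x - lowest <= 3:      # third source assigns X-2 or X-3
        final = x - 1
    else:                      # third source assigns X-4 or lower
        final = x - 2
    return CLASS[max(final, 1)]

def class_for_two(ratings):
    """Two-source conferences: add a virtual third class equal to the minimum
    of the two, apply the majority rule, and cap the result at A."""
    virtual = min(ratings, key=lambda r: SCORE[r])
    result = majority_class(list(ratings) + [virtual])
    return result if SCORE[result] <= SCORE["A"] else "A"
```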

A Note on Publication Styles

In the last few years, several computer-science related conferences have started
publishing their proceedings as special issues of well-known journals -- e.g., the ACM SIGGRAPH conference
now publishes its proceedings as special issues of the ACM Transactions on Graphics (ACM TOG). Being based
on bibliometric indicators, our algorithm cannot accommodate these cases: sources such as LiveSHINE are unable to
estimate the H-Index of the conference, since it does not represent a publication venue "per se".
Notice that it would be possible to calculate the H-index of the hosting journal, but this is quite different
from the one of the conference itself, since the journal also publishes research papers that are
not related to the conference. Therefore, we have excluded these events from the rating. Please notice that
ratings assigned in previous versions of the rating are still available on this site (see the menu above).
In addition, papers published in these conferences can be evaluated anyway, since they
receive the bibliometric indicators of the respective journal.
A Note on Source Coverage

When downloading data from the base sources for the purpose of the 2017 update, we noticed that two of them -- namely LiveSHINE and
Microsoft Academic -- have quite significantly reduced the number of events they cover with respect to their 2015 counterparts -- i.e., SHINE and
Microsoft Academic Search. In the case of LiveSHINE, the reduction amounts to 45% (from 1880 to 1029 events).
In the case of Microsoft Academic, there was a reduction of 34% (from 2000 to 1307). We decided to investigate this issue in depth.
More specifically:

we ran a comparison of the new ratings obtained from the 2017 base data to the ones from 2015, and identified all events that
had missing sources;

we analyzed these events to distinguish the ones that have been active in the last 5 years from the ones that have been discontinued;
the latter were explicitly rated as "Not rated";

we worked in close collaboration with the LiveSHINE group to add back to their database a number of active events that were in SHINE but
were initially missing from LiveSHINE; our sincere gratitude goes to Altigran Da Silva and his group for the
excellent work they did in updating LiveSHINE.

Based on this work, we found that in the vast majority of cases events are now missing from the lists because they have been discontinued. However, in
a minority of cases, there was the risk of misclassifying events that are still active, simply because they are missing from one or more of
the base sources. Notably, none of these events belongs to tier 1 (classes A++ and A+). We decided to handle these cases according to the
following rules:

active events with at least one source in 2017, and two or more sources in 2015: if classification has worsened due to
the missing sources, we added back the old 2015 ratings from the missing sources (overall: 16 events; 1 moves to rating A-,
8 move to rating B, 7 move to rating B-);

active events with only one source in 2015: left as "Work in Progress".