I believe that most scientific performance indicators (impact factor, h-index, Erdős number) have no absolute meaning. Perhaps no meaning at all. Their computation (assumptions, algorithms, databases, counts) is somewhat obscure, often flawed, and influenced by disputable practices such as author and journal self-citation, whether conscious or imposed. Many people (especially those in positions of power) nevertheless trust and worship them. Understanding how they are constructed, observing their evolution over time, and assessing their relative merit with respect to your publishing goals is thus a necessary side task in the process of scholarly publishing. Here are graphs comparing the Elsevier SCImago Journal Rank and a Thomson-Reuters-like Impact Factor (IF), with links to signal, image, and video processing venues. Google Scholar also provides a (Hirsch) h-index for Top publications - Engineering & Computer Science, Top publications - Chemical & Material Sciences, Top publications - Physics & Mathematics, and Top publications - Life Sciences & Earth Sciences, where you can see that arXiv, though unrefereed, performs quite well.
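To make the construction of one such indicator concrete, here is a minimal sketch of how the Hirsch h-index is typically computed, assuming nothing more than a plain list of per-paper citation counts (the function name and input format are illustrative, not from any particular database's API):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each.

    citations -- iterable of non-negative per-paper citation counts
    """
    # Sort citation counts in decreasing order, then find the last
    # rank i (1-based) at which the i-th paper still has >= i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h


# Example: five papers cited 10, 8, 5, 4 and 3 times.
# Four papers have at least 4 citations, but not five with 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

Note that the result depends entirely on which citations the underlying database actually counts, which is one reason the same author can have visibly different h-indices on Google Scholar, Scopus, and Web of Science.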
You may also check SIVA Conferences (no ranking).