About Us

Prospective students, parents, faculty, and university administrators use rankings to make decisions. CSmetrics.org, which is GOTO-ranking compliant, seeks to inform that process by providing better quantitative metrics on publications. These metrics are intrinsically incomplete: they should be combined with qualitative expert opinion and with other measures, such as area citation practices, faculty size, awards, student graduation rates, and PhD placement. This site is a work in progress.

To start, select one or more discipline categories, optionally customize the year range, venue weightings, and venue list, and then wait for the institutional metrics to update. See our quick start guide and FAQ for how to use the tool.

Please fill out this short survey to tell us whether the tool is useful, how you are using this information, and how you would like it to evolve. We welcome corrections to the data and scripts, as well as feature suggestions, via GitHub pull requests and issues. See motivation and methodology for details on why we built the tool and how we cleaned the data, and acknowledgements and contributors for sponsors and contributors.

Introduction to the metrics and data

This web page computes user-configured institutional publication and citation metrics for computer science. These metrics focus on institutions, rather than individual faculty or authors, and are intended both to assess past research impact and to predict the future research impact of publications. This tool is complementary to Computer Science Rankings, which analyzes current faculty and their publications. We cleaned publication, venue, and institution data from DBLP and Microsoft Academic Search for 2007 to 2020, covering 229 conferences and 90 journals from 6793 institutions. For each paper in this corpus, we divide credit equally among all authors and their institutions at the time of publication. A publication never changes institutions, and all authors (graduate students, postdocs, faculty, undergraduates, etc.) accrue credit to their institution.
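The equal-credit rule above is easy to express in a few lines. The following is a minimal sketch, assuming a simple per-paper list of author institutions; the field names are hypothetical and are not csmetrics.org's actual data schema.

```python
# Sketch of the equal-credit rule: each author gets 1/n of a paper's credit,
# and that share accrues to the author's institution at time of publication.
from collections import defaultdict

def accrue_credit(papers):
    """Return total fractional credit per institution."""
    credit = defaultdict(float)
    for paper in papers:
        institutions = paper["author_institutions"]  # one entry per author
        share = 1.0 / len(institutions)
        for institution in institutions:
            credit[institution] += share
    return credit

# Hypothetical example: a three-author paper and a single-author paper.
papers = [
    {"title": "Example A", "author_institutions": ["ANU", "ANU", "Google"]},
    {"title": "Example B", "author_institutions": ["University of Michigan"]},
]
print(dict(accrue_credit(papers)))
# {'ANU': 0.666..., 'Google': 0.333..., 'University of Michigan': 1.0}
```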

Users combine a measured (retrospective) metric and a predictive (prospective) metric to compute an institutional measure. Users first select a year range for each metric and a venue weighting. We suggest disjoint year ranges for the two metrics, and that prediction only be used for recent publications that have not had sufficient time to accrue citations (e.g., 2018-2020). Users select categories to determine the venues, and may select individual venues to create custom venue lists. The measured metric uses citation counts and includes all citations, at any time, to papers published in the specified year range. The predictive metric counts papers in an independently specified year range and weights them by venue. Selecting ‘Equal’ assigns every venue the same weight (one). Selecting the geometric mean assigns each venue the geometric mean of citations to papers appearing in that venue from 2007 to 2020, so more highly cited venues carry more weight. (The venue weight is displayed next to each selected venue.) Since the two metrics are not directly comparable, the combined metric is a geometric mean in which the user assigns a relative weight (alpha) to the measured and predictive metrics.
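To make the combination concrete, here is a minimal sketch under two assumptions that are ours, not necessarily the site's exact formulas: the venue weight is taken as the geometric mean of per-paper citation counts, and the combined score is interpreted as a weighted geometric mean with the user-chosen alpha.

```python
# Illustrative sketch only; csmetrics.org's exact implementation may differ.
import math

def venue_weight_geometric(citation_counts):
    """Geometric mean of citations to a venue's papers (e.g., 2007-2020)."""
    logs = [math.log(max(c, 1)) for c in citation_counts]  # guard zero-citation papers
    return math.exp(sum(logs) / len(logs))

def combined_score(measured, predictive, alpha):
    """Weighted geometric mean of the measured and predictive metrics."""
    return measured ** alpha * predictive ** (1 - alpha)

print(venue_weight_geometric([10, 40, 160]))               # ~40.0
print(combined_score(measured=120.0, predictive=30.0, alpha=0.5))  # 60.0
```

A geometric mean is a natural choice here because the two metrics live on different scales; multiplying powers rather than averaging values keeps one metric from dominating simply because its numbers are larger.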

Please see the quick start guide for more help.

Acknowledgements

We thank the Computing Research Association (CRA), the ANU College of Engineering and Computer Science, and the ANU Research School of Computer Science for supporting the development of the data and web application. We thank Microsoft Academic Search for their help and data, and DBLP for their data.

The csmetrics.org web app is available under CC BY 4.0. Data from Microsoft Academic Search and DBLP are under their respective licenses.

Contributors

Steve Blackburn, Australian National University (ANU)
Carla Brodley, Northeastern University
H. V. Jagadish, University of Michigan
Kathryn S McKinley, Google
Mario Nascimento, University of Alberta
Benjamin Readshaw, ANU
Minjeong Shin, ANU
Sean Stockwell, University of Michigan
Lexing Xie, ANU
Qiongkai Xu, ANU

We also thank the following people for suggesting and reviewing publication venues.

Alwen Tiu, Australian National University (ANU)
Tao Mei, JD.COM
[@wpzdm](https://github.com/wpzdm)
Saurabh Jha [@saurabhjha1](https://github.com/saurabhjha1)
Brad Reaves (NCSU) [@bradreaves](https://github.com/bradreaves)
[@sceccarelli](https://github.com/sceccarelli)

Mentions in social media and the press:

https://csgrad.cs.vt.edu/resources/

https://people.umass.edu/tongping/misc.html

https://www.cs.uic.edu/~indexlab/mithra.htm

https://www.cse.unr.edu/~fyan/resource.html

https://userweb.cs.txstate.edu/~hn12/teaching/cs7300/fall2022/schedule.html

https://deepai.org/publication/how-reliable-are-university-rankings

Why Big Tech Companies Should Engage With Academia, and Why They Don’t

https://www.wiwi-treff.de/Hochschulort-Wo-studieren/Warum-immer-nur-TUM/Diskussion-57703

https://nitter.allella.fr/trecheck_feeld

https://cacm.acm.org/magazines/2019/7/237709-goto-rankings-considered-helpful/fulltext