Prospective students, parents, faculty, and university administrators use rankings to make decisions. CSmetrics.org is GOTO-ranking compliant. It seeks to inform this process by creating better quantitative metrics on publications. These metrics are intrinsically incomplete and should be combined with qualitative expert opinion and other measures, such as area citation practices, faculty size, awards, student graduation rates, and PhD placement. This tool is a work in progress.
To start, select one or more discipline categories, optionally customize the year range, venue weightings, and venue list, and then wait for the institutional metrics to update. See our quick start guide and FAQ for how to use the tool.
Please fill out this short survey to tell us whether the tool is useful and how you are using this information, and to influence its future. We welcome corrections to the data and scripts, as well as feature suggestions, via GitHub pull requests and issues. See motivation and methodology for details on why we built the tool and how we cleaned the data, and acknowledgements and contributors for sponsors and contributors.
Introduction to the metrics and data. This web page computes user-configured institutional publication and citation metrics for computer science. These metrics focus on institutions, as opposed to individual faculty or authors, and are intended both for assessing past research impact and for predicting future research impact of publications. This tool is complementary to Computer Science Rankings, which analyzes current faculty and their publications. We cleaned publication, venue, and institution data from DBLP and Microsoft Academic Search for 2007 to 2018, covering papers appearing in 219 conferences and 87 journals from 6231 institutions. For each paper in this corpus, we divide credit equally among all authors and their institutions at time of publication. A publication never changes institutions, and all authors (graduate students, postdocs, faculty, undergraduates, etc.) accrue credit to their institution.
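The equal-credit rule above can be sketched in a few lines of Python. The paper list and institution names here are illustrative placeholders, not drawn from the real corpus, and the exact bookkeeping CSmetrics uses may differ:

```python
from collections import defaultdict

# Hypothetical mini-corpus: each paper records its (author, institution)
# pairs at time of publication. Names are made up for illustration.
papers = [
    {"title": "Paper A", "authors": [("Ana", "MIT"), ("Bo", "MIT"), ("Cy", "CMU")]},
    {"title": "Paper B", "authors": [("Dee", "CMU")]},
]

def institution_credit(papers):
    """Divide one unit of credit per paper equally among its authors;
    each author's share accrues to their institution at publication time."""
    credit = defaultdict(float)
    for paper in papers:
        share = 1.0 / len(paper["authors"])
        for _author, institution in paper["authors"]:
            credit[institution] += share
    return dict(credit)

print(institution_credit(papers))
# Paper A splits 2/3 to MIT and 1/3 to CMU; Paper B gives CMU a full unit.
```

Note that credit is assigned once, at publication time, so an author later moving to another institution does not move the paper's credit.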
Users combine a measured (retrospective) metric and a predictive (prospective) metric to compute an institutional measure. Users first select a year range for each metric and a venue weighting. We suggest disjoint year ranges for the two metrics, and that prediction be used only for recent publications that have not had sufficient time to accrue citations (e.g., 2016-2018). Users select categories to determine venues and may select individual venues to create custom venue lists. The measured metric uses citation counts and includes all citations, at any time, to papers published in the specified year range. The predictive metric counts papers in an independently specified year range and weights them by venue. Selecting 'Equal' assigns all venues equal weight (one). Selecting the geometric mean assigns each venue the geometric mean of citations to papers appearing in that venue from 2007 to 2018, so more highly cited venues carry more weight. (The venue weight is displayed next to each selected venue.) Since the two metrics are not directly comparable, the combined metric uses a geometric mean, in which the user assigns a relative weight (alpha) to the measured and predictive metrics. Please see the quick start guide for more help.
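The two computations above can be sketched as follows. The exact formulas CSmetrics uses are not spelled out on this page; this sketch assumes the venue weight is a plain geometric mean of per-paper citation counts and that the combined score is a weighted geometric mean, combined = measured^alpha * predictive^(1 - alpha):

```python
import math

def venue_weight(citation_counts):
    """Assumed venue weight: geometric mean of citations to the venue's
    papers (2007-2018). Assumes positive counts; how zero-citation papers
    are handled in the real tool is not documented here."""
    logs = [math.log(c) for c in citation_counts]
    return math.exp(sum(logs) / len(logs))

def combined_score(measured, predictive, alpha):
    """Assumed combination: weighted geometric mean, where alpha is the
    user-chosen relative weight of the measured metric (0 <= alpha <= 1)."""
    return measured ** alpha * predictive ** (1 - alpha)

# A venue whose papers drew 2, 8, and 16 citations gets weight (2*8*16)^(1/3).
print(venue_weight([2, 8, 16]))
# With alpha = 0.5, the combined score is the plain geometric mean of the two metrics.
print(combined_score(100.0, 10.0, 0.5))
```

The geometric mean is a natural choice for combining the two metrics because it is scale-free: multiplying either metric by a constant shifts every institution's score by the same factor, so relative order within each metric is what matters.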