ICSE 2017 Reviewing Instructions
The following instructions cover the main aspects of the creation and use of reviews as part of the ICSE 2017 Technical Track evaluation process. We provide these instructions to create a common ground for decision making, limit the number of stalled discussions, and help reach a quality threshold consensus for accepted papers.
We are using a structured review format. For each paper, reviewers are requested to enter a categorical assessment for the four dimensions listed in the Call for Papers: soundness, significance, verifiability, and presentation quality. We ask reviewers to organize their reviews so that each assessment is explicitly justified.
We ask reviewers to personally author each review. In particular, we ask reviewers to avoid the "contact subreviewer" feature or "subreviewer" fields because in practice this reassigns authorship to a different person. Reviewers can ask students and/or colleagues for feedback, but their input should be integrated into the reviewer's own review.
Interpretation of the Reviews
We ask that everyone involved in the evaluation process take into account the following principles when both authoring and interpreting reviews:
- A review is not a vote. The review form requires an overall recommendation that ranges from "reject" to "accept". The importance of these recommendations should be weighed in proportion to the strength of the argumentation provided in the corresponding review. We wish to avoid directly using the numerical score that aggregates the review ratings as part of the decision process. Likewise, in making final decisions we wish to avoid arguments whose sole basis is the number of recommendations for or against a paper. Ultimately, it is the content of the reviews and discussions that should matter.
- Evaluation criteria are not equal. The four criteria listed in the structured review template support the methodical evaluation of submissions along (relatively) decoupled dimensions. However, there is a natural progression of importance between the four dimensions.
- Soundness is critical. If a submission is clearly shown to be unsound, the three other criteria become irrelevant.
- Significance and verifiability increase the acceptability of a submission that describes a sound contribution.
- Presentation problems decrease the acceptability of a submission.
- Subjectivity is noise. General arguments based on personal feelings, positive or negative, about a research topic, research method, application domain, types of stakeholders, etc., should not be a factor in the decision process.
- Sympathy without indulgence. The evaluation of a submission should be respectful and appreciative of the work submitted and report the strengths of a submission as precisely and extensively as possible. At the same time, appreciation for the work described in a submission should not be cause for indulgence. We wish to select submissions by appreciating their strengths while acknowledging their weaknesses; we do not wish to accept submissions by overlooking their weaknesses.
Reviewers can find many reviewing guides online. We emphasize the following points in particular:
- Depersonalizing reviews. We ask reviewers to comment on the work, not the authors. This should naturally lead to more constructive reviews. We recommend avoiding the use of the second person ("you").
- Proofreading reviews. All versions of all reviews will be retained and visible in the conference management system, so we recommend taking a minute to proofread reviews before submitting them.
- Conflicts of interest. In addition to the usual rules, we ask reviewers to indicate a conflict of interest if they are currently working on research that could be seen to be in direct competition to what is presented in a submission.
- Empty reviews. Submitting an empty or near-empty review will void a reviewer's eligibility to review the paper in question, requiring the assignment of a different reviewer to the paper. We ask reviewers to be extremely careful not to submit an empty review.
- Ethical issues. Any evidence of professional misconduct related to a submission or its evaluation (plagiarism, concurrent submissions, conflict of interest, etc.) should be reported directly to the program co-chairs.