
Learning from the CIKM 2019 Reviews

Just this morning I received the reviews of my submission to the CIKM conference. My paper was rejected. With the hope that I can reduce future rejections, this post describes the lessons I’ve learned from the CIKM reviews.

About the CIKM conference

CIKM (the ACM International Conference on Information and Knowledge Management) is a competitive conference that is highly ranked by researchers and industry experts who specialise in machine learning, information retrieval, recommender systems and other topics related to knowledge management. This year (2019), about 1030 full papers were submitted but only 200-ish were accepted. In other words, probabilistically speaking, my paper had roughly a 19% chance of being selected from a pool of submissions by reputable researchers from all around the world.

How the reviews are done

Each paper was reviewed by at least three program committee members and moderated by one senior program committee member. Since these reviewers are expected to be the crème de la crème in their research areas, the feedback they provide should help authors improve their papers for future submissions.

What I submitted for review

Unfortunately, I can’t describe the details of the paper because I plan to immediately resubmit it to another conference. I wouldn’t want to risk divulging my identity to any reviewer who is part of the double-blind peer-review process. Suffice it to say that the idea lies at the intersection of machine learning and recommender systems — a novel approach that aims to facilitate making more accurate, scalable and efficient recommendations in a user-friendly manner.

Handling the peer reviews

I had high hopes for my paper but I was well-prepared for a rejection. This is one of the upsides of doing a PhD: it teaches you to cope with failure and to continue pursuing excellence regardless of the circumstances. Knowing that the likelihood of being selected was low and that there’s an element of luck in the process, I wasn’t surprised that the reviewers nit-picked finer details that might appear inconsequential, e.g. typos. But that’s okay. When one competes at such a high level, excellence is required. My only disappointment is that two of the three reviews were surprisingly short and somewhat unhelpful for improving the work; they merely pointed out a limitation that I thought I had convincingly addressed in the paper. Fortunately for me, the third review was impressively detailed and helpful. Thanks, dear anonymous Reviewer 3.

Lessons learned

To be fair, I can summarize the reasons for my rejection into two broad problems: (a) insufficient experiments to convince the reviewers of the robustness of my solution, especially in the context of a privacy-preserving system; and (b) lack of adequate “polish” to meet the standards of the CIKM conference.

In addition, these tips are noteworthy for future submissions:

  1. Convincing reviewers about the scope of your work is paramount. Provide stronger and clearer arguments in the section discussing the limitations of your work. It is easy for reviewers to find fault with your work if you are not convincing and persuasive enough.
  2. Cite everything, then cite some more. The related work section often doesn’t get as much attention as the other sections of a paper. Even so, don’t assume that reviewers are deeply familiar with the work related to yours. It is understandable that citing more papers invariably consumes space that might be needed for describing your own work. But despite this constraint, cite as much as you can, including studies at the fringes of your work.
  3. Substance without presentation is unappreciated. Academia is full of pedantic researchers. A simple typo can ruin months of effort. Get a proofreader and always be selling.
  4. Explicit is better than implicit. Don’t assume that your readers are experts in your research area. For example, when talking about user profiles in a collaborative filtering recommender system, don’t assume that the reader knows that a user is represented by a vector of their rated items.

It goes without saying that I’ve still got a lot to learn.

Could the review process be improved?

I think the review process should allow for rebuttals, as IJCAI does. This would allow authors to clarify any confusion reviewers have about the paper. But since the number of submissions keeps growing, I’m not sure the conferences have enough (wo)manpower to implement such a change. C’est la vie.


This article isn’t meant to be a rant dismissing the thankless service done by the reviewers. In fact, I am already a better researcher thanks to their feedback. Rather, this is a lesson for my future self so that I may not repeat the same mistakes. Congratulations to those who got accepted to CIKM 2019; I can’t wait to read their papers.

Rejection aside, I must say that I am thoroughly impressed by the work my co-authors and I have done on the paper. Had my paper been accepted, it would’ve definitely sparked a lot of interest and debate in the ML and RecSys communities. I am disappointed that I won’t get to visit Beijing, China this year. Even more annoying is that I might miss the KPIs that contribute towards my growth in academia.

Nevertheless, I’ll definitely be making more submissions to CIKM in the future. A luta continua, vitória é certa.