Alternatively, you are looking at papers from a field that you're not very familiar with. You've found a paper you think is interesting, but you're not sure if it's a good paper from a reputable publication venue, or junk science from The International Journal of Crackpot-O-Rama. How do you find out? (Hint: citation count is not the whole story.)
Here, then, is a brief guide to figuring out whether a conference or journal is a good venue, and whether you should send papers to it, or trust papers from it. This guide is mostly drawn from computer science, since it's the area I know best, but I think most of the principles generalize to other scientific disciplines. The humanities are entirely different, so someone else will need to help you out there.
- How broad is the charter of the venue? Do they cover everything from AI to Z-order curves?
Venues with a very broad purview are often paper mills. The canonical example of this is, of course, The World Multiconference on Systemics, Cybernetics and Informatics, but there are many others in that vein. A good venue has a clearly defined scope, usually focused on a single area of specialization or special interest topic. (For instance, SuperComputing takes papers on systems, programming languages, computer engineering, and visualization, but it's all very clearly focused on "problems people with supercomputers have".)
- How diverse is the program committee? Are they all the same nationality, are they all from the same few universities, or does the nationality of the members closely match the country where the conference is held?
There are a lot of smaller regional conferences in a topic area which are held in a specific country and draw their entire PC from there. (China and India have a number of these, and since they're large countries with lots of universities, some of them are pretty good, but you should still examine them closely. The US does not get a pass on this, either.) This isn't a terrible thing, and sometimes good papers wind up in regional conferences, but it suggests that these are smaller fry, not the top-tier venues in a field. A really good venue is prestigious to serve on, as well as to publish in, and it can usually attract PC members from all over the world.
- What is their acceptance rate? (Do they publish their acceptance rate?)
A good venue will probably get more papers than they can publish. Some good venues are extremely exclusive. Others try to accept more good papers, so they have more tracks, special issues, short papers, etc., but they're still going to wind up rejecting a lot. Somewhere in the 10%-20% range is good, 30% is OK (The World Multiconference notwithstanding), more than that is a little suspicious. Less than 10% probably means that the papers they accept are really good, but they're probably not a great submission venue, unless you have something truly earth-shattering, AND you write like a god. If they don't publish it at all, be very wary.
- Pick a steering committee member, or a couple, and go look at them on Google Scholar. What's their H-index like? Are they well cited? (How does their H-index stack up to your advisor's?)
While PC members are sometimes recent PhDs who have shown promise in their field, the steering committee should be well established, and have done a lot of high-quality research in the venue's area.
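(If you're not sure what the number Scholar reports actually means: the H-index is just the largest h such that the person has h papers with at least h citations each. Here's a minimal Python sketch of that computation; the citation counts in the example are hypothetical, not pulled from any real profile.)

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    # Rank papers from most to least cited, then find the last rank
    # at which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, as you might copy them off a Scholar profile.
print(h_index([120, 85, 40, 33, 21, 12, 9, 4, 2, 0]))  # prints 7
```

And since citation norms vary wildly between sub-fields, the comparison against your advisor's profile is more useful than any absolute threshold.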
If a venue hits all four of these points, it's probably a good venue where you can submit your work without besmirching your good name forever. However!
- Not all papers in good venues are fantastic papers. If you're looking at a paper from a good venue, it's been through a round of peer review, so the blatantly awful has probably been thrown out. But that doesn't mean it's a stellar paper. A number of things can conspire to let a less-than-good paper slip in.
- They didn't get a lot of good papers.
Sometimes the pickings are slim, and venues aren't going to cut back a whole day, or skip an entire issue, just because they didn't get great papers. So less good papers will, grudgingly, get accepted. (Although they'll often also step up the invited talks and highlight papers, which is another indicator you can look for that a given year was not as good as usual.)
- The reviewers were really swamped, and not as on the ball as they should have been, or weren't as expert in that sub-area as they needed to be.
Sadly, this happens too. Reviewing runs entirely on volunteer effort, and that means sometimes there just aren't enough expert reviewers to go around. A paper which looks superficially good may have deep flaws that someone in a rush, or not deeply read in a sub-area, might miss. Maybe the idea was sound, but their evaluation methodology was flawed, or vice versa. Maybe it's a great idea, but someone invented it 20 years ago in a different discipline. Always, always, do your own legwork, especially when it comes to related work and experimental methodology. Never assume that just because someone said it at a conference last year, it's right, or appropriate for your problem.