Wednesday, June 26, 2013

The Idiot's Guide to Finding a Good Publication Venue (mostly in Computer Science)

So, here you are. You've gotten into grad school. You have had your first big idea, or maybe your second or third. You've done a bit of research, confirmed that it's viable, and now you want to find a conference or journal to submit to. But which one?

Alternatively, you are looking at papers from a field that you're not very familiar with. You've found a paper you think is interesting, but you're not sure if it's a good paper from a reputable publication venue, or junk science from The International Journal of Crackpot-O-Rama. How do you find out? (Hint: citation count is not the whole story.)

Here, then, is a brief guide to figuring out whether a conference or journal is a good venue, and whether you should send papers to it, or trust papers from it. This guide is mostly drawn from computer science, since it's the area I know best, but I think most of the principles generalize to other scientific disciplines. The humanities are entirely different, so someone else will need to help you out there.
  1. How broad is the charter of the venue? Do they cover everything from AI to Z-order curves?
    Venues with a very broad purview are often paper mills. The canonical example of this is, of course, The World Multiconference on Systemics, Cybernetics and Informatics, but there are many others in that vein. A good venue has a clearly defined scope, usually on a single area of specialization or special interest topic. (For instance, SuperComputing takes papers on systems, programming languages, computer engineering, and visualization, but it's all very clearly focused on "problems people with supercomputers have".)
  2. How diverse is the program committee? Are the members all of the same nationality, are they all from the same few universities, or does their nationality closely match the country where the conference is held?
    There are a lot of smaller regional conferences in a topic area that are held in a specific country and draw their entire PC from there. (China and India have a number of these, and since they're large countries with lots of universities, some of them are pretty good, but you should still examine them closely. The US does not get a pass on this, either.) This isn't a terrible thing, and sometimes good papers wind up in regional conferences, but it suggests that these are smaller fry, not the top-tier venues in a field. A really good venue is prestigious to serve on, as well as to publish in, and it can usually attract PC members from all over the world.
  3. What is their acceptance rate? (Do they publish their acceptance rate?)
    A good venue will probably get more papers than they can publish. Some good venues are extremely exclusive. Others try to be able to accept more good papers, so they have more tracks, special issues, short papers, etc., but they're still going to wind up rejecting a lot. Somewhere in the 10%-20% range is good, 30% is OK (The World Multiconference notwithstanding), more than that is a little suspicious. Less than 10% probably means that the papers they accept are really good, but they're probably not a great submission venue, unless you have something truly earth-shattering, AND you write like a god. If they don't publish their acceptance rate at all, be very wary.
  4. Pick a steering committee member, or a couple, and go look at them on Google Scholar. What's their H-index like? Are they well cited? (How does their H-index stack up to your advisor's?)
    While PC members are sometimes recent PhDs who have shown promise in their field, the steering committee should be well established, and have done a lot of high-quality research in the venue's area.
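    (If you've forgotten how the h-index works: it's the largest h such that the person has at least h papers with at least h citations each. A minimal sketch, using made-up citation counts:)

    ```python
    def h_index(citations):
        """Largest h such that at least h papers have at least h citations each."""
        cites = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(cites, start=1):
            if c >= i:
                h = i  # the i-th best paper still has >= i citations
            else:
                break
        return h

    # Hypothetical citation counts for one researcher's papers:
    print(h_index([42, 17, 9, 5, 3, 1]))  # -> 4 (four papers with >= 4 citations)
    ```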

    If a venue hits all four of these points, it's probably a good venue where you can submit your work without besmirching your good name forever. However!

  5. Not all papers in good venues are fantastic papers. If you're looking at a paper from a good venue, it's been through a round of peer review, so the blatantly awful has probably been thrown out. But that doesn't mean that it's a stellar paper. A bunch of things can conspire to let a less-than-good paper slip in.
    1. They didn't get a lot of good papers.
      Sometimes the pickings are slim, and venues aren't going to cut back a whole day, or skip an entire issue, just because they didn't get great papers. So less-good papers will, grudgingly, get accepted. (Although they'll often also step up the invited talks and highlight papers, which is another indicator you can look for that a given year was not as good as usual.)
    2. The reviewers were really swamped, and not as on the ball as they should have been, or weren't as expert in that sub-area as they needed to be.
      Sadly, this happens too. Reviews are run entirely on volunteer effort, and that means sometimes there just aren't enough expert reviewers to go around. A paper which looks superficially good may have deep flaws that someone in a rush, or not deeply read in a sub-area, might miss. Maybe the idea was sound, but their evaluation methodology was flawed, or vice versa. Maybe it's a great idea, but someone invented it 20 years ago in a different discipline. Always, always, do your own leg work, especially when it comes to related work and experimental methodology. Never assume that just because someone said it at a conference last year, it's right, or appropriate for your problem.
    This is where things like citation count start to come in. Now that you've determined that the venue itself is good, you can start asking questions like "How many people have cited this paper?" and "How many people cite papers by people who cite this paper?" (In other words, doing the whole PageRank thing.) But there are a lot of reasons not to trust raw citation counts, especially in contentious areas. People may cite the paper just so they can debunk it. Bad scientists may cite the paper because it agrees with what they believe, even if it's not actually that great of a paper. The best papers will have lots of citations, from good researchers, who say positive things about it in their papers. If it's a really recent paper, then you're going to have to go off the person's previous publication track record, or their advisor's, if they're really new. (On the other hand, it's rare for someone's first or second paper to be really good solid work.)
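    (For the curious, the "whole PageRank thing" over citations looks roughly like this. A toy sketch with a made-up five-paper graph, and simplified: dangling papers just soak up score rather than redistributing it.)

    ```python
    def citation_rank(cites, damping=0.85, iters=50):
        """Simple PageRank over a citation graph: cites[p] lists the papers
        that p cites. Score flows from a citing paper to the papers it cites,
        so a paper cited by well-cited papers ranks higher than one with the
        same number of citations from obscure papers."""
        papers = list(cites)
        n = len(papers)
        rank = {p: 1.0 / n for p in papers}
        for _ in range(iters):
            new = {p: (1.0 - damping) / n for p in papers}
            for p in papers:
                targets = cites[p]
                if not targets:
                    continue  # dangling paper: cites nothing, keeps its score
                share = damping * rank[p] / len(targets)
                for q in targets:
                    new[q] += share
            rank = new
        return rank

    # Hypothetical citation graph: A, B, D cite C; C and D cite E.
    graph = {"A": ["C"], "B": ["C"], "C": ["E"], "D": ["C", "E"], "E": []}
    ranks = citation_rank(graph)
    print(max(ranks, key=ranks.get))  # -> E: it's cited by the well-cited C
    ```

    Note that E outranks C even though C has more direct citations: E inherits score from C, which is exactly the "people who cite papers by people who cite this paper" effect.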
As always, however, your best guide to publication venues is going to be people who know that area. If it's your area, and you're new to it, ask your advisor or senior lab mates to tell you what's good. If you're looking outside of your area, see if you can find a research buddy in that discipline. Good luck, happy reading, and happy publishing!

Thursday, June 20, 2013

Coriander versus cilantro: the confusion continues.

Things I learn... I thought I knew that coriander (the dried seed) and cilantro (the fresh leaf) both come from the same plant, but that Europe doesn't distinguish between the two, calling them both coriander. However, I was flipping through On Food and Cooking just now, and discovered that the reason Americans call fresh coriander cilantro is that there's a Central and South American herb called culantro (Eryngium foetidum), which tastes much like the Middle Eastern herb, coriander (Coriandrum sativum), and has mostly been displaced by it throughout Latin America. So odds are good that your guacamole was made with coriander.

But to add another twist to the story, culantro lives on. It's still sometimes used in Latin America, but it has been enthusiastically adopted in Asia, especially by the Vietnamese, who use the leaves as a substitute for Vietnamese coriander, Persicaria odorata (which is neither a coriander nor a culantro). So your cilantro is probably coriander, unless you're eating Vietnamese, in which case it might be culantro, or something else entirely.

As a side note, the unpleasant "soapy" flavor that some people are sensitive to is the result of a fatty aldehyde, decenal, also present in citrus peels. So if you don't like cilantro, you probably won't like citrus zest, or citron, the much-maligned fruitcake ingredient. (Then again, why are you eating fruitcake, when you could be eating this?) It's not present in the seeds, however, so most people who hate the taste have no problem eating coriander-heavy cuisines such as Indian. It's also heat sensitive, so it's possible that lightly cooking the cilantro would drive off the offending soapy component. However, most of the other flavor compounds in cilantro are even more volatile, so it'll probably just taste grassy. Try it at your own risk.