It's easy to imagine that things would be predictable if only you knew the right stuff about them. The market research industry's methodological arms race is predicated on just this thought.
In the music industry, the ability to predict the future has always had real and tangible value (spotting talent and signing it before it becomes successful is a surefire way for labels and managers to make cash). That's why the labels constantly refer to A&R folk as "the guy/girl who discovered/signed X…"
This piece about the UK's X Factor (whose final was last night) from Hitwise, which turned out to be precisely wrong, is a case in point: Matt won. [HT @Dancall1]
But it's the thinking behind the approach that's nonsense: we've known for a long time that the way music spreads through a population is based not on the quality of the music itself but on the jumbled interaction of the agents involved (aka random copying). We don't buy/download/vote independently in the real world (online or off); we do what others are doing – so why assume otherwise?
This distributed form of social learning creates patterns of great volatility that make it genuinely impossible to predict what will win (as in be most popular, most voted for, etc.). Unless, of course, you find yourself (by chance) the winner or runner-up (the reason these kinds of competitions do such good business for the labels afterward is that being the winner acts as a shorthand for popularity to everyone else – for at least one release). For most contestants, then, the best bet is probably "horsemeat".
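The random-copying dynamic is simple enough to sketch in a few lines. Below is a minimal toy simulation (my own illustration, not anyone's published model – agent counts, the innovation rate, and all names are assumptions): each step, one agent adopts whatever a randomly chosen other agent is currently into, occasionally "innovating" with something new. Run it with different random seeds and different options end up on top, which is the point – nothing about the options themselves determines the winner.

```python
import random
from collections import Counter

def random_copying(n_agents=200, n_steps=2000, n_options=10,
                   innovation=0.01, seed=0):
    """Toy neutral 'random copying' model.

    Each step, one randomly picked agent either copies the current
    choice of another randomly picked agent, or (with a small
    probability) innovates by adopting a brand-new option. Returns
    the final popularity counts of each option.
    """
    rng = random.Random(seed)
    # Everyone starts with a random choice from the initial options.
    choices = [rng.randrange(n_options) for _ in range(n_agents)]
    next_option = n_options  # label for the next brand-new option
    for _ in range(n_steps):
        agent = rng.randrange(n_agents)
        if rng.random() < innovation:
            choices[agent] = next_option  # rare innovation
            next_option += 1
        else:
            # Copy a randomly chosen agent's current choice.
            choices[agent] = choices[rng.randrange(n_agents)]
    return Counter(choices)

# Which option "wins" under each of ten different random seeds?
winners = {random_copying(seed=s).most_common(1)[0][0] for s in range(10)}
print(winners)  # typically several different winners – chance, not quality
```

Since every option is identical in "quality" here, any concentration of popularity is pure drift: the hit is whichever option happened to get copied early and often.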
If you really want to improve (but not beat) the odds, then you could do a lot worse than talk to these lovely peeps – using the power of "We-research".