Tuesday, October 12, 2010
Some days I get bored or maybe I’m just procrastinating. Today is one of those days and the result is that I dove into the scores I give my reviews. There is a lot of debate about the usefulness/uselessness of scores on reviews. Generally, I find them useless, but I do give each of my reviews a score for my own reasons. The reason I keep doing it is that I can play with the statistics of those scores. I can actually do quite a bit with them and probably will in future editions of boredom/procrastination.
In some of the recent discussion on the quality of reviews, Adam of The Wertzone posed the question of whether scoring varies between books provided by publishers and books not provided by publishers. So, I took a look at the scores of my reviews. I have a total of 184 scores from my nearly 5 years of blogging: 114 of those scores are for books provided by publishers and 70 are for books I purchased or otherwise acquired on my own. The chart below shows the frequency of scores for all of my reviews, for reviews of books provided by publishers, and for books not provided by publishers.
The most interesting thing I see in this chart is that there appears to be no real difference in the distribution of scores for books provided by publishers vs. books not provided by publishers. This is good – it indicates that I’m consistent in my reviewing regardless of the source of the book. I also think that it indicates that my reading choices don’t vary much either – basically I read what looks interesting at the moment, and what governs those choices doesn’t seem to be any different for books publishers have provided versus books that publishers didn’t provide.
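For anyone curious how a comparison like this might be done, here is a minimal sketch in Python. The scores below are made-up placeholder numbers, not my actual review data; it just shows the idea of converting each group's scores into relative frequencies (so the 114-book and 70-book groups can be compared fairly) and then looking at the largest gap between the two distributions.

```python
from collections import Counter

# Hypothetical example scores on a 1-10 scale -- NOT the real review data.
publisher_scores = [7, 8, 7, 9, 6, 8, 7, 8, 9, 7]
own_scores = [8, 7, 9, 7, 8, 6, 7]

def frequency(scores):
    """Fraction of reviews at each score, so groups of different sizes compare fairly."""
    counts = Counter(scores)
    total = len(scores)
    return {score: counts[score] / total for score in sorted(counts)}

pub_freq = frequency(publisher_scores)
own_freq = frequency(own_scores)

# Largest gap between the two relative-frequency distributions;
# a small value suggests the book's source doesn't shift the scores much.
all_scores = set(pub_freq) | set(own_freq)
max_gap = max(abs(pub_freq.get(s, 0) - own_freq.get(s, 0)) for s in all_scores)
print(round(max_gap, 3))
```

With the placeholder numbers above the largest gap is about 0.057, i.e. the two made-up distributions differ by under six percentage points at any score, which is roughly the kind of "no real difference" result the chart shows for my actual data.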
And yes, my review scores are skewed toward the high end. This is because I pick books that I think I’ll like. Yes, I do challenge myself from time to time, but in general I want to enjoy the books I read, and I have a pretty good idea of what I like. I touch a bit more on how I score my reviews here.