The election showed that my model's weakness, though not a serious one, was with the smaller parties. The final results reflect that as well: no other predictor had a lower error rate for the two largest parties combined, and only The Truth Hurts predicted one of those two parties exactly. There is also some irony in the fact that the forecast with the worst results came from a professional polling firm. All of the others were reasonably close to one another.
| Forecast | TOTAL WRONG | Lib | CPC | BQ | NDP | Grn | Other |
|---|---|---|---|---|---|---|---|
| The Truth Hurts | | 99 | 124 | 60 | 25 | 0 | 0 |
| IPSOS | | 64 | 150 | 58 | 36 | 0 | 0 |
| Difference | 80 | 39 | 26 | 7 | 7 | 0 | 1 |
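The TOTAL WRONG figure appears to be the sum of the absolute per-party seat errors for a forecast. A minimal sketch using the IPSOS numbers from the table above; the party labels and the "actual" seat counts are my own reading, implied by combining the IPSOS row with its Difference row:

```python
# Seat forecasts for one predictor (the IPSOS row above) and the
# actual results implied by that row and its Difference row.
predicted = {"Lib": 64, "CPC": 150, "BQ": 58, "NDP": 36, "Grn": 0, "Other": 0}
actual = {"Lib": 103, "CPC": 124, "BQ": 51, "NDP": 29, "Grn": 0, "Other": 1}

# Per-party absolute error, then the total across all parties.
errors = {party: abs(predicted[party] - actual[party]) for party in predicted}
total_wrong = sum(errors.values())

print(errors)
print(total_wrong)  # 80, matching the Difference row's total
```

The same calculation run on any forecast row gives its TOTAL WRONG score, which is why a big miss on either of the two largest parties dominates the total.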
I should point out that I have adjusted my forecast for the NDP from 27 down to 25 because of a data entry error. That actually increases the error rate for The Truth Hurts in the chart above, but it better reflects the accuracy of the model, because the error was caused by a data entry mistake, not a mistake in the model (SES Research had the NDP at 12% in Montreal, and I mistakenly entered that as 22%). But feel free to judge this mistake differently than I have.