The high turnout in this race makes the results harder to predict.
-
@futurebird @Wyatt_H_Knott @magicalthinking@noauthority.social it used to be possible to dial a random number and get a fairly representative sample of households in some particular geographic area. Now 1) nobody uses a land line at home, so the geographic factor is out the window 2) nobody younger than a boomer picks up the phone anymore because 99% of calls are scams or spam, so your sample will be super biased.
Now if you want a representative sample you have to go to a lot more trouble and you're still going to miss out on populations that you just don't reach with your recruiting ads.
@futurebird @Wyatt_H_Knott there's also a fun effect where, if you compensate people for their time in an online survey, some people make a hobby of gaming surveys for the rewards and will coordinate their efforts, trashing your results. Pew did a study of this effect and found for example that 13% of respondents claimed to be licensed to operate a specific class of nuclear submarine.
-
It seems in NYC they are doing texting polls cross-referenced with voter rolls. But they get uneven responses, so the results must be weighted by expected turnout... which you can base on the last general election, the primary, or both, across a wide range of voter "types".
Which races and which categories you choose can totally change the poll outcomes.
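The turnout weighting described above can be sketched as simple post-stratification: compare each voter "type"'s share of the responses against its expected share of turnout, and weight respondents by the ratio. A minimal Python illustration; all the type labels, shares, and answers are invented for the example:

```python
# Post-stratification sketch: weight each respondent so that voter "types"
# match their expected share of turnout, not their share of responses.
# All numbers below are hypothetical.

# Expected turnout share per voter type (e.g. modeled from past elections).
turnout_share = {"young_infrequent": 0.15, "young_frequent": 0.10,
                 "older_infrequent": 0.25, "older_frequent": 0.50}

# Respondents as (voter_type, answer) pairs -- responses skew older/frequent.
responses = ([("older_frequent", "A")] * 60 + [("older_infrequent", "B")] * 20
             + [("young_frequent", "A")] * 15 + [("young_infrequent", "B")] * 5)

n = len(responses)
sample_share = {t: sum(1 for vt, _ in responses if vt == t) / n
                for t in turnout_share}

# Weight = expected turnout share / observed sample share for that type.
weights = [turnout_share[vt] / sample_share[vt] for vt, _ in responses]

def weighted_support(answer):
    num = sum(w for (vt, a), w in zip(responses, weights) if a == answer)
    return num / sum(weights)

print(f"raw A: {sum(1 for _, a in responses if a == 'A') / n:.2f}")  # 0.75
print(f"weighted A: {weighted_support('A'):.2f}")                    # 0.60
```

As the thread notes, the result is only as good as the turnout model: swap in turnout shares from the primary instead of the general and the same responses yield a different headline number.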
-
@futurebird @Wyatt_H_Knott I can't even imagine how you'd usefully adjust text responses without massive unknowable error. Like for starters, a huge portion of people are never going to see your text because it went straight to spam. A bunch of the remaining ones will send it to spam themselves as soon as they see it's political, because a ton of fundraising spam poses as a survey and then is like "<candidate> agrees that's a critical issue, give us money!" You're left with people who voluntarily choose to engage with spam for no personal benefit. How do you infer anything about the general population from that?
-
Because the phone numbers are paired with voter records (and you ask demographic questions in the poll), you can be somewhat certain of who you're talking to.
But will they tell you who they really support? (Historically mostly yes... however.)
Things like how old they are, how often they vote, and where they live, though, you can know and use for weighting.
-
I'm not saying this works well. I'm really worried they are age-weighting the polls based on the primary and not the last general election, which would probably overestimate younger voters.
-
@futurebird @Wyatt_H_Knott sure, you can adjust for demographics like that. But there's an axis that is something like "spam susceptibility"; it probably has some correlation with age, income, etc., and you will get most of your responses from people high on that scale and effectively zero from people low on it.
Suppose it turns out that 1 in 10 people will just never, under any circumstances, answer an SMS survey (probably an underestimate). You could assume they'll vote the same way as their age/location/income/etc. peers, but that implicitly assumes the difference has no impact on their vote. If instead it turns out to be something like "that's the most online 10% of the population" then it might be a big source of error.
Probably (hopefully?) people have studied this, but I don't know the results. It's just something I keep in mind when I read survey results nominally covering me that used a methodology that I am 100% sure could never have included me in the sample.
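The 1-in-10 thought experiment above is easy to make concrete: if the never-responders vote differently from their otherwise-identical peers, demographic weighting cannot recover them, because the survey never observes them at all. A toy sketch; every number here is invented for illustration:

```python
# Toy model of the nonresponse worry above: 10% of the population will
# never answer an SMS survey, and (in this scenario) they support the
# candidate at a different rate than reachable peers. Numbers are invented.

never_respond_frac = 0.10    # share of the population the survey can't reach
support_reachable = 0.50     # candidate support among reachable people
support_unreachable = 0.30   # support among the unreachable 10%

# True population-level support:
true_support = ((1 - never_respond_frac) * support_reachable
                + never_respond_frac * support_unreachable)

# The survey only ever samples reachable people, so even a perfectly
# demographically weighted estimate converges to support_reachable.
survey_estimate = support_reachable

print(f"true support:    {true_support:.3f}")                     # 0.480
print(f"survey estimate: {survey_estimate:.3f}")                  # 0.500
print(f"bias:            {survey_estimate - true_support:+.3f}")  # +0.020
```

Even this mild scenario produces a 2-point bias that no amount of reweighting the observed sample can remove, which is exactly the worry about a correlated "spam susceptibility" axis.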
-
For various reasons, the people who do answer your poll may also be a problem... if they are as grouchy as I am.
Quoted post from myrmepropagandist (@futurebird@sauropods.win): "So last week I was trying to get everyone I knew to salt the 'Clear Insights' poll from Cuomo. I didn't find many people on here who got the poll (it went out to people on the NYC voter rolls) BUT posting on my co-op message board where we complain about how long it's taking to replace the roof and the laundry room got some action."
-
@adrake @futurebird @Wyatt_H_Knott Wait! Does anyone ever answer those damn things? Surely that's got to be less than a percent, right?
-
@adrake @futurebird @Wyatt_H_Knott
Do you answer online or text polls?