[EM] High Resolution Inferred Approval version of ASM

[EM] High Resolution Inferred Approval version of ASM

Forest Simmons
Chris, I like it, especially the part about naive voters voting sincerely being at no appreciable disadvantage while resisting burial and complying with the CD criterion.

From your experience in Australia, where full rankings are required (as I understand it), what do you think about the practicality of rating on a scale of zero to 99, as compared with ranking a long list of candidates? Is it a big obstacle?

----
Election-Methods mailing list - see https://electorama.com/em for list info

Re: [EM] High Resolution Inferred Approval version of ASM

C.Benham

Forest,

With paper-and-pencil ballots and the voters writing in their own numerical scores, it probably isn't very practical for the Australian Electoral Commission's hand vote-counters.

But if it isn't compulsory to mark each candidate and the default score is zero, I'm sure the voters could quickly adapt.

In the US I gather there is at least one reform proposal to use this type of ballot. One of these, "Score Voting" aka "Range Voting", proposes simply using Average Ratings, with (I gather) the default score being "no opinion" rather than zero, plus some tweak to prevent an unknown candidate from winning.

So it struck me that if we can collect such a large amount of detailed information from the voters, then we could do a lot more with it; and if we want something that meets the Condorcet criterion, this is my suggestion.

Chris Benham

https://rangevoting.org/

How score voting works:

  1. Each vote consists of a numerical score within some range (say 0 to 99) for each candidate. Simpler is 0 to 9 ("single digit score voting").


Re: [EM] High Resolution Inferred Approval version of ASM

John
Voters can't readily provide meaningful information via score voting. It's highly strategic, and the comparison of cardinal values is not natural.

All valuation is ordinal. Prices are based on cost; but what people WILL pay, given no option to pay less, is based on ordinal comparison.

Is X worth 2 Y?

For the $1,000 iPhone I could have a OnePlus 6T and a Chromebook. The 6T... I could get a cheaper smartphone, but I prefer the 6T to that phone plus whatever else I'd buy with the difference.

I have a higher-paying job, so each dollar costs me fewer hours, and the ordinal value of a dollar to me is lower: $600 of my dollars is fewer hours than $600 of minimum-wage dollars. I have access to my most-preferred purchases and can buy well down into my less-preferred purchases.

This kind of information is difficult to pin down voter by voter. Prices in the stock market are set by a constant, public auction among millions of buyers and sellers. A single buyer can hardly price one stock against another; they price against what they think their gains will be relative to the current price.

When rating candidates, you'll see something a lot like the Mohs hardness scale: 2 is 200, 3 is 500, 4 is 1,500; but we label things that are at 250 or 450 as 2.5, and likewise anything between 500 and 1,500 as 3.5. "Between X and Y" is intuitively read as exactly HALFWAY between X and Y.

The rated system sucks even before you factor in strategic concerns (which only matter if a score-driven method actually decides the result).

Approval is just low-resolution (1-bit) score voting.


Re: [EM] High Resolution Inferred Approval version of ASM

C.Benham

John,

With the VIASME method I'm proposing, the voters just have to give the candidates they rank higher more points than those they rank lower, and score the candidates approximately accurately according to how they rate them relative to each other.

I don't understand why you think that is a problem. What type of ballot do you like?

Chris Benham


Re: [EM] High Resolution Inferred Approval version of ASM

Felix Sargent
Valuation can be ordinal, in that you can know that 3 is more than 2.
There are two questions before us: which voting method collects more data, and which tabulation method picks the best winner from that data?

Which voting method collects more data?
Cardinal voting collects higher-resolution data than ordinal voting. Consider this thought experiment: if I give you the ratings A:5 B:2 C:1 D:3 E:5 F:2, you can derive an ordered list from them -- AEDFBC. But if I gave you AEDFBC, you couldn't convert it back into the cardinal data.

Which tabulation picks a better winner from the data?
Both Score and Approval voting simply elect the candidate with the highest total.
Summing ordinal data, on the other hand, is complicated by the need to avoid cycles. Methods like Condorcet or IRV have been proposed to resolve those, but ultimately they're hacks for dealing with incomplete information.
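Felix's thought experiment is easy to sketch: projecting a cardinal ballot down to an ordinal one is a one-way trip. (A quick illustration, not tied to any particular method.)

```python
# Project a cardinal ballot onto an ordinal one.
scores = {"A": 5, "B": 2, "C": 1, "D": 3, "E": 5, "F": 2}

# Sort candidates by score, highest first. Note the ties (A/E and B/F):
# the ranking must break them somehow, which is itself lost information.
ranking = sorted(scores, key=lambda c: -scores[c])
print("".join(ranking))  # AEDBFC (one valid ordering; ties could go either way)

# The reverse is ambiguous: any strictly decreasing assignment of scores
# to this ranking reproduces it, so the original magnitudes are gone.
```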



Re: [EM] High Resolution Inferred Approval version of ASM

John
Cardinal voting collects higher-resolution data, but not necessarily precise data.

Let's say you score candidates:

A: 1.0
B: 0.5
C: 0.25
D: 0.1

In reality, B is 90% as favored as A. C is 70% as favored as B.  The real numbers would be:

A: 1.0
B: 0.9
C: 0.63
D: etc.
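The "real" numbers above are just the stated ratios chained together (a quick arithmetic check):

```python
# Chain the stated ratios: B is 90% as favored as A, C is 70% as favored as B.
a = 1.0
b = round(0.9 * a, 2)
c = round(0.7 * b, 2)
print(a, b, c)  # 1.0 0.9 0.63
```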

How would this happen?

Cardinal: I approve of B 90% as much as A.

Natural and honest: I prefer A to win, and I am not just as happy with B winning, or close to it. I feel maybe half as good about that? B is between C and D, and I don't like C, but I like D less.

Strategic: even voting 0.5 for B means possibly helping B beat A, but what if C wins...

The strategic nightmare is inherent to score and approval systems.  When approvals aren't used to elect but only for data, people are not naturally inclined to analyze a score representing their actual approval.

Why?

Because people decide by simulation. Simulating an ordinal preference is easy: I like A over B. Even then, sometimes you can't seem to decide who is better.

Working out precisely how much I approve of A versus B is harder. It takes a lot of effort, and the basic simulation approach responds heavily to how good you feel about A losing to B, not to how much B satisfies you on a scale of 0 to A.

Score and approval voting source a high-error, low-confidence sample. It's like recording climate data by licking your finger and holding it up in the wind each day, then writing down what you think the temperature is. Someone will say, "It's more data than warmer/colder trends!" while ignoring that you are not mercury in a graduated cylinder.



[EM] The Problem with Score Voting and Approval Voting (was: High Resolution Inferred Approval version of ASM)

robert bristow-johnson

i am not a member of the RangeVoting list so i do not think my response will post there.

John Moser reiterates the complaint that I have always had with Score Voting (i think that term is better semantically than "Range"), which, perhaps surprisingly, is also my ultimate complaint about Approval Voting, *even* *though* Score Voting requires too much information from voters and Approval Voting collects too little information from voters (as does FPTP).

Score voting requires more thought (and expertise, as if voters were Olympic figure-skating judges) for them to determine exactly how to score a particular candidate. But the real problem for the voter is that the voter is a partisan. They know they want to score their favorite candidate a "10". They may like their second favorite, but they do not want their second choice to beat their first choice. And they may hate all of the remaining candidates and sure-as-hell want either their first or second choice to beat any of them. So their tactical burden in the voting booth is: "How much do I score my second choice?"

And Approval Voting has the same problem, but for the opposite reason: Approval Voting is less "expressive" than Ranked-Choice. The voter has the same tactical decision to make regarding their second-favorite candidate: "Do I approve my second choice or not?"

These tactical decisions are also affected by how likely the voter believes (from the pre-election polls) it is that the race will come down to their first- and second-choice candidates. If the voter thinks that will be the case, the partisan voter is motivated to score his/her favorite a "10" and the second favorite a "0" (or approve the favorite and not the second choice).

This really comes down to a fundamental principle of voting and elections in a democracy: "One person, one vote." If I really, really like Candidate A far better than Candidate B, and you prefer Candidate B only slightly more than Candidate A, then my vote for A>B should count no more (and no less) than your vote for B>A. Even if your feelings about the candidates are not as strong as mine, your franchise should be as strong as my franchise. But Score Voting explicitly rejects that notion, and in doing so it imposes a burden of tactical voting on regular voters.

Only the ordinal ranked ballot extracts the "right amount" of information from voters. If a voter ranks A>B>C, all that voter is saying is: if the election were held between A and B, this voter votes for A. If it's between A and C, this voter votes for A. And if it ends up between B and C, this voter votes for B. That's **all** the ballot says. We should not read more into it, and we should not expect more information from the voter, such as "How much more do you prefer A over C than you prefer A over B?" It shouldn't matter.
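rbj's reading of a ranked ballot -- that A>B>C is exactly its set of pairwise votes and nothing more -- can be made concrete (a small sketch, not any official tally code):

```python
# Expand an ordinal ballot into the pairwise votes it implies.
from itertools import combinations

ballot = ["A", "B", "C"]  # A > B > C

# For every pair, the earlier-ranked candidate gets this voter's vote.
pairwise = {(x, y): x for x, y in combinations(ballot, 2)}
for (x, y), winner in pairwise.items():
    print(f"{x} vs {y}: vote for {winner}")
# That is the entire content of the ballot; it says nothing about
# *how much* A is preferred over C versus over B.
```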

my $0.02 .

r b-j


--

r b-j                         [hidden email]

"Imagination is more important than knowledge."
 

 

 

 


----
Election-Methods mailing list - see https://electorama.com/em for list info

Re: [EM] High Resolution Inferred Approval version of ASM

John
The error comes when you make inferences.

The great purported benefit of score systems is that more voters can rank A over B, yet the scores can still elect B:

A:1.0 B:0.9 C:0.1
C:1.0 A:0.5 B:0.4
B:1.0 A:0.2 C:0.1

A = 1.7, B = 2.3, C = 1.2

B defeats A on the score totals, despite a majority of voters ranking A above B.

And if the first voter instead scores B at 0.2, A wins (1.7 to 1.6).

Whenever a system attempts to use score or its low-resolution Approval variant, it is relying on this information.

So why does this matter?

The voters are 100% certain and precise that these are their votes:

A>B>C
C>A>B
B>A>C

We know A defeats B, A defeats C, and B defeats C.  A is the Condorcet winner.
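The totals and pairwise results in this example can be checked with a short script (a sketch; the ballots are the three given above):

```python
# Verify the example: score totals vs. pairwise (Condorcet) results.
from itertools import combinations

ballots = [
    {"A": 1.0, "B": 0.9, "C": 0.1},  # reads as A>B>C
    {"C": 1.0, "A": 0.5, "B": 0.4},  # reads as C>A>B
    {"B": 1.0, "A": 0.2, "C": 0.1},  # reads as B>A>C
]

# Score voting: sum the scores.
totals = {c: round(sum(b[c] for b in ballots), 2) for c in "ABC"}
print(totals)  # {'A': 1.7, 'B': 2.3, 'C': 1.2} -> score elects B

# Condorcet: X beats Y if a majority of ballots score X above Y.
for x, y in combinations("ABC", 2):
    wins = sum(b[x] > b[y] for b in ballots)
    print(f"{x} vs {y}: {x if wins * 2 > len(ballots) else y} wins pairwise")
# A beats both B and C pairwise, so A is the Condorcet winner,
# even though score voting elects B.
```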

For score votes, 1.0 is always 1.0. It's the first rank, the unit of measure. This is of course another source of information distortion in cardinal systems: how is the information meaningful as a comparison between two voters?

How do you know 10 voters scoring A at 1.0 aren't half as invested in A as 6 voters scoring B at 1.0, making the totals effectively A = 5, B = 6?

Ten of us prefer strawberry to peanut butter.

Six of us WILL DIE IF YOU OPEN A JAR OF PEANUT BUTTER HERE.

Score systems claim to represent this and capture this information, but they can't.

(Notice I used the negative: that 1.0 vote is an expression of the damage of their 0.0-scored alternative.)

Even setting that aside, you have a problem where an individual might put down 0.7 or 0.9 or 0.5 for the SAME candidate in the SAME election, based solely on how bad they are at making cardinal comparisons. Humans are universally bad at cardinal comparison.

So now you can actually elect A, B, or C based on how well-rested people are, how hungry they are, or anything else that impacts their mood and thus the sharpness or softness by which they critically compare candidates.

It's a sort of random number generator.

Wrapping it in a better system and using that information to make auxiliary decisions is still incorporating bad data.  Bad data is worse than no data.

On Fri, Jun 21, 2019, 7:27 PM Felix Sargent <[hidden email]> wrote:
I don't know how you can think that blurrier data would produce a more precise result.
No matter how you cut it, if you rank ABCD it translates into scores of
A: 1.0
B: 0.75
C: 0.5
D: 0.25

There's no way of describing differences between candidates beyond a straight line from first place to last place.
Even if a voter is imprecise about the difference between A and B, they will never make the error of rating B above A; whereas the gap between a voter's actual preferences and what an ordinal ballot records can be massive. Consider: I like A and B but HATE C. The ranking ABC does not tell you that.
That's not even going into what happens when a voter fills in an ordinal ballot strategically, placing "guaranteed losers" second and third to improve the chances of their first-choice candidate (in IRV, at least).

Your analysis depends on how intelligent you believe the average voter to be.
If voters can use Amazon and Yelp star ratings, they can handle score voting.
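Felix's ABCD-to-scores mapping is just evenly spaced points on that straight line (a sketch of the conversion he describes):

```python
# Read a ranked ballot as evenly spaced scores from first to last place.
ranking = ["A", "B", "C", "D"]
n = len(ranking)
scores = {c: (n - i) / n for i, c in enumerate(ranking)}
print(scores)  # {'A': 1.0, 'B': 0.75, 'C': 0.5, 'D': 0.25}

# Every strict ranking of four candidates maps to these same four
# numbers, so "I like A and B but HATE C" cannot be expressed.
```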


On Fri, Jun 21, 2019 at 2:14 PM John <[hidden email]> wrote:
Cardinal voting collects higher-resolution data, but not necessarily precise data.

Let's say you score candidates:

A: 1.0
B: 0.5
C: 0.25
D: 0.1

In reality, B is 90% as favored as A. C is 70% as favored as B.  The real numbers would be:

A: 1.0
B: 0.9
C: 0.63
D: etc.

How would this happen?

Cardinal: I approve of A 90% as much as B.

Natural and honest: I prefer A to win, and I am not just as happy with B winning, or close to it.  I feel maybe half as good about that?  B is between C and D and I don't like C, but I like D less.

Strategic: even voting 0.5 for B means possibly helping B beat A, but what if C wins...

The strategic nightmare is inherent to score and approval systems.  When approvals aren't used to elect but only for data, people are not naturally inclined to analyze a score representing their actual approval.

Why?

Because people decide by simulation. Simulation of ordinal preference is easy: I like A over B.  Even then, sometimes you can't seem to decide who is better.

Working out precisely how much I approve of A versus B is harder.  It takes a lot of effort and the basic simulation approach responds heavily to how good you feel about A losing to B, not about how much B satisfies you on a scale of 0 to A.

Score and approval voting source a high-error, low-confidence sample.  It's like recording climate data by licking your finger and holding it in the wind each day, then writing down what you think is the temperature.  Someone will say, "it's more data than warmer/colder trends!" While ignoring that you are not Mercury in a graduated cylinder.


On Fri, Jun 21, 2019, 3:10 PM Felix Sargent <[hidden email]> wrote:
Valuation can be ordinal, in that you can know that 3 is more than 2.
There are two questions before us: Which voting method collects more data? Which tabulation method picks the best winner from that data?

Which voting method collects more data?
Cardinal voting collects higher resolution data than ordinal voting. Consider this thought experiment. If I give you a rating of A:5 B:2 C:1 D:3 E:5 F:2 you should create an ordered list from that -- AEDFBC. If I gave you AEDFBC you couldn't convert that back into its cardinal data.

Which tabulation picks a better winner from the data?
Both Score and Approval voting pick the person with the highest votes.
Summing ordinal data, on the other hand, is very complicated, as to avoid loops. Methods like Condorcet or IRV have been proposed to eliminate those but ultimately they're hacks for dealing with incomplete information.


On Fri, Jun 21, 2019 at 5:23 AM John <[hidden email]> wrote:
Voters can't readily provide meaningful information as score voting. It's highly-strategic and the comparison of cardinal values is not natural.

All valuation is ordinal.  Prices are based from cost; but what people WILL pay, given no option to pay less, is based on ordinal comparison.

Is X worth 2 Y?

For the $1,000 iPhone I could have a OnePlus 6t and a Chromebook. The 6t...I can get a cheaper smartphone, but I prefer the 6t to that phone plus whatever else I buy.

I have a higher paying job, so each dollar is worth fewer hours, so the ordinal value of a dollar to me is lower.  $600 of my dollars is fewer hours than $600 minimum wage dollars.  I have access to my most-preferred purchases and can buy way down into my less-preferred purchases.

Information like this is difficult for a voter to pin down.  Prices in the stock market are set by a constant, public auction among millions of buyers and sellers.  A single buyer can hardly price one stock against another; they price against what they think their gains will be relative to the current price.

When pricing candidates, you'll see something a lot like Mohs hardness: a 2 is 200, a 3 is 500, a 4 is 1,500; yet we label things at 250 or 450 alike as 2.5, and likewise anything between 500 and 1,500 as 3.5.  "Between X and Y" always intuitively reads as exactly halfway between X and Y.

The rated system sucks even before you factor in strategic concerns (which only matter if a score-driven method is actually used).

Approval is just low-resolution (1 bit) score voting.
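That reduction can be sketched directly; the ballot values and the 0.5 cutoff here are made-up illustrations, since each voter's approval threshold is their own:

```python
# Quantize a score ballot to an approval ballot with a per-voter
# threshold (0.5 is an arbitrary illustrative choice).
def to_approval(scores, threshold=0.5):
    return {c: int(s >= threshold) for c, s in scores.items()}

print(to_approval({"A": 1.0, "B": 0.5, "C": 0.25, "D": 0.1}))
# {'A': 1, 'B': 1, 'C': 0, 'D': 0}
```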

On Fri, Jun 21, 2019, 12:01 AM C.Benham <[hidden email]> wrote:

Forest,

With paper and pencil ballots and the voters only writing in their numerical scores it probably isn't very practical for the Australian Electoral Commission
hand vote-counters.

But if it isn't compulsory to mark each candidate and the default score is zero, I'm sure the voters could quickly adapt.

In the US I gather that there is at least one reform proposal to use this type of ballot. One of these, "Score Voting" aka "Range Voting",
proposes to just use Average Ratings, with (I gather) the default score being "no opinion" rather than zero, plus some tweak to prevent an unknown
candidate from winning.

So it struck me that if we can collect such a large amount of detailed information from the voters, then we could do a lot more with it; and if we
want something that meets the Condorcet criterion, this is my suggestion.

Chris Benham

https://rangevoting.org/

How score voting works:

  1. Each vote consists of a numerical score within some range (say 0 to 99) for each candidate. Simpler is 0 to 9 ("single digit score voting").
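A minimal sketch of those mechanics, with made-up single-digit ballots and unmarked candidates defaulting to zero:

```python
# Hypothetical single-digit (0-9) score ballots; unmarked candidates
# default to 0, and the highest total wins.
ballots = [{"A": 9, "B": 5}, {"B": 7, "C": 3}, {"A": 4, "C": 9}]
candidates = ["A", "B", "C"]

totals = {c: sum(b.get(c, 0) for b in ballots) for c in candidates}
winner = max(totals, key=totals.get)
print(totals, winner)  # {'A': 13, 'B': 12, 'C': 12} A
```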

On 21/06/2019 5:33 am, Forest Simmons wrote:
Chris, I like it especially the part about naive voters voting sincerely being at no appreciable disadvantage while resisting burial and complying with  the CD criterion. 

From your experience in Australia where full rankings are required (as I understand it) what do you think about the practicality of rating on a scale of zero to 99, as compared with ranking a long list of candidates?  Is it a big obstacle?

----
Election-Methods mailing list - see https://electorama.com/em for list info


Re: [EM] High Resolution Inferred Approval version of ASM

John
Also, it's well understood that ratings in online reviews are of poor quality.  There is a lot of research into how people overemphasize bad experiences with low ratings and give inflated good ratings; many products have more 5-star and 1-star ratings than 3-star ratings.

That's not the same thing as comparatively scaling things, which is even harder.  There's a reason we have a lot of top 10 lists and a lot of rankings for things.

On Fri, Jun 21, 2019, 7:45 PM John <[hidden email]> wrote:
The error comes when you make inferences.

The great purported benefit of score systems is that more voters can rank A over B, yet the scores can elect B anyway:

A:1.0 B:0.9 C:0.1
C:1.0 A:0.5 B:0.4
B:1.0 A:0.2 C:0.1

A=1.7, B=2.3, C=1.2

B defeats A on points, despite two of the three voters ranking A over B.

If the first voter scores B as 0.2 instead, A wins.

Whenever a system attempts to use score or its low-resolution Approval variant, it is relying on this information.

So why does this matter?

The voters are 100% certain and precise that these are their votes:

A>B>C
C>A>B
B>A>C

We know A defeats B, A defeats C, and B defeats C.  A is the Condorcet winner.
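Tallying those three ballots both ways makes the divergence concrete (summing the listed scores gives A=1.7, B=2.3, C=1.2, while the pairwise tally picks A):

```python
# The three ballots above; the scores imply the rankings
# A>B>C, C>A>B, B>A>C.
ballots = [
    {"A": 1.0, "B": 0.9, "C": 0.1},
    {"C": 1.0, "A": 0.5, "B": 0.4},
    {"B": 1.0, "A": 0.2, "C": 0.1},
]
cands = ["A", "B", "C"]

# Summed scores: B comes out on top.
totals = {c: round(sum(b[c] for b in ballots), 2) for c in cands}
print(totals)  # {'A': 1.7, 'B': 2.3, 'C': 1.2}

# Pairwise tally: x beats y if a majority of ballots score x above y.
def beats(x, y):
    return sum(b[x] > b[y] for b in ballots) > len(ballots) / 2

condorcet = [x for x in cands if all(beats(x, y) for y in cands if y != x)]
print(condorcet)  # ['A'] -- A wins every head-to-head yet loses on points
```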

For score votes, 1.0 is always 1.0.  It's the first rank, the measure.  This is of course another source of information distortion in cardinal systems: how is the information meaningful as a comparison between two voters?

How do you know 10 voters scoring A at 1.0 aren't half as invested in A as 6 voters scoring B at 1.0 -- so that really A=5, B=6?

Ten of us prefer strawberry to peanut butter.

Six of us WILL DIE IF YOU OPEN A JAR OF PEANUT BUTTER HERE.

Score systems claim to represent this and capture this information, but they can't.

(Notice I used the negative: that 1.0 vote is an expression of the damage of their 0.0-scored alternative.)

Even setting that aside, however, you have a problem where an individual might put down 0.7 or 0.9 or 0.5 for the SAME candidate in the SAME election, solely based on how bad they are at creating a cardinal comparison.  Humans are universally bad at cardinal comparison.

So now you can actually elect A, B, or C based on how well-rested people are, how hungry they are, or anything else that impacts their mood and thus the sharpness or softness by which they critically compare candidates.

It's a sort of random number generator.

Wrapping it in a better system and using that information to make auxiliary decisions is still incorporating bad data.  Bad data is worse than no data.

On Fri, Jun 21, 2019, 7:27 PM Felix Sargent <[hidden email]> wrote:
I don't know how you can think that blurrier data would end up with a more precise result.
No matter how you cut it, if you rank ABCD then it translates into a score of
A: 1.0
B: .75
C: 0.5
D: 0.25

There's no way of describing differences between candidates beyond a straight line between first place and last place.
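That straight-line conversion, as a sketch:

```python
# The only scores a bare ranking can justify: evenly spaced points on a
# straight line from first place down to last.
def implied_scores(ranking):
    n = len(ranking)
    return {c: (n - i) / n for i, c in enumerate(ranking)}

print(implied_scores(["A", "B", "C", "D"]))
# {'A': 1.0, 'B': 0.75, 'C': 0.5, 'D': 0.25}
```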
Even if a voter is imprecise about the difference between A and B, they will never make the error of rating B above A; whereas the gap between a voter's actual preferences and what an ordinal ballot records can be massive. Consider: I like A and B but HATE C. The ranking ABC does not tell you that.
That's not even going into what happens when a voter fills in an ordinal ballot strategically, placing "guaranteed losers" in 2nd and 3rd places in order to improve the chances of their first-choice candidate (in IRV at least).

Your analysis depends on the question of how intelligent you believe the average voter to be.
If voters can use Amazon and Yelp star ratings, they can do score voting.


On Fri, Jun 21, 2019 at 2:14 PM John <[hidden email]> wrote:
Cardinal voting collects higher-resolution data, but not necessarily precise data.

Let's say you score candidates:

A: 1.0
B: 0.5
C: 0.25
D: 0.1

In reality, B is 90% as favored as A. C is 70% as favored as B.  The real numbers would be:

A: 1.0
B: 0.9
C: 0.63
D: etc.

How would this happen?

Cardinal: I approve of A 90% as much as B.

Natural and honest: I prefer A to win, and I am not just as happy with B winning, or close to it.  I feel maybe half as good about that?  B is between C and D and I don't like C, but I like D less.

Strategic: even voting 0.5 for B means possibly helping B beat A, but what if C wins...

The strategic nightmare is inherent to score and approval systems.  When approvals aren't used to elect but only for data, people are not naturally inclined to analyze a score representing their actual approval.

Why?

Because people decide by simulation. Simulation of ordinal preference is easy: I like A over B.  Even then, sometimes you can't seem to decide who is better.

Working out precisely how much I approve of A versus B is harder.  It takes a lot of effort and the basic simulation approach responds heavily to how good you feel about A losing to B, not about how much B satisfies you on a scale of 0 to A.


Re: [EM] High Resolution Inferred Approval version of ASM

Richard Lung
In reply to this post by Forest Simmons
Points systems (the Borda method is the archetype) are an assumed weighting of preferences. The Gregory method's transfer values, or the Meek method's keep values, are a real weighting of preferences.

Richard L.


----
Election-Methods mailing list - see https://electorama.com/em for list info




Re: [EM] The Problem with Score Voting and Approval Voting

Richard Lung
In reply to this post by robert bristow-johnson
I agree with all this.
It was said long ago, with regard to multiple votes per seat and cumulative voting, by Enid Lakeman: multiple votes count against each other. Single transferable voting is the way to go.

Richard L.

On 22/06/2019 00:29, robert bristow-johnson wrote:

 

i am not a member of the RangeVoting list so i do not think my response will post there.

John Moser reiterates the complaint that I have always had with Score Voting (i think that term is better semantically than "Range"), which, perhaps surprisingly, is also the end complaint i have with Approval Voting, *even* *though* Score Voting requires too much information from voters and Approval Voting collects too little information from voters (as does FPTP). 

Score voting requires more thought (and expertise, as if they are Olympic figure-skating judges) from voters for them to determine exactly how much they should score a particular candidate.  But the real problem for the voter is that the voter is a partisan.  They know they wanna score their favorite candidate a "10".  They may like their second favorite, but they do not want their second choice to beat their first choice.  But they may hate any of the remaining candidates and they sure-as-hell want either their first or second choice to beat any of the remaining candidates.  So their tactical burden in the voting booth is "How much do I score my second choice?"

And Approval Voting has the same problem, but for the opposite reason that Approval Voting is less "expressive" than Ranked-Choice.  The voter has the same tactical decision to make regarding their second favorite candidate: "Do I approve my second choice or not?"

These tactical decisions would also be affected by how likely the voter believes (from the pre-election polls) that the race will end up essentially between their first and second-choice candidates.  If the voter thinks that will be the case, the partisan voter is motivated to score his/her favorite a "10" and the second favorite a "0" (or approve the favorite and not approve the second choice).

This really essentially comes down to a fundamental principle of voting and elections in a democracy, which is: "One person - one vote."  If I really really like Candidate A far better than Candidate B and you prefer Candidate B only slightly more than your preference for Candidate A, then my vote for A>B should count no more (nor less) than your vote for B>A.  Even if your feelings about the candidates are not as strong as mine, your franchise should be as strong as my franchise.  But Score Voting explicitly rejects that notion and, in doing so, will lead to a burden of tactical voting for regular voters.

Only the ordinal ranked-ballot extracts from voters the "right amount" of information.  If a voter ranks A>B>C, all that voter is saying is that if the election were held between A and B, this voter is voting for A.  If the election is between A and C, this voter is voting for A.  And if the election ends up being between B and C, this voter votes for B.  That's **all** that this ballot says.  We should not read more into it and we should not expect more information from the voter such as "How much more do you prefer A over C than your preference of A over B?"  It shouldn't matter.

my $0.02 .

r b-j



--

r b-j                         [hidden email]

"Imagination is more important than knowledge."
 

 

 

 


----
Election-Methods mailing list - see https://electorama.com/em for list info




Re: [EM] The Problem with Score Voting and Approval Voting

Greg Dennis-2
Agreed. My favorite paper on this topic is Niemi, 1984:

Greg



----
Election-Methods mailing list - see https://electorama.com/em for list info

Re: [EM] High Resolution Inferred Approval version of ASM

C.Benham
In reply to this post by John


On 22/06/2019 9:15 am, John wrote:

The great purported benefit of score systems is that more voters can rank A over B, yet the scores can elect B anyway:

John,

Is every method that uses score ballots a "score system"?   My suggested VIASME method meets Smith and therefore avoids
the "benefit" you refer to.

Wrapping it in a better system and using that information to make auxiliary decisions is still incorporating bad data.  Bad data is worse than no data.

As it relates to VIASME, I'm afraid you've lost me. A few years ago James Green-Armytage proposed a Condorcet method that asked the voters both to
rank the candidates (with equal ranking and truncation allowed) and to give each of them a high-resolution score, with the ranking and the scoring
required to be consistent with each other.  If there was a Condorcet winner, the scoring was ignored.

Well, it seems to me that the ranking is a redundant extra chore for the voter, because it can be inferred from the scoring. That is what I propose for
VIASME.  The Green-Armytage method was called Cardinal-Weighted Pairwise and was designed to resist Burial strategy. He had a simpler-ballot
version called Approval-Weighted Pairwise. One of the reasons I don't much like it is that it can elect a candidate that is pairwise-beaten by a more approved
candidate.

https://electowiki.org/wiki/Cardinal_pairwise
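The inference described above can be sketched as follows; the ballot values are hypothetical, with equal scores mapping to equal rank and zero doubling as the truncated/default score:

```python
# Infer a ranking (with ties) from a 0-99 score ballot, so asking the
# voter for a separate ranking would be redundant.
def inferred_ranking(scores):
    # Higher score = higher rank; equal scores = equal (tied) rank.
    levels = sorted(set(scores.values()), reverse=True)
    return [sorted(c for c in scores if scores[c] == lvl) for lvl in levels]

ballot = {"A": 99, "B": 70, "C": 70, "D": 0}
print(inferred_ranking(ballot))  # [['A'], ['B', 'C'], ['D']]
```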

On 22/06/2019 8:57 am, Felix Sargent wrote:

That's not even going into what happens when a voter ranks an ordinal ballot strategically, placing "guaranteed losers" to 2nd and 3rd places in order to improve the chances of their first choice candidate (in IRV at least).

Felix, the Burial strategy you describe doesn't work in IRV, because your 2nd and 3rd preferences aren't counted while your first-choice candidate is still alive.
It is methods that fail Later-no-Help (such as all the Condorcet methods) that are vulnerable to that, some more than others.

Chris Benham

On 22/06/2019 9:15 am, John wrote:
The error comes when you make inferences.

The great purported benefit of score systems is that more voters can rank A over B, yet due to the scores score can elect B:

A:1.0 B:0.9 C:0.1
C:1.0 A:0.5 B:0.4
B:1.0 A:0.2 C:0.1

A=1.7, B=2.3, C=2.2

Both B and C defeat A, despite A defeating both ranked.

If the first voter scores B as 0.7, C wins.

Whenever a system attempts to use score or its low-resolution Approval variant, it is relying on this information.

So why does this matter?

The voters are 100% certain and precise that these are their votes:

A>B>C
C>A>B
B>A>C

We know A defeats B, A defeats C, and B defeats C.  A is the Condorcet winner.

For score votes, 1.0 is always 1.0.  It's the first rank, the measure.  This is of course another source of information distortion in cardinal systems: how is the information meaningful as a comparison between two voters?

How do you know 10 voters voting A first at 1.0 aren't half as invested in A as 6 voters voting B 1.0, this really A=5 B=6?

Ten of us prefer strawberry to peanut butter.

Six of us WILL DIE IF YOU OPEN A JAR OF PEANUT BUTTER HERE.

Score systems claim to represent this and capture this information, but they can't.

(Notice I used the negative: that 1.0 vote is an expression of the damage of their 0.0-scored alternative.)

Even setting that aside, however, you have a problem where an individual might put down 0.7 or 0.9 or 0.5 for the SAME candidate in the SAME election, solely based on how bad they are at creating a cardinal comparison.  Humans are universally bad at cardinal comparison.

So now you can actually elect A, B, or C based on how well-rested people are, how hungry they are, or anything else that impacts their mood and thus the sharpness or softness by which they critically compare candidates.

It's a sort of random number generator.

Wrapping it in a better system and using that information to make auxiliary decisions is still incorporating bad data.  Bad data is worse than no data.

On Fri, Jun 21, 2019, 7:27 PM Felix Sargent <[hidden email]> wrote:
I don't know how you can think that blurrier data would end up with a more precise result.
No matter how you cut it, if you rank ABCD then it translates into a score of
A: 1.0
B: .75
C: 0.5
D: 0.25

There's no way of describing differences between candidates beyond a straight line between first place and last place.
Even if the voter is imprecise in the difference between A and B they will never make the error of rating B more than A, whereas the error between a voter's actual preferences and the preferences that are recorded with an ordinal ballot has the liability of being massive. Consider I like A and B but HATE C. ABC does not tell you that.
That's not even going into what happens when a voter ranks an ordinal ballot strategically, placing "guaranteed losers" to 2nd and 3rd places in order to improve the chances of their first choice candidate (in IRV at least).

Your analysis depends on the question of how intelligent you believe the average voter to be.
If voters can use Amazon and Yelp star ratings, they can do score voting.


On Fri, Jun 21, 2019 at 2:14 PM John <[hidden email]> wrote:
Cardinal voting collects higher-resolution data, but not necessarily precise data.

Let's say you score candidates:

A: 1.0
B: 0.5
C: 0.25
D: 0.1

In reality, B is 90% as favored as A. C is 70% as favored as B.  The real numbers would be:

A: 1.0
B: 0.9
C: 0.63
D: etc.

How would this happen?

Cardinal: I approve of A 90% as much as B.

Natural and honest: I prefer A to win, and I am not just as happy with B winning, or close to it.  I feel maybe half as good about that?  B is between C and D and I don't like C, but I like D less.

Strategic: even voting 0.5 for B means possibly helping B beat A, but what if C wins...

The strategic nightmare is inherent to score and approval systems.  When approvals aren't used to elect but only for data, people are not naturally inclined to analyze a score representing their actual approval.

Why?

Because people decide by simulation. Simulation of ordinal preference is easy: I like A over B.  Even then, sometimes you can't seem to decide who is better.

Working out precisely how much I approve of A versus B is harder.  It takes a lot of effort and the basic simulation approach responds heavily to how good you feel about A losing to B, not about how much B satisfies you on a scale of 0 to A.

Score and approval voting source a high-error, low-confidence sample.  It's like recording climate data by licking your finger and holding it in the wind each day, then writing down what you think is the temperature.  Someone will say, "it's more data than warmer/colder trends!" While ignoring that you are not Mercury in a graduated cylinder.


On Fri, Jun 21, 2019, 3:10 PM Felix Sargent <[hidden email]> wrote:
Valuation can be ordinal, in that you can know that 3 is more than 2.
There are two questions before us: Which voting method collects more data? Which tabulation method picks the best winner from that data?

Which voting method collects more data?
Cardinal voting collects higher resolution data than ordinal voting. Consider this thought experiment. If I give you a rating of A:5 B:2 C:1 D:3 E:5 F:2, you can create an ordered list from that -- AEDFBC. If I gave you AEDFBC, you couldn't convert that back into its cardinal data.
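Felix's thought experiment is easy to sketch in code (an illustrative snippet, not from the thread; the tie-breaking order for equal scores is arbitrary):

```python
# A score ballot collapses to a ranking, but the reverse loses the gaps.
scores = {"A": 5, "B": 2, "C": 1, "D": 3, "E": 5, "F": 2}

# Sort by descending score; ties (A/E, B/F) keep input order here,
# so this prints AEDBFC -- the same ordering as AEDFBC up to ties.
ranking = sorted(scores, key=lambda c: -scores[c])
print("".join(ranking))

# From the ranking alone, all we can reconstruct is evenly spaced
# placeholder values: the original 5-5-3-2-2-1 spacing is gone.
recovered = {c: len(ranking) - i for i, c in enumerate(ranking)}
print(recovered)
```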

Which tabulation picks a better winner from the data?
Both Score and Approval voting pick the person with the highest votes.
Summing ordinal data, on the other hand, is very complicated, as one has to avoid loops. Methods like Condorcet or IRV have been proposed to eliminate those, but ultimately they're hacks for dealing with incomplete information.


On Fri, Jun 21, 2019 at 5:23 AM John <[hidden email]> wrote:
Voters can't readily provide meaningful information via score voting. It's highly strategic, and the comparison of cardinal values is not natural.

All valuation is ordinal.  Prices are based on cost; but what people WILL pay, given no option to pay less, is based on ordinal comparison.

Is X worth 2 Y?

For the $1,000 iPhone I could have a OnePlus 6t and a Chromebook. The 6t...I can get a cheaper smartphone, but I prefer the 6t to that phone plus whatever else I buy.

I have a higher paying job, so each dollar is worth fewer hours, so the ordinal value of a dollar to me is lower.  $600 of my dollars is fewer hours than $600 minimum wage dollars.  I have access to my most-preferred purchases and can buy way down into my less-preferred purchases.

Information about this is difficult to pin down per voter.  Prices in the stock market are set by a constant, public auction among millions of buyers and sellers.  A single buyer can hardly price one stock against another, and instead prices against what they think their gains will be relative to the current price.

When pricing candidates, you'll see something a lot like Mohs hardness: 2 is 200, 3 is 500, 4 is 1,500; but we label things that are 250 or 450 as 2.5, and likewise anything between 500 and 1,500 as 3.5.  Being between X and Y is always, most intuitively, simply halfway between X and Y.

The rated system sucks even before you factor in strategic concerns (which only matter if actually using a score-driven method).

Approval is just low-resolution (1 bit) score voting.

On Fri, Jun 21, 2019, 12:01 AM C.Benham <[hidden email]> wrote:

Forest,

With paper and pencil ballots and the voters only writing in their numerical scores it probably isn't very practical for the Australian Electoral Commission
hand vote-counters.

But if it isn't compulsory to mark each candidate and the default score is zero, I'm sure the voters could quickly adapt.

In the US I gather that there is at least one reform proposal to use this type of ballot. One of these, "Score Voting" aka "Range Voting",
proposes to just use Average Ratings, with, I gather, the default score being "no opinion" rather than zero, and some tweak to prevent an unknown
candidate from winning.

So it struck me that if we can collect such a large amount of detailed information from the voters then we could do a lot more with it, and if we
want something that meets the Condorcet criterion this is my suggestion.

Chris Benham

https://rangevoting.org/

How score voting works:

  1. Each vote consists of a numerical score within some range (say 0 to 99) for each candidate. Simpler is 0 to 9 ("single digit score voting").
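A minimal sketch of the Average Ratings tally described above, with blank scores treated as "no opinion" and a simple quorum rule standing in for the unspecified "tweak" (the quorum threshold and the ballot data are illustrative assumptions, not the actual Range Voting proposal):

```python
# Average Ratings on 0-99 ballots; None means "no opinion" rather than 0.
def average_ratings(ballots, quorum=0.5):
    candidates = {c for b in ballots for c in b}
    results = {}
    for c in candidates:
        opinions = [b[c] for b in ballots if b.get(c) is not None]
        # Illustrative quorum: ignore candidates scored by too few voters,
        # so an unknown with one stray 99 can't win.
        if len(opinions) >= quorum * len(ballots):
            results[c] = sum(opinions) / len(opinions)
    return max(results, key=results.get)

ballots = [
    {"A": 99, "B": 50, "C": None},
    {"A": 80, "B": 60, "C": None},
    {"A": 10, "B": 70, "C": 99},   # lone fan of little-known C
]
print(average_ratings(ballots))   # A averages 63.0 vs B's 60.0; C misses quorum
```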

On 21/06/2019 5:33 am, Forest Simmons wrote:
Chris, I like it especially the part about naive voters voting sincerely being at no appreciable disadvantage while resisting burial and complying with  the CD criterion. 

From your experience in Australia where full rankings are required (as I understand it) what do you think about the practicality of rating on a scale of zero to 99, as compared with ranking a long list of candidates?  Is it a big obstacle?

----
Election-Methods mailing list - see https://electorama.com/em for list info


Re: [EM] High Resolution Inferred Approval version of ASM

John
My point is mostly that score is useless, and hybrid methods are essentially trying to cover for Score by incorporating it while avoiding its use in practice.  It's kind of like saying you have a new ear infection treatment where you use amoxicillin, and if that doesn't work you attach leeches to the earlobes.

I have found that even e.g. Tideman's Alternative resists burying, although I cover it with a robust candidate selection via a proportional primary election specifically to prevent formation of useful oligarchy coalitions.  Someone should quantify "resists burying" for all these methods one day.

Note that resistance doesn't mean burying does nothing.  In some 4-candidate examples, I had to inflate a candidate's voter base (to about 31% in one example) to eliminate the Condorcet winner, and the practical result was that if 4% of voters whose first choice was the Condorcet winner instead preferred a candidate less desirable to the burying coalition, that candidate was elected.  In simple terms, it produced worse results for the tactical voters than if they had voted honestly.

The single-election approach simply cannot provide a good election on its own for statistical reasons, and mixing bad rules into good rules won't make better rules.

On Sun, Jun 23, 2019, 12:50 PM C.Benham <[hidden email]> wrote:


On 22/06/2019 9:15 am, John wrote:

The great purported benefit of score systems is that more voters can rank A over B, yet due to the scores, score can elect B:

John,

Is every method that uses score ballots a "score system"?   My suggested VIASME method meets Smith and therefore avoids
the "benefit" you refer to.

Wrapping it in a better system and using that information to make auxiliary decisions is still incorporating bad data.  Bad data is worse than no data.

As it relates to VIASME, I'm afraid you've lost me. A few years ago James Green-Armytage proposed a Condorcet method that asked the voters to both
rank the candidates (with equal ranking and truncation allowed) and also give each of them a high-resolution score and the ranking and the scoring
had to be consistent with each other.  If there was a Condorcet winner the scoring was ignored.

Well it seems to me that the ranking is a redundant extra chore for the voter because it can be inferred from the scoring. That is what I propose for
VIASME.  The Green-Armytage method was called Cardinal-Weighted Pairwise  and was designed to try to resist Burial strategy. He had a simpler-ballot
version called Approval-Weighted Pairwise. One of the reasons I don't much like it is that it can elect a candidate that is pairwise-beaten by a more approved
candidate.

https://electowiki.org/wiki/Cardinal_pairwise

On 22/06/2019 8:57 am, Felix Sargent wrote:

That's not even going into what happens when a voter ranks an ordinal ballot strategically, placing "guaranteed losers" to 2nd and 3rd places in order to improve the chances of their first choice candidate (in IRV at least).

Felix, the Burial strategy you describe doesn't work in IRV because your 2nd and 3rd place preferences won't be counted if your first choice candidate is still alive.
It is methods that fail Later-no-Help (such as all the Condorcet methods) that are vulnerable to that, some more than others.

Chris Benham

On 22/06/2019 9:15 am, John wrote:
The error comes when you make inferences.

The great purported benefit of score systems is that more voters can rank A over B, yet due to the scores, score can elect B:

A:1.0 B:0.9 C:0.1
C:1.0 A:0.5 B:0.4
B:1.0 A:0.2 C:0.1

A=1.7, B=2.3, C=1.2

B defeats A on total score, despite two of the three voters ranking A over B.

If the first voter instead scores B as 0.2 (keeping the same A>B>C ranking), A wins.

Whenever a system attempts to use score or its low-resolution Approval variant, it is relying on this information.

So why does this matter?

The voters are 100% certain and precise that these are their votes:

A>B>C
C>A>B
B>A>C

We know A defeats B, A defeats C, and B defeats C.  A is the Condorcet winner.
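These tallies can be verified mechanically (a sketch following the three ballots above):

```python
# The three score ballots from the example.
ballots = [
    {"A": 1.0, "B": 0.9, "C": 0.1},
    {"C": 1.0, "A": 0.5, "B": 0.4},
    {"B": 1.0, "A": 0.2, "C": 0.1},
]

# Score totals: B has the highest sum.
totals = {c: round(sum(b[c] for b in ballots), 2) for c in "ABC"}
print(totals)  # {'A': 1.7, 'B': 2.3, 'C': 1.2}

# Pairwise comparisons: A beats each rival on a majority of ballots,
# making A the Condorcet winner despite B's higher score total.
def beats(x, y):
    return sum(b[x] > b[y] for b in ballots) > len(ballots) / 2

print(all(beats("A", z) for z in "BC"))  # True
```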

For score votes, 1.0 is always 1.0.  It's the first rank, the measure.  This is of course another source of information distortion in cardinal systems: how is the information meaningful as a comparison between two voters?

How do you know 10 voters scoring A at 1.0 aren't half as invested in A as 6 voters scoring B at 1.0, making it really A=5, B=6?

Ten of us prefer strawberry to peanut butter.

Six of us WILL DIE IF YOU OPEN A JAR OF PEANUT BUTTER HERE.

Score systems claim to represent this and capture this information, but they can't.

(Notice I used the negative: that 1.0 vote is an expression of the damage of their 0.0-scored alternative.)

Even setting that aside, however, you have a problem where an individual might put down 0.7 or 0.9 or 0.5 for the SAME candidate in the SAME election, solely based on how bad they are at creating a cardinal comparison.  Humans are universally bad at cardinal comparison.

So now you can actually elect A, B, or C based on how well-rested people are, how hungry they are, or anything else that impacts their mood and thus the sharpness or softness by which they critically compare candidates.

It's a sort of random number generator.

Wrapping it in a better system and using that information to make auxiliary decisions is still incorporating bad data.  Bad data is worse than no data.

On Fri, Jun 21, 2019, 7:27 PM Felix Sargent <[hidden email]> wrote:
I don't know how you can think that blurrier data would end up with a more precise result.
No matter how you cut it, if you rank ABCD then it translates into a score of
A: 1.0
B: 0.75
C: 0.5
D: 0.25
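The straight-line conversion above can be written out (a sketch; the evenly spaced scale is exactly the assumption under debate):

```python
# Map a ranking to evenly spaced scores: with n candidates, first place
# gets 1.0 and each later place drops by 1/n. Real preference gaps are
# flattened onto this straight line.
def rank_to_scores(ranking):
    n = len(ranking)
    return {cand: (n - i) / n for i, cand in enumerate(ranking)}

print(rank_to_scores("ABCD"))
# {'A': 1.0, 'B': 0.75, 'C': 0.5, 'D': 0.25}
```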

There's no way of describing differences between candidates beyond a straight line between first place and last place.
Even if the voter is imprecise about the difference between A and B, they will never make the error of rating B above A; whereas the gap between a voter's actual preferences and the preferences recorded on an ordinal ballot can be massive. Consider: I like A and B but HATE C. "ABC" does not tell you that.
That's not even going into what happens when a voter ranks an ordinal ballot strategically, placing "guaranteed losers" to 2nd and 3rd places in order to improve the chances of their first choice candidate (in IRV at least).

Your analysis depends on the question of how intelligent you believe the average voter to be.
If voters can use Amazon and Yelp star ratings, they can do score voting.



Re: [EM] High Resolution Inferred Approval version of ASM

C.Benham

On 24/06/2019 3:13 am, John wrote:

The single-election approach simply cannot provide a good election on its own for statistical reasons, and mixing bad rules into good rules won't make better rules.

John,

"Bad rules" give bad results.  Instead of this dismissive philosophical hand-waving, why don't you be so kind as to furnish an example where VIASME
gives a result that you consider to be bad (and where your specified preferred method gives a better result)?

Chris Benham



Re: [EM] High Resolution Inferred Approval version of ASM

Chris Benham-2
In reply to this post by Richard Lung

Richard L,

Can you please expand a bit on the meaning and relevance of your profound observation?

What is your working definition of a "points system"? (I can perhaps guess from your reference to the Borda method.)

How is your reference to some variants of the multi-winner Single Transferable Vote algorithm relevant to the discussion of a single-winner method?

Chris Benham

On 23/06/2019 11:54 pm, Richard Lung wrote:
Points systems (Borda method is the archetype) are an assumed weighting of preferences. Gregory method transfer value or Meek method keep values are a real weighting of preferences.

Richard L.

On 20/06/2019 21:03, Forest Simmons wrote:
Chris, I like it especially the part about naive voters voting sincerely being at no appreciable disadvantage while resisting burial and complying with?? the CD criterion.??

From your experience in Australia where full rankings are required (as I understand it) what do you think about the practicality of rating on a scale of zero to 99, as compared with ranking a long list of candidates??? Is it a big obstacle?


Re: [EM] High Resolution Inferred Approval version of ASM

Richard Lung

Thank you for asking.
It's standard statistics. I have referred to it occasionally over the years.
To give a more representative summary of classes of data, they may be weighted. If no accurate information is available, the weights to respective classes may be assumed. Hence Borda method fits the statistical description, weighting in arithmetic progression. JFS Ross, Elections and Electors, 1955, suggested that the weighting would be more realistic using the geometric mean. This would be weighting in geometric progression. The British broadcaster Robin Day favored weighting in harmonic progression!
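The three assumed weightings can be set side by side (illustrative values for five preference positions; the common ratio of 1/2 for the geometric case is my assumption, not Ross's figure):

```python
from fractions import Fraction

n = 5  # preference positions: 1st .. 5th

# Borda: weights in arithmetic progression.
arithmetic = [n - i for i in range(n)]               # 5, 4, 3, 2, 1

# Ross: weights in geometric progression (ratio 1/2 assumed here).
geometric = [Fraction(1, 2 ** i) for i in range(n)]  # 1, 1/2, 1/4, 1/8, 1/16

# Robin Day: weights in harmonic progression.
harmonic = [Fraction(1, i + 1) for i in range(n)]    # 1, 1/2, 1/3, 1/4, 1/5

for name, weights in [("arithmetic", arithmetic),
                      ("geometric", geometric),
                      ("harmonic", harmonic)]:
    print(name, [str(w) for w in weights])
```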
But the point is they are all assumptions. This is the basic drawback to score voting systems.
The other standard statistical phrase is weighting in arithmetic proportion, which applies when statisticians have the weighting data for the proportionate importance of the classes of data. An example of this well-defined count is the Gregory weighting of the total transferable vote or alternatively, and more consistently, the Meek method keep values.
Of course, this accurate count applies only to candidates' surplus votes, not to deficit votes. That is, until FAB STV.
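Richard's three "assumed" weighting schedules are easy to make concrete. The sketch below is purely illustrative (the function names and the ratio parameter are my own assumptions): the arithmetic progression of Borda, the geometric progression Ross suggested, and the harmonic progression Robin Day favored, for a ballot ranking n candidates.

```python
# Illustrative weight schedules for a ballot ranking n candidates.
# These are the "assumed" weightings under discussion, as opposed to
# Gregory/Meek transfer values, which are computed from the votes themselves.

def arithmetic_weights(n):
    """Borda: n-1 points for first place, n-2 for second, ..., 0 for last."""
    return [n - 1 - i for i in range(n)]

def geometric_weights(n, r=0.5):
    """Each rank is worth a fixed ratio r of the rank above it (Ross's idea)."""
    return [r ** i for i in range(n)]

def harmonic_weights(n):
    """1, 1/2, 1/3, ... (the progression Robin Day favored)."""
    return [1 / (i + 1) for i in range(n)]
```

Whichever progression is chosen, the weights are fixed in advance by the rule rather than derived from the ballots, which is precisely the assumption being criticised.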

By the way, as far as method of counting is concerned, FAB STV is unlike traditional STV in that it does not distinguish between AV and STV, because only the latter is PR with potential surplus transfers. Consequently, there is no special "single winner method" with FAB STV.
But there is a but which, without going into details, is essentially JS Mill's distinction between democracy and maiorocracy.

Richard L.

On 24/06/2019 15:58, Chris Benham wrote:

Richard L,

Can you please expand a bit on the meaning and relevance of your profound observation?

What is your working definition of a "points system"? (I can perhaps guess from your reference to the Borda method.)

How is your reference to some variants of the multi-winner Single Transferable Vote algorithm relevant to the discussion of a single-winner method?

Chris Benham



Re: [EM] High Resolution Inferred Approval version of ASM

C.Benham

Richard L,

You didn't exactly answer my question (What is your working definition of a "points system"?).
I infer from what you write that you are talking about methods that use ranking ballots and just award points according
to some predetermined fixed schedule: so many points for being ranked first, so many for being ranked second, and
so on, and then just elect the candidate with the highest (or, as with one version of Borda I've heard of, the lowest) total score.

Why do you think that is relevant to my suggested VIASME method?  To refresh your memory:

This is my favourite Condorcet method that uses high-intensity Score ballots (say 0-100):

*Voters fill out high-intensity Score ballots (say 0-100) with many more available distinct scores
(or rating slots) than there are candidates. Default score is zero.

1. Inferring ranking from scores, if there is a pairwise beats-all candidate that candidate wins.

2. Otherwise infer approval from score by interpreting each ballot as showing approval for the
candidates it scores above the average (mean) of the scores it gives.
Then use Approval Sorted Margins to order the candidates and eliminate the lowest-ordered
candidate.

3. Among remaining candidates, ignoring eliminated candidates, repeat steps 1 and 2 until
there is a winner.*

To save time we can start by eliminating all the non-members of the Smith set and stop when
we have ordered the last 3 candidates and then elect the highest-ordered one.

https://electowiki.org/wiki/Approval_Sorted_Margins

In the simple 3-candidate case this is the same as Approval Sorted Margins where the voters signal
their approval cut-offs just by leaving a large gap in the scores they give.
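For concreteness, here is a rough sketch of the procedure just described (my own illustrative code, not a reference implementation; the name `viasme`, the ballot format, and the arbitrary handling of approval ties are all my assumptions). Ballots are dicts mapping candidates to scores, with unlisted candidates defaulting to zero.

```python
# Rough sketch of VIASME: Condorcet check on inferred rankings, with
# Approval Sorted Margins elimination when there is no beats-all candidate.

def pairwise_wins(ballots, cands):
    """wins[a][b] = number of ballots scoring a strictly above b."""
    wins = {a: {b: 0 for b in cands if b != a} for a in cands}
    for ballot in ballots:
        for a in cands:
            for b in cands:
                if a != b and ballot.get(a, 0) > ballot.get(b, 0):
                    wins[a][b] += 1
    return wins

def beats_all(wins, cands):
    """Step 1: return the pairwise beats-all candidate, or None."""
    for a in cands:
        if all(wins[a][b] > wins[b][a] for b in cands if b != a):
            return a
    return None

def approvals(ballots, cands):
    """Step 2: a ballot approves the candidates it scores above its own mean."""
    app = {c: 0 for c in cands}
    for ballot in ballots:
        mean = sum(ballot.get(c, 0) for c in cands) / len(cands)
        for c in cands:
            if ballot.get(c, 0) > mean:
                app[c] += 1
    return app

def approval_sorted_margins(ballots, cands):
    """Seed an ordering by approval, then repeatedly swap the adjacent
    pairwise-out-of-order pair with the smallest approval margin."""
    wins, app = pairwise_wins(ballots, cands), approvals(ballots, cands)
    order = sorted(cands, key=lambda c: -app[c])
    while True:
        bad = [i for i in range(len(order) - 1)
               if wins[order[i + 1]][order[i]] > wins[order[i]][order[i + 1]]]
        if not bad:
            return order
        i = min(bad, key=lambda i: abs(app[order[i]] - app[order[i + 1]]))
        order[i], order[i + 1] = order[i + 1], order[i]

def viasme(ballots, cands):
    """Repeat steps 1-3 until a beats-all candidate emerges."""
    cands = list(cands)
    while True:
        wins = pairwise_wins(ballots, cands)
        winner = beats_all(wins, cands)
        if winner is not None:
            return winner
        cands.remove(approval_sorted_margins(ballots, cands)[-1])
```

With a pairwise beats-all candidate, step 1 decides immediately; otherwise the lowest-ordered candidate under Approval Sorted Margins is eliminated and the steps repeat among the remainder, as in steps 2 and 3 above.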

It could be that you have misunderstood what I mean by "high-intensity Score ballots". It has nothing to
do with anything Borda-like. The voters assign however many points to each candidate as they wish.

In the US, "Score Voting" (formerly and also called "Range Voting") is a version of Average Ratings where
the voters give candidates any score they like in the 0-99 inclusive range.

Actually since in VIASME the scores are only used to infer ranking and sometimes approval, the individual voters
can in theory use any range of scores they like.

Chris Benham

On 25/06/2019 4:09 am, Richard Lung wrote:

Thank you for asking.
It's standard statistics. I referred to it occasionally over the years.
To give a more representative summary of classes of data, they may be weighted. If no accurate information is available, the weights for the respective classes may be assumed. Hence the Borda method fits the statistical description: weighting in arithmetic progression. JFS Ross, Elections and Electors, 1955, suggested that the weighting would be more realistic using the geometric mean, that is, weighting in geometric progression. The British broadcaster Robin Day favored weighting in harmonic progression!
But the point is they are all assumptions. This is the basic drawback to score voting systems.
The other standard statistical phrase is weighting in arithmetic proportion, which applies when statisticians have the weighting data for the proportionate importance of the classes of data. An example of this well-defined count is the Gregory weighting of the total transferable vote or alternatively, and more consistently, the Meek method keep values.
Of course, this accurate count applies only to candidates' surplus votes, not to deficit votes. That is, until FAB STV.

By the way, as far as method of counting is concerned, FAB STV is unlike traditional STV in that it does not distinguish between AV and STV, because only the latter is PR with potential surplus transfers. Consequently, there is no special "single winner method" with FAB STV.
But there is a but which, without going into details, is essentially JS Mill's distinction between democracy and maiorocracy.

Richard L.



Re: [EM] The Problem with Score Voting and Approval Voting

robert bristow-johnson
In reply to this post by Richard Lung



---------------------------- Original Message ----------------------------
Subject: Re: [EM] The Problem with Score Voting and Approval Voting
From: "Richard Lung" <[hidden email]>
Date: Sun, June 23, 2019 7:26 am
To: "EM" <[hidden email]>
--------------------------------------------------------------------------

> I agree with all this.
> It was said long ago, with regard to many votes per seats and cumulative
> voting, as by Enid Lakeman: Multiple votes count against each other.
> Single transferable voting is the way to go.

Richard, do you mean specifically "[IRV] is the way to go" or that the use of the ordinal ranked-ballot is the way to go?

Single transferable vote [STV] means "IRV" to me and at one time I thought it would make little difference in outcome in comparison to Condorcet.  But 10 years ago, in my very own town, I found out differently.


--

r b-j                         [hidden email]

"Imagination is more important than knowledge."


Re: [EM] High Resolution Inferred Approval version of ASM

Richard Lung
In reply to this post by C.Benham
The Borda method I take to be just the archetype of points systems.
Typically, I am talking about all those systems that assign or allow some count of the vote to the voters, instead of allowing the count to come from counting the votes. These systems tend to treat the voters like their own returning officers. VIASME appears to be a case in point: I don't see its decisive difference in principle from the score voting family.
A noticeable tendency, I put it no stronger than that, of points systems or score voting is to promote single-stage counts, without appreciating that their primitive example, simple plurality (FPTP), is just an abandoned count after one stage of election.

As to Condorcet pairing, it discards the information of the over-all ranking of the candidates. This is the stock criticism, first made by Laplace in favor of Borda. But since JB Gregory's count in stages, we don't need to assume the relative weight of support for a range of candidates, so far as surplus transfer of votes is concerned. (FAB STV introduces real relative weighting of preference for quota-deficit as well as surplus-voted candidates.)

Richard L.



On 24/06/2019 20:29, C.Benham wrote:

Richard L,

You didn't exactly answer my question (What is your working definition of a "points system"?).
I infer from what you write that you are talking about methods that use ranking ballots and just award points according
to some predetermined fixed schedule: so many points for being ranked first, so many for being ranked second, and
so on, and then just elect the candidate with the highest (or, as with one version of Borda I've heard of, the lowest) total score.

Why do you think that is relevant to my suggested VIASME method?  To refresh your memory:

This is my favourite Condorcet method that uses high-intensity Score ballots (say 0-100):

*Voters fill out high-intensity Score ballots (say 0-100) with many more available distinct scores
(or rating slots) than there are candidates. Default score is zero.

1. Inferring ranking from scores, if there is a pairwise beats-all candidate that candidate wins.

2. Otherwise infer approval from score by interpreting each ballot as showing approval for the
candidates it scores above the average (mean) of the scores it gives.
Then use Approval Sorted Margins to order the candidates and eliminate the lowest-ordered
candidate.

3. Among remaining candidates, ignoring eliminated candidates, repeat steps 1 and 2 until
there is a winner.*

To save time we can start by eliminating all the non-members of the Smith set and stop when
we have ordered the last 3 candidates and then elect the highest-ordered one.

https://electowiki.org/wiki/Approval_Sorted_Margins

In the simple 3-candidate case this is the same as Approval Sorted Margins where the voters signal
their approval cut-offs just by leaving a large gap in the scores they give.

It could be that you have misunderstood what I mean by "high-intensity Score ballots". It has nothing to
do with anything Borda-like. The voters assign however many points to each candidate as they wish.

In the US, "Score Voting" (formerly and also called "Range Voting") is a version of Average Ratings where
the voters give candidates any score they like in the 0-99 inclusive range.

Actually since in VIASME the scores are only used to infer ranking and sometimes approval, the individual voters
can in theory use any range of scores they like.

Chris Benham

