
Everything works fine if we replace the random ballot distribution with an estimate of winning probabilities not determined from the rankings.
A discussion of how to do that is beyond the scope of this message, but a quick and dirty way would be to have the voters indicate which of the alternatives they consider to be viable, and make the respective probability estimates proportional to the number of viability marks.
With that adjustment we can restore the original symmetry of Joe Weinstein's rule. The DSV version of approval based on rankings becomes... for each ballot B and each alternative X, approve X on B if and only if the alternatives ranked strictly below X on B have greater total winning probability than those ranked strictly above X.
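A minimal sketch of this per-ballot rule in Python (illustrative names, not from the original post; `probs` is the externally estimated winning-probability table, and the rule follows Weinstein's criterion of approving X when the winner is more likely to come from below X than from above):

```python
def dsv_approvals(ballot, probs):
    """ballot: list of sets of alternatives, best tier first.
    probs: dict mapping each alternative to its estimated winning probability.
    Approve X iff the alternatives ranked strictly below X carry more
    total winning probability than those ranked strictly above X."""
    approved = set()
    for i, tier in enumerate(ballot):
        above = sum(probs[a] for t in ballot[:i] for a in t)
        below = sum(probs[a] for t in ballot[i + 1:] for a in t)
        if below > above:
            approved |= tier
    return approved
```

For instance, with estimated probabilities A: 0.4, B: 0.35, C: 0.25, a voter ranking B > C > A approves B and C but not A.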
Now for decloned Copeland: the decloned Copeland score of alternative X is the sum of the probabilities of the alternatives pairwise beaten by X, minus the sum of the probabilities of the alternatives that beat X pairwise. The alternative with the highest score is declared the winner!
Notice that if X covers Y, and Y has positive probability, then X has a greater score than Y. In other words, this version of Copeland preserves the Landau property: it always elects uncovered alternatives.
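As a sketch (illustrative names and data structures, assumed rather than taken from the post), the decloned Copeland score and winner can be computed as:

```python
def decloned_copeland_winner(alternatives, beats, probs):
    """alternatives: list of alternatives.
    beats: beats(x, y) -> True iff x pairwise beats y.
    probs: dict of estimated winning probabilities.
    Score(X) = sum of probs of alternatives X beats, minus
               sum of probs of alternatives that beat X."""
    def score(x):
        wins = sum(probs[y] for y in alternatives if y != x and beats(x, y))
        losses = sum(probs[y] for y in alternatives if y != x and beats(y, x))
        return wins - losses
    return max(alternatives, key=score)
```

Note how a zero-probability clone contributes nothing to anyone's score, which is what makes this version clone-resistant where raw Copeland is not.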
On Wednesday, December 2, 2020, <[hidden email]> wrote: ...
1. Re: Copeland Done Right (fatal flaw) (Forest Simmons)
2. Proof idea that IRV can't be summable (Kristofer Munsterhjelm)

Message: 1
Date: Tue, 1 Dec 2020 13:15:07 -0800
From: Forest Simmons <[hidden email]>
To: EM <[hidden email]>
Subject: Re: [EM] Copeland Done Right (fatal flaw)
Message-ID:
<[hidden email]>
Content-Type: text/plain; charset="utf-8"
Fatal flaw in the monotonicity argument: it turns out that raising X to Top
on one ballot might increase the approval of Y on other ballots where
alternative Y is still ranked higher than X. So even though we have shown
that the approval of X does not decrease, there is a possibility that the
approval of alternative Y might surpass it.
On Monday, November 30, 2020, Forest Simmons <[hidden email]> wrote:
> A while back I made an attempt to declone Copeland while preserving the
> property of electing uncovered alternatives. Although I got tantalizingly
> close I could not quite pull it off at the time. But recent discussions
> about the difficulty voters have deciding approval cut offs have led me to
> explore various ideas one of which gave me the key to success in our old
> Copeland sprucing up endeavor.
>
> Although it is tempting to completely remove the scaffolding and reveal the
> solution in its Stark Beauty with no trace of the method of discovery, in
> honor of Leonhard Euler and with no disrespect for Carl Friedrich Gauss I
> would like to lead you through the successful line of thinking, hoping that
> you will enjoy the journey as much as the destination.
>
> As I mentioned above, pondering on approval strategy got me started on the
> right path. In particular, an idea Joe Weinstein suggested in the early
> days of the EM list: approve an alternative X if and only if it seems more
> likely for the winner to be someone you like less than X than for the
> winner to be someone you like more than X.
>
> Two immediate corollaries of this rule are to always approve your favorite
> and never approve your most despised alternative, since there is no
> likelihood at all that the winner will be an alternative that you like more
> than your favorite nor is there any likelihood that the winner will be an
> alternative that you like less than your most despised.
>
> Another corollary, as Weinstein pointed out, is that when there are two
> clear frontrunners, and you like one of them better than the other, you
> should approve that one but not the other. How about the alternatives
> in between? Approve them only if the frontrunner that you approved is less
> likely to win than the one you did not approve.
>
> What if all of the candidates seem equally likely to win ... in other
> words what if we have zero information about winning probabilities? Then
> Weinstein's rule posits that we should approve every alternative above the
> median and disapprove every alternative below the median, and flip a coin
> to decide about the median alternative itself.
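The zero-information case of Weinstein's rule might be sketched as follows (illustrative code; the coin flip is an explicit random draw, and a strict ranking is assumed):

```python
import random

def weinstein_zero_info(ranking, rng=random):
    """ranking: strict ranking, best first, all winning probabilities equal.
    Approve everything strictly above the median, reject everything
    strictly below it, and flip a coin for the median alternative
    (which exists only when the number of alternatives is odd)."""
    n = len(ranking)
    approved = set(ranking[:n // 2])          # strictly above the median
    if n % 2 == 1 and rng.random() < 0.5:     # coin flip for the median
        approved.add(ranking[n // 2])
    return approved
```

With five candidates ranked A > B > C > D > E, this always approves A and B, never approves D or E, and approves C on a coin flip.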
>
> This zero information case exposes two weaknesses of the rule: (1) unless
> the winning probabilities respect clone sets, the rule gives clone
> dependent advice, and (2) it cannot truly give optimal approval advice
> because it takes into account only ordinal as opposed to cardinal
> information beyond the likelihood estimates themselves.
>
> Compare, for example, the optimal zero-info strategy that takes
> objectively quantifiable ratings (e.g. dollar costs/benefits) into account
> when they are available: approve every above-mean-rated alternative.
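For contrast, that cardinal zero-info strategy is essentially a one-liner (a sketch; `ratings` is a hypothetical map from alternatives to numeric utilities):

```python
def above_mean_approval(ratings):
    """Zero-info strategy with cardinal ratings: approve every
    alternative whose rating is strictly above the ballot's mean."""
    mean = sum(ratings.values()) / len(ratings)
    return {x for x, r in ratings.items() if r > mean}
```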
>
> So for now, with Weinstein we humbly settle for doing the best we can with
> rankings as opposed to ratings.
>
> So back to (1) ... how do we declone Weinstein's rule? Here we make use
> of a standard clone independent probability distribution as a plausible
> surrogate for "winning probabilities:" namely the random ballot probability
> distribution ... after all if the winner were chosen by random ballot (a
> clone independent method of election) the random ballot distribution would
> be by definition the distribution of winning probabilities. Note by way of
> contrast that the distribution we resorted to in the zero-info case above
> was the "random candidate" distribution. But why settle for that when we
> have access to the (clone independent and information rich) random (ballot)
> favorite probabilities as soon as the ballots are tallied?
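The random (ballot) favorite distribution is easy to tabulate once the ballots are in (a sketch assuming each ballot has a unique first preference; equal-first ties, which matter later for monotonicity, would need their probability shares split):

```python
from collections import Counter
from fractions import Fraction

def random_ballot_probs(ballots):
    """Each alternative's winning probability under random ballot is the
    share of ballots on which it is ranked first (unique tops assumed)."""
    firsts = Counter(b[0] for b in ballots)
    n = len(ballots)
    return {x: Fraction(c, n) for x, c in firsts.items()}
```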
>
> It was disappointing the first time I tried implementing Weinstein's rule
> with random ballot probabilities ... and reminiscent of our recent
> disappointment in our efforts to declone Copeland; the clone problem was
> solved, but at the cost of losing monotonicity (mono-raise).
>
> Weinstein's rule has a certain symmetry comparing winning probabilities
> above and below the alternative in question. As it happens in my most
> recent attempts I considered giving partial approval to the "cutoff
> alternative," i.e. the one with a majority of the probability neither
> strictly above nor strictly below it. Something kept drawing me back to
> this idea... perhaps
> we could use something like Andy's mental coin flip estimate of whether the
> cutoff alternative was closer to Top or Bottom to decide whether to approve
> it or not ... I was willing to abandon purely ordinal ballots if necessary
> to get something useful out of this!
>
> The turning point came when I finally got the courage to give up on
> symmetry and always approve or always disapprove the cutoff alternative.
> There did not seem to be any a priori way to decide between these two
> extremes because on the one hand always including could mean approving
> Bottom if Bottom had 51 percent of the probability or disapproving Top if
> Top had 51 percent of the probability. Which would be worse?
>
> If you think about it, the first of these two bad approval decisions is
> the one that is harmless ... why? Because if Bottom has 51 percent of the
> (random ballot) probability, then any decent rankings based deterministic
> method should elect Bottom ... so no harm done.
>
> So here is the DSV (declared strategy voting) method for automatically
> transforming ranked ballots into approval ballots:
>
> First tabulate the random ballot probabilities.
>
> Then on each ballot B, approve each alternative X such that the combined
> random ballot probability of the winner being ranked strictly ahead of
> (i.e. above) X on ballot B is at most fifty percent.
>
> In other words if there is an even chance or greater that the winner of a
> random ballot election would be ranked (by ballot B) below X or equal with
> X, then approve X.
>
> If you like, you could distinguish between truncation and being "ranked" at
> the bottom. So the above rule applies when X is ranked, and no truncated
> alternative is approved, period!
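The transformation above can be sketched as follows (illustrative code; ballots are lists of tiers, best first, truncated alternatives simply absent):

```python
def dsv_approval_ballot(ballot, probs):
    """Approve each ranked alternative X such that the total random-ballot
    probability of the alternatives ranked strictly above X on this ballot
    is at most one half; truncated (unlisted) alternatives never approved."""
    approved = set()
    above = 0.0  # probability mass ranked strictly ahead of the current tier
    for tier in ballot:
        if above <= 0.5:
            approved |= tier
        above += sum(probs.get(a, 0.0) for a in tier)
    return approved
```

With probabilities A: 0.6, B: 0.3, C: 0.1, a ballot A > B > C approves only A (0.6 of the probability sits above B), while a ballot B > C > A approves all three ranked alternatives.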
>
> So let's see how this asymmetry confers mono-raise compliance:
>
> Suppose that the only change is that X is raised on some ballot B. The
> only potential problem is if the probabilities change, and that can only
> happen if X is raised to equal first. That would result in X being
> approved on ballot B ... so far so good.
>
> But what about on some other ballot B'? Could an increase in Prob(X)
> actually move the approval cutoff up so that on ballot B' alternative X
> goes from approved to disapproved?
>
> The answer is no, because whatever amount of probability is lost by the
> alternatives below X on B' is gained by X, so the amount of probability
> less than or equal to X is at least as great as before, so X does not lose
> approval on ballot B'.
>
> It turns out that the same asymmetry trick works to preserve monotonicity
> in decloned Copeland, as I will show in the next message!
>
>
>
>
>

Message: 2
Date: Tue, 1 Dec 2020 23:25:37 +0100
From: Kristofer Munsterhjelm <[hidden email]>
To: EM <[hidden email]>
Subject: [EM] Proof idea that IRV can't be summable
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=utf-8
Here's an idea that may be used to prove that any attempt to find a
clever way of making IRV summable (in a linear sense) will fail: that
it's not just the case that we haven't found such a solution, but that
it can't be done.
It's not a complete proof, but perhaps someone can complete it :) And,
of course, it could be wrong; I could be missing something.
Consider a voting method as a union of convex polytopes expressed as
sets of linear inequalities. The unknown variables are the ballot
numbers: e.g. in the three-candidate, no-truncation case, they're ABC,
ACB, BAC, BCA, CAB, CBA: the number of people who expressed each preference.
Then if the c!-dimensional vector is inside this union of convex
polytopes, A wins (if the union is called a win region); or A wins or
ties for first (if the union is called a win-or-tie region)[1]. As far
as summability is concerned, we can pick whichever is more convenient, as
long as the difference between the win-or-tie region and the win region
is not of full dimensionality (which means that as the number of voters
approaches infinity, the proportion of ties approaches zero). This is, to my
knowledge, the case for IRV.
My strategy is to show that for (almost) any pair of ballot variables,
there exists some win region with some polytope with some inequality
that treats both ballot variables equally by scaling both equally; and
another that treats them differently (by scaling them by different
amounts).
The inequalities need not be in the same polytopes, since the argument
is that the inequality requires the method to distinguish between two
variables (or treat them equally); and thus that any summary must also
allow the method to do so. A violation of this will degrade at least one
of the win regions.
So, letting the two ballot variables be X and Y, the inequality that
treats X and Y equally proves that if IRV is summable, the sums can't
all have a term that amounts to (X - Y), because then the inequality in
question could not treat X and Y equally. The second case proves that if
IRV is summable, the sums can't all have a term that amounts to (X+Y),
because then the inequality can't treat X and Y differently.
If this can be proven for all pairs, then there must be c! different
summed variables (since no pair of ballot variables can be merged into a
sum or kept only as a difference), and since c! is not polynomial in c,
we're done.
Now, to familiarize, consider the win region for A. For IRV, the union
is of the form (three candidate example given here):
"B is eliminated, A wins pairwise against C" or
"C is eliminated, A wins pairwise against B"
If either of these is true, then A wins with certainty; otherwise, A
does not. In this particular case, the inequalities of the first
constituent polytope look something like:
ABC + ACB - BAC - BCA > 0 (A outscores B in the first round)
ABC + ACB - CAB - CBA > 0 (A outscores C in the first round)
CAB + CBA - BAC - BCA > 0 (C outscores B in the first round)
ABC + ACB + BAC + BCA - CAB - CBA > 0 (A beats C pairwise)
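As a sanity check, this constituent polytope can be written directly as code (a sketch; `v` is a hypothetical dict holding the six ballot counts):

```python
def a_wins_via_b_elimination(v):
    """v: dict of counts for the six strict orders over {A, B, C}.
    True iff the profile lies in the first constituent polytope:
    B is eliminated first and A then beats C pairwise."""
    return (v['ABC'] + v['ACB'] - v['BAC'] - v['BCA'] > 0 and   # A > B, round 1
            v['ABC'] + v['ACB'] - v['CAB'] - v['CBA'] > 0 and   # A > C, round 1
            v['CAB'] + v['CBA'] - v['BAC'] - v['BCA'] > 0 and   # C > B, round 1
            v['ABC'] + v['ACB'] + v['BAC'] + v['BCA']
            - v['CAB'] - v['CBA'] > 0)                          # A beats C pairwise
```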
So now take two ballot orders X and Y, but not so that Y is X reversed.
Let the set S_1 be the set of candidates that must be removed so that
the first candidate of X and Y match. Since Y isn't X reversed, this set
is at most two candidates smaller than the set of all candidates. Then X
and Y have a common factor in an inequality belonging to a polytope
where all the candidates of S_1 have been eliminated; because both X and
Y then count towards the first preferences of someone not in S_1, vs
someone else not in S_1.
Then let the set S_2 be the set of candidates that must be removed so
that the first candidate listed differs. Since X is not Y, the set is at
most two candidates smaller than the set of all candidates. Then,
similarly, in the scenario where the candidates in S_2 are all
eliminated, X counts towards someone's first preferences while Y counts
towards someone else's. So if cX is the candidate listed first on X
after the candidates in S_2 have been eliminated, then a polytope
dealing with cX's win region after the candidates in S_2 have been
eliminated will have an X + ... - Y - ... > 0 inequality.
The cases where Y is the exact reverse of X can't be treated this way,
but that's no problem as there are only O(c) of them anyway: hence these
being summable won't make IRV itself polynomially summable.
Caveat: I haven't shown that every pair of constituent polytopes for A's
win region is nonconvex as a whole, and that there are no redundant
inequalities (that each polytope's inequality matrix is full rank).
That's necessary to make sure the win region can't be reconstructed in a
way so that the tricky inequalities disappear. In other words, if I
don't do this, then the following "proof" for the nonsummability of
Plurality would work:
A wins if one of the following is true (one polytope per inequality):
for i = all 2^k subsets of ballots that begin in A
for j = all 2^p subsets of ballots that don't
e_i * x > e_j * x
for some numbers k and p; where x is the ballot vector, and e_i and e_j
are the indicator vectors for the respective sets.
Then one could claim that due to the superpolynomial amount of
inequalities, Plurality is nonsummable. The flaw lies in that e.g. the
inequalities
ABC > CBA
ACB > CBA
are entirely subsumed by the inequality
ABC + ACB > CAB + CBA.
So the method doesn't need to discriminate between two situations that
it seems like it needs to at first glance.
I would also, strictly speaking, need to show that (win or tie region) \
(win region) is not of full dimensionality; or deal with the winortie
region directly.
Finally, I have a feeling there's some kind of more general matrix rank
argument hiding in there somewhere. Perhaps it could be used to show
just what kind of positional elimination systems are nonsummable.
Borda-elimination is summable because you can get the Borda score from
the Condorcet matrix; why is it different? Are there any other systems
like it?
km
[1] Note that these methods are deterministic. If there are any random
tiebreaks, they will have to be determined ahead of time. It's then
clear that some random tiebreaks may make an otherwise summable method
nonsummable if they're "random enough".

Subject: Digest Footer
_______________________________________________
Election-Methods mailing list
[hidden email]
http://lists.electorama.com/listinfo.cgi/election-methods-electorama.com

End of Election-Methods Digest, Vol 197, Issue 2
************************************************

Election-Methods mailing list -- see https://electorama.com/em for list info
