@Suresh
I like this… approximation resistance. There is a lot of approximation resistance in politics.
Of course, with demarchy, you don’t have to rely on uniform distributions… For example, we could designate journal editors based on how many papers they have published in the journal. It would actually make a lot of sense.
The thing is, the idea of having somebody run things, however they are selected, is itself problematic.
Politics has always focused on how to select and support leaders. Information age politics has to be about how to do away with the need for them altogether.
“probabilistic algorithms work better than deterministic algorithms”
Whoa. I disagree. Hash tables are deterministic. Cryptography is about complexity, not probabilities. Machine learning is overrated. For argument’s sake: for every probabilistic algorithm, there is a deterministic one that is at least as good.
Sounds like the base for a scalable solution.
I suspect that there is merit to making very local selection of (potential) representatives, and this made Congress work better when this country was first formed. Wrote about this a ways back.
http://bannister.us/weblog/2006/11/19/making-politics-more-local/
http://bannister.us/weblog/2007/05/12/are-political-parties-the-problem/
The problem with local perceived-merit selection is that it does not scale well to large populations. You want to keep the number of voters somewhat small per selected representative. This allows voters to know more about their representative, and limits the value of subverting the selection process for any one representative.
Add random selection from the pool of local merit-selected candidates, and you have a scalable solution.
Daniel, the notion of ‘approximation resistance’ is interesting here: basically, cases where the best approximation algorithm is one generated at random!
http://cstheory.stackexchange.com/questions/3656/what-are-the-problems-with-the-best-approximation-ratio-achieved-by-algorithm-ret
@beza1e1 that’s not at all true: while we don’t know if randomization has strictly more power, there are many instances (including hashing) where randomization gives power
@U
Jury selection often deals with race and gender. Surely, there are criminal courts in Israel. How does jury selection work there?
Moreover, we too often expect democratic leaders to somehow represent everyone. They are supposedly able to abstract out all of their own biases and interests over an entire career. Of course, this is an illusion. There is no reason to believe that democratic leaders are any more enlightened than others. In fact, because they are necessarily very ambitious people, the opposite might be true.
Any rule you put in place to protect communities can be translated in a demarchy.
As a machine learning researcher, I just need to reject the idea of random decisions performing better than directed ones 🙂 Randomness may help you to explore a new search space (a decision or candidate space) or reduce time, but the search and the final acceptance (in the machine learning case) should be performed in a smart way.
Directed decisions only work better when the algorithm used for decisions is better than random. Very often – especially in the political space – the algorithm is lousy.
Given perfect knowledge, directed decisions should always be better. If knowledge is less than perfect … then the result is far less certain.
Five centuries back, Machiavelli was trying to understand why democracies (looking centuries further back still) sometimes yield a better result than an expert (a well-educated prince). Often, when a prince (an expert) adopts an inappropriate algorithm, the results can be a disaster. Experts are occasionally prone to adopt algorithms (beliefs) that are far from optimal.
Random selection has a place when knowledge is imperfect.
I’ve been interested in demarchy ever since I read about it in Arthur C. Clarke’s “Songs of Distant Earth”.
What bothers me most about this system is that when society is fractured (for example along ethnic, economic, or religious lines), then this system may fail to adequately represent some of the subgroups of society. In cases where there is strong animus between the various groups, this may lead to very harsh results.
I live in Israel. Taking into account all the different communities here, and how they hate each other (and this is definitely not restricted to Arabs and non-Arabs – each group is fractured within itself into mutually hating sub-communities), the thought of randomly choosing a few people who might turn out to misrepresent some communities is very frightening.
It seems to me that the fundamental problem here is the false equivalence stated or implied between government and algorithms. Randomized algorithms, for certain problems, can be superior to (known) deterministic ones. But governance is not an algorithm.
I don’t know how you would disqualify people from governance; some propose to do so based on lack of ownership of property. It seems unlikely that it would be based on an understanding of the world around us. That certainly isn’t the case for democracy, either.
@paberlo
No, plain no. I am not a researcher of any kind, and that may be why I am aware that some instances of search (traveling salesman or spin glasses, for instance) are definitely out of reach of directed algorithms: only simulated annealing can approach (and only approach) the optimal solution; anything else gets stuck in local optima of the fitness landscape.
@Daniel
Israel, like many other countries in the world, has no jury system. See this [http://en.wikipedia.org/wiki/Jury#India] for example.
In fact, the historical reason there are no juries in Israel is exactly the one I gave. Even though Israel (then Palestine) was ruled by the British from 1917 to 1948, and much of the legal system is based on the common law, the British decided a jury system would not work in such a fractured society.
I do not believe for one moment that democratic leaders represent everyone. However, they have to take account of the political power embedded in the representatives of all the groups. In case power is, by virtue of some random selection, conferred to some groups and not others, the fear is that they will use this power unjustly. I do not trust my fellow citizens not to abuse this power if they have it, because emotions run so incredibly high. I believe this can happen anywhere, but particularly in places where the dividing issues run deep.
@U
Fair enough. I don’t know enough about Israel.
@Stiber
Nobody knows how well demarchy would work… except that whenever it is used, it seems to work well.
Shouldn’t you be concerned by the fact that (according to you) the average citizen is unable to understand the world? How can they be trusted to vote in the first place?
@jld
Of course, I agree. By random algorithms I meant purely random algorithms, and by directed I meant both deterministic and randomized-but-directed algorithms.
@Itman and @beza1e1
I didn’t write that hash tables were non-deterministic, though they certainly can be. However, pick up any textbook introduction on hash tables (say Cormen et al.), and you’ll get a probabilistic analysis.
Hash tables are deterministic!!!
+10^100 to beza1e1
Preamble: it looks like my previous comment disappeared.
Sorry, Daniel, but you did:
Meanwhile, most software uses probabilistic algorithms:
Databases use hash tables to recover values in expected constant time;
Classic hashing algorithms are deterministic. That is why one cannot safely say that a hash table is a probabilistic algorithm. Perfect hashing is randomized (at least in some cases).
The argument about probabilistic analysis is not good either. For example, randomly built binary trees have a probabilistic analysis, e.g., to conclude that their average height is O(log N).
Yet, the randomness comes from the data, not from the algorithm itself.
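Itman’s point about where the randomness lives can be checked directly: the insertion algorithm below is completely deterministic, and only the order of the data is random. This is a minimal sketch (plain unbalanced binary search tree, helper names invented for illustration) comparing sorted insertion order against shuffled insertion order:

```python
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Plain (unbalanced) BST insertion; deterministic."""
    if root is None:
        return Node(key)
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = Node(key)
                return root
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = Node(key)
                return root
            cur = cur.right

def height(root):
    """Iterative height, safe even for degenerate (linked-list) trees."""
    best, stack = 0, [(root, 1)]
    while stack:
        node, depth = stack.pop()
        if node is None:
            continue
        best = max(best, depth)
        stack.append((node.left, depth + 1))
        stack.append((node.right, depth + 1))
    return best

n = 2048
keys = list(range(n))

# Sorted insertion order: same deterministic algorithm, degenerate tree.
sorted_root = None
for k in keys:
    sorted_root = insert(sorted_root, k)

# Random insertion order: expected height is O(log n).
random.seed(0)
random.shuffle(keys)
rand_root = None
for k in keys:
    rand_root = insert(rand_root, k)

print(height(sorted_root))  # 2048: a linked list
print(height(rand_root))    # roughly 3 * log2(n)
```

Same code both times; only the data changed, which is exactly the distinction being drawn.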
@Itman
Most random number generators and most hashing are actually deterministic. But they emulate randomness. Down this path you could get into philosophical questions… Does randomness really exist, or is it just a convenient abstraction? Then we could have a debate as to whether “God plays dice”. And so on.
What matters to me is that the hash value of an object “looks random”. We could get into what this statement exactly means… but we both know intuitively what it means.
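The claim that deterministic generators merely emulate randomness is easy to demonstrate. Here is a tiny linear congruential generator (a sketch using the well-known Numerical Recipes constants, not a recommendation for serious use): the output is fully determined by the seed, yet it “looks random”.

```python
def lcg(seed):
    """Linear congruential generator (Numerical Recipes constants).
    Entirely deterministic, yet the output "looks random" in [0, 1)."""
    state = seed % 2**32
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32

# The same seed reproduces the same "random" sequence exactly.
a = [x for _, x in zip(range(5), lcg(7))]
b = [x for _, x in zip(range(5), lcg(7))]
print(a == b)  # True: the randomness is an emulation, not a coin flip
```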
@Itman
Regarding hash tables, it only matters that for the duration of the life of the hash table, the same object gets the same hash value. But you should use a different (randomly picked) hash function for each new hash table. And it is not just a theoretical possibility. One problem with having a fixed hash function, like we do in Java, is that it poses a security threat. If I know the hash function, I can choose the objects so that the performance of your hash table will degenerate to linear complexity, by maximizing collisions. Thus, you could potentially crash a server by exploiting your knowledge of its hash function. Of course, generating new hash functions at random is slightly inconvenient, so we don’t tend to do it often…
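A small sketch of the attack described above (the table size, hash family, and keys are all made up for illustration): with a fixed, publicly known hash function, an adversary can put every key into one bucket, while drawing a fresh Carter-Wegman-style function at random per table spreads the same keys out.

```python
import random

M = 64  # number of buckets (illustrative size)

def fixed_hash(key):
    # A fixed, publicly known hash function, as in the scenario above.
    return key % M

def make_salted_hash():
    # A hash function drawn at random per table, Carter-Wegman style:
    # h(k) = ((a*k + b) mod p) mod M, with a and b secret and random.
    p = 2**61 - 1
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda key: ((a * key + b) % p) % M

def max_bucket(hash_fn, keys):
    """Length of the longest chain after inserting all keys."""
    buckets = [0] * M
    for k in keys:
        buckets[hash_fn(k)] += 1
    return max(buckets)

# An adversary who knows fixed_hash picks keys that all collide:
adversarial_keys = [i * M for i in range(1000)]

random.seed(42)
print(max_bucket(fixed_hash, adversarial_keys))          # 1000: one linear chain
print(max_bucket(make_salted_hash(), adversarial_keys))  # small: chains stay short
```

The adversary cannot target the salted table because the function is chosen after (and independently of) the keys.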
@Itman
Both Ruby and Perl use random hashing: Ruby systematically so, and Perl when needed. Universal hashing is a real thing.
In case you think I am making this up as I go, let me quote Wikipedia:
Using universal hashing refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
http://en.wikipedia.org/wiki/Universal_hash
@Itman
Randomized tries (assuming there is such a thing?) might have benefits over deterministic tries.
Most of the code I write is deterministic. But, at least on paper, random algorithms have very nice properties compared to deterministic ones.
The statement in my blog post was rhetorical, not scientific.
@Bannister
Very creative.
You could also reverse this and put the randomness first. From each community, someone is picked at random. Then people vote the “randomly selected people” (based on their public profiles) into various positions.
@Daniel.
It is not quite correct, because the seed is usually random. Hashing itself should be deterministic; that is the point. Otherwise, you will have trouble finding where you put your elements.
@Bannister
I’m not opposed to your model, I was merely pointing out that there are various alternatives. Randomness does not have to be at every level.
Anyhow. I think it would be great to just pick at random whether the next American president is Republican or Democrat. Might save a lot of money.
Of course, generating new hash functions at random is slightly inconvenient, so we don’t tend to do it often…
This is an interesting theoretical possibility, but I have never seen it implemented.
I did not doubt it exists in theory.
In addition, another of your points is that probabilistic algorithms work better (you imply always) than deterministic ones.
What about tries vs. hashing? Tries can be as efficient as hashing (compact tries are slower, but by a small factor) and have additional functionality. They are not affected by adversarial insert sequences either! How does that make them worse than “probabilistic” hashing?
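Itman’s claim about adversarial inserts is easy to illustrate: a trie’s lookup cost depends only on the length of the key, not on any hash function an adversary could learn. A minimal dict-of-dicts trie sketch (the end-of-word marker "$" is an assumption of the sketch):

```python
def trie_insert(root, word):
    """Insert word into a dict-of-dicts trie; cost is O(len(word))."""
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node["$"] = True  # end-of-word marker ("$" assumed absent from keys)

def trie_contains(root, word):
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

root = {}
for w in ["random", "randomized", "rand"]:
    trie_insert(root, w)

# No hash function, so no adversarial insertion order can cause collisions:
print(trie_contains(root, "random"))     # True
print(trie_contains(root, "randomize"))  # False: a prefix of a key, not a key
```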
@Bannister
Some might choose to opt-out.
Can you opt out of jury duty?
This discussion has drifted far from Daniel’s initial observation about political systems. Within political systems we see many ways in which the deterministic algorithms in use can be subverted, and otherwise malfunction.
From the work on algorithms for computers, we have learned that sometimes (often?) adding a bit of randomness can improve the result. Sorting is the classic example. When the deterministic algorithm is perfect for the incoming data, then there is no advantage to shuffling. Partially ordered data could make the sort perform badly. A bit of shuffling makes the partially ordered case fantastically unlikely, and may yield better overall performance.
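The sorting example can be made concrete. This is a sketch (a list-comprehension quicksort that counts partition work rather than true comparisons): a fixed first-element pivot hits its worst case on already-sorted input, while a randomly chosen pivot makes that case fantastically unlikely.

```python
import random
import sys

def quicksort(xs, pick_pivot):
    """Sort xs; pick_pivot(n) chooses a pivot index in [0, n).
    Returns (sorted_list, total_partition_work)."""
    work = 0
    def qs(xs):
        nonlocal work
        if len(xs) <= 1:
            return xs
        work += len(xs)  # each partition pass touches every element
        p = xs[pick_pivot(len(xs))]
        lt = [x for x in xs if x < p]
        eq = [x for x in xs if x == p]
        gt = [x for x in xs if x > p]
        return qs(lt) + eq + qs(gt)
    return qs(list(xs)), work

already_sorted = list(range(1000))

# First-element pivot degenerates on sorted input: quadratic work,
# and a recursion depth near n (hence the raised limit).
sys.setrecursionlimit(5000)
out1, det_work = quicksort(already_sorted, lambda n: 0)

# A random pivot gives expected O(n log n) work regardless of input order.
random.seed(1)
out2, rnd_work = quicksort(already_sorted, lambda n: random.randrange(n))

print(det_work)  # 500499: on the order of n^2 / 2
print(rnd_work)  # roughly 2 * n * ln(n), far smaller
```

A pre-shuffle of the input, as described above, achieves the same effect as the random pivot: it moves the randomness into the data instead of the algorithm.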
Our political algorithms are considerably less precise. With highly motivated parties interested in altering the incoming data, we get failures.
To Daniel’s arguments I will add the observation that local selection – when the number of voters is limited compared to the number of candidates – seems to work very well. When the candidate is sufficiently local, voters can meet the candidate first-hand, and attempting to subvert all the candidates is impractical.
The problem comes when scaling up to huge populations of voters. The selection-by-voting algorithm does not scale.
If representatives are selected by local voters, and roles are then assigned randomly from the total pool of representatives, the combined algorithm might work better than anything we have at present. Once elected, a representative would be randomly assigned a role at the city, county, state, or national level. This gets us both the generally best folk in the pool and the least chance of subversion in the selection process.
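As a toy sketch of that combined scheme (every name, district, and role below is invented for illustration): local winners enter a pool, and roles are dealt out at random, so nobody can campaign for, or buy, a specific seat.

```python
import random

# Hypothetical pool of local election winners, one per district.
representatives = [f"rep_from_district_{d}" for d in range(8)]
roles = ["city", "city", "county", "county",
         "state", "state", "national", "national"]

random.seed(0)
shuffled = representatives[:]
random.shuffle(shuffled)  # random assignment: no one can target a role
assignment = dict(zip(shuffled, roles))

for rep, role in sorted(assignment.items()):
    print(rep, "->", role)
```

Every representative gets exactly one role, and the role is unknowable at election time, which is the anti-subversion property being claimed.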
Absolutely not.
As the links I posted above should indicate, I am a great believer in local selection. You want merit-based selection, when it works – and local selection is a very good approximation. But it has to be sufficiently local.
I have seen two candidates in a very local forum who were later elected. One whom I liked, did a terrific job in office. After listening to the other, I did not trust the guy, and he turned out corrupt.
http://en.wikipedia.org/wiki/Mike_Carona
I take this as a pretty strong hint that local selection can work. 🙂
One of the main objections to random selection is that you might get an idiot selected. On a local level, folk self-select (someone who does not feel qualified is less likely to run), and are filtered by local reputation (folk who know you will vote for or against, and influence other voters). Seems to me that this is a pretty good approximation.
Oh, make that three elected folk. The third I met at soccer, PTA, Boy Scouts, and around the school when picking up kids. She was an easy choice, and did very well.
Just realized I am making an unjustified assumption about shared knowledge, in this discussion, that is key to why local selection works.
When someone is standing in front of you, you are much less likely to be deceived. If you are deceived, your friends might not be, and you probably have a good notion of which of your friends are most perceptive. That means an unscripted meeting in a local venue is likely to expose the true character of the candidate to the voters present. Body language is hard to fake.
http://en.wikipedia.org/wiki/Body_language
I was assuming that all in this discussion knew that this aspect of psychology was extremely relevant.
Congratulations on getting such a “hot post”.
Perhaps my prior response was a bit too strong. If there were no value in local selection, then using random choices at the local level is entirely reasonable.
From first-hand experience (and some old observations from Machiavelli) I have long believed in the value of local selection. At the time the United States was founded, the much smaller population meant that local selection applied even to Congressmen. What the Founding Fathers lacked was any notion of how things would change as the population expanded. Put differently, our form of representative democracy works better for smaller populations, and poorly for large populations.
What I lacked with local selection was a compelling model for how to scale up usage. Your writing about Demarchy supplied – what seems to me – the missing bit needed to apply local selection to large populations.
The next question – how does random assignment of roles affect who runs for local selection? Not everyone wants to be a nation-level representative. Some might choose to opt-out. Should opt-out be allowed? As an old Greek observed, the guy that does not want the job might be better suited. 🙂
You cannot easily opt out of jury duty, at least around here. On the other hand, jury duty is more invasive over the short term, so it is not an exact analog.
I wonder how much randomness really helps. For instance, Gil Kalai’s post suggests that BPP = P is a popular belief:
http://gilkalai.wordpress.com/2009/12/06/four-derandomization-problems/