This is just a variation of the classic "I have x red socks and y blue socks in a drawer; how many socks do I need to take out to be sure I'm holding a pair?" problem, right?
Sorry, I hadn't checked back up the page to see Timo's Edit. I'll try to think of a good maths problem, though my puzzle setting skills aren't as good as Timo's.
I've been informed by a whisper that previous problems were taken from websites. I didn't want to do that. Either try out the following little puzzle, or someone else can post something better.
While juggling x balls in y hands, each ball "sees" H+A, where H is the time the ball spends in the hand and A is the time the ball spends in the Air. The time it takes for the ball to return to its starting position can be expressed as
(H+A)y
Defining some new terms, describe what the hand "sees" to complete this:
(H+A)y = ( .... answer here ... )x
I'm not sure if I completely understand what you're asking, but here's a shot:
B = the amount of time a hand is holding onto each ball
W = the time each hand waits between balls
(H+A)y = (B+W)x
Correct! This is otherwise known as Shannon's Theorem of Juggling.
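A quick numeric sanity check of the identity, using made-up but mutually consistent times for a three-ball, two-handed cascade (the specific numbers are illustrative only):

```python
# Sanity check of the juggling identity (H + A) * y == (B + W) * x,
# with made-up but mutually consistent times for a 3-ball, 2-handed cascade.
x, y = 3, 2      # balls, hands
H = 0.3          # time a ball spends in a hand (seconds)
A = 0.6          # time a ball spends in the air
B = H            # the hand holds each ball exactly as long as the ball is held
W = (H + A) * y / x - B    # solve the identity for the hand's waiting time

print(round(W, 6))                               # 0.3
assert abs((H + A) * y - (B + W) * x) < 1e-9
```

With these numbers each hand cycles every B + W = 0.6 s while each ball cycles every H + A = 0.9 s, and the two views of the pattern agree.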
"...proposed by Claude E. Shannon of the Massachusetts Institute of Technology is schematically represented for the three-ball cascade. The exact equation is (F+D)H=(V+D)N, where F is the time a ball spends in the air, D is the time a ball spends in a hand, V is the time a hand is vacant, N is the number of balls juggled, and H is the number of hands. The theorem is proved by following one complete cycle of the juggle from the point of view of the hand and of the ball and then equating the two."
Actually Luke, I think you should define some more quantities. Or at least define what juggling is in the context of the problem. From the (H+A)y statement it seems clear that you assume the ball to traverse a loop where it goes through all the hands once and all hands have equal holding time and air time between hands is equal for all transitions as well.
However, juggling moves can be such that not all balls follow the same loop. Let's assume, for simplicity, that all the balls you are juggling go through a loop of finite length. Let's also assume that you can only throw the balls to a maximum height (i.e., maximum air time Ta) and that there is a minimum amount of time Th you need to hold a ball once you catch it before you can get rid of it. We can treat each hand H_i individually (since your left may be stronger, or a juggling partner may have quicker hands). The ratio of the times (for hand i) Ta_i/Th_i gives you the maximum number of balls that hand can sustain, B_i. Now, if you have y hands H_i (i goes from 1 to y), the maximum number of balls you can juggle is the sum over the B_i; let this be x.
But how many different loops can you make with the maximum number of balls?
Each of the x balls follows a loop L_i (i goes from 1 to x). L_i is a list of n triplets {H_j, Th_j, Ta_j}: the hand, the time the ball spends there, and the air time to the next destination. Here n is the number of hands the ball travels through before the loop begins again. Note that the ball may, during the course of the loop, traverse a certain hand several times. So the time it takes for ball i to return to its starting position, Ts_i, is Sum[Th_j + Ta_j, {j,1,n}].
A Juggling "J" is any combination of loops L_i which can coexist. Two loops L_i and L_j can coexist if at no time two balls share a hand... ahem, yes. Anyway, what is the maximum number of loops in a Juggling? Can each ball have its own loop? Even if the maximum number of balls is present?
Since the loops in a Juggling may be of different lengths, the time to "reset" the Juggling, TJ (i.e., the time until all the balls are back at their respective starting positions), is simply the least common multiple of the loop reset times Ts_i.
And thus we have arrived at your (H+A)y, which is a simplified case (equal hand times, equal air times, one distinct loop traversing all hands once) of TJ.
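The lcm claim is easy to illustrate. A minimal sketch with hypothetical loop times, assuming the Ts_i are whole numbers of beats so the lcm is defined (multi-argument `math.lcm` needs Python 3.9+):

```python
from math import lcm

# Reset time of a Juggling as sketched above: if loop i takes Ts_i beats to
# bring its ball back to the start, all balls are simultaneously at their
# starting positions every lcm(Ts_1, ..., Ts_x) beats.
# (Assumes loop times are measured in whole beats, so the lcm exists.)
loop_times = [4, 6, 10]        # hypothetical Ts_i for three loops
TJ = lcm(*loop_times)
print(TJ)                      # 60
```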
The total number of length n patterns (or loops) with up to b balls is (b+1)^n, that is, b+1 raised to the n'th power.
Edit: Referring to asynchronous siteswap patterns here. With other juggling notations you can get more, but vanilla siteswap is the easiest for mathematical analysis and manipulation.
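The (b+1)^n count can be checked by brute force for small cases. This is a sketch of my own (function names invented) using the standard vanilla-siteswap validity test: a sequence is valid iff the landing beats (i + throw_i) mod n are all distinct. Since the ball count is the average throw height, sum(seq) <= b*n, which also bounds every individual throw and makes the enumeration finite:

```python
from itertools import product

def is_valid_siteswap(seq):
    """Valid (vanilla, asynchronous) siteswap: the landing beats
    (i + throw_i) mod n must all be distinct, i.e. form a permutation."""
    n = len(seq)
    return len({(i + t) % n for i, t in enumerate(seq)}) == n

def count_patterns(n, b):
    """Count valid length-n siteswap sequences using at most b balls.
    Ball count = average throw height, so sum(seq) <= b*n, which also
    bounds each individual throw by b*n."""
    return sum(1 for seq in product(range(b * n + 1), repeat=n)
               if sum(seq) <= b * n and is_valid_siteswap(seq))

# Matches the (b+1)^n claim for all small cases tried here.
for n in range(1, 4):
    for b in range(1, 4):
        assert count_patterns(n, b) == (b + 1) ** n
```

For instance, length 3 with at most 1 ball gives 2^3 = 8 patterns: 000, 111, 300, 030, 003, 012, 120, 201.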
Actually, the more I think about the math of juggling the more interesting it gets. Is there a "theory of juggling" and do you use it when thinking up routines?
The ad hoc formulation I made above would lead me to believe that one could make some sort of group theoretic model for juggling. Also it seems that there could be parallels to logistics problems.
Try reading about beatmap, the most "uncompressed" notation for juggling. The rule set is simple:
1. Every hand (or field) is noted on every beat.
2. Every number says how many beats later the ball will next be noted (when it is "caught"). A zero means an empty field. A field can have more than one number.
3. A letter corresponds to each field, and each number (or throw) is directed to that field.
With these rules you can notate any juggling pattern of any complexity, be it many hands, timing variations, etc, etc. And yes, it could be used for logistics. For everyday use there are many shortcuts and assumptions made. For example, the three ball cascade in long form could be:
Left hand, Right hand.+
2R,1R.
1L,2L.
But it can just as easily be written: 2x,1.* This means two hands are assumed, and the balls can never leave this pair of fields (no drops, though those can be described just as handily); an x means "don't throw here" or "throw across", and the * means the second beat is repeated on the other side. There are "check sums" in there, if you know where to look, and using state transitions (for this pattern xx,-x. xx.x- or others) you can generate infinite numbers of patterns.
While this is the most exact notation, there are others which are far more limited but easier to work with mathematically. Vanilla siteswap is the most basic. It doesn't care about how many hands you are juggling, at what speed, or anything too detailed. You can't throw two balls at once or two balls from one hand. All it cares about is the sequence in which balls are thrown and the iterations of throwing and catching. A three-ball cascade is thrown ball A, ball B, ball C, repeat. Each ball is thrown 3 beats later, so the siteswap description is simply: 3. Other patterns we jugglers like are 441, 97531, 51234, 6113, etc. Siteswap is a lot easier to use for coming up with new tricks and routines, but siteswap patterns have the disadvantage of looking indistinguishable from each other to non-jugglers, as they all stem from a single limited set of juggling rules that is handy for mathematicians but not so interesting in the real world.
The proof of the pattern count I gave above is based on vanilla siteswap. The same answer for beatmap would be a bit more complicated. However, the siteswap max-pattern proof was first stated by Arthur Lewbel, who is ranked in the top 20 economists in the world and is much better at this than I am.
These things are mainly used to come up with new patterns and tricks, not so much new routines. Though there are some jugglers (myself included) who have done so extensively. There is lots on the internet to read, or if you are really interested, check out the Siteswap DVD, chock-full of 11.5 hours of the geekiest juggling, tutorials and math lectures you'll ever need.
Awesome, thanks a lot Luke. Beatmap is neat, but more interesting, I thought, is Arthur Lewbel's paper (PDF) on the benefits of lying about your college education.
Find the optimal strategy for the game of E-Card, as seen in the Kaiji anime, but assuming there is no cheating (if you're interested, I can give a description of the game, but in all honesty the easiest and best way for you to get a description of the game is to watch episode 16 of the Kaiji anime).
I just read a pretty awesome math puzzle. The solution I found struck me as pretty elegant too.
A group of 25 students is sitting in a 5x5 grid, each student at their own desk. The teacher asks the students to each shift one desk in one cardinal direction (up/down/left/right). Is this possible, and if so, how?
Solution: Imagine a checkerboard pattern superimposed over the 5x5 grid. Every student on a white square must move to a black square and vice versa. But a 5x5 board has 13 squares of one colour and 12 of the other, so no such one-to-one swap exists. Therefore it is impossible.
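The parity argument can be checked mechanically. A tiny sketch: colour cell (r, c) by (r + c) % 2; every one-step cardinal move flips the colour, so a valid re-seating would pair the two colour classes exactly, which fails when they differ in size:

```python
# Checkerboard parity on an n x n grid: a one-desk cardinal move always
# changes (r + c) % 2, so a valid shuffle must biject the two colour classes.
n = 5
colours = [(r + c) % 2 for r in range(n) for c in range(n)]
black, white = colours.count(0), colours.count(1)
print(black, white)        # 13 12
assert black != white      # unequal classes: no such re-seating on a 5x5 grid
```

The same check shows the shuffle is possible in principle (parity-wise) exactly when n is even.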
Must I watch anime that will prolly make my eyes roll back into my brains to play with math?
I'll probably write out the rules for you later.
Let me oblige:
* One player gets four peasant cards and one king card, while the other player gets four peasants and a slave card.
* Each round, the player with the king puts a card face down on the table, followed by the other player putting a card face down on the table.
* Both cards are turned face up simultaneously and it is determined who won: peasant beats slave, king beats peasant, but slave beats king. Peasant against peasant is a draw and the players again put cards face down (the cards already on the table don't return to the hand).
* When either the king or the slave has been used up, the round ends and players return to the original five-card hand.
* After five rounds, the players exchange hands (so the guy with the king now has the slave, and vice versa).
In other words, the player with the King must hide the king among the peasants, whereas the slave automatically loses if he can't determine when the first player plays the King card and puts the slave out against a peasant.
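If it helps, the rules collapse to something very small (a sketch of my own, not from the thread): peasant-vs-peasant turns merely continue the round, so the outcome depends only on the turns on which the two special cards hit the table.

```python
def ecard_round(king_turn, slave_turn):
    """Resolve one E-Card round, given the turn (1-5) on which each side
    plays its special card. The round is decided the first time a special
    card appears:
      - king before slave: king beats a peasant  -> King side wins
      - slave before king: peasant beats slave   -> King side wins
      - same turn:         slave beats king      -> Slave side wins
    """
    return "Slave" if king_turn == slave_turn else "King"

assert ecard_round(3, 3) == "Slave"
assert ecard_round(1, 4) == "King"   # king exposed early, meets a peasant
assert ecard_round(4, 2) == "King"   # slave thrown early, loses to a peasant
```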
In Kaiji there's betting, with money staked against lengths: the main character has a device attached to his ear which he cannot remove and which is essentially a drill that will penetrate his eardrum if his total losses exceed 24 mm (I think; not sure about the exact number). Kaiji has to bet at least 1 mm per round. If he loses, the drill advances that length. If he wins, he gets 100,000 yen per mm bet. Again, not sure about the exact numbers in the bets.
The real math problem is the optimal betting strategy. Some minor corrections to chaosof99's problem statement: He loses his eardrum at 30mm, but at 29mm he is perfectly safe. Also, he gets 100,000 yen per mm bet if he wins on the Emperor side, but 500,000 yen per mm bet if he wins on the Slave side. There are a total of 12 rounds, and each player takes three turns at a time as either Emperor or Slave before switching. For the sake of the problem, assume losing an eardrum has infinite negative utility, while the utility of money is directly proportional to the amount.
Kaiji is also offered the choice of whether to play the Emperor or Slave side first - which should he choose?
I think there are only 5 true strategies for the Slave holder and 5 strategies for the King holder. Each strategy is defined by when you play the special card: 1st round, 2nd round, 3rd round, 4th round, or 5th round after a reshuffle.
I think the real question is whether changing the strategy can create an improvement at any subsequent round. It is part of what is being asked, but not entirely.
As either player, the 5th round sans shuffling is predetermined. If 4 rounds have passed without a shuffle, the Slave must win (as 4 pairs of P against P were thrown leaving only S and K). I think that implies the King should never choose Strategy 5, since getting to that 5th round to throw the King means the King holder must lose that round; but Strategy 5 can still win if the Slave holder throws the Slave down any time prior to that 5th round.
That's the start of my thoughts. I need to understand betting better before I can go any further.
Here is what I'm getting: both people independently wager a distance prior to each card show. Either they break even (two peasants shown) and no reward or penalty is made, or one loses and one wins. The loser is subjected to the drill while the winner gets monetary compensation. The cards are retrieved and shuffled any time the shown cards do not break even.
Does this game end when the drill penetrates 30+ mm into one or the other person? Can those with drilled eardrums continue playing?
First of all, there's been a little confusion as to what we call a round, since chaosof99 used the term both for the playing of a single card and for the playing of five cards. Let's call playing one card a turn, while a round consists of 1-5 turns, ending when the Emperor (King, if you like) is played: EvP means the E side wins the round, EvS means the S side wins the round. With that terminology sorted: you bet on each round, not each turn as you suggested. Furthermore, only you (Kaiji) wager distances, and you're the only one at risk of drilled eardrums. The other player is the "house"; they simply pay you money and, if you should hit 30mm of losses, drill your eardrum. We are assuming that there is no cheating going on. As I said previously:
For the sake of the problem, assume losing an eardrum has infinite negative utility, while the utility of money is directly proportional to the amount.
As such, the point of the game is to maximize the expected amount of money you earn without losing an eardrum. The rest of the problem statement is this:
He loses his eardrum at 30mm, but at 29mm he is perfectly safe. Also, he gets 100,000 yen per mm bet if he wins on the Emperor side, but 500,000 yen per mm bet if he wins on the Slave side. There are a total of 12 rounds, and each player takes three turns at a time as either Emperor or Slave before switching.
Kaiji is also offered the choice of whether to play the Emperor or Slave side first - which should he choose?
I did this analysis before reading that one player acts as "house". So this post is sort of ancillary to the question at hand, but I think it still holds value.
I am going to try creating a payoff table for a normal-form game. I am going to ignore the max penalty of 30mm. I am also going to assume the minimum bet of 1mm is used for both parties every turn; this is because I don't yet know how betting works exactly. To create a common unit, a "gain of 1mm" will be seen as a "loss of 100k" for King and a "loss of 500k" for the Slave. This assumption might have some problems, but it gets things rolling. Further, I'm reducing all numbers from 100k and 500k to 1 and 5.
On the very first turn after a shuffle, the payout table might look like this:
S1 S2-5
K1 -1,+5 +1,-5
K2-5 +1,-5 +0,+0
If Slave and King are both played on turn 1, then the Slave holder wins 5 (500k) and the King holder loses 1 (1mm = -100k). If the King is played on turn 1 but the Slave is not (instead a Peasant is played), the King holder wins 1 (100k) and the Slave holder loses 5 (1mm = -500k). The last cell represents both players choosing to play the special card later, so two Peasants are played; each player gets neutral reward and a second turn commences.
On the second turn after a shuffle, S1 and K1 are no longer strategies. If either S1 or K1 was chosen, a reshuffle will have occurred and it is back to turn one. Here's the table for turn two:
S2 S3-5
K2 -1,+5 +1,-5
K3-5 +1,-5 +0,+0
Looks pretty much the same. I think turn 3 will look the same as well.
The choice of turn 4 entirely predetermines turn 5, so this one looks a little different. Strategies S1-3 and K1-3 are out. Here's the fourth turn after a shuffle:
S4 S5
K4 -1,+5 +1,-5
K5 +1,-5 -1,+5
As mentioned in the previous post, the 5th turn has no choice and thus no strategy for either player.
I think these tables can be combined for the overall strategy irrespective of the number of turns played per round, like so:
S1 S2 S3 S4 S5
K1 -1,+5 +1,-5 +1,-5 +1,-5 +1,-5
K2 +1,-5 -1,+5 +1,-5 +1,-5 +1,-5
K3 +1,-5 +1,-5 -1,+5 +1,-5 +1,-5
K4 +1,-5 +1,-5 +1,-5 -1,+5 +1,-5
K5 +1,-5 +1,-5 +1,-5 +1,-5 -1,+5
This is possible because either the strategy chosen starts a reshuffle on that turn, or a new turn commences (when two Ps are played) carrying over the gains and losses of the previous turn. A previous turn's gains and losses are always zero, so there is nothing to accumulate.
Looking at one row or the other, changing the column strategy does not necessarily yield a better result. Similarly looking at one column or the other, changing the row strategy does not necessarily yield a better result. There is no dominant strategy.
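This can be verified mechanically on the combined 5x5 table: King's payoff is -1 on the diagonal and +1 off it, Slave's is +5 on the diagonal and -5 off it. A sketch (helper name my own; "dominant" here means at least as good as every other row in every column):

```python
# Dominance check on the combined 5x5 payoff table above.
K = [[-1 if i == j else +1 for j in range(5)] for i in range(5)]   # King payoffs
S = [[+5 if i == j else -5 for j in range(5)] for i in range(5)]   # Slave payoffs

def has_dominant_row(M):
    """True if some row is at least as good as every other row in every column."""
    n = len(M)
    return any(all(M[i][c] >= M[r][c] for r in range(n) for c in range(n))
               for i in range(n))

S_cols = [list(col) for col in zip(*S)]   # Slave picks columns, so transpose
print(has_dominant_row(K), has_dominant_row(S_cols))   # False False
```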
Further, I suspect that random choice would be the optimal strategy, as any other strategy could potentially be learned and predicted by the opponent. If the opponent learns your strategy (and you do not learn the opponent's), you are guaranteed to lose.
The interesting question here would be: what if bets weren't always 1mm every time? However, this apparently doesn't address the actual problem or ruleset at hand. It is still relevant to the thread of interesting math problems, though.
Furthermore, only you (Kaiji) wager distances, and you're the only one at risk of drilled eardrums. The other player is the "house", they simply pay you money, and, if you should hit 30mm of losses, drill your eardrum.
Can Kaiji legally wager over 30mm on the first bet (or any other bet)? If 30mm is negative infinite utility, then any loss greater than 30mm is equal to a loss of 30mm (but the gain continues increasing). If Kaiji has already had, say, 20mm drilled, can he wager a distance that exceeds 10mm?
He can bet any distance he wants, though I think the maximum extension of the drill is 45mm if I'm not mistaken (I'm sure loc will correct me on that) so that is the upper limit on his bets. Of course, any distance the drill has already advanced reduces the amount he can bet.
Oh, and another thing: At the start of the game, Kaiji had the option to choose between having such a drill device attached to his ear or his eye. My initial thought when that happened was "take both, and you have double the space for error and winnings!". Not sure if that would be legal though.
I think we can ignore "turns" entirely and focus only upon rounds. Since betting is done on rounds and not turns, the expected values would be based upon probabilities in terms of rounds. I am going to attempt to show that the probabilities of winning can be written explicitly in terms of round without a thought to turns.
There are still only 5 strategies for any particular round. Each strategy represents which turn the special card will be played on. The following table shows which turn the Slave is played (S#), which turn the King is played (K#), and whether the Slave holder (S) or the King holder (K) wins the round:
S1 S2 S3 S4 S5
K1 S K K K K
K2 K S K K K
K3 K K S K K
K4 K K K S K
K5 K K K K S
If Kaiji is the Slave holder and chooses any particular S# strategy, without knowing which K# strategy the house has chosen, he cannot possibly improve his condition by changing the strategy, as one loss is the same as any other. I believe the same can be said if Kaiji is the King holder. So I think there remains no dominant strategy. I think choosing a strategy completely at random is optimal.
Given that, as King holder, Kaiji can expect to win 20/25 or 4/5 rounds. As Slave holder, he can expect to win 5/25 or 1/5 rounds. This can also be seen if you consider that the King holder has only 1 strategy out of 5 strategies that can fail for any particular round (he plays King when house plays Slave); similarly the Slave holder has 1 strategy out of 5 strategies that can win for any particular round.
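The 4/5 and 1/5 figures can be confirmed by brute force over the 25 strategy pairs (a sketch assuming both sides pick the turn for their special card uniformly at random):

```python
from itertools import product

# Each side picks the turn for its special card uniformly from {1,...,5}.
# The Slave side wins a round only when the two picks coincide (slave beats
# king); otherwise the King side wins (king beats peasant, or a peasant
# catches an early slave).
wins = [("Slave" if k == s else "King")
        for k, s in product(range(1, 6), repeat=2)]
print(wins.count("King"), "/", len(wins))    # 20 / 25
```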
P(win round | Slave holder) = 1/5
P(win round | King holder) = 4/5
P(lose round | blah) = 1 - P(win round | blah)
This should inform expected values and whether being King first or Slave first is best.
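A back-of-envelope version of those expected values, assuming uniform random play and the minimum 1 mm bet, and ignoring the 30 mm drill constraint entirely (money and millimetres kept as separate units; the variable names are my own):

```python
from fractions import Fraction as F

# Per-round expectations under uniform random play with the minimum 1 mm bet.
p_win_emperor, p_win_slave = F(4, 5), F(1, 5)

ev_yen_emperor = p_win_emperor * 100_000   # expected yen per Emperor round
ev_yen_slave = p_win_slave * 500_000       # expected yen per Slave round
ev_mm_emperor = 1 - p_win_emperor          # expected drill mm per Emperor round
ev_mm_slave = 1 - p_win_slave              # expected drill mm per Slave round

print(ev_yen_emperor, ev_yen_slave)        # 80000 100000
print(ev_mm_emperor, ev_mm_slave)          # 1/5 4/5
```

Interestingly, under these assumptions the Slave rounds carry the higher yen expectation per mm bet, but also four times the expected drill advance, which is presumably where the choice of side and bet sizing gets interesting.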