Redefining the 'X' factor formula for the benefit of all.


    • Redefining the 'X' factor formula for the benefit of all.

      I'd like to see the 'X' factor changed from a straight 0.0 - 1.0 ranged value to a weighted value. With this idea, it would still vary from 0.0 to 1.0, but a weighted mean -- a value which would itself vary along that 0.0 to 1.0 range -- would bias the 'X' factor towards itself. This varied mean could be calculated from some combination of the SBDE, the "us vs. them" ratio, and/or a secondary 'X' factor, fed into a calculation using the standard deviation formula.
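      Here's a rough sketch of what I mean, in code (this is just my illustration, not anything from the game; all names are placeholders). The influencing factors shift the mean, a normal sample is drawn around it, and the result is clamped back into the 0.0 - 1.0 range:

      // Minimal sketch: a weighted 'X' factor biased towards a mean that
      // the influencing factors (SBDE, ratios, etc.) have already shifted.
      function weightedXFactor(mean: number, sigma: number): number {
        // Box-Muller transform: two uniform samples -> one standard normal
        const u1 = 1 - Math.random(); // avoid log(0)
        const u2 = Math.random();
        const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
        // shift by the weighted mean, then clamp to the 0.0 - 1.0 range
        return Math.min(1, Math.max(0, mean + sigma * z));
      }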

      From wikipedia, "In statistics, the standard deviation (SD) is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A low standard deviation indicates that the data points tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values."

      Refer to the Wikipedia article on standard deviation for the details of how to do this. I include two links at the bottom of this page that show how easy the calculation is. Depending on the various factors involved, one might have either a "normal distribution" or a weighted (skewed) distribution. I would like to see a skewed distribution, based on the previously-mentioned combination of affecting values, used in formulating the standard deviation.

      Example of a normal distribution of values determined by their standard deviation:


      From Wikipedia, this is "a plot of normal distribution (or bell-shaped curve) where each band has a width of 1 standard deviation". Note that a standard deviation is denoted by the Greek letter sigma ("σ"). The deviation indicates how likely a random value within a given range is to occur. In the above normal distribution, the area under the curve within each band represents the likelihood of a random value (within a selected range of values) occurring within that band. Thus, a value between -1σ and 0σ is likely to occur about 34.1% of the time when a statistically random value is drawn (again, within that selected range of values).
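      For anyone who wants to sanity-check that 34.1% figure, here's a quick sketch (my own illustration, not game code) using a well-known polynomial approximation of the error function (Abramowitz & Stegun 7.1.26):

      function erf(x: number): number {
        const sign = x < 0 ? -1 : 1;
        const ax = Math.abs(x);
        const t = 1 / (1 + 0.3275911 * ax);
        // Horner evaluation of the A&S 7.1.26 polynomial
        const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
          - 0.284496736) * t + 0.254829592) * t;
        return sign * (1 - poly * Math.exp(-ax * ax));
      }
      // P(0 <= Z <= 1σ) for a standard normal = erf(1/sqrt(2)) / 2
      console.log(erf(1 / Math.SQRT2) / 2); // ≈ 0.3413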

      This is another example of a normal distribution of a standard deviation.


      In order to have the various factors (i.e., SBDE, the "us vs. them" ratio, a secondary 'X' factor, etc.) properly affect the mean of the standard deviation, you would need to skew the distribution to better represent the true randomness of the 'X' factor value without removing the obvious benefits of the standard deviation bias.
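      As a sketch of how such a skewed sample could be produced (one standard way is the Azzalini skew-normal construction; again, my illustration only, with made-up names):

      // delta in [-1, 1] sets skew direction/strength; it could be derived
      // from SBDE, the "us vs. them" ratio, a secondary 'X' factor, etc.
      function skewedSample(delta: number): number {
        const normal = (): number => {
          const u1 = 1 - Math.random(), u2 = Math.random();
          return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
        };
        const z0 = normal(), z1 = normal();
        // Azzalini representation of a skew-normal variable
        return delta * Math.abs(z0) + Math.sqrt(1 - delta * delta) * z1;
      }

      Scaling and clamping that sample into the 0.0 - 1.0 range would then give the biased 'X' factor.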

      Example of a skewed standard deviation:

      From Statistics How To, "Skewness is a measure of symmetry in a distribution. Actually, it’s more correct to describe it as a measure of lack of symmetry. A standard normal distribution is perfectly symmetrical and has zero skew."

      Example showing both left and right skewness of a standard deviation:


      Example of the details of a probability distribution of a skewed standard deviation:


      From Asset Insights:
      This short video shows how a new modified 'X' factor might determine the kind of standard deviation such that its weightedness is represented properly by the various inputs:


      This is the basic formula for determining the single standard deviation value (the sigma "σ" value). It looks complicated, but it's really not so bad. See below for how to easily come up with the various values to put into the formula...as well as how to modify the formula to fit one's needs. For reference, the standard population form is:

      σ = sqrt( (1/N) * Σ (xᵢ − μ)² ), where μ is the mean of the N values

      Simply put, the standard deviation is the square root of the variance of a set of values. This easy-to-follow Math is Fun web page shows how to determine and calculate the "variance" behind the standard deviation. This other easy-to-follow Math is Fun web page explains more generally how to create and use the standard deviation formula.
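      In code, the two-step recipe those pages walk through (variance first, then its square root; population form) looks like this -- a sketch of the math, nothing game-specific:

      function stdDev(values: number[]): number {
        const n = values.length;
        const mean = values.reduce((sum, v) => sum + v, 0) / n;
        // variance: the mean of the squared differences from the mean
        const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / n;
        return Math.sqrt(variance); // standard deviation
      }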
    • I like the idea, but how about if we made the listed value the median point, so the combat result could be higher or lower than the listed stat? That would make the listed combat stat the "average", to use the layman's term, and the X factor would be the variation, from perhaps 50% to 150% of the listed value.
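      Something like this sketch (just an illustration with made-up names, using a uniform spread for simplicity):

      // combat result drawn around the listed stat as the median,
      // varying from 50% to 150% of the listed value
      function combatResult(listedStat: number): number {
        return listedStat * (0.5 + Math.random()); // uniform on [0.5, 1.5)
      }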




      Another idea that would be fun to work into the X factor would be the number of kills a group has. For example, a group of units that has no kills could not get the highest level of the X factor, and a group that has a lot of kills could not get the lowest level of the X factor. This would allow the "experience" of combat to play a role in future combat.
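      A sketch of that clamping idea (the thresholds here are completely made up for illustration):

      // green groups can't roll the top of the range; veterans can't roll
      // the bottom (a hypothetical 10-kill veteran threshold)
      function xFactorRange(kills: number): [number, number] {
        const low = kills >= 10 ? 0.2 : 0.0;
        const high = kills === 0 ? 0.8 : 1.0;
        return [low, high];
      }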


      Perhaps @freezy could add this to the suggestion list if they ever do a rework of the combat system. :)
    • Actually, I like your idea of having veterancy affect the 'X' factor. But instead of absolutes, let it be a skewing factor (positive or negative) upon the mean. The mean could be centered around the actual hit value, but then the extreme possible attack values would necessarily be limited to a realistic span of the standard deviation bell curve...maybe up to 3σ, either positive or negative. And if veterancy (and maybe other factors) skews the curve positively or negatively, then those extreme deviations (3σ) would not necessarily be equidistant from the mean.
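      A rough sketch of that model (every number and name here is an assumption for illustration, not a game value):

      // mean centered on the actual hit value, skewed by veterancy,
      // with the sample clamped to +-3 sigma
      function rollHit(baseHit: number, sigma: number, veterancy: number): number {
        const u1 = 1 - Math.random(), u2 = Math.random();
        const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
        const skewedMean = baseHit * (1 + 0.05 * veterancy); // hypothetical +-5% per level
        const clamped = Math.max(-3, Math.min(3, z)); // limit to 3 sigma either way
        return Math.max(0, skewedMean + sigma * clamped);
      }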

      However, if this were to be the model, then there'd have to be some fancy artwork put together to put it into layman's terms for the typical player trying to figure out if his army's got a chance against another stack.

      Boy, do I wish I could get in on the combat system design. I like your idea and I think we could really implement something like this to make the 'X' factor into something that really begins to simulate real life combat results.

      Dice be danged, I want realism! And utilizing the statistical deviation model to control the 'X' factor would certainly go a long way towards that.
    • Diabolical wrote:

      From Wikipedia, this is "a plot of normal distribution (or bell-shaped curve) where each band has a width of 1 standard deviation". Note that a standard deviation is denoted by the Greek letter sigma ("σ"). The deviation indicates how likely a random value within a given range is to occur. In the above normal distribution, the area under the curve within each band represents the likelihood of a random value (within a selected range of values) occurring within that band. Thus, a value between -1σ and 0σ is likely to occur about 34.1% of the time when a statistically random value is drawn (again, within that selected range of values).
      Today's math lesson, brought to you by Diabolical.

      A day late. This was supposed to be done yesterday, lol. Forgot.
      The x-factor is probably working differently than everybody thinks. It is not some 0-100% value that is multiplied with the other values. Officially there isn't even an "x-factor"; it is just a term coined by the community for the variance in combat results. This variance already follows a bell curve, and it is influenced by other factors like attack damage compared to enemy defence damage, or SBDE.
      Speaking of this, there are damage-limiting mechanics implemented that prevent units from dying during an attack. Only if the attacking army's damage hugely outperforms the defending army's damage (by a factor of 20) does the chance of killing a unit in the enemy stack in one hit reach 100%. Otherwise that chance is as low as 30%, no matter how high the incoming damage is. I already mentioned this so-called overwhelm (or overkill) factor last week in another thread. These mechanics were likely implemented to guarantee a minimum fighting time between normal-sized stacks.
      There are also multiple "dice rolls" happening each combat tick for all units involved, not just one, and the chances for these rolls to succeed are affected by things like damage and hitpoints.
      These are just some examples of the different calculations going on. Just wanted to clarify that this is not just a simple "apply between 0-100% of the attacker's damage".

      We would love to revisit the combat system some time in the future and make it easier to understand and more predictable. So when the time is right we may do that, but I can't promise anything. It is on the list though.
    • freezy wrote:

      (by a factor of 20) does the chance of killing a unit in the enemy stack in one hit
      So does this overwhelm factor only apply to the chance of a one-hit kill, or does it more generally increase the chance of a stronger hit even if it isn't a finishing blow?

      On the distribution, I have recorded identical battles enough times to see that it doesn't significantly differ from normal, at least in simple 1v1 battles.
    • This mechanic comes into play when determining how much damage an army or unit is allowed to take, as it prevents an army or unit from dying. For example, if incoming damage is enough to kill a unit, there is a roll. The chance of that roll succeeding is determined by the overwhelm factor. If the roll does not succeed, then the unit does not die and takes reduced damage.
      This is only a simplification though, as there are too many influencing factors.

      In simple 1vs1 fights that factor should not play a big role as incoming damage is too small compared to the HP amount.
    • freezy wrote:

      determining how much damage an army or unit is allowed to take
      That sounds more like a defensive metric, which makes sense, since in even-ish battles the mean damage seems shifted to the left. It would seem that in a 1v1 battle this would have a big defensive effect, since the expected damage would be somewhere between 30 and 100% if I understand you correctly. Still, the distribution could be normal, with a shifted mean.

      Is it really necessary to note the "chance of killing a unit", or does this just apply across the board from 30-100%? Is the count based on side A's power against B's and vice versa, on unit count, or on something else? Does it do the calculation based on the entire stack or on a per-unit basis within stacks?

      I would like to add some rough version of this to my calculator but would like to get it as close as possible. Currently I would imagine that if the total potential power of A vs. B is 20x, then A gets 100% and B gets 30%, and these powers change symmetrically as the ratio reverses. A simple linear change from 30 to 100% doesn't work, because the powers must be the same at A/B = 1. Below is a hypothetical plot of how the power changes, with a thin vertical line passing through A/B = 1. I just made the A/B = 1 power halfway between 30 and 100.
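      For what it's worth, here's one guess at a curve with those properties (65% at A/B = 1, reaching 100% and 30% at ratios of 20 and 1/20; the shape is entirely my assumption):

      // symmetric-in-log-ratio interpolation between 30% and 100%
      function powerScale(ratio: number): number {
        const t = Math.max(-1, Math.min(1, Math.log(ratio) / Math.log(20)));
        return 0.65 + 0.35 * t; // 0.3 at ratio 1/20, 0.65 at 1, 1.0 at 20
      }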

      In any case, this probably explains why 5/5 plane stacks are so overpowered vs. small stacks. The AA of the ground stack becomes very weak. This should probably be removed from the battle calculation.
    • We are already going too deep into this... :D The overkill factor is indeed a defensive mechanism that lowers the incoming damage to an army or unit. To put it simply, units in the game try to survive as long as possible. The formula should resemble something like this:

      maxChance = 0.3 if attackStrength <= defenceStrength
      otherwise it is
      0.3 + 0.7 * min( (attackStrength / (defenceStrength * 20))^2, 1 )

      (For calculation it should compare values of the whole stack)
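      Plugging a few stack-strength ratios into that formula (my own arithmetic, reading the grouping as written above) shows how flat the curve stays until the ratio gets extreme:

      function maxChance(attack: number, defence: number): number {
        if (attack <= defence) return 0.3;
        return 0.3 + 0.7 * Math.min((attack / (defence * 20)) ** 2, 1);
      }
      // maxChance(1, 1)  = 0.300
      // maxChance(5, 1)  = 0.34375
      // maxChance(10, 1) = 0.475
      // maxChance(20, 1) = 1.0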


      Regarding how combat rolls are done in general, it should go roughly like this:

      1. UnitHits = unit size * unit hitpoints.
      2. Chance = Damage / Effective size / Hitpoints / 2
      3. Roll dice for every single unit to get a value. (unit size = number of times)
      4. If value > chance then reduce UnitHits.
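      As a rough illustration only (not the actual implementation; all names are assumptions, the post doesn't say how much UnitHits is reduced per roll, so 1 is a placeholder, and the comparison direction is exactly as stated above):

      function combatTick(unitSize: number, hitpoints: number,
                          damage: number, effectiveSize: number): number {
        let unitHits = unitSize * hitpoints;                   // step 1
        const chance = damage / effectiveSize / hitpoints / 2; // step 2
        for (let i = 0; i < unitSize; i++) {                   // step 3: one roll per unit
          if (Math.random() > chance) unitHits -= 1;           // step 4 (amount assumed)
        }
        return unitHits;
      }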

      Noteworthy is that this calculation is different in S1914, as we updated the formula there ~2 years ago. There it uses:
      1. ExpectedUnitHits = UnitHits * Chance + CarriedUnitHits.
      2. AppliedUnitHits = ExpectedUnitHits +-50% (Roll dice once)
      3. CarriedUnitHits = ExpectedUnitHits - AppliedUnitHits.
      4. Directly apply damage or kill as much as AppliedUnitHits.
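      The S1914 variant as described, again as a sketch with assumed names:

      function s1914Tick(unitHits: number, chance: number,
                         carriedUnitHits: number): { applied: number; carried: number } {
        const expected = unitHits * chance + carriedUnitHits; // step 1
        const applied = expected * (0.5 + Math.random());     // step 2: +-50%, one roll
        const carried = expected - applied;                   // step 3
        return { applied, carried };                          // step 4: apply 'applied' as damage/kills
      }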


      I can't give you more details for now. Our past documentation has some holes and is somewhat ambiguous in its descriptions, so even I don't know all the details for sure. There may also be additional factors involved. The rest you have to figure out on your own (or you have to wait until one day we have time to refactor the combat system) :)
    • freezy wrote:

      maxChance = 0.3 if attackStrength <= defenceStrength
      otherwise it is
      0.3 + 0.7 * min( (attackStrength / (defenceStrength * 20))^2, 1 )
      I'm not sure about the rest, but it occurs to me that in a 1v1 combat between a Commando and a Militia, even in a mountain range, this battle calculation is skewed too far into the weaker player's camp.

      With just arbitrary numbers (not real to the situation), if the attacker is rated (with terrain, SBDE, etc.) as "8" and the defender is rated (with terrain, SBDE, etc.) as "2", then the lower-valued defending unit will deal a greater percentage of damage than it is entitled to.

      For example, using the above arbitrary numbers, placed into the formula, you get:

      //rewritten with better efficiency
      let maxChance = 0.3;
      if (attackStrength > defenseStrength) {
        maxChance += 0.7 * Math.min( (attackStrength / (defenseStrength * 20)) ** 2, 1 );
      }

      //the above gives:
      //maxChance = 0.3 + 0.7*( min( ( 8/(2*20) )^2, 1 ) )
      //maxChance = 0.3 + 0.7*( min( ( 8/40 )^2, 1 ) )

      //maxChance = 0.3 + 0.7*( min( ( 1/5 )^2, 1 ) )
      //maxChance = 0.3 + 0.7*( min( 1/25, 1 ) )
      //maxChance = 0.3 + 0.7*( 1/25 )
      //maxChance = 0.3 + 0.7*0.04
      //maxChance = 0.3 + 0.028
      //maxChance = 0.328

      Suppose that the larger unit has 20 hit points while the smaller unit has only 10. Then, using the arbitrary values from above, the "max" possible hit that the larger unit can deal is 0.328 x 8 = 2.624. So, in order to eliminate the smaller unit, the larger one must hit at least 4 times (4 x 2.624 = 10.496 > 10). And, in all likelihood, with this "X" factor representing the maximum "possible" hit, it could take more -- even many more -- than 4 rounds of fire to eliminate the smaller and weaker opponent.

      Meanwhile, as the small unit is SLOWLY getting crushed, using the same arbitrary numbers, it deals a small amount of damage back to the stronger unit with its own max possible hit. With the attack/defend values swapped (2 attacking against 8), the first branch of the formula applies, so its maxChance is exactly the 0.3 floor. But this "max" multiplier is barely less than the much bigger unit's 0.328 -- plug in the number "2" in place of "8" to see this for yourself. So, the smaller one will hit at most 2 x 0.3 = 0.6.

      Now, that might not sound like a lot, but if the attacker takes 5 rounds to defeat the small unit, then it is possible for this pathetic little unit to deal as much as 5 x 0.6 = 3.0 damage. While that might not sound like much when the bigger unit has 20 points, if the luck of the "X" factor goes in favor of the smaller guy enough times, it could -- conceivably -- win the battle. And though that outcome is highly unlikely, the small unit is still dealing far more accumulated damage than it ought.

      In real life, a giant unit is going to squash a puny unit relatively quickly: if not with the first shot, then within two or three at most. (C'mon, how unlucky is the gunner behind that Heavy Tank aiming at the smallish RPG-mounted pickup truck at 100 yards' distance?) Meanwhile, even though the larger unit is going to come out with only partial damage, it is both held up and vulnerable to other strikes for the duration of several hours of play. This concept makes a blitz strategy virtually impossible without bumbling through completely-empty provinces.

      As you can see, at least in single melee combat, unless the larger and more powerful unit is immensely greater (like a Heavy Tank vs. a child with a toy gun), it loses virtually all of its advantage. Now think about this: if both the larger unit and the smaller unit only hit at a maximum of ~0.3 times their potential damage, then the bigger unit is going to take a lot of smaller hits that add up FAR too much while it spends too long taking out the puny unit -- which will survive at least three hits, since 3 x 2.624 = 7.872 is still short of its 10 hit points. In other words, the big unit is taking hit after hit after hit...and could eventually die the death of a thousand cuts (a little too fast, mind you), when it should be sailing through the battle like a hot knife through butter.
    • Thanks for the analysis.

      Still that's all theory of course and it may be all influenced by additional factors or luck in the chance rolls.

      For example, I just made a quick practical test and let a Commando and an Infantry lvl 1 attack each other on the core territory of the infantry (so with a 15% bonus for the inf). The Inf got reduced to 10% condition in the first combat round (both attacking and counter-attacking, so 2 rolls) and died in the second combat round (-> combat over after 3-4 rolls total). The Commando got reduced to 88% condition. Looks about right to me.

      Feel free to further investigate or test on your own. As I said we probably will revisit this topic at some time in the future.
    • freezy wrote:

      and died in the second combat round. The Commando got reduced to 88% condition.
      That's roughly what my calc gives with the INF in core and battle in plains (2 rounds with commando living at 75%), but if I were to reduce the strength of both sides to roughly 1/3 (which the maxChance would suggest) it would take perhaps 6 rounds, so I don't really think maxChance is working like that. Most battles would give a maxChance of about 1/3. The average hit is a bit weaker than the expected value but it is not 1/3 of the expected value.

      The bit you wrote about how rolls are done is too ill-defined for me to fully decipher.
    • DxC wrote:

      freezy wrote:

      and died in the second combat round. The Commando got reduced to 88% condition.
      That's roughly what my calc gives with the INF in core and battle in plains (2 rounds with commando living at 75%), but if I were to reduce the strength of both sides to roughly 1/3 (which the maxChance would suggest) it would take perhaps 6 rounds, so I don't really think maxChance is working like that. Most battles would give a maxChance of about 1/3. The average hit is a bit weaker than the expected value but it is not 1/3 of the expected value.
      The bit you wrote about how rolls are done is too ill-defined for me to fully decipher.

      I guess I've got a problem with the max-chance aspect, out of principle. The idea that you can't get a lucky shot in (in either direction) that would wipe out an enemy unit in one hit seems a bit excessive. You shouldn't have to be 20 times stronger than the enemy to be permitted to destroy them in the first one or two salvos. Sure, the units represent large(ish) groups of individual soldiers and equipment. But an effective battle plan can often eliminate an entire regiment or battalion in short order when the intelligence is good and current, the aim is true, your strength is superior (though not necessarily 20 times greater than the other guy's), and you have a little bit of luck.

      Yet, the "1/3" factor removes luck, removes perfect aim, and removes battle superiority (except by the ridiculous ratios of 20+ : 1). I'm sorry, but history has shown many times that a single shot can wipe out a target, and sometimes even a bigger and more powerful target.

      David beat Goliath because he hit him in his weak spot. Just the same, the chance that an Infantry unit (in real life) might take out a Heavy Tank is slim -- especially in one or two salvos -- but technically possible. Though, frankly, the Heavy Tank should be far more likely to occasionally take out the Infantry in one or two salvos. But the "1/3" factor prevents this in all instances. This is an absolute that doesn't belong in any simulation.

      Perhaps, instead of the "1/3" factor applying in all but the most extremely lopsided ratios, it could be amended to be a "usually 1/3" factor, in which another random variable determines whether the "1/3" cap applies in the current salvo. Of course, to be fair, and to maintain the original intent of the "1/3" factor (to preserve game play beyond repeated one-sided skunkings), the random factor could be a hard-to-hit number (like rolling snake eyes or double sixes with two dice).

      That way, luck could simulate that occasional "hole in one" perfect shot. Also, the likelihood of that random number coming up would necessarily be affected by the actual ratings ratio of the two sides. So, in a David vs. Goliath scenario, Goliath might be given 5 out of 9 chances of killing David in a single blow, whereas David would only get a 1 in 9 chance of killing Goliath in a single blow (or maybe 1 in 100 or 1 in 1000).
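      As a sketch of that compromise (every number here is illustrative, taken from the dice analogy above):

      // the "1/3" cap is waived only when a rare roll succeeds, with the
      // odds scaled by the strength ratio
      function capWaived(attackStrength: number, defenceStrength: number): boolean {
        const ratio = attackStrength / defenceStrength;
        // base odds like rolling snake eyes (1/36), scaled by the ratio,
        // capped at the 5-in-9 "Goliath" chance from the example above
        const chance = Math.min(ratio / 36, 5 / 9);
        return Math.random() < chance;
      }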

      I think this would be a fair compromise that would also increase the realism of the combat. Because chaos rules the battlefield, you can't just put everything into tidy simulations. Sometimes one side scores a great shot, sometimes the fight goes on and on, sometimes luck turns the tide in unexpected directions, and sometimes things wear down to a standstill or a draw...and sometimes even, both sides are destroyed.

      Still, you make your own luck, so coming in with a much stronger force than your opponent should win the day in most cases while random chance proves that even the best prepared strategies can fail even when executed perfectly...and one well-aimed shot can also bring down even the mightiest of giants.

      And in that case, even Goliath can fall when it might be argued that he was at least 20 times greater than David. Yet, David wasn't automatically defeated, so the "overwhelming force" principle in itself shouldn't be an absolute, either. Otherwise, it could be said that gigantic stacks could slice through thin enemies without a scratch, and that would lead to everyone wanting to build one giant stack which would alter the dynamics of this game and ruin the realism that it tries to convey.