Originally Posted by

**UserF**
...or simply run an extremely large number of RNG calculations and then average them to give the result...

That is by definition identical to what is actually done! Example:

Code:

===================
RNG draw -- Outcome // 90% chance ==> Miss when draw < 0.1
===================
0.4371 -- Hit
0.1754 -- Hit
0.1383 -- Hit
0.2290 -- Hit
0.0312 -- Miss
0.6202 -- Hit
0.5683 -- Hit
0.6351 -- Hit
0.0231 -- Miss
0.9782 -- Hit
===================

In this random sample of just 10 draws, the hits were 80% instead of 90%. If I'd drawn more samples (I generated those with Matlab's "rand" routine), the ratio would have converged to 90%.
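To see the convergence for yourself, here's a minimal sketch in Python rather than Matlab (the 90% chance and the sample size are just illustrative values, same as in the table above):

```python
import random

random.seed(0)  # fix the generator state so the run is reproducible

n = 100_000
# Same rule as the table: the attack misses when the draw is below 0.1
hits = sum(1 for _ in range(n) if random.random() >= 0.1)

print(hits / n)  # very close to 0.9 for large n
```

With only 10 draws you can easily get 80% (or 100%); with 100,000 the hit ratio lands within a fraction of a percent of 90%.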

So, what are you proposing by "averaging" them? That we impose a rule like "*Hits outnumbered misses in this simulation, so the outcome is a hit*"? That breaks the logic of the mechanic, and (if I understand you correctly) it means that everything >50% would always hit, everything <50% would always miss, and everything =50% would hit/miss randomly.
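Put as code, the "averaging" rule collapses into a plain threshold (a hypothetical sketch of the proposed rule, not anything the game actually does):

```python
def averaged_outcome(p):
    # "Averaging" a huge number of draws for a hit chance p just
    # recovers p itself, so the majority-vote rule degenerates to:
    return "Hit" if p > 0.5 else "Miss"

# A 51% attack and a 99% attack become indistinguishable:
print(averaged_outcome(0.99))  # Hit, every single time
print(averaged_outcome(0.51))  # Hit, every single time
print(averaged_outcome(0.49))  # Miss, every single time
```

All randomness is gone, which is exactly the problem.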

**Another (potentially unrelated) issue:** Most pseudo-random RNGs have a "state" (e.g. a number) with which they are initialized. This means that once they're set to a given state, they will produce the SAME random number sequence, EVERY time. If, by any chance, you noticed that the first 90%-chance attack attempted in a match systematically misses, that could mean the RNG state is not being set correctly (i.e. it's not reset/randomized for each battle/day/computer). I seriously doubt that's the case here, because most RNG states are initialized from a pseudo-random source, for instance some operation on the digits of the current machine time (e.g. 2013Y/06M/28D/19h/57m/35s).
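Both behaviors are easy to demonstrate in Python (the seed value 1234 is arbitrary; `time.time()` stands in for the machine-time trick described above):

```python
import random
import time

# Two generators initialized with the same state produce
# the exact same "random" sequence, every time.
a = random.Random(1234)
b = random.Random(1234)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
print(seq_a == seq_b)  # True -- identical sequences

# Seeding the state from the current machine time instead
# gives a different sequence on each run.
c = random.Random(time.time())
print(c.random())
```

So if a game re-seeded its RNG to the same fixed state at the start of every battle, the "first 90% attack always misses" pattern would indeed be reproducible; seeding from the clock is the usual way to avoid that.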