Dungeons and Dragons is a game in which one player, the Game Master (GM), creates content for a set of other players. It is challenging for GMs to predict the difficulty of potential combat encounters. To aid GMs in balancing combat, we create a simulation environment in which virtual agents automatically play-test potential encounters and predict their difficulty. We implement several agents that simulate human players, falling into two main categories: rule-based agents that follow a pre-made set of rules, and general game-playing agents (such as Monte Carlo Tree Search) that explore all potential moves. In simple scenarios, rule-based agents win at a higher rate than general agents, but in complex scenarios the two categories perform similarly. Our results demonstrate that this simulation produces difficulty predictions similar to those of existing prediction systems. However, in at least one scenario where our simulation deviated from pre-existing predictions, experienced GMs' predictions aligned more closely with our simulation than with the existing systems.