This abstract discusses the difficulties in performing valid permutations to obtain an empirical null distribution when testing for quantitative trait loci in the presence of polygenic effects. Although permutation testing is a popular approach for determining the statistical significance of a test statistic with an unknown distribution -- for instance, the maximum of multiple correlated statistics, or an omnibus test statistic for a gene, gene set, or pathway -- naive application of permutations may result in an invalid test. The risk of performing an invalid permutation test is particularly acute in complex trait mapping, where polygenicity may combine with a structured population arising, for instance, from the presence of families, cryptic relatedness, admixture, or population stratification. I provide both analytical derivations and a conceptual explanation of why typical permutation procedures fail, and I propose an alternative permutation-based algorithm that yields a valid test. In particular, I examine the case where a linear mixed model is used to analyze a quantitative trait and show that permuting either the phenotype or the genotype may result in an invalid permutation test. These problems stem from a lack of exchangeability arising from confounding between the genotype being tested and the polygenic effect. Based on the analytical derivations, I provide a metric that predicts the inflation of the type I error rate in the empirical permutation distribution as a function of the correlation structure of the polygenic effect in the sample and the heritability of the trait. I validate this metric through simulations, showing that the permutation distribution matches the theoretical expectation and that the proposed permutation-based test obtains the correct null distribution. Finally, I discuss situations where naive permutations of the phenotype or genotype are valid, as well as the applicability of the results to other test statistics.