The purpose of this paper is to address and test two assumptions on which csQCA is based: that csQCA will generate contradictions, and that it will generate low consistency scores, if models are ill-specified. The first part of the paper introduces csQCA in general and as a stepwise approach. The second part introduces a real-life example, both to illustrate how csQCA operates and to serve as input for the simulation in the subsequent part. The third part introduces contradictions, consistency, their interrelatedness, and the assumptions made about them. These assumptions are then tested through a simulation based on csQCA analyses of over 5 million random datasets. The paper argues that researchers cannot always assume that csQCA will generate contradictions or low consistency scores when models are ill-specified; this assumption is justified only when csQCA applications take the limitations of model specification (the number of conditions and the number of cases) into account. Benchmark tables for model specification purposes are developed. Since these tables are based on a probability value of 0.5, the paper also tests the results for contradictions and consistency using the probabilities observed in a real-life example. This test shows that the 0.5 probability yields an appropriate measure of the occurrence of contradictions and consistency, indicating that the benchmark tables can be used for different applications with different distributions of 0's and 1's in the conditions and outcomes.
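As an illustration (not the paper's actual simulation code), the core of such a simulation can be sketched as follows: generate random binary datasets in which each condition and the outcome take the value 1 with probability 0.5, and flag a contradiction whenever two cases share the same configuration of conditions but differ on the outcome. All function names, trial counts, and parameters below are hypothetical choices for this sketch.

```python
import random

def has_contradiction(dataset):
    # dataset: list of (conditions_tuple, outcome) pairs.
    # A contradiction: two cases with identical condition
    # configurations but different outcome values.
    seen = {}
    for conds, out in dataset:
        if conds in seen and seen[conds] != out:
            return True
        seen.setdefault(conds, out)
    return False

def contradiction_rate(n_cases, n_conditions, p=0.5, trials=2000, seed=0):
    # Estimate the share of random datasets (conditions and outcome
    # each drawn as 1 with probability p) containing at least one
    # contradiction, for a given model specification.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        data = [
            (tuple(rng.random() < p for _ in range(n_conditions)),
             rng.random() < p)
            for _ in range(n_cases)
        ]
        hits += has_contradiction(data)
    return hits / trials
```

Under this sketch, a model with few conditions relative to the number of cases almost always produces contradictions in random data, while a model with many conditions and few cases rarely does, which is the intuition behind benchmarking model specification by the number of conditions and cases.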