Because causal relationships cannot be perceived directly, they must be induced from the covariation between a potential cause and its effect. Bayesian-inference models of causal learning use covariational information to infer the probability of a causal link between the target variables by combining two types of information: the empirical effect size (e.g., Delta-P) and the sample size. Past studies have shown that reasoners struggle to recognize the role of sample size, and we investigate why. One explanation is that the cover stories used in previous studies emphasized the role of contingency and may thus have relegated sample size to the background. Another is that reasoners fail to understand that sample size matters because it carries information about measurement reliability, and thus about how compatible the data are with the absence of a causal link. We found that both explanations may be psychologically real. Using a novel paradigm in which participants themselves controlled sample size, we found that subjects who observed weak effects indeed preferred larger samples than subjects who observed strong effects. However, we also found that subjects who observed weak effects (1) did not increase sample size to an extent that would justify strong inferences and (2) consequently refrained from drawing such inferences. Further, this may be because subjects fail to see the connection between sample size and reliability: when presented with information about the sampling variation of a fictitious experiment, a large proportion of subjects concluded that the sampling variation would remain constant if sample size were increased.
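The role of sample size can be made concrete with a small illustrative sketch (not the model used in the studies reported here): the same Delta-P observed in a larger sample is stronger evidence for a causal link. The beta-binomial Bayes factor below, comparing "the effect rate differs with vs. without the cause" (H1) against "one common rate" (H0), is a generic stand-in for such a Bayesian model; the function names, the example counts, and the uniform Beta(1, 1) priors are our assumptions.

```python
from math import lgamma, exp

def lchoose(n, k):
    # log binomial coefficient log C(n, k)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def lbeta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def delta_p(e_c, n_c, e_nc, n_nc):
    """Delta-P: P(effect | cause) - P(effect | no cause)."""
    return e_c / n_c - e_nc / n_nc

def bayes_factor(e_c, n_c, e_nc, n_nc, a=1.0, b=1.0):
    """BF for H1 (separate effect rates with/without the cause)
    vs. H0 (one shared rate), with Beta(a, b) priors on the rates."""
    # H1: each condition gets its own rate, integrated out analytically
    log_m1 = (lchoose(n_c, e_c) + lbeta(a + e_c, b + n_c - e_c) - lbeta(a, b)
              + lchoose(n_nc, e_nc) + lbeta(a + e_nc, b + n_nc - e_nc) - lbeta(a, b))
    # H0: both conditions share a single rate
    log_m0 = (lchoose(n_c, e_c) + lchoose(n_nc, e_nc)
              + lbeta(a + e_c + e_nc, b + (n_c - e_c) + (n_nc - e_nc)) - lbeta(a, b))
    return exp(log_m1 - log_m0)

# Identical contingency (Delta-P = 0.3), different sample sizes:
bf_small = bayes_factor(8, 10, 5, 10)      # 20 observations
bf_large = bayes_factor(80, 100, 50, 100)  # 200 observations
```

With the hypothetical counts above, `bf_large` exceeds `bf_small` even though Delta-P is identical in both data sets, which is exactly the normative role of sample size that participants in these studies tend to neglect.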