Notes on Paul Meehl’s “Philosophical Psychology Session” #08

These are the notes I made whilst watching the video recordings of Paul Meehl's philosophy of science lectures. This is the eighth episode (a list of all the videos can be found here). Please note that these posts are not designed to replace the actual videos (I highly recommend you watch them); they are to be read alongside them to help you understand what was said. I also do not include everything that he said, just the main or most complex points.

Theory testing

Suggestions to improve theory testing

  1. When conducting an experiment, have some rationale for the expected size of the correlation based on one's theory (not always possible). Think about how much an observed effect corroborates your theory or not.
  2. Follow Cohen’s advice and work out how many degrees of freedom you need to power your study at .9, so that if you were to find the predicted effect size, you would view it as a strong corroborator of your theory.
  3. When conducting a correlational study, also measure some variables you would NOT want to be highly correlated with the variables of interest. This makes your test more severe.
  4. Conduct a pilot study that is fully reported, or replicate the main study. Editors, referees, etc. should emphasise replication and demand it of studies with marginal significance.
  5. Improve the reporting of results: report descriptive statistics (means, standard deviations) and actual p-values. Drop dichotomous significance verdicts and let the reader decide whether they are convinced. State the distribution of the results; most readers cannot reconstruct the results from the reported data (a lack of reproducibility). Report confidence intervals and, where possible, the percent of variance accounted for (but be careful about giving a causal interpretation to a beta weight, as it might not merit one; more factors have to be considered).
  6. Report negative pilot studies so others know which avenues you chose not to explore (so people know what has been tried before and didn't work, and don't waste effort retrying it).
  7. Reviewers should stress, in journal bulletins, the level of power researchers designed their studies to achieve.
  8. Be honest when a result doesn't tell us much about a theory's verisimilitude. A theory that successfully predicts 60 out of 100 results, even if the auxiliaries are granted, is not thereby in "good shape". Don't draw a hard line for judging whether a theory is strong simply by counting the results it successfully predicts.
  9. Psychologists are overly optimistic about what they can prove with a significance test that merely predicts one group will score higher than another; yet when asked to do more, they are pessimistic about their chances (claiming it can't be done when numerical point predictions haven't really been tried). Numerical point predictions are the most impressive test, though anything that constitutes a strong Popperian risk of falsification can be considered corroborative of a theory. When making predictions about the function or form of results, consider the prior probability.
  10. Research psychology PhD students should learn probability theory, calculus, and other core mathematics.
  11. It is hard to generate theories if you don't understand how other sciences develop and test theories (along with the derivation chains for those theories). But we shouldn't idolise physics as the only science worth pursuing, nor believe that for psychology to be a legitimate science it must be a simulacrum of physics.
  12. Reduce the emphasis on 'publish or perish' in hiring, promoting, and judging the quality of a scientist. Theories are harder to publish, so less emphasis is placed on them. Most of the literature is not published, and the vast majority of what is published isn't cited. This will help reduce Impostor Syndrome, which is rampant. Evaluate work by how often it is cited and the impact it has on the scientific community.
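Cohen's power advice in point 2 can be sketched numerically. The following is a minimal illustration, not anything from the lecture itself: it uses a normal approximation for a two-group comparison, and the effect size (d = 0.5, Cohen's "medium") and alpha level (.05, two-tailed) are my own assumed example values.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.9, alpha=0.05):
    """Approximate sample size per group for a two-sample comparison.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2,
    rounded up to a whole participant.
    """
    z = NormalDist()                      # standard normal
    z_alpha = z.inv_cdf(1 - alpha / 2)    # critical value, two-tailed test
    z_beta = z.inv_cdf(power)             # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" effect powered at .9 demands a fairly large sample:
print(n_per_group(0.5))   # 85 per group
print(n_per_group(0.8))   # larger effects need fewer participants
```

The point Meehl is echoing becomes vivid here: studies too small to detect their predicted effect can neither corroborate nor embarrass a theory, so the power calculation belongs in the design stage, not after the fact.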

There are often meaningful, worthwhile scientific questions that cannot be answered at a given stage of development. Psychologists often don't realise this, or deny that it is true. This may be because we lack the necessary embedding auxiliary theories, or because we lack the instruments of observation or control required.

What happens to psychology students is that operationism about concepts (and hence a simple-minded verificationism about statements) gets combined with NHST. Students think that if they are asking something sensible, with properly operationalised variables and suitably falsifiable hypotheses, it must be answerable at this time (frequently false). Better to set aside the more interesting (but currently unanswerable) questions and work on the auxiliaries that lead toward the desired question.

With probability, one must differentiate between its epistemic use and its object use. Carnap distinguished probability 1 from probability 2. Probability 1 is a meta-linguistic concept: it refers to the relation between a hypothesis and its evidence, between propositions, or between beliefs; it cannot be defined as the number of times an event occurs divided by the total number of possible events, as in traditional probability, and it does not intrinsically have the properties of a frequency. Probability 2 is physical or social: it refers to the relative frequency, within a class or collective, of items having a certain property; it belongs to the object language, and it is usually possible to work it out from the properties of the instruments or apparatus (e.g. throwing dice).
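The frequency (probability 2) sense can be made concrete with a short simulation. The dice example comes from the text; the code, seed, and number of throws are my own illustrative choices. The relative frequency of a six over a long run of throws converges on the value 1/6 that one can derive from the symmetry of the apparatus itself.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

throws = 60_000
sixes = sum(1 for _ in range(throws) if random.randint(1, 6) == 6)
rel_freq = sixes / throws  # probability 2: a relative frequency in a collective

print(f"relative frequency of a six: {rel_freq:.4f} (apparatus value: {1/6:.4f})")
```

Nothing analogous can be simulated for probability 1: the degree to which evidence supports a hypothesis is a relation between statements, not a frequency in any collective, which is exactly Carnap's point.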


Yonce, J. L., 2016. Philosophical Psychology Seminar (1989) Videos & Audio, [online] (Last updated 05/25/2016) Available at: [Accessed on: 11/12/2017]
