Notes on Paul Meehl’s “Philosophical Psychology Session” #05

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the fifth episode (a list of all the videos can be found here). Please note that these posts are not designed to replace or be used instead of the actual videos (I highly recommend you watch them). They are to be read alongside to help you understand what was said. I also do not include everything that he said (just the main/most complex points).

  • Operationism states that all admissible concepts in a scientific theory must be operationally defined in observable predicates, BUT that’s incorrect: you don’t need all theoretical postulates to map onto observable predicates.
  • Don’t need the constants to be able to use the functions and see if the components are correct. Given the function forms you can know the parameters (the ideal case is to derive the parameters). Weaker version: I can’t say what a, b, and c are, but I know they are transferable, or that a tends to be twice as big as b. If the theory permits that, it’s a risky prediction (it could be shown to be wrong); see the sketch after this list. Theories are lexically ordered (from higher to lower parts): you don’t ask questions about the lower points before answering the higher-up ones, in a way that makes theories comparable. If two theories have the same entities arranged in the same structure with the same connections, with the same functions describing the connections between them, and the same parameters, then t1 and t2 are empirically the same theory. If we can compare two theories, we can compare our theory (t1) to omniscient Jones’ theory (tOJ) and see the verisimilitude of our theory (how much it corresponds with tOJ).
  • People can become wedded to theories or methods. This results in demonising the “enemy” & an unwillingness to give up that theory/method.
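
A minimal sketch (my own, not from the lecture) of how the weaker version works: assume a hypothetical function form y = a·x1 + b·x2 with fabricated data, and test only the weak constraint that a is roughly twice b. Even without deriving a and b, the constraint carves out a narrow region it could easily have missed, so surviving it is a risky prediction.

```python
# Hypothetical illustration (not Meehl's example): a theory that cannot derive
# the parameters a and b themselves, but does claim "a is roughly twice b",
# still makes a risky, checkable prediction once the function form is fixed.
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
y = 3.0 * x1 + 1.5 * x2 + rng.normal(0, 1, 200)   # fabricated data for the sketch

# Fit the assumed function form y = a*x1 + b*x2 by least squares.
X = np.column_stack([x1, x2])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)

ratio = a_hat / b_hat
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, a/b = {ratio:.2f}")
# The weak prediction "a is about twice b" is corroborated only if the ratio
# lands in a narrow band around 2, a band it could easily have missed.
print("prediction survives" if 1.8 < ratio < 2.2 else "prediction falsified")
```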

 

  • Lakatosian defence (general model of defending a theory): (t ^ At ^ Cp ^ Ai ^ Cn) ⊢ (o1 ⊃ o2), where ⊢ is the strict turnstile of deducibility and ⊃ is “if, then”,

AND, absent the theory, P(o2 | o1) on background knowledge bk is small

– this extension allows you to say you have corroborated the theory by the facts (because without this small prior probability the argument is formally invalid logic). When that probability is very small, it meets Salmon’s criterion for a damn strange coincidence.

^=conjunction

t= theory we are interested in

At= theoretical auxiliaries we’ve tied to our initial theory (almost always more than 1)

Cp= ceteris paribus clause (all other things being equal). No systematic other factors (they have been randomised/controlled for) but there will be individual differences.

Ai= instrumental auxiliaries. Theories about some controlling or measuring instruments. You distinguish between At and Ai by which field it’s in (if it’s in the same science then it’s an At)

Cn= conditions: the experimenter describes to you what they did, the detailed methodology (often incompletely described).

*If the theory is true, the auxiliaries are true, the ceteris paribus clause is true, the instruments are accurate, and you did what you said you did, then it follows deductively that if you observe o1 you will observe o2 (rendered compactly below).
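
Putting the pieces together, the schema can be written compactly (my own LaTeX rendering of the notation above, not Meehl's blackboard version):

```latex
% Lakatosian defence schema: the whole left-hand conjunction, not the theory
% alone, entails the conditional forecast; and absent the theory the forecast
% is antecedently improbable on background knowledge.
\[
  (t \wedge A_t \wedge C_p \wedge A_i \wedge C_n) \;\vdash\; (o_1 \supset o_2),
  \qquad \text{and, absent } t,\ \Pr(o_2 \mid o_1, b_k) \text{ is small.}
\]
```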

  • This only works left to right; can never deduce the scientific theory from the facts.
  • Sometimes you can’t assume the main theory to test the auxiliary theories; you are testing both of them. So if it’s corroborated, then you’ve corroborated both.
  • Can be validating a theory and validating a test at the same time. Only works if the conjunction of the two leads to a damn strange coincidence.

 

  • Strong use of predictions=to refute the theory.
  • Suppose we observe (o1 ^ ~o2). Modus tollens: P > Q, ~Q, therefore ~P.
  • Lakatosian criticism: Modus tollens only tells us the whole of the left side is false, not which specific part is.
  • To deny (p ^ q ^ r ^ s) is equivalent to: p is false or q is false or r is false or s is false.
  • The formal equivalent of a negation over a conjunction is a disjunction of the negations of the statements on the left.
  • Short form: the denial of a conjunction is the disjunction of the denials of the conjuncts.
  • So when we falsify the right-hand side in the lab, we falsify the left, but because the left is a conjunction this only tells us that something on the left is wrong. But we are testing t, so we want to specify whether t itself is false or not (see the brute-force check after this list).
  • Randomness is essential for Fisherian statistics.
  • In soft psychology, probability that Cp is literally true is incredibly small.
  • If you start distributing confidence levels across the different conjuncts you work towards “robustness”; you can see by how much Cp is false.
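
A brute-force check of the denial-of-a-conjunction point (my own illustration, not from the lecture): enumerate every truth assignment and confirm that negating a conjunction gives the disjunction of the negated conjuncts.

```python
# Check that ~(p & q & r & s) is logically equivalent to (~p | ~q | ~r | ~s):
# the two formulas agree on every one of the 16 truth assignments.
from itertools import product

for p, q, r, s in product([True, False], repeat=4):
    lhs = not (p and q and r and s)
    rhs = (not p) or (not q) or (not r) or (not s)
    assert lhs == rhs
print("equivalent on all 16 assignments")

# The Lakatosian worry in miniature: a failed prediction tells us the conjunction
# t ^ At ^ Cp ^ Ai ^ Cn is false, i.e. at least one conjunct is false; it does not
# tell us that the conjunct we care about, t, is the culprit.
```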

 

  • Often you can’t tell (from an experiment) whether a finding is due to what is reported or to a confounding variable. You have to consider all potential confounding variables and escape from the logically invalid third figure (affirming the consequent) by exploring all of them.
  • Different methods result in different Cp’s & At’s, something not often considered.
  • Lakatosian defence of theory is only worthwhile if it has something going for it; it has been falsified in a literal sense but has enough verisimilitude that it’s worth sticking with.
  • When examining parts of the conjunction, look at Cn first. Can say: “Let’s wait to see if it replicates”.
  • Ai isn’t a great place to start for psychologists.
  • Cp is a good place to start (you can almost assume it’s false). When we have different types of experiments over different qualitative domains of data, challenging Cp in one experiment doesn’t threaten the theory’s success in the other domains.
  • If you challenge At: if that auxiliary plays a role in the derivation chains to experiments in other domains, and you try to fix up failed experiments by challenging auxiliaries, then all the derivation chains that worked in the past will now be screwed (because you’re undermining one of their links). Cp is more likely to be domain-specific (violated in different ways in different settings).
  • Can modify Cp by adding a postulate (as you don’t want to fiddle with At) because you may have changed subjects or environment etc.
  • Progressive movement: Can turn falsifier into a corroborator by adding auxiliaries that allow you to predict new (previously un-thought of) experiments. Not just post-hoc rationalisation of a falsifier (ad-hoc 1). Ad-hoc 2: when you post-hoc rationalise a falsifier by adding auxiliaries & make new predictions but those predictions are then falsified.
  • Honest ad-hocery: ad-hoc rationalisations that give new predictions (which are found to be correct) that are risky and are a damn strange coincidence.

References:

Yonce, J. L., 2016. Philosophical Psychology Seminar (1989) Videos & Audio, [online] (Last updated 05/25/2016) Available at: http://meehl.umn.edu/video [Accessed on: 06/06/2016]

Notes on Paul Meehl’s “Philosophical Psychology Session” #04

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the fourth episode (a list of all the videos can be found here). Please note that these posts are not designed to replace or be used instead of the actual videos (I highly recommend you watch them). They are to be read alongside to help you understand what was said. I also do not include everything that he said (just the main/most complex points).

  • Saying “it’s highly probable that Caesar crossed the Rubicon” is the same as “it’s true that Caesar crossed the Rubicon” (1st is object language, 2nd is meta).
  • Probability (about evidence): talking about the relation between the evidence and the theory that the evidence is for.
  • Verisimilitude (ontological concept, refers to whether the state of affairs obtains or not in eyes of omniscient Jones) is NOT a statement about the relationship to the evidence (can’t be equated with probability); it’s a statement with relation to the world, whatever is the case (on the correspondence view).
  • Caesar crossed the Rubicon is true if and only if he crossed the Rubicon.
  • “Caesar crossed the Rubicon” is probable if there is sufficient evidence in the Vatican Library for you to believe that he did. It is probable for the centurion sitting next to Caesar as he crossed if there is sufficient evidence for him.
  • Difference between the content and the evidence in support of it
  • This distinction completely torpedoed the view that “the meaning of a statement is the method of its verification”.
  • Verisimilitude is a matter of degree (just as confirmation is).
  • Science theories can differ in how true they are (non-binary).
  • Logically, can argue that a science theory is false if it contains ANY false statements.
  • Falsifying any conjunct in the argument immediately falsifies the conjunction.
  • T (false) = S1 (true) ^ S2 (true) ^ S3 (false). Falsifying any conjunct in a conjunction falsifies the conjunction.
  • But we have to talk about degrees of truth to get anything done. Unsatisfactory, but there have been (unsuccessful) attempts to quantify degrees of truth, e.g. (true statements − false statements) / total statements; a toy illustration follows this list. Probability is also unsatisfactorily defined (logicians can’t agree whether there is one type of probability or two).
  • In psychology (when using statistics), use the frequency concept/theory:
  • When we are evaluating theories, we may use the other kind.
  • Kinetic theory of gases: explains the equations about gases (their volume, temperature etc.); you could derive them from the principles of mechanics. The theory of heat is reduced to non-thermal concepts (mass, velocity, collision). They did this by imagining a cylinder of gas which contains molecules. These molecules act like billiard balls (with mass, velocity etc.). Degree of heat (temperature) and amount of heat are different things.
  • Scientists like going down the hierarchy of explanations.
  • Kinetic theory doesn’t work under extreme conditions. According to strict Popperian falsification that is an instance of modus tollens, so the kinetic theory must be rejected. But we don’t do that. To abandon the theory is not the same as falsifying it.
  • Instrumentalists don’t care about truth (only utility), realists would have to reject it (but could recognise that part of it is false and so won’t totally abandon it).
  • Thinking about how the kinetic theory was false (in its idealised Popperian form) allowed researchers to explore it further, and it becomes a corroborator: thinking about how it was false tells you how to rewrite the equation and fit the model to the facts much better. You can be far enough along with your theorising to know that the theory is idealised, and you use that knowledge to change the equation (you don’t need a theory powerful enough to generate the parameters; this can be done in psychology).
  • One kind of adjustment is to change the theory to fit the facts (as with the above example). The other is to change your belief about the particulars (e.g. the planets weren’t behaving as they should, so it was hypothesised that there might be another planet in such and such a place. Point the telescope there and voilà, Neptune).
  • Primitive statements are more important in some sense than others.
  • We need an idea of the centrality of a postulate (ideas that are crucial to the theory and can’t be dispensed with) and the peripherality of a postulate (those that can be amended while you still hold that theory). Any way of getting at a theory’s verisimilitude that doesn’t take this into account is unsatisfactory. This is why you can’t measure verisimilitude by a nose count/sentence count (it gives the same weight to central and peripheral postulates).
  • Core and periphery isn’t particularly well explicated (and you can’t do it fully). One attempt: can I derive the theory-language statements (the ones that get paired with the replicated experiments) from the postulates of the theory? If not, the theory is incomplete. You look at the derivation chain from the postulates to each fact and see how many derivation chains contain common postulates. If there are postulates that are common to all derivation chains, they can be said to be central postulates.
  • In any theory, we have a set of theoretical postulates. We have a set of mixed postulates (postulates that contain a mixture of observational and theoretical words).
  • From those we derive statements that are all observational. That makes a theory empirical. What makes it empirical is that there is at least one sentence that can be ground out by applying the laws of logic and mathematics to the theory that does not contain theoretical words but only logic, maths, and observational words. Some words are obviously observational (black), others are not (libido), but some are fuzzy, e.g. 237 amps. But we have an ammeter and a theory about ammeters and trust the measurements, so it’s observational (though this can be disputed). You can link observations together (and therefore observational statements) via theoretical statements.
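
A toy illustration (my own; the postulates and their truth values are invented) of the naive sentence-count index of degrees of truth mentioned above, and of why it is unsatisfactory: it weighs a false core postulate exactly like a false peripheral one.

```python
# Naive index: (number of true statements - number of false statements) / total.
def naive_verisimilitude(truth_values):
    """Sentence-count index, ignoring which postulates are central."""
    true = sum(truth_values)
    return (true - (len(truth_values) - true)) / len(truth_values)

# Theory A: its single central postulate is false, three peripheral ones are true.
# Theory B: its central postulate is true, three peripheral ones are false.
theory_a = [False, True, True, True]
theory_b = [True, False, False, False]

print(naive_verisimilitude(theory_a))  # 0.5
print(naive_verisimilitude(theory_b))  # -0.5
# The index ranks A above B even though B gets the core of the matter right:
# a bare nose count cannot register the centrality of a postulate.
```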

 

  • Kinds of theoretical entities: Russell said there was only one (events). Meehl’s ontology: substances (in the chemistry sense, e.g. elements), structures (including simples, e.g. quarks), events (e.g. the neuron spikes), states (e.g. Jones is depressed, I am thirsty; it is difficult to distinguish between events and states, and you could say events are just states strung out over long time intervals), dispositions (if x then y; the “-ble” words, e.g. soluble, flammable, are dispositional predicates), and fields (e.g. magnetic fields). This list can be used to analyse any concept in the social sciences.
  • Important kinds of events are when a structure or substance undergoes a change in state and then changes its dispositions. The power of the magnet to attract is a first-order disposition; iron being able to become magnetic is a second-order disposition. Supreme dispositions are dispositions that an object must have in order for it to be that object.
  • The list helps in thinking about the kinds of laws and theories present in science. Most laws turn out to be compositional, functional-dynamic, or developmental. Compositional theories state what something is made out of and how it’s arranged. Functional-dynamic ones involve Aristotelian efficient causes: if you do this then this will happen. Developmental: changes in state result in changes in disposition over time.
  • When comparing theories for similarity, list the kinds of entities and compare. How do they connect (compositionally and functionally)? If you’ve drawn functional connections or time changes in developmental statements, you can ask: what’s the sign of the first derivative? You don’t claim to know what the function is; does x go up or down with y? You’ve got a strand in the net connecting entities. What about the sign of the second derivative?
  • Continuous case: ∂F(x1, x2)/∂x1 > ∂F(x1, x2)/∂x2 EVERYWHERE. This means x1 is a more potent influence on y than x2, but it still doesn’t tell you what the function is.
    x1 and x2 = the two inputs
  • Discontinuous case: e.g. the influence of x2 is greater when x1 is small.
  • Allows you to order partial derivatives.
  • Interaction effect: y is the output. [(y with a present) − (y with a absent)] when b is present, minus [(y with a present) − (y with a absent)] when b is absent. The difference between these differences is not zero.
  • The effect of a on y when b is there is greater than the effect of a on y when b is not there.
  • Theories can look the same/have the same connections/have the same entities, but one theory makes a particular influence much more powerful, and this only appears when you look into the derivatives (you see that there is an interaction).
  • Fisher main effects = partial derivatives for the continuous case. Interaction = mixed partial derivatives for the continuous case (a numerical sketch follows).
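
A small numerical sketch (cell means invented for illustration) of the interaction contrast as a difference of differences, the discrete analogue of the mixed partial derivative just mentioned:

```python
# Hypothetical 2x2 cell means for an outcome y under factors a and b.
y = {
    ("a_present", "b_present"): 12.0,
    ("a_absent",  "b_present"):  4.0,
    ("a_present", "b_absent"):   7.0,
    ("a_absent",  "b_absent"):   5.0,
}

# Effect of a on y when b is present, and when b is absent.
effect_a_given_b     = y[("a_present", "b_present")] - y[("a_absent", "b_present")]  # 8
effect_a_given_not_b = y[("a_present", "b_absent")]  - y[("a_absent", "b_absent")]   # 2

# Interaction = difference of these differences (discrete analogue of a mixed
# partial derivative). Non-zero means the influence of a depends on b.
interaction = effect_a_given_b - effect_a_given_not_b
print(interaction)  # 6.0: a's effect on y is stronger when b is present
```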

References:

Yonce, J. L., 2016. Philosophical Psychology Seminar (1989) Videos & Audio, [online] (Last updated 05/25/2016) Available at: http://meehl.umn.edu/video [Accessed on: 06/06/2016]

Notes on Paul Meehl’s “Philosophical Psychology Session” #03

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the third episode (a list of all the videos can be found here). Please note that these posts are not designed to replace or be used instead of the actual videos (I highly recommend you watch them). They are to be read alongside to help you understand what was said. I also do not include everything that he said (just the main/most complex points).

  • Descriptive discourse: what is.
  • Prescriptive discourse: what should be.
  • Science is descriptive; ethics/law etc. is prescriptive. Philosophy of science (metatheory) is a mixture of both and has to be in order to work properly (which the logical positivists didn’t realise).
  • External history of science (domain of the non-rational). What effects politics, economics etc. had on a theory.
  • Internal history of science (domain of the rational). Whether a fact had been over-stated or how the theory interacted with other facts and theories.
  • Context of discovery: psychological and social questions about the discoverer, e.g. Kekulé’s discovery of the benzene ring. The fact that he “dreamed of the snakes” is irrelevant to the truth of the theory (the justification).
  • Context of justification: evidence, data, statistics.
  • Some say there shouldn’t be a distinction, BUT: Just because there is twilight, doesn’t mean that night and noon are not meaningful distinctions.
  • There are grey areas e.g. A finding that we are hesitant to bring into the corpus.
  • Sometimes we have to take into account the researcher who has produced a finding, e.g. Dayton Miller and the aether.
  • Unknown/unthought-of moderators can have a significant impact. You don’t have to be a fraud to not include that in the manuscript.
  • Fraud is worse than an honest mistake because it can obfuscate and mislead as you have something in front of you. You need enough failed replications to say “my theory no longer needs to explain this”. But this is why taking into account context of discovery is important (even when in context of justification); how close to a person’s heart/passion/wallet is this result? These things won’t be obvious in the manuscript but can have an impact.
  • 4 examples of context impacting research:
    1. How strongly does someone feel about this result? How much is their wallet being bolstered by this finding?
    2. Literature reviews also need to have the context of discovery considered. The reviewer may not be a fraud but may be sloppy, or the original paper may be poorly written. Meta-analysis counteracts some of these flaws, with some counterbalancing taking place that’s hard to do in your head. Meehl 1954 (the psychologist is no better at weighing up beta-weights in their head than the clinician). Can be abused.
    3. File-drawer effect, BUT also: what kind of research is being funded because it’s popular/faddish? Universities get in the habit of having a large pot of money from government to fund research. Doing research to get grants can mean a narrowing of research, and some research can be shelved by not being funded because it could turn up unwanted/uncomfortable results: the politics of discovery. When reading a paper, you don’t know how much politics/economics has influenced it, caused it to be researched in the first place, or stopped other (potentially contradicting) research from being conducted. It affects the distribution of investigations. If a certain theory is favoured by the use of questionnaires rather than lab experiments and the former is used due to convenience, you get a skewed picture.
    4. Relying on clinical experience rather than data: clinical judgements made during observation are highly influenced by the clinician’s own personal theory (experimenter effects). The power function is low, so a null result doesn’t tell you as much as a positive result (a small power simulation follows this list).
    Context of discovery is also impacted by context of justification, e.g. knowing logic means you are likely to avoid making a logical fallacy when examining research. Not all impacts will be negative.
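
A quick simulation (my own sketch; the effect size, sample size, and alpha are invented for illustration) of how little a null result says when power is low:

```python
# Simulated power of a two-sample t-test with a modest true effect and small n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effect, n, alpha, reps = 0.4, 20, 0.05, 5000   # standardized effect 0.4, n = 20 per group

rejections = 0
for _ in range(reps):
    control   = rng.normal(0.0,    1.0, n)
    treatment = rng.normal(effect, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    rejections += p < alpha

print(f"estimated power ~ {rejections / reps:.2f}")   # roughly 0.2 to 0.25 here
# With power this low, "no significant difference" tells you very little about
# whether the effect (or the theory behind it) is real.
```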

 

  • Scientific realist: there is a world out there that has objective qualities and it is the job of science to work them out.
  • Instrumentalism: the truth of something doesn’t matter if it has utility.
  • But fictions can be useful.
  • B.F. Skinner believed that when we could test mental processes and not just infer them, then it would become apparent which processes map on to which area.
  • 3 main theories of truth: correspondence theory of truth (view of scientific realist, that the truth of a statement is determined by how accurately it corresponds with the real state of affairs), coherence theory (truth consists of the propositions you have hanging together), and instrumental theory (fictionist, truth is what succeeds in predicting or manipulating successfully).
  • Scientific realists admit that instrumental efficacy bears on their truth. Part of the data.
  • An incoherent theory is false by definition; a coherent theory can still be false.
  • “Caesar crossed the Rubicon” (for correspondence): only one fact is needed to verify it, namely whether he crossed or not. Quine corners mark the sentence as being named rather than used: ⌜Caesar crossed the Rubicon⌝ (the first half of the sentence is meta-language) is true if and only if Caesar crossed the Rubicon (no Quine corners, in the object language).
  • What grounds do we have for believing it (epistemological)? What are the conditions for that belief to be correct (ontological, *verisimilitude*)? The two formulations are equivalent in their content, so if one is true then the other is true, and if one is false then the other is false.
  • Semantic concept of truth.
  • Knowledge is JUSTIFIED true belief (so stumbling on to a truth by chance is not knowledge).
  • Truth is a predicate of sentences and not things.
  • There was an argument among the logical positivists that they should remove the use of the word “truth” for the empirical sciences, as you can never be totally certain that what you’ve said is true (remove it from the meta-language). Only those predicates which we can be certain are accurate would be permissible, BUT that means you remove pretty much every word in the language (all scientific language and most concrete language).
  • Verisimilitude (similarity to truth) is an ontological rather than an epistemological/evidentiary concept (it cannot be conflated with probability).
  • Scientific theories are collections of sentences and as such can have degrees of truth.

References:

Yonce, J. L., 2016. Philosophical Psychology Seminar (1989) Videos & Audio, [online] (Last updated 05/25/2016) Available at: http://meehl.umn.edu/video [Accessed on: 06/06/2016]

Notes on Paul Meehl’s “Philosophical Psychology Session” #02

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the second episode (a list of all the videos can be found here). Please note that these posts are not designed to replace or be used instead of the actual videos (I highly recommend you watch them). They are to be read alongside to help you understand what was said. I also do not include everything that he said (just the main/most complex points).

  • Popper did not accept the verifiable criterion of meaning. Popper never said falsifiability was a criterion of meaning.
  • There is no experimental/quantitative data for Freud (though there is empirical data). Popper rejected induction completely.
  • Unscientific theories don’t give you examples of things that will show it’s wrong, just what will confirm it.
  • P > Q (the conditional) is equivalent to ~P v Q (P is false or Q is true), and to its not being true that P ^ ~Q (P is true and Q is false). P is sufficient for Q and Q is necessary for P.
  • If there is a semantic connection between propositions, use the stronger notation ⊢ (the turnstile of deducibility).

 

  • Implicative syllogism: P > Q, P therefore Q. Valid figure. Used when a theory is predicting an event. Modus ponens.
  • P > Q, ~P, therefore ~Q. Invalid (denying the antecedent). If Nixon is honest I’ll eat my hat; Nixon isn’t honest; you can’t conclude that I won’t eat my hat.
  • P > Q, Q, therefore P. Invalid (affirming the consequent). All inductive reasoning is formally invalid in this way (if the theory is true then the facts will be so; the facts are so, therefore the theory is true). Hence all empirical reasoning is probabilistic, and hence a theory can never be proved in the sense of Euclid. Used when a piece of evidence is taken to support a law.
  • P > Q, ~Q, therefore ~P. Valid. If Newton is right, then the star will be here; the star is not here; therefore Newton is wrong. Used to refute a scientific law or theory. Modus tollens (the destructive mood); the 4th figure of the implicative syllogism. (A brute-force validity check of all four figures follows.)
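
A brute-force truth-table check of the four figures above (my own illustration): a form is valid iff no assignment makes all premises true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion):
    """Valid iff the conclusion holds on every assignment that satisfies all premises."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

forms = {
    "modus ponens         (P>Q, P,  therefore Q)":  ([lambda p, q: implies(p, q), lambda p, q: p],     lambda p, q: q),
    "denying antecedent   (P>Q, ~P, therefore ~Q)": ([lambda p, q: implies(p, q), lambda p, q: not p], lambda p, q: not q),
    "affirming consequent (P>Q, Q,  therefore P)":  ([lambda p, q: implies(p, q), lambda p, q: q],     lambda p, q: p),
    "modus tollens        (P>Q, ~Q, therefore ~P)": ([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p),
}

for name, (premises, conclusion) in forms.items():
    print(name, "->", "valid" if valid(premises, conclusion) else "invalid")
```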

 

  • Facts control the theory collectively over the long-haul (rather than just being dismissed after one piece of counter evidence). If the theory is robust enough/substantiated enough, allowed to roll with a piece of counter-evidence. There’s no specified point where this theoretical tenacity becomes unscientific.
  • Empirical science cannot be like formal set theory/ mathematics as it deals with probabilities.
  • Demarcation of scientific theory from non-science.
  • We don’t just state whether a theory has been “slain” or not. There is some implicit hierarchy (based on evidence). Popper developed the idea of corroboration. A theory is corroborated when it has been subjected to a test and hasn’t been refuted, and the riskier the test (the greater the chance of falsification, because it makes more precise predictions), the better the theory is corroborated. A test is risky if it carves a narrow interval out of a larger interval.
  • You need to calculate the prior probability
  • Look at theories that predict the most unlikely results.
  • Main problem with NHST as a way of evaluating theories: within the parameters (set by previous evidence or common sense) you merely say the result will fall within half of that range (so roughly a 50% chance of being right even without the theory). Not impressive.
  • Salmon’s principle: the principle of the damn strange coincidence (a highly improbable coincidence). Absent the theory, knowing only roughly the range of values that occur, picking out a narrow number would be a strange coincidence. But if the theory picks out that narrow number and it comes up true, then the theory is strongly corroborated. (A numerical contrast of the two kinds of prediction follows this list.)
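
A back-of-the-envelope contrast (numbers invented) between a directional, NHST-style prediction and a narrow-interval prediction, evaluated absent the theory:

```python
# Suppose background knowledge only says the observed value will land somewhere
# in [0, 100]. A directional prediction ("above the midpoint") succeeds by luck
# about half the time; a point prediction of 42 +/- 1 succeeds by luck about
# 2% of the time. Surviving the second test is Salmon's damn strange coincidence.
low, high = 0.0, 100.0
width = high - low

p_directional_hit = 50.0 / width   # "result falls in the upper half"
p_narrow_hit      = 2.0 / width    # "result falls within 42 +/- 1"

print(f"chance success, directional prediction: {p_directional_hit:.2f}")  # 0.50
print(f"chance success, narrow prediction:      {p_narrow_hit:.2f}")       # 0.02
```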

 

  • Salmon believes you can attach probability numbers to theories. Talked about confirmation (which Popper rejected) but they give the same numbers as Popper’s way of doing things. Salmon does this by using Bayes’ Formula.

 

  • Bayes’ Theorem (a criticism of the Fisherian and Neyman–Pearson approaches): picking white marbles from urns (you don’t know which urn the marble came from).
  • P = prior probability that we have picked urn 1 (e.g. 1/3); Q = prior probability that we have picked urn 2.
  • p1= probability that I draw a white marble from urn 1 (conditional)
  • p2= probability that I draw a white marble from urn 2
  • Posterior probability/inverse probability/probability of causes: the probability that, if I got a white marble, I got it from urn 1.
  • Posterior = (P × p1) / (P × p1 + Q × p2). The numerator is the probability that you drew from urn 1 and got a white marble; the denominator adds the probability that you drew from urn 2 and got a white marble, i.e. it is the probability that you got a white marble, period.
  • Clinical example:
  • p1 = probability of a certain symptom given schizophrenia.
  • Prior probability- what’s the probability that someone has schizophrenia?
  • Posterior probability: what’s the probability that someone showing this Rorschach sign has schizophrenia?
  • You have a certain prior on the theory, and the theory strongly implies a certain fact (p1 is large: a good chance of it happening). Without a theory, the Q×p2 term is blank; it gets filled in as “fairly small” IF you used a precise/risky test, because it’s unlikely you could guess with that precision. That means P×p1 is relatively large, so the ratio is large and the theory is well supported. (A short numerical sketch follows this list.)
  • Salmon says you want large priors (Popper says small), but both recommend risky tests that are more likely to falsify your theory (due to their precise predictions).
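
A minimal sketch of the posterior computation. The 1/3 prior for urn 1 follows the note above; the marble compositions and the clinical base rate and sign frequencies are hypothetical placeholders, not figures from the lecture.

```python
# Posterior = (prior x likelihood for urn 1) / (total probability of a white marble).
def posterior(prior_1, p_white_given_1, prior_2, p_white_given_2):
    return (prior_1 * p_white_given_1) / (
        prior_1 * p_white_given_1 + prior_2 * p_white_given_2)

# Urns: P = 1/3 for urn 1, Q = 2/3 for urn 2; p1 and p2 are made-up compositions.
print(posterior(1/3, 0.8, 2/3, 0.3))    # ~0.57: probability the white marble came from urn 1

# Clinical analogue: prior = base rate of schizophrenia, p1 = P(sign | schizophrenia),
# p2 = P(sign | no schizophrenia). All three numbers are invented for illustration.
print(posterior(0.05, 0.60, 0.95, 0.10))  # ~0.24: posterior given the Rorschach sign
```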

 

  • Lakatos: research programmes (series of amended theories that grow out of a leading idea). Kuhn: revising certain things about the theory until it has died and then you have a paradigm shift.
  • Popper says it’s far more important to predict results rather than explain an old one.

References:

Yonce, J. L., 2016. Philosophical Psychology Seminar (1989) Videos & Audio, [online] (Last updated 05/25/2016) Available at: http://meehl.umn.edu/video [Accessed on: 06/06/2016]

Notes on Paul Meehl’s “Philosophical Psychology Session” #01

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the first episode (a list of all the videos can be found here). Please note that these posts are not designed to replace or be used instead of the actual videos (I highly recommend you watch them). They are to be read alongside to help you understand what was said. I also do not include everything that was discussed (just the main/most complex points).

  • Power of hard sciences doesn’t come from operational verbal definitions but from the tools of measurements & the mathematics.
  • A subset of the concepts must be operationally defined otherwise it doesn’t connect with the facts.
  • Methodological remarks = remarks in the meta-language (about the statements that occur in science and the relations between them, the properties of statements and relations between statements, and the relations between beliefs and evidence, e.g. true, false, rational, unknown, confirmed by data, fallacious, deducible, valid, probable) rather than the object language (the language that speaks about the subject matter, e.g. protons, libido, atom, g, reinforce).
  • Hans Reichenbach was wrong about induction
  • “Pure observations are infected by theory” (FALSE for psychology). If the protocol you record is infected by theory, you’re a bad scientist, e.g. choosing to look at one thing rather than another just because of a theory, OR falsifying data just to fit your theory.
  • Watson’s theory that learning took place in muscles (from proprioception feedback) was falsified by rats being able to negotiate a maze almost as quickly after having neural pathway that controlled proprioception feedback severed or when the maze was flooded.
  • Operationalism (we only know a concept if we can measure it & all the necessary steps for demonstrating meaning or truth must be specified) sparked psychology’s obsession with operationalising our terms (even though the harder sciences we are trying to emulate are not as rigorous with it), but Carnap suggests it is folly.
  • Logical positivism- taking things that could not be doubted by any sane person and building up from there a justification for science, and with the math and logic on top of the protocols you “coerce them” into believing in science. Urge for certainty.
  • Analyse science and rationally reconstruct (justify) it, show why a rational person should believe in science. Negative aim: liquidation of metaphysics (by creating meaning criterion).
  • A statement is cognitively meaningless if you don’t know how to verify it (either empirically or logically): the criterion of meaning. The meaning of a sentence is the method of its verification, a statement of affirmative meaning. A sentence’s meaning is derived from the evidence that supports it (“the meaning of a sentence is to be found entirely in the conditions under which it could be verified by some possible experience”*). Rejected because the sentence “Caesar crossed the Rubicon” would mean COMPLETELY different things to us and to a centurion at Caesar’s side, because we have different evidence.
  • Lots of our information comes from “authorities” (even though it’s a logical fallacy). We have to calibrate the authority and often we presume someone has done it for us so we trust it.

*http://hume.ucdavis.edu/mattey/phi156/schlickslides_ho.pdf

References:

Mattey, G.J., 2005. Schlick on Meaning and Verification. [pdf] G.J. Mattey. Available at: <http://hume.ucdavis.edu/mattey/phi156/schlickslides_ho.pdf> [Accessed on: 06/06/2016]

Yonce, J. L., 2016. Philosophical Psychology Seminar (1989) Videos & Audio, [online] (Last updated 05/25/2016) Available at: http://meehl.umn.edu/video [Accessed on: 06/06/2016]