Philosopher John Haugeland once offered a sort of counterpart to Ockham's Razor: "Don't get weird beyond necessity" (from "Ontological Supervenience," Southern Journal of Philosophy, 1984, pp. 1–12). Of course, the hard part is spelling out what weirdness amounts to and why it counts against a hypothesis. For example: Ockham's Razor tells us not to multiply entities beyond necessity; it stands in favor of parsimonious theories. Panpsychism is certainly weird, but from one point of view it's parsimonious: it says that there aren't actually two kinds of physical things (conscious and unconscious) but only one. Does the weirdness swamp the parsimony? If so, why?
So as a quick-and-dirty rule of thumb, "Pick the less weird theory" seems fine. As a serious methodological rule, it may need some work.

The word "theory" has a common meaning, which is something like "hypothesis" or "speculation." It also has a scientific meaning, which, close enough for our purposes, is "organized set of principles." When we call something a theory in that sense, we aren't saying anything at all about whether the principles are true or false.
Keep in mind that the word "theory" even gets used in mathematics: for example, when mathematicians talk about number theory (roughly, the study of the properties of whole numbers). The word "theory" here isn't meant to suggest that the principles number theorists use are suspect.
The "theory/fact" confusion is unfortunate. Evolutionary theory is a theory in the scientist's sense: an organized collection of explanatory principles. As it turns out, those principles have been very successful tools for making sense of nature.
So why not just call these principles facts? We could, but theoretical principles tend to be abstract and general. We tend to use the word "fact" for...

A good question, but as you no doubt guessed, one that people have thought about. The short answer is that we'll say that X causes Y if X raises the probability of Y, even if it doesn't raise the probability to 100%.
Let's be a bit more concrete. Think about clinical trials of a medication. Suppose we think that some new compound lowers blood pressure. We might test this by selecting a set of test subjects with hypertension, and then randomly assigning some of them to the treatment group and others to the control group. Ideally the test would be a double-blind test. That is, neither the people administering the treatment nor the people being treated would know if they were getting the actual medicine or a mere placebo. We'd measure everyone's blood pressure before the trial, and then after. And then we'd compare. Normally we wouldn't expect that everyone who received the actual treatment would have lower blood pressure at the end, and we also wouldn't expect that no one who got the placebo would end...
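The logic of that comparison can be put in the form of a toy simulation. The numbers below (a 70% chance of meaningful improvement on the drug versus 30% on the placebo) are invented purely for illustration, not drawn from any real trial:

```python
# Toy illustration of probability-raising causation, with made-up numbers.
import random

random.seed(0)

def improved(on_drug):
    """Return True if this subject's blood pressure drops meaningfully."""
    p = 0.7 if on_drug else 0.3   # assumed probabilities, for illustration only
    return random.random() < p

n = 10_000  # hypothetical subjects per group
treatment = sum(improved(True) for _ in range(n)) / n
control   = sum(improved(False) for _ in range(n)) / n

print(f"improved on drug:    {treatment:.2%}")
print(f"improved on placebo: {control:.2%}")
```

Neither rate comes out at 0% or 100%: some treated subjects don't improve and some placebo subjects do. But the treatment group improves at a clearly higher rate, and on the probabilistic account of causation, that raised probability is all that "the drug lowers blood pressure" requires.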

To add a few thoughts to my colleague's response:
We could use the following as a rough working definition of a miracle: a miracle is an intervention in the course of nature by the deity in which the usual regularities are suspended or overridden. There's lots of room to refine and polish that, but it gets around one objection to the very idea of a miracle, namely that laws of nature aren't really laws of nature if they have exceptions. Laws of nature would encode the way the world works when God doesn't intervene. But of course, even if we can come up with an intelligible notion of "miracle," it's a long way from there to having reasons to believe that there actually are miracles, let alone that any particular occurrence is a miracle. The fact that we don't have an explanation for something at the moment provides more or less no reason by itself to think that it's a case of divine intervention.
"Magic" is a more complicated concept than "miracle," in my view. If you go back and look at how...

I think the way to sort this out is to be careful about some relevant distinctions.
A theory is a certain sort of construction that we use for prediction, explanation, and the like. Theories come at different levels. A biological theory or a psychological theory is at a quite different level than a theory of subatomic particles. Furthermore, the laws or generalizations we use in biological or psychological explanation will by and large not appeal to the concepts we use in physics. If we try to define biological or psychological concepts in the vocabulary of physics, we're not likely to succeed, and even if we did, it's not likely that this would be a useful way to do biology or psychology.
So theories in biology and psychology don't reduce to theories in physics, if by "reduce" we have in mind deriving the higher-level laws from physical laws and facts, couched in the language of physics (together with so-called "bridge principles" to connect the two vocabularies). That said...
It may still be that...

I don't think there's any general injunction about getting the science right, but sometimes getting it wrong can be a distraction. One example that's been discussed by various critics comes from Lord of the Flies. Piggy's glasses are used to focus sunlight and start a fire. But Piggy is nearsighted; his lenses would be concave rather than convex and couldn't be used to start a fire. (Thanks to John Holliday for this example, which he discusses in his dissertation.) Many readers won't notice the problem, but the glasses and Piggy's nearsightedness aren't just an incidental plot element. This is the sort of detail that Golding could have gotten right, and once you know that it's wrong, you may never be able to read those scenes in the same way.
Needless to say, this doesn't show that getting the science right always matters. It surely doesn't. It's also plausible that these things will be matters of degree. The more esoteric the bit of science, and the less central to the story, the less it's likely to...

I don't have a clear fix on the question, but insofar as I do, I don't see how philosophy alone could answer it. You seem to be saying that there's a real-world, repeatable phenomenon: babies in certain situations behave this way rather than that. That may be true—is true, as far as I know. But if it's true, there's nothing a priori about it; the opposite behavior is perfectly conceivable and might have been true for all we could have said in advance. I don't see how philosophical analysis could tell us why things turned out one way rather than another. At least as I and many of my colleagues understand philosophy, it doesn't have any special access to contingent facts. A philosopher might come up with a hypothesis, but insofar as the hypothesis is about an empirical matter, it will call for the usual sort of empirical investigation that empirical claims call for.
As for blank slates, philosophy can't tell us by itself whether our minds start out as blank slates, but as a matter of fact, there...

I think the best place to start is by asking yourself what "self-actualization" is supposed to be and why it's so important. The phrase "self-actualized" has a sort of aura about it, but I'm not sure it's a helpful one for thinking about how we should live. One of my problems with the phrase is that as it's often used, it seems to mean something that has to do with a rather narrow sense of bettering oneself. Wanting to live a good life is a noble goal. Part of living a good life has to do with making good use of the gifts one has been given, to borrow language from the religious tradition. And I sense that that's part of your concern. One doesn't want one's life to be devoted to trivial things. But most of us have to make a living, and making a living by doing routine science doesn't seem ignoble—not least since one can never be sure what the larger consequences will be. So if you find satisfaction in doing science and do it well and conscientiously, I'd say you have nothing to be ashamed of. But...

It's a good question and I don't think it has an easy answer. On the one hand, if laws aren't truly "global" (i.e., could hold only at particular times and/or places), then we have a potential problem of arbitrariness. I'm pretty sure this is a true generalization: All men born in Canada and typing an answer on December 27, 2014 in the city of Washington DC to a question about laws on askphilosophers are wearing cotton sweaters. On the other hand, I'm quite sure that it's not a law of nature and I can't imagine why anyone would think otherwise. You could just stipulate that all true generalizations are laws of nature, but that seems truly arbitrary, and in particular it seems to ignore all the reasons we think it's worth looking for laws of nature. So from a certain point of view, requiring that laws of nature can't be restricted to particular places or times seems like a way of avoiding rather than introducing arbitrariness. That said, it hardly follows that we would never have...

Here's a sort of rule-of-thumb answer that I find useful. Roughly, we should ask ourselves how surprising the evidence would be if the hypothesis were not true. Suppose the question is whether Harvey robbed the bank. Our evidence for Harvey being the thief is that a witness saw him outside the bank around the time of the robbery. If Harvey really is the robber, this isn't unlikely, but suppose Harvey works in the barber shop on the block where the bank is, and the time he was seen was a few minutes before opening time for the barber shop. Then seeing him outside the bank wouldn't be surprising even if he wasn't the robber. It's not strong evidence. On the other hand, suppose the evidence is that a search of Harvey's apartment turns up a large bag of bills whose serial numbers identify them as the ones that were stolen. Then things look bad for Harvey. If he wasn't the robber, it would be surprising to find the money in his apartment. (Of course, this isn't conclusive proof. Maybe someone has planted the...
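For readers who like to see the arithmetic, the "how surprising would this be?" test is really a comparison of likelihoods, which Bayes' theorem turns into odds: posterior odds = prior odds × likelihood ratio. The probabilities below are invented for illustration only:

```python
# Rough numerical sketch of the surprise test, with invented probabilities.

def posterior_odds(prior_odds, p_evidence_if_guilty, p_evidence_if_innocent):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_if_guilty / p_evidence_if_innocent)

prior = 1 / 100  # suppose we start at 100-to-1 odds against Harvey

# Weak evidence: seen near the bank. Barely more likely if he's guilty,
# since he works on that block and would be there anyway.
weak = posterior_odds(prior, 0.9, 0.8)

# Strong evidence: the stolen bills in his apartment. Very surprising
# if he's innocent.
strong = posterior_odds(prior, 0.9, 0.001)

print(f"odds after weak evidence:   {weak:.3f}")
print(f"odds after strong evidence: {strong:.1f}")
```

The weak evidence barely moves the odds, because the sighting would be almost as probable if Harvey were innocent; the strong evidence moves them enormously, because finding the marked bills would be wildly surprising on his innocence. That asymmetry is exactly what the rule of thumb is tracking.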