Here are two suggestions. The first is less a philosophy blog than a metaphilosophy blog, but it often has useful links to other blogs that you might like: "Leiter Reports: A Philosophy Blog / News and views about philosophy, the academic profession, academic freedom, intellectual culture...and a bit of poetry", at http://leiterreports.typepad.com/ The second is also not quite a philosophy blog itself but a philosophy metablog, with summaries of, and pointers to, other philosophy blogs: the "Philosophers' Carnival". Its location moves around, and each new edition is announced on Leiter Reports. The current edition is at: http://ichthus77.blogspot.com/2012/02/philosophers-carnival-138.html

The philosopher Benson Mates once characterized philosophy as a field whose problems are unsolvable. This has often been taken to mean that there can be no progress in philosophy as there is in mathematics or science. But I believe that solutions are always parts of theories, and hence that accepting a solution requires committing to a theory. Progress can be had in philosophy, just as in mathematics and science, by coming to know what commitments are needed for solutions. In a sense, this means that philosophy sometimes "progresses" backwards, by coming to understand what extra assumptions are needed to solve its problems. (I've written about this in a technical paper, "Unsolvable Problems and Philosophical Progress" (American Philosophical Quarterly, 1982), and in an essay for a non-technical audience, "Can Philosophy Solve Its Own Problems?" (SUNY News, 1984).) There was also a recent symposium on this topic at Harvard, and some of the talks from that symposium can be found online.

That's one interpretation, but there are many others. My favorite interpretation focuses on this passage from Turing's classic 1950 essay, "Computing Machinery and Intelligence" (Mind 59: 433-460): "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." Of course, that century ended around the year 2000, and Turing's predicted "alteration" hasn't yet happened. But that's beside the point. Turing's claim, according to this passage, is that, if computers (better: computational cognitive agents, or robots) pass Turing Tests, then we will eventually change our beliefs about what it means to think (we will generalize the notion so that it applies to computational cognitive agents and robots as well as to humans), and we will change the way we use words like 'think' (in much the same way that we have generalized what it means to fly or to be a computer,...

Daniel Dennett discussed a fictional drug that he called an "amnestic" that allows you to feel pain, but paralyzes you so that you don't exhibit pain behavior, and leaves you with amnesia. Pleasant, no? For the details and his philosophical analysis, read: Dennett, Daniel C. (1978), "Why You Can't Make a Computer that Feels Pain", Synthese 38(3) (July): 415-456; reprinted in his Brainstorms: Philosophical Essays on Mind and Psychology (Montgomery, VT: Bradford Books (now Cambridge, MA: MIT Press), 1978): 190-229.

Rules of inference are "primitive" (i.e., basic) argument forms; all other arguments are (syntactically) proved using them. So you could either say that the rules of inference are taken as primitive and not (syntactically) provable, or you could say that they are their own (syntactic) proofs. However, the way that they are usually justified is not syntactically, but semantically: For propositional rules of inference, this would mean that they are (semantically) proved by means of truth tables. A rule such as Modus Ponens (your example) is semantically proved (i.e., shown to be semantically valid) by showing that any assignment of truth values to the atomic propositions (P, Q in your example) that makes all of the premises true also makes the conclusion true.
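The semantic justification described above can be carried out mechanically. Here is a minimal sketch in Python (my own illustration, not part of the original discussion) that "proves" Modus Ponens semantically by enumerating every assignment of truth values to P and Q, exactly as a truth table does:

```python
from itertools import product

def implies(p, q):
    # Material conditional: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# Modus Ponens is semantically valid iff every truth-value assignment
# that makes both premises (P -> Q, and P) true also makes the
# conclusion (Q) true.
modus_ponens_valid = all(
    q                                     # conclusion holds...
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p                # ...on every row where both premises hold
)

print(modus_ponens_valid)  # True
```

The same enumeration scheme extends to any propositional rule: list the atomic propositions, generate all 2^n assignments, and check that no assignment makes the premises true and the conclusion false.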

When an elementary-school student is learning how to multiply, the result of multiplying 2 by 2 is probably produced by a form of reasoning (perhaps repeated addition). When you or I do it, it's probably done by rote memory. But when any of us multiplies two 6-digit numbers, it's almost certainly by "reasoning". (Maybe those with "savant syndrome" do it by some kind of memory-like process, or maybe it's just very fast, unconscious reasoning.) But the "reasoning" we use for multiplying those larger numbers consists of applying the multiplication algorithm, among whose steps are instructions to multiply single-digit numbers together (like 2x2, or 9x8). And those multiplications are probably done by "memory" (what computer scientists call "table look-up"). That's because multiplication is a recursive procedure: We multiply large numbers by applying the multiplication algorithm, which requires us to multiply smaller numbers, eventually "bottoming out" in the base case of table look-up of the...
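The recursive structure described above can be made concrete in a short Python sketch (my own illustration, under the simplifying assumption of non-negative integers): the "algorithm" splits a multi-digit factor into digits and recurses, while single-digit products come from a stored times table, i.e., "rote memory":

```python
# Single-digit products stored by rote, like a memorized times table
# (what computer scientists call "table look-up").
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply(x, y):
    """Multiply non-negative integers recursively: the algorithm keeps
    splitting off digits until it bottoms out in single-digit look-ups."""
    if x < 10 and y < 10:
        return TIMES_TABLE[(x, y)]       # base case: "memory", not reasoning
    if y >= 10:
        # Split y into its last digit d and the rest h: x*y = x*(10*h + d)
        h, d = divmod(y, 10)
        return multiply(x, h) * 10 + multiply(x, d)
    # y is a single digit but x isn't: split x instead
    h, d = divmod(x, 10)
    return multiply(h, y) * 10 + multiply(d, y)

print(multiply(123456, 654321))  # 80779853376
```

Every call on 6-digit inputs ultimately decomposes into nothing but table look-ups, additions, and shifts by powers of ten, mirroring the division of labor between "reasoning" (the algorithm) and "memory" (the base case).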

And a good place to continue (after reading Turing 1950) might be with some of the readings that I have listed on my Philosophy of Computer Science course webpages, under "Philosophy of Artificial Intelligence" and "Computer Ethics", especially: LaChat, Michael R. (1986), "Artificial Intelligence and Ethics: An Exercise in the Moral Imagination", AI Magazine 7(2): 70-79.

I agree with Saul, but I also think that it can be very useful to think through a problem for yourself before reading what others have had to say about it. That way, you know what you think about the issue, and you can use your views to help you understand or question the views of others.