Hi Curious Cat, welcome to Programmers! Calls for lists aren't on-topic here: I've removed that part from your question. That said, this question is extremely broad: is there a specific problem you're working on that has you thinking about Turing-completeness?
–
user8 Jan 29 '12 at 21:02

4 Answers

From a more practical standpoint: if you can translate all programs in a Turing-complete language into your language, then (as far as I know), your language must be Turing-complete. Therefore, if you want to check whether a language you designed is Turing-complete, you could simply write a Brainf*** to YourLanguage compiler and prove/demonstrate that it can compile all legal BF programs.

To clarify, I mean that in addition to an interpreter for YourLanguage, you write a compiler (in any language) that can compile any BF program to YourLanguage (keeping the same semantics, of course).
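To make the idea concrete, here is a minimal sketch of such a compiler, with Python standing in both for YourLanguage and for the implementation language (my choice, not anything canonical). The I/O commands `.` and `,` are left out for brevity, so this covers only the memory and control-flow core:

```python
# Sketch: compile Brainfuck source to Python source (Python standing in
# for "YourLanguage"). Handles > < + - [ ] ; the I/O commands . and ,
# are omitted, so this is only the memory/control-flow core.
def bf_to_python(bf_src):
    ops = {
        ">": "ptr += 1",
        "<": "ptr -= 1",
        "+": "tape[ptr] = (tape[ptr] + 1) % 256",  # wrapping byte cells
        "-": "tape[ptr] = (tape[ptr] - 1) % 256",
    }
    lines = ["tape = [0] * 30000", "ptr = 0"]
    indent = 0
    for ch in bf_src:
        if ch in ops:
            lines.append("    " * indent + ops[ch])
        elif ch == "[":
            lines.append("    " * indent + "while tape[ptr] != 0:")
            indent += 1
        elif ch == "]":
            if lines[-1].endswith(":"):       # empty loop body
                lines.append("    " * indent + "pass")
            indent -= 1
    return "\n".join(lines)
```

For example, `bf_to_python("++[->+<]")` emits a Python program whose final state has `tape[1] == 2`: the same semantics as the BF original, which is the whole point of the argument.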

Yes, that would definitely be the most practical way to approach it. </sarcasm>
–
Robert Harvey Jan 29 '12 at 18:27


@RobertHarvey has a point, but the general idea is quite vital. Brainfuck is proven to be Turing-complete and is very simple as programming languages go. For non-esoteric programming languages, implementing a Brainfuck interpreter may be much easier and faster than producing a rigorous proof from scratch (I can implement BF in a couple of lines of Python, but I'm not sure where to start with a formal proof that Python is Turing-complete); and dozens of esoteric Brainfuck-inspired languages are known to be Turing-complete because it's known how they map to Brainfuck.
–
delnan Jan 29 '12 at 18:53


@RobertHarvey: Why not? Surely someone designing their own language would be able to write a BF compiler for it (if it is imperative; otherwise they could find a suitable other language).
–
Anton Golov Jan 29 '12 at 18:53

@delnan: You will have to prove, however, that your BF interpreter correctly implements the BF specification, IOW you will have to prove that your BF interpreter is, in fact, a BF interpreter and not an interpreter for a BF-like language that might or might not be Turing-complete.
–
Jörg W Mittag Jan 31 '12 at 1:52

OISC (One Instruction Set Computer) denotes a type of imperative computation that requires only one instruction of one or more arguments, usually “subtract and branch if less than or equal to zero”, or “reverse subtract and skip if borrow”. The x86 MMU implements the former instruction and is thus Turing-complete.

In general, for an imperative language to be Turing-complete, it needs:

A form of conditional repetition or conditional jump (e.g., while, if+goto)

A way to read and write some form of unbounded storage (e.g., variables, a tape)
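As a purely illustrative sketch (mine, not a formal proof): a machine with numbered instructions, unbounded integer registers, and a conditional jump is a Minsky-style counter machine, and such machines are known to be Turing-complete once they have at least two registers.

```python
# A minimal imperative core: numbered instructions, unbounded integer
# registers (the storage), and a conditional jump (the control flow).
# This is the shape of a Minsky counter machine.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":
            registers[args[0]] += 1
        elif op == "dec":
            registers[args[0]] -= 1
        elif op == "jz":            # jump to args[1] if register is zero
            if registers[args[0]] == 0:
                pc = args[1]
                continue
        elif op == "jmp":           # unconditional jump
            pc = args[0]
            continue
        pc += 1
    return registers

# Example program: move the contents of register "a" into register "b",
# i.e. while a != 0: a -= 1; b += 1
prog = [
    ("jz", "a", 4),
    ("dec", "a"),
    ("inc", "b"),
    ("jmp", 0),
]
```

Running `run(prog, {"a": 3, "b": 0})` leaves `{"a": 0, "b": 3}`: nothing but the two ingredients on the checklist above.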

There are of course other ways of looking at computation, but these are common models for Turing tarpits. Note that real computers are not universal Turing machines because they do not have unbounded storage. Strictly speaking, they are “bounded storage machines”. If you were to keep adding memory to them, they would asymptotically approach Turing machines in power. However, even bounded storage machines and finite state machines are useful for computation; they are simply not universal.

Strictly speaking, I/O is not required for Turing-completeness; TC only asserts that a language can compute the function you want, not that it can show you the result. In practice, every useful language has a way of interacting with the world somehow.

For imperative languages, are simple variables enough? I was under the impression that some kind of collection (e.g. arrays or linked lists) would be necessary.
–
luiscubal Jan 12 '14 at 23:08

@luiscubal you need to be able to specify an arbitrary amount of data. With simple variables you can only represent as many pieces of data as you have variables; what if you need to represent N+1 different pieces of data? One could argue that with tricks like the ones Fractran plays, you could do it even with simple variables... but that's not quite what you're asking.
–
MichaelT Jan 13 '14 at 18:07

i know this is not the formally correct answer, but once you take the 'minimal' out of 'Turing-complete' and put 'practical' back where it belongs, you'll see the most important features that distinguish a programming language from a markup language are

variables

conditionals (if/then...)

loopage (loop/break, while...)

next come

anonymous and named functions

to test these assertions, start out with a markup language, say, HTML. we could invent an HTML+ with variables only, or conditionals only (MS did that with conditional comments), or some kind of loop construct (which in the absence of conditionals would probably end up as something like <repeat n='4'>...</repeat>). doing any of these will make HTML+ significantly (?) more powerful than plain HTML, but it would still be more of a markup than a programming language; with each new feature, you make it less of a declarative and more of an imperative language.
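for what it's worth, that imagined <repeat n='4'>...</repeat> construct is easy to sketch as a preprocessor (python here; the construct and names are invented for this answer, and it handles non-nested repeats only):

```python
import re

# Toy preprocessor for the hypothetical HTML+ <repeat> construct
# imagined above. Non-nested repeats only.
def expand_repeats(html):
    pattern = re.compile(r"<repeat n='(\d+)'>(.*?)</repeat>", re.S)
    # Replace each <repeat n='k'>body</repeat> with body repeated k times.
    return pattern.sub(lambda m: m.group(2) * int(m.group(1)), html)
```

note that without conditionals the repeat count is fixed when the document is written, so this adds convenience but no computational power; that's exactly the declarative-vs-imperative line being drawn above.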

the quest for minimality in logic and programming sure is important and interesting, but if i had to teach n00bies young or old 'what is programming' and 'how to learn to program', i'd hardly start out with the full breadth and width of the theoretical foundations of Turing completeness. the whole essence of cooking and programming is doing stuff, in the right order, repeating until ready, as your mom did it. that about sums it up for me.

yesyesyes i know. but all the examples given are more or less esoteric (while maybe interesting or surprising), my answer was a pragmatic one, and very probably not minimal at all. i think it's important to point that out—this page was #1 when searching for Turing-completeness on google, the answers here are IMHO of little use for, say, a n00bie who wants to know what distinguishes HTML from PHP or Python. i mean, brainfck is not called brainfck for no reason.
–
flow Jan 12 '14 at 22:11

A programming language is Turing-complete if you can do any calculation with it. There isn't just one set of features that makes a language Turing-complete, so answers saying you need loops or that you need variables are wrong, since there are languages that have neither but are Turing-complete.

Alan Turing described the universal Turing machine, and if you can translate any program designed to run on the universal machine into your language, your language is also Turing-complete. This also works indirectly: you can say language X is Turing-complete if every program for a Turing-complete language Y can be translated into X, since every universal-Turing-machine program can be translated into a Y program.

Time complexity, space complexity, ease of input/output, and ease of writing any given program are not included in the equation, so such a machine can theoretically do all calculations, provided the calculations are not halted by power loss or by the Earth being swallowed by the Sun.

Usually, to prove Turing completeness, people write an interpreter for a language already proven to be Turing-complete, but for that to work you need a means of input and output, two things that are really not required for a language to be Turing-complete. It's enough that your program can alter its state at startup and that you can inspect the memory after the program has halted.
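That convention can be sketched directly (Python, illustrative): a Brainfuck interpreter with `.` and `,` removed, where the "input" is the initial tape and the "output" is whatever the tape holds when the program halts.

```python
# Brainfuck minus its I/O commands (. and ,), run as a pure function:
# "input" is the initial tape, "output" is the tape you inspect after
# the program halts -- the convention described above.
def run_bf(src, tape):
    # Precompute matching bracket positions for the jumps.
    jumps, stack = {}, []
    for i, ch in enumerate(src):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    ptr = pc = 0
    while pc < len(src):
        ch = src[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] += 1
        elif ch == "-":
            tape[ptr] -= 1
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]          # skip the loop
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]          # repeat the loop
        pc += 1
    return tape
```

For example, `run_bf("[->+<]", [3, 0])` halts with the tape reading `[0, 3]`: the computation happened even though nothing was ever printed.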

To make a successful language, though, it needs more than Turing completeness, and this is true even for Turing tarpits. I don't think Brainfuck would have been popular without its "," and "." commands.

"A programming language is turing complete if you can do any calculation with it." That's the Church-Turing thesis, not what makes a language Turing-complete.
–
Rhymoid May 20 '14 at 13:47

@Rhymoid So you mean nothing is Turing-complete unless you can make an interpreter? I.e., lambda calculus is not Turing-complete even though it's Turing-equivalent?
–
Sylwester May 20 '14 at 15:58


I'm still looking for an authoritative definition of the terms Turing-equivalent and Turing-complete (and Turing-powerful). I've already seen too many cases, from people on message boards to researchers in their own friggin' papers, who interpret these terms differently.
–
Rhymoid May 20 '14 at 16:12

Anyway, I interpret 'Turing-complete' as being simulation equivalent to a Universal Turing Machine (UTM; which, in turn, is capable of simulating any Turing machine -- hence 'universal'). In Turing's paper from 1936, where he introduced his machines, he defined the notion of a UTM, and gave a sketch of a proof that UTMs are simulation equivalent to Church's lambda calculus. By doing so, he proved that they had the same computational power. The Church-Turing thesis asserts, put simply, that "that's all the computational power you'll ever get".
–
Rhymoid May 20 '14 at 16:19

The Wikipedia page on Turing completeness has two formal definitions. One requires I/O, the other doesn't. The one that doesn't says that a machine is Turing-complete if it can calculate every Turing-computable function. That puts lambda calculus back to being Turing-complete, since you can easily write an equivalent program in lambda calculus that calculates the same thing as any Turing-machine program.
–
Sylwester May 20 '14 at 16:19
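To illustrate the lambda-calculus side of this thread (a Python sketch, not anyone's formal definition): Church numerals compute with nothing but function application, with no storage variables, no loops, and no I/O.

```python
# Church numerals sketched in Python: the numeral n is the function
# that applies f to its argument n times. Arithmetic is pure function
# application -- no mutable storage, no loops, no I/O.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting applications of an increment.
    return n(lambda k: k + 1)(0)
```

Here `to_int(add(succ(succ(zero)))(succ(zero)))` decodes 2 + 1 to 3, which is the sense in which lambda calculus calculates Turing-computable functions without any I/O at all.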