We all know about Turing machines. We all know that there are universal Turing machines, which in some sense "simulate" other TMs. We were all given a hand-wavy description of how to encode (TM, input) pairs for consumption by these UTMs. I can find lots of descriptions of UTMs; I can find lots of proofs that they're universal; but I can't find a precise description of the encoding scheme for any of them. Could any of you smart folks help with this? As far as I can tell, nobody has asked this question in the history of the Internet.

You could totally stop reading this post at this point, but here's some background:

I've recently had the burning passion to actually run a UTM. I know it's trivial to write e.g. a Python program that simulates a TM, and you could call that Python program a UTM; but that's kinda cheating, because that program isn't really a description of a Turing machine. It would be really hard to translate that program into a data structure that could be fed to the program as input. The UTM isn't written in its own language, so to speak. What I really want is a UTM that I could run on itself.
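To make concrete what I mean by the "cheating" version, here's a minimal sketch of a Python simulator that takes a machine as a set of 5-tuples. Everything here is my own convention (blank symbol `"_"`, directions `"L"`/`"R"`, the halting state name, the toy machine at the bottom), not part of any standard UTM:

```python
def run_tm(rules, tape, state="start", accept="halt", steps=10_000):
    """Simulate a TM given as 5-tuples (state, symbol, new_symbol, new_state, direction)."""
    table = {(s, sym): (ns, nst, d) for s, sym, ns, nst, d in rules}
    tape = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank "_"
    head = 0
    for _ in range(steps):
        if state == accept:
            return True            # reached the accepting state
        key = (state, tape.get(head, "_"))
        if key not in table:
            return False           # no applicable rule: reject
        new_symbol, state, direction = table[key]
        tape[head] = new_symbol
        head += 1 if direction == "R" else -1
    raise RuntimeError("step limit exceeded")

# A toy machine: scan right over 1s, write a 1 on the first blank, halt.
increment = [
    ("start", "1", "1", "start", "R"),
    ("start", "_", "1", "halt",  "R"),
]
```

This is exactly the kind of thing I *don't* want to settle for: `run_tm` itself is not a set of 5-tuples, so it can't be fed to itself.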

Here are the conventions I'm familiar with:
- a Turing machine is a set of 5-tuples: (state, symbol, new_symbol, new_state, direction)
- a Turing machine [imath]U[/imath] is "universal" iff there exists some function [imath]enc_U[/imath] that, given a TM [imath]M[/imath] and an input [imath]x[/imath] to that TM, returns some string [imath]x'[/imath] in [imath]U[/imath]'s alphabet, where [imath]U[/imath] accepts [imath]x'[/imath] iff [imath]M[/imath] accepts [imath]x[/imath].
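For what it's worth, the closest thing I've seen to an explicit [imath]enc_U[/imath] is the textbook-style "unary runs" encoding: number the states, symbols, and directions starting from 1, write each field of a 5-tuple as a run of 1s, separate fields with a single 0, and separate tuples with 00. To be clear, this is a hedged sketch of that generic scheme, not the encoding of any particular UTM [imath]U[/imath]; the numbering and separators here are my own choices:

```python
def enc(rules, states, symbols):
    """Encode a TM (set of 5-tuples) as a binary string using unary runs.

    Each tuple (state, symbol, new_symbol, new_state, direction) becomes
    five runs of 1s (one per field, indices starting at 1) joined by single
    0s; tuples are joined by 00. Index assignment is arbitrary but fixed.
    """
    state_ix = {s: i + 1 for i, s in enumerate(states)}
    sym_ix = {s: i + 1 for i, s in enumerate(symbols)}
    dir_ix = {"L": 1, "R": 2}
    blocks = []
    for s, sym, ns, nst, d in rules:
        fields = [state_ix[s], sym_ix[sym], sym_ix[ns], state_ix[nst], dir_ix[d]]
        blocks.append("0".join("1" * f for f in fields))
    return "00".join(blocks)
```

The missing piece, and the whole point of my question, is the UTM whose transition table actually *decodes* strings like this.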

In my dream world, somebody would reply to this thread with two things:
1. A set of 5-tuples describing a UTM [imath]U[/imath].
2. A description of the corresponding function [imath]enc_U[/imath].
But I'd be grateful for any help at all.