my question is, what's the equivalent of that in either PPC or x86_64 binary opcodes??
(not python byte-code or ASM, though the low-level ASM range would be legit)

I'm trying to design a common interface language for recompilation purposes between opcodes and code.
but I need to know how a class/object would work on a CPU before I can start finalizing the designs.

thanks :)
___

for your concerns as to why I'd need this:

GCN/Wii: DOL/REL (PPC)
Windows: EXE/DLL (x86_64)

imagine being able to convert between those.

My language is designed for identifying large structures of ASM or binary opcodes and simplifying them into basic code keywords and statements, which can then be parsed into higher level languages such as C++, Java, or even Python (including byte-code).
(and vice-versa)
___

what exactly is my language?
you'll see it when I release my open-source program ;)

I'm not too willing to share right now because it's still very much incomplete.
my only knowledge of the CPU comes from partially building one in Minecraft (Redstone)
(you might want to look into RedGame for something more functional than what I've built)

what's the equivalent of that in either PPC or x86_64 binary opcodes??

Are you asking us to compile this Python code? You can use a Python-to-C/C++ conversion tool, like Cython, and then compile the result with a C/C++ compiler. If you specify PPC as the target, you'll get PPC machine code; if you specify x86_64 as the target, you'll get that machine code. Your question doesn't make sense when asked of human beings; this question is the reason compilers exist. That's their job.

how a class/object would work on a CPU before I can start finalizing the designs.

A class or object does not work any differently on a CPU than plain old C code. After compilation / interpretation / JIT'ing / whatever, the code is just plain function calls, jumps, and operations on registers. The concept of a "class" or an "object" doesn't really survive past the compiler's front end (the "semantic analyser").

I would recommend that you start by learning how to write object-oriented code in C. C is pretty much the closest-to-hardware language that is still human-readable, and it has a very straightforward, obvious (and standardized) mapping to machine code. If you wonder how anything could be done in actuality, just try to write the exact equivalent code in C (which is always possible, but sometimes hard). Then, if you really need to know what it looks like in assembly, just use a C compiler and ask it to generate the assembly listings (the option is -S on most compilers). As for getting the PPC or x86_64 machine code, well... that's the executable / object files you get after compilation. They are in machine code; that's the point.

And if you want to know how some Python code would look in ASM, then you're out of luck, my friend. Python is not a standardized language; there is absolutely no specified or guaranteed behavior. You write Python code, feed it to CPython, and something happens. You can reasonably expect a certain behavior, based on what the CPython docs tell you, but there is no formal specification that guarantees any kind of behavior. In other words, a simple "hello world" program that completely ignores the Python file you specify as a command-line argument would be a perfectly valid Python interpreter, because "nothing ever happens" is a valid behavior for any Python program.

Jokes aside, my point is that you will never get a straight answer to the question of how a Python class or function is realized in actual native code. You can dig into the CPython implementation if you want to see how they do it, but that is neither a generally-applicable nor a definitive answer.

One place you could look for an answer that is closer to the "Python-class-to-ASM" is the Itanium C++ ABI specification. This is the informal standard C++ ABI that most major compilers follow (except Microsoft... sigh), and it specifies exactly how classes of all kinds (in C++) translate to actual implementations in memory.

I'm trying to design a common interface language for recompilation purposes between opcodes and code.

It sounds like you are trying to implement something like LLVM's Code Generator, used by Clang compilers (used on Macs by default for C/C++/Obj-C, and optionally on other systems too). Most compilers have a back-end that compiles from some intermediate language (or opcodes, bytecodes, etc.) to machine code.

It also sounds like you want to implement something along the lines of any one of Microsoft's attempts to create the "one model to rule them all", like COM (Component Object Model) or .NET CLR.

imagine being able to convert between those.

That's called an emulator, like qemu. AFAIK, qemu only does dynamic translation (on the fly) from any architecture to the host architecture. I guess it wouldn't be impossible to do a batch translation of the whole thing. There is also a qemu back-end that produces LLVM bitcode (which can then be optimized and compiled into any target architecture that LLVM supports, which is most of them).

To be honest, translating the machine code from one architecture to another is not really the issue. The main problem is all the links to the outside world, like system calls, library calls, etc... which you cannot easily port.

My language is designed for identifying large structures of ASM or binary opcodes and simplifying them into basic code keywords and statements, which can then be parsed into higher level languages such as C++, Java, or even Python (including byte-code).

That is really the most insane part of your post. You're not a fan of information theory, are you? Most of the information conveyed by source code is useless noise as far as compilers and code generators are concerned; they wash away nearly everything. So, from our perspective (as human beings), all that nice and valuable information about how the code works, represented in the way the source code is structured, is long gone by the time it hits bytecode, IR, IL, or machine code. There is literally no way to reconstruct it.

There is one exception: the .NET CLR/CIL, which is a really high-level IR that actually preserves nearly everything of the original source code. That means (1) it is really slow and (2) it is super easy to hack, exploit, attack, and all that nasty stuff (but it certainly makes for interesting reading to find out how ridiculously easy .NET code is to hack).

I'm pretty sure that the most you could hope for in most cases is to generate some barely human-readable C code.

And, to answer your question about what that program would look like in ASM, I wrote this equivalent C++ program:

Which is pretty much self-explanatory if you are the least bit used to reading ASM listings (as all programmers should be). And here is the kicker... the most you could deduce (of the main function) from the above assembly is some code like this:

Not exactly super interesting if you are looking to resurrect the higher-level original C++ program. The "A" class doesn't even exist anymore, as I explained already; it's washed away, like it would be in any language that respects itself (and no, .NET languages are not among them). And btw, I omitted the static init code, which only adds more noise. And finally, you are lucky that this particular example doesn't contain too many goto instructions, as ASM is usually riddled with gotos, which are very hard to translate back to any kind of code that isn't meaningless, unreadable spaghetti crap.

no, I'm asking for a binary representation that would work similarly.
(ASM doesn't quite tell me how the CPU works)
^especially high-level ASM such as MASM or FASM, which seems to be what you posted

I will mention, I'm hardly educated in ASM, though I'm learning through working with stuff dealing with it...

the code is just plain function calls, jumps and operations on registers

so I've been told...

I have an assumption that a class works similar to a struct but with function pointers.
(credit to APott aka NardCake for this idea)

It sounds like you are trying to implement something like LLVM's Code Generator

I guess it could be similar >.>

though looking at this, this seems to be direct ASM -> code...
this approach is a lot more complex to have to deal with.

my approach is a language designed slightly above ASM level...
I'm not worrying about the code (C++, Java, etc) just yet.

I need to perfect what my language can handle, and add a few features that keep things just below C.

I haven't shown it in any of the images I supplied earlier, but I'm already supporting pointers and functions using the CPU stack.
so yes, the language IS functional, and like I said, I've built an interpreter shown in a few of the supplied images.

I did say those images are really old... heh
(about 3 to 4 years to be exact)

That's called an emulator

no, an emulator is really nothing more than an interpreter, substituting the commands supplied by the given executable with actual commands for the machine during emulation runtime.

the correct term is recompiler.
(a Wii emulator can't reconstruct a DOL into an EXE)

Not exactly super interesting if you are looking to resurrect the higher-level original C++ program. The "A" class doesn't even exist anymore, as I explained already; it's washed away, like it would be in any language that respects itself (and no, .NET languages are not among them). And btw, I omitted the static init code, which only adds more noise. And finally, you are lucky that this particular example doesn't contain too many goto instructions, as ASM is usually riddled with gotos, which are very hard to translate back to any kind of code that isn't meaningless, unreadable spaghetti crap.

I just reread this and relooked at that deduced code...
I see what you mean...
but yea, like I said with the direct approach vs the bottleneck approach.

in my example, all I need to worry about is the very basic instructions.
the backends take care of everything between:
ASM <backend> UMCSL <backend> Code

UMCSL is of course standalone, so everything the backends deal with is direct towards a common interface language.

the .NET CLI is meaningless unless a script supports it.
otherwise I can set function names as IDs as the CPU identifies them.

And if you want it in PPC or any other architecture that is different from your host architecture, then you just need to specify the target architecture options for the assembly and disassembly, and you'll need to have them installed (basically, install the GNU cross-compilers for your desired target).

But the details of the specific instruction set are not very relevant at this stage, since there's really nothing left of the original object-oriented code; it's just a couple of plain function calls and raw memory. I don't know what you expect to see different between the architectures, or between the assembly listing and the x86-64 instructions. After all, this is just a simple cdecl call (mov, mov, mov, call), another simple cdecl call (mov, call), a return of zero (xor, ret), a stack frame push / pop (sub rsp, add rsp), and a bit of padding (nopl). It's quite literally the most trivial piece of assembly code you could imagine (it's just a "hello world" program, after all!).

like I stated earlier, I'm not a computer genius...
I don't know all that much about the CPU other than what I've designed in Minecraft, which really isn't much more than a functional ALU with some RAM and storage...
nothing too grand... heh

so yea, I'm using ASM to both structure my language against my own ideas of how a CPU works, as well as learn from it.

all my language will do is clarify the logic given by the supplied opcodes.
of course though, it'll need much more code, and multiple instances of similar function structures, in order to assume a class.
___

basically, it'll do the same thing you do when you start out as a noob and write a program out of a bunch of functions...
once you gain knowledge about classes, you start organizing your functions more properly and creating objects.

^that was me a few years ago, I originally wrote UMC3.0a (dev3) knowing nothing but python functions.
now I know classes, decorators, and even meta-programming and am completely redesigning everything with 2 versions of UMC (3.0a (dev5) and 3.0).

with that experience, I can identify classes from slews of functions, and I want to write that functionality into my language.

it has to do with my ability to visualize extremely large structures of logic.
(I have autism, and I believe this ability is a gift)

I know the process looks impossibly complex to you, but it looks simple enough to me. ;)
___

so about learning from the ASM you posted...
as long as I can see the basic logic on the CPU level, I can understand how it works.

I think I need another instance created of class A to understand it better. :P

Those three move operations are for passing the parameters of the function. This is according to the calling conventions. It is only a bit special here because the function signature is such that all the parameters can be passed via registers instead of the stack, as is usually the case.

Here is a basic explanation:

movl $17, %edx : Passes the integer value 17 as the last argument to the function call (__ostream_insert), optimized by passing the value through the EDX register (a 32-bit general-purpose register often used for argument passing). Btw, the value 17 is the length of the string "I am initialized!", which is the required third parameter to the __ostream_insert function.

movl $.LC0, %esi : Passes a pointer to the string constant (marked by the label .LC0, as you can see in the .rodata read-only data section) as the second parameter, which uses the ESI register, which is a general-purpose pointer register.

movl $_ZSt4cout, %edi : Passes a pointer to the std::cout object, marked by the mangled external symbol _ZSt4cout (which will be resolved by the linker later), as the first parameter to the __ostream_insert function; that parameter is a reference to an ostream object (C++ references are, of course, implemented as pointers). The pointer is passed in the EDI register, another general-purpose register used for argument passing.

call ...__ostream_insert... : Calls the function, which means jumping to the specified execution address after pushing the return address on the stack. The target here is a mangled external symbol for __ostream_insert (with the demangled signature that I posted earlier) that will be resolved later by the linker (the actual function probably resides in libstdc++.so).

The allocation / deallocation of stack memory is simply done by moving the stack pointer (the RSP register) back by 8 and then moving it forward again at the end (on Linux, the stack grows downwards, from higher memory addresses to lower ones, so you "grow" it by moving the stack pointer backwards). And the assignment of zero to the EAX register (used for returning simple result values, in this case the result of the main function, which is zero) is done with a XOR trick, which uses the fact that XOR'ing a register with itself always gives zero; the xor instruction also has a shorter encoding than loading an immediate zero, so compilers prefer it.
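Piecing the walkthrough together, the fragment being described would look roughly like this (reconstructed from the explanations above, AT&T syntax; the label and the elided mangled call target are illustrative):

```asm
main:
        subq    $8, %rsp              # reserve stack space
        movl    $17, %edx             # 3rd arg: string length
        movl    $.LC0, %esi           # 2nd arg: pointer to "I am initialized!"
        movl    $_ZSt4cout, %edi      # 1st arg: the std::cout object
        call    __ostream_insert      # (full mangled name elided)
        xorl    %eax, %eax            # return value 0
        addq    $8, %rsp              # release stack space
        ret
```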

The only thing that might seem puzzling is the stack allocation, since the main function never uses it to store anything (it has no local variables). It is most likely there to keep the stack 16-byte aligned at the call, as the x86-64 ABI requires (the call that entered main pushed an 8-byte return address, so subtracting another 8 restores the alignment).

with that experience, I can identify classes from slews of functions

There are decompilers, like Hex-Rays, but they generally don't go much beyond reconstituting basic procedural code, and they produce very ugly code (even when linker symbols are available, which is rarely the case for "distributed" software). Unless, of course, you have debug information, but that's like having the source code, which makes the whole exercise pointless.

it has to do with my ability to visualize extremely large structures of logic.
(I have autism, and I believe this ability is a gift)

Well, maybe you are gifted in that way, but writing a computer program that can do the same is a whole other ball game. You could argue that any experienced programmer could take assembly code and reconstruct a reasonable approximation of what the source code that produced it probably looked like. But the point is that such a reverse-engineering task involves drawing on a lot of your own experience and intuition about the code. That's not something that's easy to replicate in a computer program; it's called artificial intelligence / cognition.

sorry for the late response btw, I'm multitasking on various other forums and Skype chats between 2 other areas of my program.

it's called artificial intelligence / cognition.

haha, funny you should mention that. :)

I've been working on plans for AII (Artificial Interactive Intelligence):
a standard for my game-system/computer that's designed to give natural interaction (not just human-like) to bots in video games.

imagine playing CoD (IK, kill me now), and physically pointing to an area on the screen while telling your bot comrade to go to that area.

depending on how the bot was programmed to act (its attitude), and the standards it was given to follow (such as army training), it may or may not follow your direction.
___

of course, this requires sheer power to perform, which is why I'm redesigning electronics to perform the way electrons were naturally meant to perform.
(binary and quaternary are only fixed systems and aren't exactly natural development)

if you were to plug wires into your brain, how much do you wanna bet the data you'd receive wouldn't be binary or quaternary.

take a look into Analog Computing...
(yes, taking a step back to take a leap forward)
I'm taking that a step further to achieve my power. :)

Well, maybe you are gifted in that way, but writing a computer program that can do the same is a whole other ball-game.

I did say it was extremely complex. =P

I can't be overwhelmed by it though as I'm already beyond drowned in my sea of overwhelming projects... heh

I can see just how big the logic I need to write is...
I just wish I could connect my brain to my compy so the slow method of typing wouldn't keep me from writing out the full code in seconds... heh