porting C code in MIPS

This is a discussion on porting C code in MIPS within the C Programming forums, part of the General Programming Boards category.

Hi, I am new to C programming, and want to ask your advice in general about good tips in order to ...

Generally, the advice is to write code that is portable. You achieve that with a series of "don't"s. Don't make assumptions about things like what size an integer is. Don't make assumptions about how many digits of precision a floating-point variable supports. Don't assume that two variables are next to each other in memory. Don't make assumptions about how data structures are laid out. Don't write complex expressions with lots of side effects (keep your expressions simple, and only change one thing at a time). Don't try to access element 5 of a 3-element array. The list of "don't"s you will need to learn is quite long.

There is no good practice in C for replacing functions like malloc(). Your choices are to use them as-is, wrap them in some way that does more error checking before and after calling malloc(), not use them at all, or write dedicated code (which will probably be non-standard and non-portable) to do a similar thing. Each of those approaches has advantages and disadvantages. There is no "best".

Making the huge assumption that your code is not misbehaving in some way (e.g. not molesting pointers, so not corrupting data structures that malloc() uses internally), malloc() will return NULL if it cannot allocate memory. If the memory requested exceeds the memory available, then malloc() cannot allocate it. If you want more precision than that (for example, a report of how much memory is remaining), the techniques are non-standard and non-portable (i.e. specific to an operating system).


Basically I am trying to remove malloc() statements by declaring normal variables of fixed size. Supposing that I know the size of the required space, is it OK to change a pointer that is allocated x amount of memory into a variable of size x? Also, I want to ask what happens when malloc() is used inside a function: is it correct that memory is allocated EACH time the function is called, so it would increase the memory needs significantly?


> But what does this space of 50K contain? Is it only the instructions of the program, or also the space allocated for the variables?
It probably means everything (except dynamic memory allocation), if you're just loading this onto a bare board.

So in this example, the generated code for main() occupies 115 bytes (the text section), the global array msg is 12 bytes (data), and the uninitialised global variable is 4 bytes (bss).

Finding out how much stack space (at compile time) is trickier.
For simple code with a straightforward call relationship, it's pretty easy to work out.

It gets harder if you have many deeply nested call paths (20+ functions), as you need to sum the frames along each call path in turn to work out the maximum depth.
a() -> b() -> c() -> d() -> e() might use less stack than x() -> y(), if, say, y has lots of local variables.

Things get messy though if you have recursive calls, or variable length arrays on the stack.

Your default stack size might be quite large, but you should be able to configure it down to a really small size if your code is simple enough.

So when we declare a variable, does the compiler allocate memory on the stack at that point, or does it just gather information about how much memory will need to be allocated at runtime?
