Thierry Atheist wrote:

For instance: now that I know that the 2 GB barrier is still there, that it may not be faster than the original AOS 3.x, that there are still hidden tricks in the original's source code that can't be taken advantage of, and that SMP may still not be possible, there's very little left to entice me... I'm not even sure what that little bit is.

Because it runs on cost-effective hardware and is actually useful for computing tasks invented after 1992?

Thierry Atheist wrote:

AROS not having virtual memory and multi-user does not detract from my possible future use of AROS. I consider their absence a PLUS! And I would not want to use it if they were added.

So you want AROS to be worse?

Here's an idea: only use one user account. Behold, the machine is now single-user. But that would be stupid, because then there would be no security model, and your system would be full of viruses and junk.

Reminds me of that PC you abuse.

It's also hilarious that you don't want virtual memory. I mean, on a system with a 2 GB RAM limit, you seriously want to cripple yourself this way? There are no words.

Thierry Atheist wrote:

if the idea is to use E-UAE on a NatAmi to get AOS SW to run.... Heh, bad, bad idea.

AROS 68K is binary compatible with AOS. Duh.

Amiga Ppc

Posts 258 · 13 Jan 2011 20:59

Jason S. McMullan wrote:

It's not 'optimized' for the ACA1230; it just happens to have a convenient MAPROM facility to remap the AROS ROM images into the Kickstart ROM memory spaces at 0xe00000 and 0xf80000.

As for aros-m68k, it runs in WinUAE on all m68k CPUs, from the lowly M68000 to the M68060, without recompilation. I have specifically designed the m68k support to be able to run on the entire Amiga CPU line, all from the same binary.

I'm still working on FPU context support, so that still needs to be done, but if you can get the ROM onto your machine, it should boot.

Well, thank you very much for your work.

Gunnar von Boehn (Germany)

(Moderator) · Posts 5810 · 13 Jan 2011 21:27

Richard Maudsley wrote:

It's also hilarious that you don't want virtual memory. I mean, on a system with a 2 GB RAM limit, you seriously want to cripple yourself this way? There are no words.

Not wanting virtual memory makes very good sense to me. Virtual memory is like running your car in four-wheel-drive mode all the time: yes, it has advantages, but it comes at a price. It results in a more expensive and slower system at the end of the day.

Richard Maudsley (United Kingdom)

Posts 824 · 13 Jan 2011 21:33

That depends entirely on the implementation. A good virtual memory system is a lifesaver when you need more RAM than you have, and invisible otherwise.

It's called four-wheel drive, by the way. And the analogy doesn't work either: a car with permanent 4WD will handle better, and so lose less speed in cornering. (I don't count 4x4 trucks, as they have balance issues.)

André Jernung (Sweden)

(MX-Board Owner) · Posts 989 · 13 Jan 2011 21:36

Thierry Atheist wrote:

The reason I brought up AmiKit was not to trash them, but to tell you my experience with it and if the idea is to use E-UAE on a NatAmi to get AOS SW to run.... Heh, bad, bad idea.

You really have no clue whatsoever, do you? Why the blazes would you want to emulate Amiga binaries when the OS runs them natively? Are you so indoctrinated with PPC Amiga and OS4's Amiga emulation functionality that you think you need to emulate a 68k Amiga on a 68k Amiga itself?

Michael Ward (USA)

Posts 234 · 13 Jan 2011 22:11

Quite the discussion in here... I don't think it's fair to compare Aros68K to a basic Workbench 3.1 installation in any regard. Aros68K is big in size, but so is a patched-up Workbench in the form of AmiKit or AmigaSys. Aros68K needs UAE or serious 68K hardware, but so does a modified Workbench.

I see some negative comments on UAE-AmiKit. That is strange, because I use these without issue for the most part. Same with AmigaSYS. Even ClassicWB packages work pretty well.

Richard Maudsley wrote:

That depends entirely on the implementation. A good virtual memory system is a lifesaver when you need more RAM than you have, and invisible otherwise.

But these are the typical arguments of people not understanding what the AMIGA is.

We do understand what an AMIGA is, right? We know that the AMIGA is a pure 100% DMA machine:

- DMA loads from disk,
- DMA displays video,
- DMA manipulates data,
- DMA plays audio,
- a programmed, DMA-driven simple RISC machine (the Copper) controls the other DMA machines.

The CPU is only the conductor in a very big concert.

Memory protection and virtual memory only works for the CPU - and the CPU is a tiny little fraction of the AMIGA system.

Memory protection and virtual memory contradict a pure DMA design.

You like the Linux memory protection? You think this is safe? It is safe, but for a very high price.

Memory protection is by design not safe on a DMA-based machine, unless you completely disallow DMA usage from user space.

To become secure, Linux goes the route of disallowing DMA. Every bit that the DMA brings in needs to be copied again by the CPU to keep the system secure.

We don't want this, right? We do not want to hand-copy every byte from our disk DMA again. We do not want to hand-copy everything the Blitter touches again.

We have to make a pick. Either we like to have a DMA machine or we don't.

Thomas Clarke (United Kingdom)

Posts 286 · 14 Jan 2011 02:10

Gunnar von Boehn wrote:

But these are the typical arguments of people not understanding what the AMIGA is.

...

Memory protection and virtual memory only works for the CPU - and the CPU is a tiny little fraction of the AMIGA system.

...

Memory protection and virtual memory contradict a pure DMA design.

Interesting that you bring this up, if only in the sense that the 'true' Amiga experience (the one we never got) would have allowed for much better memory protection: EXTERNAL LINK. It's such a shame that CAOS was never finished; IMO Amiga systems would be in a stronger position today if it had been.

Cesare Di Mauro (Italy)

Posts 528 · 14 Jan 2011 05:24

Jason S. McMullan wrote:

The simple answer is yes. I have made an AROS ROM image for my A1200, that I use with my ACA1230/56's hardware ROM remapping feature to test AROS on real Amiga hardware. All that would be needed for A1200 users to 'upgrade' to AROS m68k (when we get all the major bugs worked out), would be to get a manufacturer to burn ROMs with the image.

I don't know how much it'll cost to produce a ROM. Maybe a tiny board that works as an adapter, using a PIC loading from an SD/MMC card or a modern EEPROM/NVRAM (to make it cheaper), could be suitable and more desirable (making it easy to upgrade).

Although the current debug ROMs are 1M in size, I hope to be able to reduce that down to 512K, and to keep booting from floppy or HD for machines that can't be adapted to use a 1M ROM set.

There are now 192KB of extra space required (against the 512KB target), which is a lot, especially without a good optimizing compiler (which GCC isn't for 68K).

I don't know how much AROS is tied to GCC, but maybe trying another compiler (VBCC?) could help.

Another option could be to compress some (less used) resident modules and decompress them into regular memory when the machine resets. This would also give extra space to provide a more up-to-date Wanderer, for example.

Cesare Di Mauro (Italy)

Posts 528 · 14 Jan 2011 05:30

Thierry Atheist wrote:

now that I know that the 2 GB barrier is still there, that it may not be faster than the original AOS 3.x, that there are still hidden tricks in the original's source code that can't be taken advantage of, and that SMP may still not be possible, there's very little left to entice me... I'm not even sure what that little bit is.

I don't understand your first statement. How can a 2 GB memory limit make the system slower? Because you could use the extra memory for a RAM disk? Caching? Or what else?

Keep in mind that we are talking about AmigaOS, which wasn't resource-hungry, and applications weren't very demanding on memory either (except for some of them).

I think that a 2 GB system can satisfy many people, and surely without any sort of "slowdowns".

Megol

Posts 695 · 14 Jan 2011 08:14

Cesare Di Mauro wrote:

Thierry Atheist wrote:

now that I know that the 2 GB barrier is still there, that it may not be faster than the original AOS 3.x, that there are still hidden tricks in the original's source code that can't be taken advantage of, and that SMP may still not be possible, there's very little left to entice me... I'm not even sure what that little bit is.

I don't understand your first statement. How can a 2 GB memory limit make the system slower? Because you could use the extra memory for a RAM disk? Caching? Or what else?

Keep in mind that we are talking about AmigaOS, which wasn't resource-hungry, and applications weren't very demanding on memory either (except for some of them).

I think that a 2 GB system can satisfy many people, and surely without any sort of "slowdowns".

He just doesn't know anything about computers. He just combines bits he has heard or read into an incoherent rambling; for examples, go to moobunny and read some of his rants about Windows XP.

Thomas Richter (Germany)

(MX-Board Owner) · Posts 1438 · 14 Jan 2011 08:30

Gunnar von Boehn wrote:

The CPU is only the conductor in a very big concert.

Memory protection and virtual memory only works for the CPU - and the CPU is a tiny little fraction of the AMIGA system.

Memory protection and virtual memory contradict a pure DMA design.

Sorry, Gunnar, but this argument does not apply. If it did, then PCs would be impossible, and memory protection on Linux or Windows would be impossible.

Hint: It isn't.

As long as the CPU initiates the DMA transfers through the Os, and the Os ensures that the transferred memory is within the region accessible to the user initiating the transfer, everything is fine. The CPU is the conductor, and the CPU thereby controls which DMA transfers are initiated and which are not.

Gunnar von Boehn wrote:

You like the Linux memory protection?

Yes.

Gunnar von Boehn wrote:

You think this is safe?

Yes.

Gunnar von Boehn wrote:

It is safe, but for a very high price.

No. All you need to do is write device drivers reasonably. Hint: CachePreDMA and CachePostDMA exist.

Gunnar von Boehn wrote:

Memory protection is by design not safe on a DMA-based machine, unless you completely disallow DMA usage from user space.

No. All the Os has to do is verify that the memory regions to be transferred are valid, and prohibit direct access to the DMA control registers from user space. Neither of these checks implies huge costs.
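The bounds check Thomas describes can be sketched in a few lines of C. All names are hypothetical, and a flat per-task memory region is assumed purely for illustration; a real OS would walk page tables instead:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: before programming a DMA controller, the OS
 * verifies that the requested transfer lies entirely inside the memory
 * region the calling task is allowed to touch. */
typedef struct {
    uintptr_t base;  /* start of the task's accessible region */
    size_t    len;   /* length of that region in bytes */
} task_region;

/* Returns true only if [addr, addr + len) fits inside the task's region. */
static bool dma_request_valid(const task_region *t, uintptr_t addr, size_t len)
{
    if (len == 0)
        return false;                      /* nothing to transfer */
    if (addr < t->base)
        return false;                      /* starts below the region */
    if (len > t->len)
        return false;                      /* longer than the whole region */
    return addr - t->base <= t->len - len; /* end fits; no overflow possible */
}
```

The kernel would run this check once per request before writing the DMA registers itself, which is why the per-transfer cost stays constant regardless of the transfer size.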

Gunnar von Boehn wrote:

To become secure, Linux goes the route of disallowing DMA.

Since when? No, it doesn't.

Gunnar von Boehn wrote:

Every bit that the DMA brings in needs to be copied again by the CPU to keep the system secure.

No.

Besides, should I remind you that you invented the rule of "disk DMA only to chip memory"? So, quite the reverse: with the current design, the inability to DMA into fast mem requires *on the Natami* that disk data is copied around. That is not required in a sane system. Ahem.

Gunnar von Boehn wrote:

We don't want this, right?

Yet, we have it on the Natami. Tough luck.

Gunnar von Boehn wrote:

We do not want to hand-copy every byte from our disk DMA again.

Yet, we have to on the Natami. Tough luck.

Gunnar von Boehn wrote:

We do not want to hand-copy everything the Blitter touches again.

Why would anyone need to do that? The blitter would operate on graphics only, and the Os would ensure that it does not access memory outside of the graphics buffers allocated for its purpose. It is just a matter of disallowing direct access to the blitter registers (easily done with an MMU) and including appropriate clipping in BltBitMap() and friends. That is, if AmigaOs *had* a secure design to begin with, which it doesn't. As soon as you can fake a bitmap, you're insecure anyhow, but the insecurity is not due to allowing DMA; it is due to the misdesign of the Os.

Gunnar von Boehn wrote:

We have to make a pick. Either we like to have a DMA machine or we don't.

Currently, disk DMA is crippled, so you don't have that pick in the first place. But besides, the implications you're stating simply don't hold. A DMA machine can very well be safe, provided the Os, and only the Os, is in charge of programming the DMA controller.

Greetings, Thomas

Thomas Richter (Germany)

(MX-Board Owner) · Posts 1438 · 14 Jan 2011 08:34

Gunnar von Boehn wrote:

Richard Maudsley wrote:

It's also hilarious that you don't want virtual memory. I mean, on a system with a 2 GB RAM limit, you seriously want to cripple yourself this way? There are no words.

Not wanting virtual memory makes very good sense to me. Virtual memory is like running your car in four-wheel-drive mode all the time: yes, it has advantages, but it comes at a price. It results in a more expensive and slower system at the end of the day.

Not at all; it can help you keep the machine running where it would otherwise just crash. Anyhow, this discussion is academic, because the current Os design doesn't really allow virtual memory in the first place. (Hint: Forbid() is again the problem.)

So long, Thomas

Gunnar von Boehn (Germany)

(Moderator) · Posts 5810 · 14 Jan 2011 09:14

Thomas Richter wrote:

Gunnar von Boehn wrote:

Memory protection is by design not safe on a DMA-based machine, unless you completely disallow DMA usage from user space.

No. All the Os has to do is verify that the memory regions to be transferred are valid, and prohibit direct access to the DMA control registers from user space. Neither of these checks implies huge costs.

If you look at this "in theory" then yes.

If you look, for example, at how Linux actually does this, there are two approaches in real life that I know of.

A) The secure approach: DMA goes only to kernel buffers. Linux does this by first copying every byte that is read or written by DMA into a kernel buffer.

The Linux kernel functions used for this are called copy_from_user and copy_to_user.

This extra copying ensures that every byte goes through the MMU. This makes DMA secure, but it slows the system down dramatically. This solution often becomes a serious system bottleneck.

While under AMIGA OS, all you need to copy 1 MB with the blitter is: own the blitter, set the address, set the minterm and size registers, and you are done.

This means that if you own the Blitter, to copy 1 MB of memory the CPU only needs to execute about 8 move instructions.

On Linux, the CPU first copies the whole MB into kernel space. This means you execute on the order of a million more CPU instructions for the same result!
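The double-copy ("bounce buffer") path Gunnar describes can be simulated in plain userspace C. No real kernel or DMA is involved and all buffer names are invented; the sketch only shows the extra memcpy that path pays:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace simulation of the bounce-buffer path: the "device" data
 * first lands in a kernel-owned staging buffer, then is copied a second
 * time into the caller's buffer (what copy_to_user does). */
enum { XFER = 64 };

static char device_data[XFER]; /* what the DMA engine delivered          */
static char kernel_buf[XFER];  /* kernel-owned staging (bounce) buffer   */

/* Simulated secure read: device -> kernel buffer -> user buffer.
 * Returns the number of bytes the CPU had to copy (twice the payload). */
static size_t bounce_read(char *user_buf, size_t len)
{
    memcpy(kernel_buf, device_data, len); /* stand-in for the DMA landing  */
    memcpy(user_buf, kernel_buf, len);    /* the copy_to_user equivalent   */
    return 2 * len;                       /* CPU touched every byte twice  */
}
```

The return value makes the cost visible: for a 1 MB payload the CPU moves 2 MB of data, which is exactly the overhead the blitter path avoids.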

B) The other common approach on Linux is called kernel bypass. This is commonly marketed as "accelerating" Linux by allowing direct DMA access from user space. De facto this means the application totally bypasses the memory security system. The application can read anything on the system this way, or, in case of a typo in setting up the DMA, trash anything in the system.

Cheers, Gunnar

Gunnar von Boehn (Germany)

(Moderator) · Posts 5810 · 14 Jan 2011 09:41

Thomas Richter wrote:

Besides, should I remind you that you invented the rule of "disk DMA only to chip memory?" So quite the reverse, the inability of DMA-ing into fast mem requires *on the Natami* with the current design that disk data is copied around. That is not required in a sane system. Ehem.

Thomas, please don't be unfair here. There is a difference between "perfect solution" and "working on day 1".

The perfect solution, DMA to all memory regions working 100% transparently, needs a very clever CPU cache controller.

We all agree that getting to this point will be very nice. But this is not something to rush on the first day. First get the CPU working, then get the whole system working. Doing this step by step is reasonable.

Vidar Hokstad (United Kingdom)

Posts 70 · 14 Jan 2011 10:07

Gunnar von Boehn wrote:

To become secure, Linux goes the route of disallowing DMA. Every bit that the DMA brings in needs to be copied again by the CPU to keep the system secure.

That's patently false. First of all, *any* POSIX-compliant OS can support DMA straight to/from user pages when using mmap().

Secondly, Linux goes further and, from 2.4 on, supports zero (CPU) copy for a number of situations. That is, not only can you DMA straight into or out of user-space buffers, but in some situations you can avoid user-space buffers entirely. E.g. if you want to read from a file and write to a socket, the kernel can do that by triggering DMA from the drive to a kernel buffer and then immediately DMA-ing that data to the network card.

You're right that you can't initiate a DMA transfer from user space in Linux, but that does not mean the kernel needs to do additional copies for everything (though it does in many situations if developers are careless): nothing prevents the *kernel* from initiating DMA straight to or from user space, and it does if you write your code properly.
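The mmap() point can be seen in a small userspace sketch: the file's pages are mapped directly into the process, so reading them needs no read() copy into a separate buffer. The temp file path and its contents are invented purely for this demo:

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Demonstrates mmap(): the kernel maps the file's pages straight into
 * our address space, so we read them without a read() into a private
 * buffer. Returns the first 16 mapped bytes as a string. */
static const char *demo_mmap(const char *path)
{
    static char first16[17];

    /* Create a small file to map (demo scaffolding only). */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    assert(fd >= 0);
    const char msg[] = "hello from mmap!"; /* 16 chars + NUL */
    assert(write(fd, msg, sizeof msg) == (ssize_t)sizeof msg);

    /* Map the file; its pages now appear directly in our memory. */
    char *p = mmap(NULL, sizeof msg, PROT_READ, MAP_SHARED, fd, 0);
    assert(p != MAP_FAILED);

    memcpy(first16, p, 16); /* peek at the mapped bytes */
    first16[16] = '\0';

    munmap(p, sizeof msg);
    close(fd);
    unlink(path);
    return first16;
}
```

For disk files on Linux this is exactly the zero-copy idea in miniature: the page cache pages backing the file are the very pages the process sees.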

Marcel Verdaasdonk (Netherlands)

Posts 4051 · 14 Jan 2011 10:26

Gunnar your explanation is excellent.

But the kernel-bypass trick is how AOS would handle it too (sort of).

Furthermore, you totally omit uClinux, which can run without an MMU. Your post is written as if there were no Linux that can run without an MMU.

In Linux there is nothing you can stamp "never" or "always" on, since there is always an exception. ;)

Loïc Dupuy (France)

Posts 253 · 14 Jan 2011 12:41

Marcel Verdaasdonk wrote:

Furthermore, you totally omit uClinux, which can run without an MMU. Your post is written as if there were no Linux that can run without an MMU.

True, but you lose the memory protection in the move. Then nothing prevents a program from writing anywhere in memory and crashing the computer, like AOS.

uClinux is still interesting for the driver library, and if you want to use a Unix-like OS without an MMU (educational, microcontroller, Amiga 1200 + 4 MiB without accelerator card :-), ...). But you lose the main advantage of real memory protection and real separation of privileges.