
Saturday, October 27, 2012

Hi all.
This article is co-authored with David Turner. David watched my YouTube video showing off a workstation running Linux and multitasking beyond what is expected for that hardware.
David started communicating with me as he has the same hardware base I have and uses Windows, so we were both curious, confused... and I think part of his brain was telling him "fake... it's got to be another fake YouTube crap movie".
So I channelled him to this blog and the latest post at the time, about the Commodore Amiga and its superiority by design. Dave replied with a lot of confusion, as most of the knowledge in it was too technical. We then decided that I would write this article and he would criticize me whenever I got too technical and difficult to understand, forcing me to write more "human" and less "techie".
So he is co-author, as he is criticising the article into human-readable knowledge. This article will be split into lessons, and so it will change with time into a series of articles.

Note, this article will change in time as David forces me to better explain things. Don't just read it once and give up if you don't understand; comment, and register to be warned about the updates.

Starting things up...(update 1)

Lesson 1 : The hardware architecture and the kernels.

Hardware architecture is always the foundation of things. You may have the best software on earth, but if it runs on bad hardware... instead of running, it will crawl.
Today's computers are a strange thing to buy. There is increasingly less support for non-Intel architectures, which is plain stupid, because variety generates competition instead of monopoly, and competition generates progress and improvement. Still, most computers today are Intel architecture.

Inside the Intel architecture world, there is another heavyweight that seems to work in bursts. That would be AMD.
AMD started as an Intel clone, and then decided to develop the technology further. They were the first to introduce 64-bit instructions and hardware with the renowned Athlon 64. At that time, instead of copying Intel, AMD decided to follow their own path and created something better than Intel. Years later, they did it again with the multi-core CPU. As expected, Intel followed and got back on the horse, so now we have to watch AMD build more low-budget clones of Intel until they decide to get back to the drawing board and innovate.

So what is the main difference between the two contenders in the Intel architecture world?
Back in the first Athlon days, Intel focused CPU development on pure speed by means of frequency increase. The catch is that (physics 101) the more current you pass through a circuit, the more the impurities in the copper/gold/silicon oppose that current and generate heat. So Intel developed ways to use less and less material (creating less resistance, requiring less power and generating less heat); that's why Intel CPUs have a die size smaller than most competitors: 65nm, 45nm, 32nm and so on. For that reason they can run at higher speeds, and that made Intel focus its development not on optimizing the way the chip works, but rather on the way they build the chips.
AMD, on the other hand, doesn't have the same size as Intel and doesn't sell as many CPUs, so optimizing chip fabrication would have a cost difficult to recover. The only way was to improve chip design. That's why an Athlon chip would be faster at 2 GHz than an Intel at 2.6 or 2.7 GHz... it was better in design and execution of instructions.
Since the market really doesn't know what it buys and just looks at specs, AMD was forced to change their product branding to the xx00+ scheme... 3200+ meaning that the chip inside, whatever its clock, would compare to (at least) a Pentium at 3.2 GHz in performance. That same branding evolved into the dual-core era. When Intel publicized their Hyper-Threading CPU (copying the AMD efficiency leap design, but adding a new face to it called the virtual CPU), AMD decided to evolve into the dual-core CPU (Intel patented Hyper-Threading and, though using the AMD design as inspiration, managed to lock AMD out of the market for their own designs... somehow I feel that Intel has really a lot to do with today's Apple!)... and continued calling it the 5000+ for the 2-core, 2500+-per-core CPU.
So at this point in time AMD and Intel could compete in CPU speed: would the AMD Athlon 64 5000+ dual core at 2 GHz per core be as fast as an Intel Core 2 Duo at 2.5 GHz? Not quite. Speed is not always about the GHz, as AMD already proved with the Athlon's superior design.
At some point in time your CPU needs to input/output to memory, and here lies the REAL BIG difference in architecture between AMD and Intel.
Intel addresses memory through the chipset (with the exception of the latest Core iX families). Most chipsets are designed for the consumer market, so they were designed for a single-CPU architecture. AMD, again needing to maximize production and adaptability, designed the Athlon with a built-in memory controller. So the Athlon has a direct path (full bandwidth, high priority and very, very fast) to memory, while Intel has to ask the chipset for permission and channel memory access through it. This design removes the chipset memory bandwidth bottleneck and allows for better scalability.
The result? Look at most AMD Athlon, Opteron or Phenom multi-CPU boards and you'll find one memory bank per CPU, while Intel (again) tried to boost the speed of the chipset and hit a brick wall immediately. That's why Intel motherboards for servers rarely go over the 2-CPU architecture, while AMD has 8-CPU-plus motherboards. Intel and its race for GHz rendered it less efficient and a lot less scalable.
If you ever stopped to think about how Intel managed a big performance increase out of the Core technology (that big leap the Core i3, i5 and i7 have when compared to the design they're based on, the Core 2 Duo and Core 2 Quad), the answer is simple... they already had GHz performance, and when they added a DDR memory controller to the CPU, they jumped into AMD performance territory! Simple and effective... with a much higher CPU clock. AMD slept for too long, and now Intel rules the entire market with the exception of the supercomputing world.

The video, and the AMD running Linux.
This small difference in architectures plays an important role in the video I've shown, with Linux being able to multitask like hell. The ability to channel data to and from memory directly means the CPU can process a lot of data in parallel, without constantly asking the chipset (and waiting for the opportunity) to move data.

So the first part of this first "lesson" is done.
Yes, today's Intel Core i5 and i7 are far more efficient than AMD's equivalents, but still not as scalable, meaning that in big computing, AMD is the only way to go in the x86-compatible world. AMD did try the next leap with the APU recently, but devoted too much time to the development of the hardware and forgot about the software to run it properly. I'll leave that to the second part of this "lesson". They also chose ATI as their partner for GPUs... not quite the big banger. NVIDIA would be the one to choose. Raw processing power is NVIDIA's ground, while ATI is more focused on purity of colour and contrast. So when AMD tried to fuse the CPU and the GPU (creating the APU), they could have created a fully integrated HUGE processing engine... but instead they just managed to create a processing chipset. Lack of vision? Lack of money? A bad choice of partnership (as NVIDIA is the master of GPU supercomputing)? I don't know yet... but I screamed "way to go AMD" when I heard about the concept... only to shout "stupid stupid stuuupid people" some months later when it came out.

The software architecture to run on the hardware architecture.
Operating systems are composed of 2 major parts: the presentation layer (normally called the GUI, or Graphical User Interface), which communicates between the user (and the programs) and the kernel layer; and obviously the kernel layer, which interfaces between the presentation layer and the hardware.

So... windows and pictures and icons apart, the most important piece of a computer next to the hardware architecture is the kernel architecture.
There are 4 types of kernels:
- Microkernel - This is coded in a very direct and simple way. It is built with performance in mind. Microkernels are normally included in routers, printers, or simple peripherals that have a specific usage and don't need to "try to adapt to the user". They are not complex and so use very few CPU cycles to work, meaning speed and efficiency. They are, however, very inflexible.
- Monolithic kernels - Monolithic kernels are BIG and heavy. They try to include EVERYTHING in one block. So it's a kernel very easy to program with, as most features are built in and support just about any usage you can think of. The downside is that it eats up lots of CPU cycles verifying and comparing things, because it tries to consider just about every possible usage. Monolithic kernels are very flexible at the cost of a lot of memory usage and heavy execution.
- Hybrid kernels - The hybrid kernel type is a mix. You have a core kernel module that is bigger than the rest, and while loading, that module controls which other modules are loaded to support each function. These models are not as heavy as the monolithic ones, as they only load what they need to work with, but they have to contain a lot of memory protection code to prevent one module from using another module's memory space. So they are not as heavy as the monolithic, but not necessarily faster.
- Atypical kernels - Atypical kernels are all those kernels out there that don't fit into these categories, mainly because they are too crazy, too good or just too exotic to be sold in numbers big enough to create their own class. Examples of these are the brilliant Amiga kernel and all the wannabes it spawned (BeOS, AROS, etc.), mainframe operating system kernels and so on. #REFERENCE nr1 (check the end of the article)#

For the record, I personally consider Linux to be an atypical kernel. A lot of people think Linux is monolithic, and they would be right... in part. Some others consider it hybrid, and they would be right... in part.
The Linux kernel is a full monolithic code block like a monolithic kernel; however, that kernel is compiled to match your hardware. When you install your copy of Linux, the system probes the hardware you have and then chooses the best code base to use for it. Why would you need the kernel to carry code made for the 386 CPU, or the Pentium MMX, if you have a Core 2 Duo or an AMD Opteron? The Linux kernel is matched to your CPU and the code is optimized for it. When you install software that needs direct hardware access (drivers, virtualization tools, etc.) you need the source code for your kernel installed and a C compiler, for one simple reason: the kernel modules installed to support those calls to hardware are built into your new kernel, and it is recompiled for you. So you have a hybrid-made-monolithic kernel design. Not as brilliant as the Amiga OS kernel, but considering that the Amiga OS kernel needs the brilliant Amiga hardware architecture, the Linux kernel is the best thing around for the Intel-compatible architecture.
Do I mean that Linux is better for AMD than Intel? Irrelevant! AMD is better than Intel if you need heavy memory usage. Intel is better than AMD if you need raw CPU power for rendering. The Linux kernel is better than the Windows kernel... so compared to today's Windows, Linux is the better choice, regardless of architecture. However, AMD users have more to "unleash" when converting to Linux, as Windows is more Intel-biased on purpose, and less memory efficient.

Resources are limited!
Why is Linux so much more efficient than Windows on the same hardware?
The Windows kernel is either monolithic (w2k, NT, win 9x) or hybrid (w2k3, XP, Vista/7, w2k8, 8). However, the base of a hybrid kernel is always the CPU instructions and commands, and that is always a big chunk.
Since Microsoft made a crusade against open source, they have to keep up their "propaganda" and ship a pre-compiled (and closed) CPU kernel module (and this is 50% of why I don't like Windows... they are being stubborn instead of efficient). So, while much better than w2k, XP and 7 still have to first load a huge chunk of code that has to handle everything from the 386 to the future generations of i7 cores and beyond. Meaning they always operate in a compromised mode and will always have unused code sitting in memory. Microsoft also has a very close relationship with Intel and tends to favor it over AMD, making any Windows run better on Intel than on AMD... this is very clear when you dig around AMD's FTP and find several drivers to increase Windows speed and stability on AMD CPUs... and find nothing like that for Intel. For some reason people call the PC a wintel machine.
So, to start: Linux has a smaller memory footprint than Windows, it makes more use of the CPU's instruction set than Windows' "compatibility mode", and it takes advantage of AMD's excellent memory-to-CPU bus.
Apart from that, there is also the way Windows manages memory. Windows (up until the Vista/7 kernel) was not very good at managing memory. When you use software, the system instantiates objects of code and data in memory. Windows addresses memory in chunks made of 4KB pages. So if you have 8KB of code, it will look for a chunk with 2 free 4KB pages and use it... if, however, your code is made of 2 objects, one with 2KB and another with 10KB, Windows will allocate a chunk with one page for the first, and then a chunk of 3 pages for the second. You'll consume 4+12KB = 16KB for 12KB of code. This causes the so-called memory fragmentation. If your computer only had 16KB of memory, in this last case you would not be able to allocate memory for the next 4KB of code. Although you have 4KB of free memory, it is fragmented in two, and since it's non-contiguous, you would not have space to allocate the next 4KB.
The memory fragmentation syndrome grows exponentially if you use a framework to build your code on. Enter .NET. .NET is very good for code prototyping, and the reason it's so easy to code for is that the guys building it created objects with a lot of functionality built in (to support any possible usage)... much like the classic monolithic kernel. The result is that if you examine memory, you'll find that a simple window with a combo box and an OK button means hundreds if not thousands of objects instantiated in memory... for nothing, as you'll only be using 10% of each object's functionality.
Object-oriented programming creates code objects in memory. A single "class" is instantiated several times to support different usages of the same object type as different objects. After usage, memory is freed and returned to the operating system for reuse.
Now picture that your code creates PDF pages. The PDF stamper works with pages that are stamped individually and then glued together in sequence. So your code would be instancing, then freeing to re-instance a bigger object, then freeing again to re-instance an even bigger one... and so on.
For instance, memory in pages (nine 4-page chunks of 1K pages, 36K in total):
|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|
Your code starts with code instance 1, of 6K:
|-C1 C1 C1 C1-|-C1 C1      -|
Then you add another object to support your data (growing as you process it), called C2: code instance 2, of 10K:
|-C1 C1 C1 C1-|-C1 C1      -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2      -|
Then you free your first instance, as you no longer need it:
|-           -|-           -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2      -|
And then you need to create a new code instance, C3, to support even more data. This time you need 18K, so:
|-           -|-           -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2      -|-C3 C3 C3 C3-|-C3 C3 C3 C3-|-C3 C3 C3 C3-|-C3 C3 C3 C3-|
...and you've run out of memory!!
I know that today's computers have gigs of RAM, but today's code also eats up megs of RAM, and we work with video and sound, and we use .NET to do it... you get the picture.

Linux and Unix have a dynamic way to address memory and normally re-arrange it (memory optimization and de-fragmentation) to avoid this syndrome.
In the Unix/Linux world you have brk, malloc and mmap:
- brk - can adjust the end of the data segment to match the requested memory, so 6K of code would eat 6K instead of 8K.
- malloc - can grow memory both ways (start and end) and re-allocate more memory as your code grows (something wonderful for object-oriented programming, because the code starts with little data and then grows as the program and user start working it). In Windows this would either be handled with a huge pre-allocated chunk (even if you don't use it), or by jumping your code instance from place to place in memory (increasing the probability of fragmentation). The only problem with malloc is that it is very good at allocating memory and not so good at releasing it. So mmap entered the equation.
- mmap - works like malloc but is useful for large memory chunk allocation, and it's also very good at releasing memory back. When you encode video or work out large objects in memory, mmap is the "wizard" behind all that Linux performance over Windows. The more data you move in and out of memory, the more perceptible this is.

There is also something important here. If you think about it, who does the memory moving in an Intel architecture? The CPU... so even using Windows, constantly moving stuff around memory, the AMD has better performance because of the in-CPU memory controller, while the Intel platform needs to channel everything through the chipset.
Both CPU architectures (Intel and AMD) have, under normal conditions, a "stack" of commands, and not all of them use the entire CPU processing power. So Intel uses the "virtual processor" of Hyper-Threading, making 2 different code threads be calculated at once, while AMD works its architecture with simultaneous execution (everything from cache to CPU registers is parallel) and doubles the bus speed (a 100 MHz bus would work as a 200 MHz bus inside the CPU, allowing the system to divide or share CPU resources, while communication with the outside happens at half the internal processing speed). So if you enter 2x 32-bit instructions (on a 64-bit Athlon, for instance), in theory, if those instructions are actually 32-bit only and use the same amount of CPU cycles to be worked out, the CPU will return both results at once. Without this technology, the CPU would accept one instruction at a time and reply accordingly.
Does the Intel CPU return better MIPS in CPU tests? Yup. Most CPU testing software issues big calculation instructions and eats up the entire CPU execution stack, so no parallelization is possible (the part where AMD's execution optimization, and Intel's Hyper-Threading, would help), and since the Intel CPU runs at a higher clock speed (all those GHz), the results favour them. Still, in real life, unless you are rendering 3D, AMD holds the ground in true usable speed. Especially under a good operating system that takes advantage of this and doesn't cripple RAM as it uses it.
It's simple if you think about it.
Both the AMD Athlon 64 running at 2 GHz and the Intel Core 2 at 2.5 GHz have 64-bit architecture. If they both get 2x 32-bit instructions, the Intel will show the real CPU for the first 32-bit instruction, and the Hyper-Threading second virtual CPU for the second instruction... and do this at 2.5 GHz.
At the same time, the AMD would receive the 2 instructions at once into the one and only CPU, side by side, and then process each instruction internally at double the speed. So the 0.5 GHz the AMD lacks is compensated by the fact that internally it writes and reads instructions, results and data twice as fast. If, however, you send a full 64-bit calculation, neither CPU will be able to parallelize the execution stack... so the advantage of the double data rate inside the Athlon is gone, and the only thing in play from that point on is GHz... and the Intel has more!
So, to conclude this first "lesson":
Linux on a good hardware architecture will multi-task way better than windows because:
- AMD has a direct memory controller in the CPU, and a direct memory connection as a result.
- Linux can take direct advantage of AMD's memory bandwidth and CPU functions, because the kernel is CPU- and hardware-matched.
- The kernel is lighter because it is hardware-matched.
- The kernel doesn't need a lot of memory protection because it's "monolithic" in part.
- Most code for Linux is done in C/C++, so it has no .NET weight behind it (nor does the operating system).
- Linux handles memory. Windows juggles things until it "starts to drop"... or crash :S.

The P.S. part :)
#REFERENCE nr1#:
Comment: You like the Amiga a lot. Are you implying one can still buy one?
Reply: Yup and no. Yes, you can still use an Amiga today. Yes, you still have hardware updates and software updates today that keep the Amiga alive.
No, not the Commodore USA one, as it's just another wintel computer named Amiga... a grotesque thing for a purist like me.
Keep in mind that the Amiga was so advanced that, if you are looking to buy a computer 10 years into the future, then you have no Amiga to buy. The NATAMI project is the best so far, but from what I've read, it's just an update of the old Amiga... good and faithful, but not the BANG the Amiga was and had been until Commodore went under. The new Amiga can't just be an update, because the old one with today's hardware mods can already do that! The new Amiga has to show today what wintels will do 10 years from now.
Maybe I can gather enough money to build it myself... I've got the basic schematics and hardware layout, and I call this Project TARA (The Amiga Reborn Accurately).

Wednesday, October 24, 2012

The renaissance of the silicon... nooo, I'm not talking about Lola Ferrari, nor Pamela Anderson, and not even Anna Nicole Smith. I am talking about the computer renaissance.
A lot of people think that this day and age is the age of renaissance. They are wrong. The computer renaissance happened long ago. It's just that very few noticed it. Those are the lucky ones that were blessed with a true silicon-based Leonardo da Vinci workshop. And out of those, the ones that were able to get the full picture bloomed into a da Vinci type of brain.

Why do I state this in the MultiCoreCPU and ZillionCoreGPU world, where your refrigerator's chip is more powerful than the early IBM mainframes? Well, bear with me for a couple of minutes and continue reading.

The Renaissance was not about vulgar displays of power, but rather an era of intellectual growth and multiplicity of knowledge. The Renaissance created some of the world's best-ever polymaths (people that master several areas of knowledge and have open-to-knowledge minds)... such as Leonardo da Vinci.

So back to this day and age. The Core i7 has multiple cores of processing power able to process around 100 GigaFLOPS, and an NVIDIA card can have 512 GPU cores and kick out around 130 GigaFLOPS of parallel processing power. Today we play 3D games rendered at 50 frames per second in resolutions exceeding the 1920x1200 mark, while back in the early 90's, the best desktop computer would take 48 hours to render one 640x480 frame.

Still, when did we leap from the "electronic typewriter" linked to an amber display with rudimentary graphics, to the computer that can render graphics in visual quality, produce video, produce sound, play games... and still be capable of word processing and spreadsheets? Because that was the turning point. That was the computer renaissance.

Still following me? It's difficult to pinpoint in time when exactly this all started and which brand kicked it off.

Some say it was Steve Jobs and the early '84 Macintosh... and though not entirely wrong, they are far from actually being right. The first "Mac" had an operating system copied from the Xerox project (that same project that Microsoft later bought from Xerox and spawned into MS Windows 1)... and that first Mac design was actually fathered by Jef Raskin (who left the LISA project), while only after the first prototype did Steve Jobs gain interest in the Mac project; he too left the LISA project... kicking Jef out of the Mac project (some character, this Jobs boy).

The next logical contestant is the Commodore VIC-20. It was aimed straight at the Mac market, and with some success. But it was still not exactly able to kick off the renaissance era, much like the first Mac.

So... was it the Commodore C64/128 family? Ahhh, now we are talking more about the kind of flexibility needed to kick off that so-much-needed renaissance, but still short on ambition. They were brilliant gaming machines with some flexibility, but not enough guts to take it through.

Most would now be shouting "ATARI... the ATARI ST" and would be... wrong. It's a good machine with a too-conventional-to-bloom architecture. Good? Yes! Brilliant? No!

It's clear by now that the computer renaissance podium is taken by the Commodore Amiga. I'm not talking about the late 90's 4000... nor the 1200... or the 600... or even the world-renowned 500. I'm referring to the Amiga architecture. And that is something that dates back to the very first A1000 (yes, the A1000 has a lower spec than the A500, and it's the father of them all).

The Amiga (unlike most will think) is not:
- ATARI technology stolen by engineers leaving the company
- Commodore's own technology

The Amiga Corporation project started life in 1982 as Hi-Toro, and the Amiga itself as the Lorraine game machine. It was a startup company with a group of people gathered by Larry Kaplan, who "fished" Jay Miner and some other colleagues (some from Atari) that were tired of ATARI's management and disappointed with the way things were headed. Jay (called the father of the Amiga; more accurately, the father of its brilliant architecture) was able to choose passionate people who were trying to do their absolute best.
They were not worried about chip power, as that was something Moore's law would take care of (in time), but rather about the flexibility of the chip design and the flexibility of the architecture design.
They were not worried about software features (another thing the community would pick up in time), but rather about building a flexible and growable base.
And above all, I think, they were totally committed to giving you the ability to code for the Lorraine console, out of the box, with the Lorraine console itself (unlike the standards back then, when everything was done on specific coding workstations... and if you think about it, much like any non-computer device today)... and that bloomed into what was later called the Amiga Computer.

The TEAM
Jay chose an original team of people very dedicated and committed to excellence.


Some pictures (courtesy of http://uber-leet.com/HistoryOfTheAmiga/) taken from the "History of the Amiga documentary" available on youtube:

The team has changed over the years, and the full Amiga evolution history includes a huge list of people (source: http://www.amigahistory.co.uk/people.html):

Mehdi Ali- A former boss at Commodore who made a number of bad decisions, including cancelling the A3000+ project and the release of the A600. He has been largely blamed for the fall of Commodore in 1994 and is universally disliked by most Amiga users.

Greg Berlin- Responsible for high-end systems at Commodore. He is recognised as the father of the A3000.

David Braben- Single-handedly programmed Frontier: Elite II and all round good egg.

Andy Braybrook- Converted all his brilliant C64 games to Amiga, and got our eternal thanks.

Martyn Brown- Founder of Team 17. Not related to Charlie.

Arthur C. Clarke- Author of the famous 2001: A Space Odyssey and well-known A3000 fan.

Wolf Dietrich- Head of Phase 5, who are responsible for the PowerUP PowerPC boards.

Jim Drew- Controversial Emplant headman who has done a great job of bringing other systems closer to the Amiga.

Lew Eggebrecht- Former hardware design chief.

Andy Finkel- Known as the Amiga Wizard Extraordinaire. He was head of Workbench 2.0 development, as well as an advisor to Amiga Technologies on the PowerAmiga, PPC-based Amiga system. He currently works for PIOS.

Fred Fish- Responsible for the range of Fish disks and CDs.

Steve Franklin- Former head of Commodore UK.

Keith Gabryelski- Head of development for Amiga UNIX, who made sure the product was finished before faxing the entire Amiga Unix team's resignation to Mehdi Ali.

Irving Gould- The investor that allowed Jack Tramiel to develop calculators and, eventually, desktop computers. He did not care about the Amiga as a computer but saw an opportunity for computer commodification with the failed CDTV.

Simon Goodwin- Expert on nearly every computer known to man. Formerly of Crash magazine.

Rolf Harris- Tie me kangaroo down sport etc. Australian geezer who used the Amiga in his cartoon club.

Allen Hastings- Author of VideoScape in 1986, who was hired by NewTek to update the program for the 90's creating a little known application called Lightwave, the rendering software that for a long time was tied to the Video Toaster. This has made a huge number of shows possible, including Star Trek and Babylon 5.

Dave Haynie- One of the original team that designed the Amiga. Also responsible for the life saving DiskSalv. He has been very public in the Amiga community and has revealed a great deal about the proposed devices coming from Commodore in their heyday. His design proposal on the AAA and Hombre chipsets show what the Amiga could have been if they had survived. He also played an important part in the development of the Escom PowerAmiga, PIOS, and the open source operating system, KOSH.

Larry Hickmott- So dedicated to the serious side of the Amiga that he set up his own company, LH publishing.

John Kennedy- Amiga journalist. Told the Amiga user how to get the most of their machine

Dr. Peter Kittel- He worked for Commodore Germany in the engineering department. He was hired by Escom in 1995 for Amiga Technologies as their documentation writer and web services manager. When Amiga Technologies was shut down, he went to work for the German branch of PIOS.

Dale Luck- A member of the original Amiga team and, along with R.J. Mical wrote the famous "Boing" demo.

R. J. Mical- A member of the original Amiga Corp. team at Los Gatos and author of Intuition. He left Commodore in disgust when Commodore chose the German A2000 design over the Los Gatos one, commenting "If it doesn't have a keyboard garage, it's not an Amiga."

Jeff Minter- Llama lover who produced some of the best Amiga games of all time and has a surname that begins with mint.

Jay Miner (R.I.P.)- The father of the Amiga. Died in 1994. Before his time at Amiga Corp. he was an Atari engineer and created the Atari 800. He was a founding member of Hi-Toro in 1982, and all three Amiga patents list him as the inventor. He left Amiga Corp. after it was bought by Commodore, later worked on the Atari Lynx handheld, and during the early 1990's continued to create revolutionary designs such as adjustable pacemakers.

Mitchy- Jay Miner's dog. He is alleged to have played an important part in the decision making at Amiga Corp. and made his mark with the pawprint inside the A1000 case.

Urban Mueller- Mr. Internet himself. Solely responsible for Aminet, the biggest Amiga (and, some say, computer) archive in existence. Responsible for bringing together Amiga software in one place, he deserves to be worshipped, from afar.

Peter Molyneux- Responsible for reinventing the games world with Syndicate and Populous. He is also famed for being interviewed in nearly every single computer mag imaginable IN THE SAME MONTH.

Bryce Nesbitt- The former Commodore joker and author of Workbench 2.0 and the original Enforcer program.

Paul Overaa- Amiga journalist. Helped to expand his readers' knowledge of the Amiga.

David Pleasance- the final MD of Commodore UK and one-time competitor for the Amiga crown. Owes me 1 PENCE from World of Amiga '96.

Colin Proudfoot- Former Amiga buyout hopeful.

George Robbins- He developed low-end Amiga systems such as the unreleased A300 (which was turned into the A600), the A1200 and the CD32. He was also responsible for Amiga motherboards whose codenames came from B-52's lyrics. After losing his driver's license, Robbins literally lived at the Commodore West Chester site for more than a year, showering in sinks and sleeping in his office.

Eric Schwartz- Producer of hundreds of pieces of Amiga artwork and animation.

Carl Sassenrath- Helped to create the CDTV and CDXL, and has recently developed the Rebol scripting language.

Kelly Sumner- Former head of Commodore UK. Now head of Gametek UK.

Bill Sydnes- A former manager at IBM who was responsible for the stripped down PCjr. He was hired by Commodore in 1991 to repeat that success with the A600. However, at the time the Amiga was already at the low-end of the market and a smaller version of the A500 was not needed.

Petro Tyschtschenko- Head of Amiga International, formerly Amiga Technologies. Responsible for keeping the Amiga on track since 1995.

So why was this such a brilliant machine? It starts with the hardware.
The Amiga was based on the most flexible CPU of its time: the Motorola MC68000 family. Motorola had the MC680x0 CISC CPU and MC68881/MC68882 FPU combination for workstations, and the MC88000 RISC family for Unix workstations. That DNA later fused with IBM's RS/6000-series RISC into the PowerPC platform.

Now some of you may say "yeah, the PowerPC was such a flop that not even Apple and IBM stuck with it" and be ultimately wrong about it. The PowerPC's problem was its huge power consumption and heat dissipation once CPU production couldn't shrink beyond 90 nm. A complex design with a lot of big transistors eats up power and ultimately generates heat. That's why it got stuck. Ever wondered why today's CPUs go multi-core and rarely above 3 GHz? Yup... better to split the design and not let things get too hot... and today's CPUs are built at a 22 nm die size.
The PowerPC is very much alive. Inside your Xbox 360, your Nintendo Wii and your PlayStation 3 lives a 65 nm PowerPC, in configurations from single to triple core. There is even a 2 GHz dual-core PowerPC from P.A. Semi (Palo Alto Semiconductor)... and IBM: just check their non-Microsoft server line and drool all over the PowerPC CPU specs.
OK, the CPU was important, but was it all? NO!
The heart of the Amiga is a chip called AGNUS (later Fat AGNUS, and Fatter AGNUS). It is Jay Miner's most valuable DNA... and ultimately earned him the title of "father of the Amiga".
Consider the Agnus as a blazing fast and competent switchboard operator.
On one side you have the CPU bus, on the other side the memory bus and even a chipset bus, all converging on the Agnus. What's the catch? Well, picture that you want to play a tune while working on your graphics on the Amiga. The CPU loads the tune into memory, then instructs the Agnus to stream that memory bank to the audio DAC chip. Having done that, the CPU is free for all its other processing needs. This is just one example: graphics is actually the most-cited example of the Agnus at work, but it could do just about anything. That's why you can have Amiga machines with add-on CPU cards running at different speeds, all in sync. The Agnus is the maestro.

The Agnus, shown here in its miniaturized die form, started life as a very complex set of boards carrying Jay's design. Just take a good look at the complexity:

This was the true heart of the Amiga and its brilliant architecture, capable of true multitasking (instead of time-shared multitasking).

There were other chips for I/O, sound and graphics, but they all had a huge, highway-like link directly to memory at the hands of the Agnus.
These are pictures of the early prototypes and design sketches:

Then we get to the software.
The Amiga OS was built to take advantage of this brilliant design. Most kernels follow one of three base models (monolithic, microkernel and hybrid).
In short, a monolithic kernel is big and contains all the software needed to control the hardware and provide software functions (some call the Linux kernel monolithic... it is... ish: the Linux kernel is compiled for the hardware and the requested modules, so it is really a hybrid made monolithic). A microkernel is often seen on routers and simple devices that run a very fast but feature-poor kernel design. A hybrid kernel has a big chunk for the basic CPU and chipset functionality and then loads small modules as needed, depending on the hardware available.
The Amiga kernel, on the other hand, is a beast in a league of its own (followed by BeOS, AROS, MorphOS and AtheOS/Syllable).
It is a microkernel design, but it threads each and every module. From the kernel down to running software, each and every one of them is a separate execution thread on the CPU, with its own memory windows switched by the Agnus and its own region of the shared address space. It is hugely fast and stable, the only drawback being that coders have to respect their given memory space (if not, the code could overwrite memory belonging to the loaded kernel and crash the system, giving the Amiga's famous "Guru Meditation" error).

So this is the superior architecture that spawned the computer renaissance and allowed the bloom that created the computers we have today.

Today, kids at school have a lot of the knowledge that Leonardo da Vinci had. Back in the Renaissance era he was one of the few with that knowledge... today everyone has at least a good part of it.
Take this thought into the computer world, with its progress timeline on steroids, and you'll be comparing the 80's Commodore Amiga's polymath-ability to today's computers and even mobile phones. The Amiga was 10 to 15 years ahead of everything else out there... and in terms of hardware architecture, it still is.
I was one of the lucky ones that migrated from the C64 to the Amiga 500... and only 4 years later I was given an IBM PC. Had the Amiga been replaced by a PC (the Olivetti PC1 and the Schneider EURO-PC were big back then), my brain would have been closed into the sad "electronic typewriter" reality. I am a polymath today because of the Amiga. It was the tool that (together with my parents' investment in an excellent and varied education) formed my brain in an open and exploratory way. Thank you, Amiga.