Why not the "Pascal subset" of Ada? Base it on Ada 2012, use pragma Profile to create the subset while keeping within the rules of the language, and use pragma Restrictions to enforce restrictions on the language. Ada is a more powerful Pascal, and it's also a lot more readable, as begin...end doesn't need to be literally everywhere, e.g. on ifs, loops, etc. Plus you get a more powerful type system and package/unit system to boot.
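A minimal sketch of how that subsetting might look in GNAT's configuration-pragmas file. The gnat.adc file name is the GNAT convention; the particular profile and restrictions chosen here are just illustrative examples, not a recommendation:

```ada
--  gnat.adc: project-wide configuration pragmas (GNAT convention).
--  Profile selects a predefined language subset; Ravenscar is the
--  classic high-integrity tasking profile.
pragma Profile (Ravenscar);

--  Individual restrictions tighten the language further; violations
--  are rejected at compile time.
pragma Restrictions (No_Implicit_Heap_Allocations);
pragma Restrictions (No_Exceptions);
pragma Restrictions (No_Recursion);
```

The compiler then enforces the subset across the whole project, so staying within it requires no discipline from the programmer.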

I wonder how the person who started this thread is doing and whether there is any progress. He hasn't posted here for some time. The Bitwise Ion compiler seems similar in concept. I wonder how hard it is to port that to the Pi.

You are right. I got that wrong. The last time I met anyone programming in Ada was 1996. It was whilst I was working at Lucas Avionics in Hemel Hempstead. Before that it was about '94 whilst working on the Primary Flight Computers of the Boeing 777 at GEC Avionics in Rochester.

I'm not counting myself: 8 years ago I wrote an assembler in Ada. It was painful. I played with Ada for the ATMEL AVR devices about that time.

Did I miss something since then?

@ejolson,

Currently the Bitwise Ion compiler compiles Ion source into C. As such I'm sure no porting is required; it will build and run on the Pi just fine.

There is no binary executable code generation for any target yet. It would be great if someone produced an ARM code generator backend for Ion.

You may be right. However, from my point of view the main idea was to create a compiler simple enough that one person could develop and understand it. Such a compiler would then serve as a resource for simple hobbyist projects and people using the Raspberry Pi to learn how computers work.

...from my point of view the main idea was to create a compiler simple enough that one person could develop and understand it. Such a compiler would then serve as a resource for simple hobbyist projects and people using the Raspberry Pi to learn how computers work.

Yes, I think it's an excellent idea.

@jahboater,

That backend might be called GCC?

Might be. Others call it Clang/LLVM. Still others call it MSVC.

Bitwise really needs a code generator backend for ARM.

I once wrote a toy compiler for a language about as sophisticated as Tiny Pascal, probably less so, that generated x86 directly. It worked but the generated code was horribly inefficient.

Not sure that I have much of a clue how to create a decent code generator for Ion.

You are right. I got that wrong. The last time I met anyone programming in Ada was 1996. It was whilst I was working at Lucas Avionics in Hemel Hempstead. Before that it was about '94 whilst working on the Primary Flight Computers of the Boeing 777 at GEC Avionics in Rochester.

I'm not counting myself, 8 years ago I wrote an assembler in Ada. It was painful. I played with Ada for the ATMEL AVR devices about that time.

In my experience, the only people who find Ada "painful" are those who like to hack stuff together without thought, like they do in C or C++, and then spend weeks in a debugger.


In my experience, the only people who find Ada "painful" are those who like to hack stuff together without thought, like they do in C or C++, and then spend weeks in a debugger fixing the problems.

I think I need to learn Ada, just did 6 weeks with a CRO and logic analyzer debugging some C code.

Must have gotten soft by using Pascal for the last two years.
I hack together Pascal code without much thought and it works fine.
The advantage of Object Pascal (Ultibo) is using units written by better coders.

Could Ada be used bare metal on Pis?
I have tried just about every other language on the Pi; what is one more?

I think I need to learn Ada, just did 6 weeks with a CRO and logic analyzer debugging some C code.

I suggest you do take a look at Ada. It's somewhat like Pascal but even more strict about types and such. You will feel quite at home. I must have spent months debugging Ada projects with a CRO and logic analyser.

I hack together Pascal code without much thought and it works fine.

I hack together C/C++ code without much thought and it works fine.

The advantage of Object Pascal (Ultibo) is using units written by better coders.

The advantage of C/C++ is that there is plenty of code out there written by others to make use of.

Could Ada be used bare metal on Pis?

I'm sure it could. Almost all the Ada projects I have worked on were "bare metal". People use Ada for the tiny 8 bit ATMEL AVR chips, no operating system there.

I have tried just about every other language on the Pi; what is one more?

I don't suggest people go off learning every language under the sun. Lots of them are conceptually the same, so it's kind of pointless.

I make an exception for Ada as learning Ada can provide an insight into how horribly wrong programming language design can go.

Mind you, C++ is another great example of that.

The great thing about Ada is that it tries to catch a lot of programmer errors at compile time. It's very fussy about types and such. Given the right build options it will do further checking at run time: out-of-range variables, out-of-range array accesses, etc.

Problem is, that does not help as much as you might hope. There are of course tons of other bugs a program can have. You still need to test the finished code. You will need code reviews and perhaps formal analysis, etc. And of course it helps to have a good design before you start.

People working in Ada environments, in avionics, military etc, generally have whole teams of people dedicated to testing.

Anecdotally, my experiences in the trenches testing Ada projects indicate they have as many bugs as any other language I have worked with. However, they did have far bigger teams to cover all that testing, review, design, etc.

They (the DoD) had the choice of basing Ada on Pascal or basing it on Algol68.
They chose Pascal because Algol68 was already a powerful and complete language, whereas Pascal was not (Mr Wirth created it as a teaching language). It was easier therefore to develop Pascal into the language they wanted.

I'm not sure how heavily Ada was influenced by Pascal. It certainly takes inspiration from there, but also from Algol and other languages. All these structured "third generation" languages are children of Algol anyway.

The whole Ada history is a train wreck. The DoD wanted to standardize on a single language rather than have to support the 450 or so languages they found being used in military projects. A good idea, right? The whole idea was put out to tender and a handful of companies offered their solutions. After a couple of evaluation rounds Ada was selected and mandated for use in all DoD projects in 1991. See Wikipedia for the gory details.

Problem here is you are designing a single language to cover every use case from small real-time embedded systems up to huge distributed programs running on mainframes and such. The result was a horribly huge and complex language: difficult to implement, ill suited to anything, with performance that sucked, and hated by all programmers.

Meanwhile back in the UK the MoD had to tag along and mandate Ada as well, throwing out their previous standard of Coral 66.

The DoD realized its mistake 6 years later and in 1997 removed the Ada mandate. They now wanted to go for the perceived cheaper option of Commercial Off The Shelf (COTS) solutions. So vendors used whatever language they liked.

Probably by now the DoD would find they are back to 450-odd languages being used in their systems.

Although writing a compiler is fun! (It is part of my job and was part of my masters.) I don't see the point elsewhere, when Free Pascal is NOT closed source and highly advanced.
It even compiles AArch64 code for the Raspberry Pi, both for *nixes and for bare metal, and contrary to what is stated above only the compiler itself has a GPL v2 restriction. The LGPL plus linker exception is even less restrictive. And ALL source code is available; there is no closed source anywhere.
Ultibo also uses - the bare metal options of - Free Pascal for that reason. It would be futile to write such an optimizing compiler, other than that it is fun.
It is very hard to improve on a compiler that is already 25 years old and extremely well maintained.
Written on one of my day-to-day RPi3Bs.

Thaddy.

p.s. If you want to pursue it, I am available for advice (after all, that's what I stated). Just don't call it Extended Pascal: that already exists and has a feature set that is even less than a subset of the current Free Pascal.

Problem here is you are designing a single language to cover every use case from small real-time embedded systems up to huge distributed programs running on mainframes and such.

If the Linux kernel can cover every use case from embedded systems to huge distributed programs running on mainframes, why could not a programming language do likewise?

Many corporations have replaced bespoke legacy software with off-the-shelf commercial offerings. When doing so I believe there is a risk of losing the competitive advantages which come from doing something different. In situations where military investments are for show, it probably doesn't matter what software is used or whether the airplanes and missiles are even lumps of plastic. However, in the event of any significant conflict or war where there is a chance of losing, it is not clear that using Windows Defender and other off-the-shelf software would provide a strategic advantage.

I think the idea behind the proposed Pascal-like programming language, no matter what it is called, is to be simple and self-contained. Generating optimized code is not interesting, because as pointed out there are already good optimizing compilers. What is missing, however, is a self-contained codebase that can be understood by one person and audited by a team for correctness to ensure the resulting executables (including the compiler binaries themselves) are free of compiler-injected backdoors. As already discussed, a simple but feature-complete non-optimizing compiler would also be useful for teaching and hobbyist use.

If the Linux kernel can cover every use case from embedded systems to huge distributed programs running on mainframes, ...

That might be a bit of a stretch of an assumption.

There are plenty of embedded systems that don't have the space or speed for Linux, or don't have MMUs. There are plenty of embedded systems that require real-time deterministic behavior, which is not so easy to achieve with Linux. There are a lot of embedded real-time operating systems in use as a result. Or perhaps no operating system at all.

It is remarkable though how small Linux systems can be nowadays, like a lot of home routers.

...why could not a programming language do likewise?

Arguably for similar reasons. Space, speed, etc. I have worked on Ada projects, military and avionic, sometimes it was a bit of a squeeze and the results were not pretty.

I was eagerly following Per Vognsen's development of Ion, a simplified but equally powerful alternative to C, in his recent series on YouTube: https://www.youtube.com/watch?v=T6TyvsK ... wMSDM3aTtX He managed to write the whole compiler in a few weeks, almost in front of our eyes.

Last edited by Heater on Sun Nov 11, 2018 6:47 am, edited 1 time in total.

Problem here is you are designing a single language to cover every use case from small real-time embedded systems up to huge distributed programs running on mainframes and such.

If the Linux kernel can cover every use case from embedded systems to huge distributed programs running on mainframes, why could not a programming language do likewise?

And the Linux kernel is moving in the direction of Multics and Ada. That is to say, too complex for its own good.

Multics was a good and grand idea, and many of its features survive to this day. It was such a grand universal idea that it doomed itself to failure; its complexity hid some pretty nasty bugs, as I understand it.

Ada buried itself in complexity, trying to do all.

Now Linux is doing the same more and more with each release of the kernel.

Many corporations have replaced bespoke legacy software with off-the-shelf commercial offerings. When doing so I believe there is a risk of losing the competitive advantages which come from doing something different. In situations where military investments are for show, it probably doesn't matter what software is used or whether the airplanes and missiles are even lumps of plastic. However, in the event of any significant conflict or war where there is a chance of losing, it is not clear that using Windows Defender and other off-the-shelf software would provide a strategic advantage.

I think the idea behind the proposed Pascal-like programming language, no matter what it is called, is to be simple and self-contained. Generating optimized code is not interesting, because as pointed out there are already good optimizing compilers. What is missing, however, is a self-contained codebase that can be understood by one person and audited by a team for correctness to ensure the resulting executables (including the compiler binaries themselves) are free of compiler-injected backdoors. As already discussed, a simple but feature-complete non-optimizing compiler would also be useful for teaching and hobbyist use.

I agree completely. I had actually debated a few options on what language to implement when I began down this road. Unfortunately health issues have slowed my progress by a lot.

The Raspberry Pi is an ARM computer that runs many Operating Systems, including Linux, RISC OS, BSD, Pi64, and CP/M, as well as many more.
Soon to add AROS to the list of operating systems.

I am sticking with an extended Pascal (excluding two features), though I figured I would give a glance at other options I considered:

There was the option of a compiled subset of BBC BASIC V (a subset, as some things just cannot be compiled). This would have been fun.

The best option was without question ANSI C, or at least a very good subset thereof (I do not think I would have implemented support for legacy K&R style functions, though I would have included support for inline functions). Though we already have so many C compilers, some of which are very small and simple (like TCC).

Then there were a few others I looked at, including BCPL, B, and LOGO. These were either just too lacking or, in the case of LOGO, too different from what a newcomer to programming should see.


And the Linux kernel is moving in the direction of Multics and Ada. That is to say, too complex for its own good.

Multics was a good and grand idea, and many of its features survive to this day. It was such a grand universal idea that it doomed itself to failure; its complexity hid some pretty nasty bugs, as I understand it.

UNIX began as "Unics" (a single-user pun on Multics) and was developed just because the engineers were fed up with the large size and complexity of Multics.

You are right of course, UNIX and now Linux are successful and have evolved over the decades into quite large systems.
I think that's inevitable.

But the great thing about Linux is that you can get various "lite" versions, some of which will run entirely in memory, or in embedded devices.
That gives you a small fast system.
Similarly for a server, you want advanced networking and multi-tasking, but no graphics overhead.
Then again, at the other end of the scale, the top 500 super computers all run Linux.
I love its flexibility.

And the Linux kernel is moving in the direction of Multics and Ada. That is to say, too complex for its own good.

Multics was a good and grand idea, and many of its features survive to this day. It was such a grand universal idea that it doomed itself to failure; its complexity hid some pretty nasty bugs, as I understand it.

UNIX began as "Unics" (a single-user pun on Multics) and was developed just because the engineers were fed up with the large size and complexity of Multics.

YES. That is part of what I am talking about, thank you.

You are right of course, UNIX and now Linux are successful and have evolved over the decades into quite large systems.
I think that's inevitable.

A big part of the trouble is not so much the size or complexity of the system in full. A big part of the trouble is that the kernel itself is getting rather unwieldy, beyond the ability to maintain it well. There is also the fact that there are a lot of extensions that run in privileged modes, so their bugs can affect more of the system than would be ideal (ironically enough, one of the things that Linux users say is wrong with RISC OS is more of a problem on Linux).

The other big part of the problem is that even for a light version you need a kernel that has grown too much for its own good. Have you looked at the kernel source tree in the last 10 years? The kernel alone, even without ANY of the optional modules (static or dynamic), has gotten way out of hand.

Now there are solutions, though any of them would require a complete rework of the kernel and would likely break most code that is currently in kernel modules, requiring significant rewrites to allow for running in user mode and having access only to the resources allocated to them (such as mapped-in HW registers).

But the great thing about Linux is that you can get various "lite" versions, some of which will run entirely in memory, or in embedded devices.

The "lite" is not so lite any more. Yes, you can run a current Linux in as little as 4MB of RAM with no mass storage at all, though that is pretty close to as small as you can go while still having a usable system.

That gives you a small fast system.
Similarly for a server, you want advanced networking and multi-tasking, but no graphics overhead.
Then again, at the other end of the scale, the top 500 super computers all run Linux.
I love its flexibility.

It is the flexibility that is killing it. Unfortunately.

Do not get me wrong, I like Linux more than most systems, though if MINIX 3 got to a point where it was usable day to day I would switch my *nix with little thought about it.

Also, many of the Linux systems as a whole are getting out of hand in another way: each person seems to have a different idea about how to do the same thing, and which shared libs to use to do so, so we end up with an insane tangle of shared libs (on my Raspbian the number of shared libs actually in use is absolutely insane). This sometimes breaks things because of something overlooked in updating a distro; look at the issue of running OpenSCAD on Raspbian Stretch, it is a prime example.


Yes the kernel source is big. Don't forget that when you download the kernel source that includes the code for all supported architectures. It includes the code for all drivers of all supported devices on all architectures. What you actually end up running on a particular machine is a lot less.

There is no evidence that the kernel is getting "rather unwieldy, beyond the ability to maintain it well." The kernel lead developers will tell you the opposite. Everything is well partitioned and managed. Development is proceeding as smoothly as ever.

Despite everything running in kernel mode, I'm amazed how device drivers and such can crash, dump a stack trace, etc., without bringing down the entire system. Seems a micro-kernel is not required to make things robust. I'm no expert on such things mind, just making observations.

Yes, running Linux in 4MB of RAM might be a stretch. But Linux runs on many very small devices. Like phones and tiny WiFi routers etc. This does not seem to be a problem.

There is no sign that the flexibility of Linux is what is killing it. Quite the opposite in fact. It does run everywhere after all.