Make a tool system that can run on-chip, that contains a file manager, editor, compiler, and debugger. The whole thing would definitely be under 32KB. It wouldn't need any internet access, authentication, email account, license, maintenance fees, etc. It would just work and you would own it. Nobody could "disappear" it.

There are people out there making products with P1 chips. Many of them have enjoyed the fact that once they got up and going, no further changes were required or forced on them. Many of these people want P2 chips.

If you ask me, the single most important thing to do right now is debug the instructions and hardware functionality. I'm still around, but hit a super busy patch. Things got rough when code I wrote for an older FPGA needed porting, and I kind of fell off the bus at that point. No worries, though. I'll be working on it again, probably just doing a rewrite on a final FPGA.

The second most important thing to do is make the chip.

If we get there, and people know they can get one?

Tools will be a non-issue. Trust me.

Chip, I want that on-chip set of tools, and I want them for the simple realities you've outlined. I've stayed in on this for two reasons:

One is that some chip functionality is compelling, and I have ideas on what I want to do with it. The second is the prospect of a stable environment for the longer haul, sans dependencies. I have my reasons, and most of them boil down to time / risk management. Staying current with a set of tools is expensive. For those of us who do this professionally, that's no worry; the benefits outweigh the costs.

A lot of others see it very differently, yet may still make products, automate, etc... I'm in that camp.

The last little project I did was kind of amazing. Made the circuit, fired up PropTool, did what I know how to do cold, and it worked. Another team, with a lot more advanced stuff and understanding than I have, was struggling. What I've learned here over the years is:

getting the tools out of the way of the problem has extremely high value.

While I don't know as much as many here, what I do know, I know cold. And I can apply it, if not winning any awards. LOL Right? I could look at the problem, focus my brain on understanding that, then make the P1 do what it needed to do. Done, next. Honestly, being able to do that is why I jumped into Propellers in the first place. I was writing fun code in SPIN in a day. PASM a couple days later. Nothing, and I mean nothing has come close, unless you want to count old legacy computers.

Showed that off, and it was very, very interesting. Still is. They are building a monster, but it's built on a proven basic solution I did on a P1. (and frankly, P1 chips could do a lot more, but they have a plan involving very complicated PC software, and that plan is valid, so I step away happy to have contributed what I did. It's R&D anyway. I took a weekend and did some fun R, and they are gonna take years and do some D. Fine by me. )

That's the compelling part about a "finished" on chip toolset. It doesn't need to do everything possible. If it's lean, mean, sort of what the PropTool can do as a compiler, it's good enough. If it has SPIN with inline PASM? Golden, but I would take PASM only.

What I'll do is learn that thing, and I'll know it cold, then spend my time solving problems and getting stuff done, not on meta-learning that doesn't solve problems or get stuff done. Personal choice, but hey, our time is all finite. Got a lot of stuff I have to do. Maybe I'm kind of odd, but I don't generally forget. I can still fire up an Apple or Atari, SGI, whatever machine I've worked with, and do what I did with it. After a brief warm-up, it's all there.

Bootstrapping me onto new things takes time. And I do it, but I want it to pay off, like being able to use it for a good long time after having done it. More importantly, code I wrote needs to still work. I build on what I've done before, and I build on what I've been given, or have found others have done before. Works the same way. Once it's debugged, and I know it cold, it's on to application.

In this interconnected world, that won't fly, and everyone knows it. No worries. We advance things slowly, though we often take big steps backward, but it does advance. That way of things won't change, nor should it.

But where all that isn't required? The other vision, on chip, lean, mean, simple, competes, and it competes very well, and it absolutely should exist. That's a position I've maintained here since the beginning. This post is another expression of why that is.

Let's debug what we've got until it's solid, to the point where errors, if present, aren't gaffes. People will be happy with that.

Let's make the chip.

Having the tool discussion is a very nice problem to have.

I see that going two ways:

One, the on chip vision Chip has can get done, because he wants to do that. Let him, and let him do it lean, mean. He knows how to do that. Cold.

The other will make good sense the moment we know we are gonna get chips, and that's the pro / official tool set.

After the chips are for sale, and assuming those two exist?

Wonderful problem to have, the more the merrier. No joke.

For those worried about fragmentation, different code bases, I submit that will happen no matter what. At least with Chip doing what he sees possible to do, there will be one "works no matter what" code base out there, and it will be small in scope. I know for a fact that will get targeted and the crap used out of it. We all will benefit.

Others, doing more advanced, more interconnected, more dependency laden things can very easily write a filter / processor and mooch the good stuff into their environment, and should do that. Interestingly, I don't think the on-chip crowd will care much about going back the other way, but the same could be done, and should.

One last thing: Drivers, core bits, loaders, other goodies, if written to the on-chip tool set, will always work. Think hard about the value of that. It's extremely high, as the P1 OBEX shows.

My only wish would be to have the monitor back from the P2-Hot, but with enough typing one could program it in via a serial terminal.

Seconded, but I won't push for this. I will note the ROM has room.

That thing was awesome. I would fire up a P2 HOT, and just start exploring, testing, tinkering.

I did some amazing stuff, and one of the best was a hot display switch. Had a thing running that Baggers and I were toying with. Was on VGA. Wrote a TV driver that matched the display RAM. Was able to load, run the program on VGA, kill the display COG, squirt in a TV COG image, launch it, and there was the thing continuing on a TV.

Honestly, if I had my way, use the ROM to boot load from whatever we decide. Put monitor-assembler into it, and have all that be at the ready.

On boot, monitor available. A monitor command could fetch assembler from ROM, and itself, if needed.

That way, minimal footprint for those people who don't care about such things. Their loader overwrites the monitor, or it's not even fetched at all, and it's business as usual. For those who do care, their loader starts at a higher address, or they just connect via serial: monitor there, able to fetch the assembler, if desired.

With that much done on chip, the rest is an ongoing software project. Leave a jump vector and command byte hook in the monitor / assembler, so that other things, when developed and/or needed, can be fetched from storage and just work natively, as if they had all been developed and put in ROM.

This could be true of the assembler, and maybe should be, depending on what gets decided. The very minimum needed would be monitor with that command byte and jump vector needed to launch and integrate other tools, the first one being an assembler.
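To make that hook idea concrete, here is a small Python sketch of how a command-byte dispatch might behave. Everything here is invented for illustration: the table, the byte values, and the handler shapes are assumptions, not an actual P2 ROM interface.

```python
# Hypothetical sketch of the jump-vector / command-byte hook described
# above. A tool fetched from storage registers itself under a command
# byte; from then on the monitor treats it as if it had shipped in ROM.
handlers = {}

def register(cmd_byte, handler):
    """Called by a freshly loaded tool to hook into the monitor."""
    handlers[cmd_byte] = handler

def dispatch(cmd_byte, payload):
    """The monitor's main loop hands any command byte it doesn't
    recognize to the table; '?' is its usual unknown-command reply."""
    handler = handlers.get(cmd_byte)
    return handler(payload) if handler else b"?"

# e.g. a fetched assembler registering under a hypothetical byte 0x41
register(0x41, lambda src: b"ok:" + src)
```

The point of the sketch is only the shape: one fixed vector and one byte of command space is enough to make later tools look native.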

All that said, I would be completely happy with an image I can write a loader for. Boot the chip, it fetches the good stuff, makes a serial connection available, and off we go. Once that exists, adding an editor, or native keyboard mouse is just a matter of expanding on that image, should someone want to make a little bench computer. (I do)

The buffet was open about 3 years ago, most of the partygoers left last year, all the drinks have been drunk, and the party is over.

The hosts are cleaning the house and it is simply the wrong time to open just another bottle of champagne...

And every time it is just one more thing, and then another one. For years now.

And still I read: just one more opcode change, just a couple more registers to save C and Z, just change the PRNG to 34 bits, just...

This simply has to stop or the P2 will never exist.

Mike

If the party was over some time ago, how is it that just last weekend (September 9) the ADDS/ADDSX/SUBS/SUBSX instructions were changed and a request was made for comments? It was that fact that led me to believe change was still possible. Beforehand I thought everything was done.

Re XORO34, I had that idea a couple of months ago, but everyone seemed content with XORO32 and I knew if I raised the subject there'd be the usual negativity. In the end it felt a bit like digging up a corpse, and so I didn't say anything. After the unexpected change to signed arithmetic I considered it worth mentioning.

As for last night's notion, maybe a couple of things I wrote were missed (bold emphasis added):

Yes, the monitor was neat because it let you exercise pins, as well. We'd need to give it WRPIN/WXPIN/WYPIN/RDPIN commands. Then, you could use the smart pins without having to code.

Having smart pin access from the PC, would be a GREAT way to assist development.
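As a rough illustration of such monitor commands, something (monitor-side or PC-side) would have to turn terminal lines into smart-pin operations. WRPIN/WXPIN/WYPIN/RDPIN are real P2 instructions, but the "<op> <hexval> <pin>" text syntax and this parser are purely hypothetical:

```python
# Hedged sketch: parse one monitor terminal line into an (op, value, pin)
# tuple. On real hardware each tuple would be executed as the
# corresponding smart-pin instruction; here we only parse.
OPS = {"WRPIN", "WXPIN", "WYPIN", "RDPIN"}

def parse_monitor_line(line):
    """'wrpin 4A 10' -> ('WRPIN', 0x4A, 0x10); values are hex."""
    parts = line.split()
    op = parts[0].upper()
    if op not in OPS:
        raise ValueError("unknown monitor command: " + op)
    if op == "RDPIN":                        # read takes only a pin number
        return (op, None, int(parts[1], 16))
    return (op, int(parts[1], 16), int(parts[2], 16))
```

That is the whole interface: four verbs and hex operands would already let you exercise a smart pin from a dumb terminal without writing any code.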

What is the current ROM size & usage % ?
IIRC the boot times were tweaked to make them faster from reset ? That's getting more important as time moves on.

It may not need a full-blown ASCII Terminal Monitor, as some smarts can be coded PC-Side.

Just some ROM loader calls to R/W memory would be enough ?

Quite a few vendors now have graphical schematic-like configurators that change to show clock paths, clock speeds and device settings as SFR values change.
P2 could do something similar: you enter the mode/speeds etc, and then drop into a P2 and check the pin.
It's not going to be super-fast in updates, but most of the work is getting your head around the initial setups.

The drawback with RCZR & RCZL is they are sequential. Imagine if any register could hold 16 copies of the flags, with random access for both reading and writing in a single instruction. All flag worries would disappear. Something for the P3 perhaps?

Let's call this hypothetical instruction FLAGS. Apart from D, it would need an 8-bit operand to specify separate 4-bit read and write addresses of the CZ pairs within D. FLAGS could fit into one of the empty D,S instruction slots near the bottom of the opcode map.

FLAGS is a rotten name, both too vague and too long. XCZ is much better, for eXchange C and Z. S is now 9-bit. S[3:0] is the read address and specifies which CZ pair in D, CZ[0]-CZ[15], will be copied to C (if wc) and Z (if wz). S[8:4] is the write address and specifies which of CZ[0]-CZ[15] the current flags are written to when S[8] = 0. Writing is disabled when S[8] = 1 so that any write address of 16 or above can be used to throw away flags that contain junk.
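To pin the proposed semantics down, here is a Python sketch simulating what XCZ would do to a register. Two details are my assumptions, since the description above doesn't fix them: the read happens before the write (so an exchange with the same address works), and C sits in the upper bit of each CZ pair.

```python
def xcz(d, s, c, z, wc=True, wz=True):
    """Simulate the proposed (hypothetical) XCZ instruction.
    d: 32-bit register holding 16 two-bit CZ pairs, CZ[0] in bits 1:0.
    s: 9-bit operand; s[3:0] = read address, s[8:4] = write address,
       writes disabled for addresses 16..31 (i.e. when s[8] = 1).
    Returns (new_d, new_c, new_z)."""
    rd = s & 0xF             # which CZ pair to copy into the flags
    wr = (s >> 4) & 0x1F     # where to store the current flags
    pair = (d >> (2 * rd)) & 0b11          # read from the old D value
    new_c = (pair >> 1) & 1 if wc else c   # assumed layout: C above Z
    new_z = pair & 1 if wz else z
    new_d = d
    if wr < 16:              # 16+ throws the current flags away
        new_d = (d & ~(0b11 << (2 * wr))) | (((c << 1) | z) << (2 * wr))
    return new_d & 0xFFFFFFFF, new_c, new_z
```

For example, saving C=1,Z=0 into slot 3 of a cleared register sets bits 7:6 to 10, while reading back the old (zero) pair into the flags.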

The drawback with RCZR & RCZL is they are sequential. Imagine if any register could hold 16 copies of the flags, with random access for both reading and writing in a single instruction. All flag worries would disappear. Something for the P3 perhaps?

It comes down to the Logic cost - to me that's looking like a LOT of Bit-Muxes needed, vs a simpler shifter for the RCZR.

The existing generic barrel shifter in the ALU should be able to handle it if we wanted to throw a full two-operand opcode at the job.

"We suspect that ALMA will allow us to observe this rare form of CO in many other discs. By doing that, we can more accurately measure their mass, and determine whether scientists have systematically been underestimating how much matter they contain."

For example, last month I had a little over 3GB left in my Internet quota for the month. Windows 10 wanted to update, so I foolishly let it loose while I went out. Silly me!!!! Windows update downloaded over 9GB, so I went over my limit for the month by 6GB. This cost me $60 !!!

Win 10 is not so polite. It just barges in, takes over your machine, and spends an eternity downloading, installing and rebooting. Often at the most inconvenient times.

There are some settings to delay updates for some days. But eventually it will take an axe to your door and stuff you.

When this first started happening to me, I thought I must have not done something simple to prevent it. Then, one day I was on the phone with Tanner EDA getting some tech help when there was a long awkward pause, and then the person said that their computer was installing updates. Nobody seems to be able to control it.

This is what I want to do, personally: Make a tool system that can run on-chip, that contains a file manager, editor, compiler, and debugger. The whole thing would definitely be under 32KB. It wouldn't need any internet access, authentication, email account, license, maintenance fees, etc. It would just work and you would own it. Nobody could "disappear" it. It would be there years later, still working as it did originally. No magic "updates" would alter it in some unexpected and irreversible way. You could get "married" to it, so to speak. As long as there is a need to input and output analog and digital signals on a first-principles basis, I think there's also a need for a stable and reliable platform from which to operate. This is my Luddite pipe dream. I can't help but think that people would discover how nice (and vital) it is to have a stable base on which to develop things.

That's a reasonable long term goal, but probably not good for fastest P2 product ramp. The product needs to appeal to mainstream designers.

Yes, it is nice to be 'self contained' but development these days is not done in isolation.
What you describe is quite close to the TurboPascal IDE, but that ran on DOS, and DOS had no means to view PDFs
I can't really imagine development today, without PDF readers, and some web access.
The nearest modern equivalent I can think of is Project Oberon, and I think that is much larger than 32k, and still cannot view PDFs or browse the web ?

Users these days expect chips to support Compilers/Assemblers, and MANY languages.
That's too much to load into 'stand alone', which will always be constrained.
Better to focus on USABILITY things like
* faster downloads
* Compilers and interpreters for P2
* Boot and run from Quad and Octal Memory
* Start Debugger design

This is the endless debate. The product has to appeal to "mainstream designers" in order to make a profit. But the platform space for mainstream designers is already clogged to overflowing with every conceivable variant. Everything has been pared down to the bone in the combined interests of power and profitability. What's the point in making the P2 just another variant of the same thing?

Meanwhile, who is going to throw away their laptop/tablet/surface because the P2 is out? Nobody. If I need to view a PDF, I've got 20 ways to do that already. Just this morning, sitting at the kitchen table, I was using one laptop to display specific listings and PDF guides while installing certain IDEs, drivers, libraries, etc on a second laptop. With all the resetting and power-cycling you have to do, it's almost idiotic to use just one box for everything.

The P1's radical departure from the mainstream, combined with its power and inherent simplicity, is what endears it to me so greatly. I cringe every time someone wants to make the P2 just like everything else out there.

The P1 got used in ways I imagine Chip never envisioned. No doubt the P2 represents an even larger tabula rasa. I don't think Chip is trying to exclude anyone, but perhaps it's a waste of time right at this moment for Parallax to anticipate, and provide for, every use case.

... What's the point in making the P2 just another variant of the same thing?

That misses the point entirely.
P2 stands quite well on its own as a Silicon Product, but its development flow must be accessible to designers
That means 'no surprises' in how the tool chains present.
There is an enormous inertia and inbuilt learning out there already around Embedded development, and most designers have multiple cores and suites installed.

In the office here, there are probably close to fifty evaluation boards, only a small number of which made the cut for design-in, and getting from the 'try' to 'use' baskets is going to be super-critical for P2 sales. That means the 'try' path needs to be what users expect, and already know.

So long as they can code, users will think of plenty of creative uses for P2.
That may even include a self-contained 'DOS/Turbo-pascal like' offering, but that certainly should NOT be the first or only offering.

Even that standalone version, is going to need a PC host connected to the internet, for the frequent updates.

Exactly.
Because it is the goal to NOT update it all the time, but just leave it like it is.

Cool Goal, but how many updates BEFORE it hits that goal do you think ? How many years ?
Oh wait, is it flawless on the first and only release ?! - why not place it into ROM if it never needs updates ?

One example from the real world: PropGCC already shows 20 releases, and 14 branches... P2 FPGA is Rev 20 and some suffix letters...

In the office here, there are probably close to fifty evaluation boards, only a small number of which made the cut for design-in, and getting from the 'try' to 'use' baskets is going to be super-critical for P2 sales.

Ha, isn't that the truth. Over the years I have collected a pile of dev boards, at home and in the office.

That means the 'try' path needs to be what users expect, and already know.

Oddly enough it worked the other way around with the Propeller for me. Back in the day what users expected was yet another closed source tool chain with every new device. Yet another closed source IDE with every family. Lots of messing with licenses, limited functionality free versions, oh and strange programming dongles that perhaps only worked with one family of device and only on Windows.

Yes, they were familiar C/C++, but what a palaver. Then there was the Propeller, with its extremely weird architecture and weird programming language built in. Nothing like what a typical embedded dev would expect. But hey, it just works out of the box.

I do agree though. The P2 had better have GCC working out of the box.

Even that standalone version, is going to need a PC host connected to the internet, for the frequent updates.

Cool Goal, but how many updates BEFORE it hits that goal do you think ? How many years ?

The answer to that question is precisely why several of us want to limit tool scope at this early time. Put the really advanced toys into the "Pro" branch and go.

A lot is done. Assembler won't need much more. Just a fix or two as we really exercise it. Chip will take SPIN 1, use those guts, extend it some, and get to a working state fairly quickly. This need not be maxed out on any axis, just needs to work in common sense ways and not have painful bugs. Adding the very minimum to SPIN 2 makes a lot of sense for this case.

And a bunch got taken out! SPIN got lean, with functions used for most things, in-line PASM, etc... filling in the rest. It's not going to take extremes to get something very useful, functional.

And remember that "But we should be able to put it all in one file" discussion? Multiple ones?

Remember all those "but the tools should..." discussions, and a lot of us saying, "yeah, but later?"

Even that standalone version, is going to need a PC host connected to the internet, for the frequent updates.

I presume you mean for compiler / IDE / documentation updates.

Yes, everything that is updated will need a download path. So that means all software and firmwares.

Some seem to imagine a 'standalone P2', but there is no such thing as a stand-alone 100 pin MCU: you need to connect power, and if you really want to cut that PC-USB cord, then you need to decide on a keyboard and display (but still keep that USB for updates & saves), and now you are very much in Raspberry Pi space, only doomed to forever be a poor cousin...

What was imagined to be simple, is suddenly far from it, with connections for Keyboards and Monitors, and a rapidly shrinking subset of users..

To me it makes more sense to focus on PC-Hosted tools that work, and to have a repository that is kept up to date.
Yes, that means you need to use Windows/Mac/Linux, but designers already do that.

I like the idea of a nano-monitor in ROM, that at least allows a host to R/W memory & smart pins etc, and that can be made more user friendly with a schematic-like PC-side interface, as other vendors already do (again, the 'as expected' theme recurs).

A new user cannot be expected to know the spaghetti-soup that is all the P2 or P1 support programs.
From what I read here SimpleIDE has suffered poor revision control, with a very outdated GCC included.

Even the P2 module form will be critical, my personal leaning is for Eval boards that are useful ahead of simplest, but many also want a highly compact module.
I think that suggests TWO modules on release
* a) Smallest and most compact that has power and USB and that can drop onto a larger PCB layout,
..as well as..
* b) Something more than a breakout board, that allows P2 to do something useful in a Lab or class room
Is b) a mother board for a) ? or another design ? dunno - but one PCB is cheaper than 2,

I have timer/counters here that are impressive (google PM6672) and useful, and P2 could certainly 'swallow' a Frequency Counter and Multimeter, and some level of scope.
'Open source instruments' may be enough of a niche to get attention for P2.
I notice Red Pitaya now has STEMlab 125-10 & STEMlab 125-14

I don't need 'standalone' for microcontrollers. I don't even want it. The thing with the P1 was that I got a QuickStart board which I plugged into a PC as soon as I got it, installed bst, clicked on a link on the QS page on parallax.com to get .spin files, 'run' in bst, and the QS board started blinking the LEDs. And then I changed the example in bst, re-ran, and the blinking pattern changed. Took me a few minutes to be up and running after unpacking my first QS.
I don't want standalone for that kind of testing. Am I supposed to find a terminal, or use a terminal emulator? Nope, that just sets the bar higher.

Tor, I think the idea is more like a cut down RaspberryPi but without all the updates. Plug in a keyboard, screen and SD card ... or even two SD cards.

"We suspect that ALMA will allow us to observe this rare form of CO in many other discs. By doing that, we can more accurately measure their mass, and determine whether scientists have systematically been underestimating how much matter they contain."

But I don't want to plug in a screen etc. I have a PC set up already. I want to plug the MCU board into the PC; it's by far the simplest approach. It's a hassle every time I have some other kind of equipment (a tower box or even a Pi) to set up: fooling around finding a screen I can use (and the space for it), and a free keyboard.

With the PC it's plug and play. The laptop has its own keyboard and screen, no need to look for those. This works in a coffee shop too, unlike something needing an external monitor and a keyboard.

"We suspect that ALMA will allow us to observe this rare form of CO in many other discs. By doing that, we can more accurately measure their mass, and determine whether scientists have systematically been underestimating how much matter they contain."