Tuesday, December 04, 2018

Apple CEO Tim Cook on Monday said he believes it is "right" and "moral" for technology companies to block hate speech and violent conspiracy theories from their platforms.

"At Apple, we believe that technology needs to have a clear point of view on this challenge," Cook said at an Anti-Defamation League (ADL) conference on Monday afternoon. At the conference, Cook accepted the ADL's first "Courage Against Hate" award, geared towards private sector leaders "dedicated" to fighting bigotry. He was also the conference's keynote speaker.

"That’s why we only have one message for those who seek to push hate, division and violence: You have no place on our platforms," Cook said during his keynote address. "You have no home here."

It's long past time to start enforcing the West's long-dormant blasphemy laws, and doing so with a vengeance. As I have repeatedly pointed out to everyone, the whole "free speech" movement was never anything but an anti-Christian Fabian campaign. Now that the forces of evil feel they have sufficient power, they have rejected the intellectual tools they utilized to dislodge Christian morals and ethics from their dominant societal position in the West.

This is not just a cultural war, it is a war for the soul of the West. And it is not being fought between Right and Left, it is not even truly between Nationalist and Globalist. At its core, it is between Good and Evil, between the Truth and the Lie.

The West needs to send a very clear message for those who seek to corrupt and control it. You have no place in our civilization. I was once a big fan of the Apple II, but I will never buy another Apple product of any kind, for any reason.

Microsoft is not *especially* converged. It's been taken over by a horde of incompetent and corrupt Indians, so they've certainly got their own set of problems, but they aren't especially SJW. Perhaps a benefit of not being in the Valley.

And building your own means putting your money toward exactly the high-performance devices YOU want, and only those you want. For example, most prebuilts with high-end graphics also have high-end sound, and vice versa. But if you are using the computer primarily for sound editing, or for editing 3D graphics models, you're only going to want one of those.

AMD's 12-core with dual threading is cheaper than an 8-core Intel i7 with dual threading. The AMD 12-core has a shorter pipeline (Good thing -- recovers faster from a bad branch prediction) than the i7 line. And Intel's alternative? There is none. Intel doesn't even make a 12-core.

C-64 was absolutely the best 8-bit computer offered, except for that overly stiff keyboard. It's amazing what they packed into that little box, AND offered the complete system bus through a port (the 'game card' slot).

A nearby cider mill was having problems with kids setting fires in the summer. I designed an alarm system using a C-64, some sensors, a bit of "glue logic" and some 6502 assembly code.

We just need an anti-Christian kill quote to make an Apple boycott a churchy virtue signal. I'm not seeing it in the linked article. It's all words like "hate" and "divisiveness" that churchians will approve of.

Samsung is building its own ecosystem though uses Google Android as a base and includes the usual Google Apps. They are also working on Tizen. Then there's Jolla (from the ashes of Nokia).

Apple will take a while but will fail. Cook's problem is that Apple is trying to be both a walled garden and a services company. The two won't work together. Is iTunes or Safari even still available for Windows? Example: Apple Maps. It isn't being crowdsourced. There is no big online map with satellite images like maps.google.com or maps.bing.com, and they are sending out Apple Maps street view cars.

Apple TV will only work on their hardware. Same with streaming music. Similar with iCloud services. (What about hateful rap music?) iBooks? They have tiny boutiques within their walled garden, but it is like the shops at an airport.

And they are being sued for antitrust over their take-it-or-leave-it 30% cut.

The apps are specific to their platform. You know they've stopped reporting iPhone unit numbers and now report services instead; you will know they are dying when you can access the services on Android or other platforms.

When it comes to computers, you can easily avoid the worst SJW corporations, you can even avoid other companies that are jackasses outside of a culture war context.

I have no idea what people do for phones, I don't think my option of not buying a phone since I got a Blackberry about 12 years ago is going to cut it. Microsoft has 1% of smart phone sales with the rest split between Apple and Google iirc.

The 6502 was a microcontroller, not a CPU. That it could be made to do the work of a CPU, convincingly, is due to Steve Wozniak's vision of using a $25 microcontroller rather than a CPU, the cheapest of which was $125 at the time.

The 6800 line, especially the 6809, were the best 8-bit CPUs out there. A C-64 built around a 6809 would have been the best of all worlds -- a CPU that could use relocatable code AND the best 8-bit on-motherboard peripheral chips (like the SID sound chip).

Not even close. Both the equivalent Amigas and the Tandy Color Computers blew it out of the water.

> Is iTunes or Safari even still available for Windows?

iTunes is. I had to install it at home last night. I'm trying to reset an inherited iPad for someone. I think you can still get Safari, but as far as I know it's no longer supported.

> I have no idea what people do for phones

For a smartphone at the moment, your only real options are Apple or Google. There are some alternatives out there, but finding them is difficult and don't expect much support. Amazon (who have their own problems, but they're not fully converged yet) seems to have dropped the idea of offering their own phone.

Mr. Vox, You don't REALLY think this sodomite Cook cares even if in the unlikely event people stop buying Apple's products and the company goes belly up, do you? He and his ilk ALWAYS have Golden Parachutes and tend to land VERY gently.

The big challenge with these corporate boycott type movements is that there aren't any good alternatives. If I reject Apple (which I have long done; I think it's more like a cult than a corporation) then what kind of phone do I get? An Android, powered by Google, who's just as bad as Apple. You've got your choice between the Hitler phone, the Stalin phone, or no phone.

There has to be a more aggressive and comprehensive plan than simply to run away from our corporate enemies and live Luddite lives without any of the modern conveniences, which are made by corporations that have been converged. I don't know what it is, or how to get the zeitgeist to catch up to the point where something meaningful can be done about it, but simply not buying Apple isn't a solution that solves very much.

LOL. You have no idea what you're talking about. A microcontroller (MCU) is a microprocessor (aka CPU) with additional RAM, I/O, timers, and other extras usually added to a computer through external peripheral chips such as the 6522 VIA and 6526 CIA. The MOS 6508 with 256b SRAM and an 8-bit external I/O port was a true microcontroller. ALL microcontrollers are also microprocessors (CPUs).

Or, from Wikipedia's article on microcontrollers: "A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is also often included on chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications consisting of various discrete chips."

Please do not speak authoritatively about subjects that you do not understand.

If things had gone a little differently (had Tramiel stayed at Commodore), we could have had Unix workstations from Commodore by the late 80s, which would have been dirt cheap compared to the competition. It's hard to say how many years the Microsoft/Apple duopoly set us back.

All non-Apple smartphones run on some flavor of Android. Maybe you can find a Windows-based device but I don't trust MicroShaft either. There are a number of new Chinese/Korean phones that lend themselves well to the LineageOS system, which is based on Android but is open source and has privacy features that Android doesn't. Essentially you could get a system without all the spyware.

"Courage Against Hate." lol Were they getting death threats or something? Because I see nothing "courageous" against a basic human emotion. You might as well be a f*cking robot. But these are NPCs, after all.

Seems their iPhone sales have been slumping for some time. The advent of Android Auto and 3rd-party iTunes integration apps means they no longer have a lock on those who want proper integration with their vehicles.

"It's long past time to start enforcing the West's long-dormant blasphemy laws, and doing so with a vengeance."

The Catholic Church had perfectly good reasons for implementing the Inquisition, and it's no surprise that all the enemies of God found unity in piling on the Church once it began. Once the tools of Satan destroyed the authority the Catholic Church wielded over the monarchical powers, any blasphemy against God was possible, even championed.

Cook is signalling for 2020. Expect a full court press against Trump supporters by big tech, they're not kidding around. If Trump wants to win a second term he'd best wake the hell up and start dealing with this crap. Unfortunately I get serious boomer vibes from him when it comes to technology and the internet. Yes he has his twitter but he spends way too much time watching Fox News and CNN.

Did y'all notice it was an ADL award? It just celebrated its 100th anniversary in 2013 - and a glorious founding it was, as Ron Unz delineates at http://www.unz.com/runz/american-pravda-the-adl-in-american-society/ - as usual, the connection never fails.

@31 - The big challenge with these corporate boycott type movements is that there aren't any good alternatives.

This. I would boycott Apple but then I'm stuck using Microsoft spyware on the desktop, and supporting racist Indians who lockout white developers at the same time. And on the phone I would be stuck with Google SJW spyware. How is either a better option?

FOSS is an option on the desktop for some people. But much of the software I need to work just isn't available (or runs terribly) on Linux.

I get that we need to build our own platforms. But in the mean time we also need to make some corporations bake our cakes. Hopefully the lawsuit against Apple's iOS app store succeeds and forces them to open the platform. Apart from that we need Trump to bust up some monopolies and declare any silicon valley company which distributes 3rd party media to be a common carrier.

In the terminology of the time, a 6502 was a microcontroller, not a CPU, the distinction being a simple instruction set (e.g. no multiply or divide like on an 8080 or 6809).

NO instruction-executing chips at that time had any built-in timers or memory beyond the basic registers visible to the programmer, plus the invisible "z register" on the output side of the ALU.

Don't try to apply modern terminology to 40-year-old chips. It doesn't work. Look at Mostek's literature of that era. Mostek defined it as a microcontroller. Microcontrollers of that era were meant to do things like operate a microwave oven, and were specifically NOT designed with an internal architecture and instruction set for numerically intensive computing. Compare to the 8080 and 6809, both of which had internal 16-bit manipulations available by using instructions which operated on 16-bit internal quantities.
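The 8-bit-versus-16-bit point above can be made concrete. A minimal sketch in Python, purely illustrative and standing in for the assembly-level logic: a chip with only 8-bit registers has to synthesize a 16-bit add out of two byte adds plus a carry, and a chip with no multiply instruction has to loop over the multiplier's bits.

```python
def add16(a, b):
    """16-bit add built from two 8-bit adds plus a carry, the way
    8-bit code chains ADC instructions together."""
    lo = (a & 0xFF) + (b & 0xFF)               # low bytes; may overflow a byte
    carry = lo >> 8                            # carry out of the low byte
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF  # high bytes plus carry
    return (hi << 8) | (lo & 0xFF)

def mul8(a, b):
    """Shift-and-add multiply: the loop a chip without a MUL
    instruction must run, one bit of the multiplier at a time."""
    result = 0
    while b:
        if b & 1:          # low bit set: add the shifted multiplicand
            result += a
        a <<= 1            # shift multiplicand left
        b >>= 1            # consume one multiplier bit
    return result

print(hex(add16(0x12FF, 0x0001)))  # 0x1300
print(mul8(7, 6))                  # 42
```

A 16-bit-capable part does each of these in one instruction; the 8-bit part burns a handful of instructions per add and a whole loop per multiply, which is the "not designed for numerically intensive computing" point in practice.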

Now that the forces of evil feel they have sufficient power, they have rejected the intellectual tools they utilized to dislodge Christian morals and ethics from their dominant societal position in the West.

This, exactly. They sold the Christians, when they were in power, a false bill of goods; then turned around and used it against them. (Same goes for feminism/women's suffrage, obviously.)

Enough. Enough with not responding to these blatant attacks. It's time to use the heavy hand of state power to fix this mess. Forget building our own platforms to "compete" with them in a market they already control; why fight them with such an inherent and often insurmountable disadvantage? We should do what we do best, which is to conquer and make better that which we conquer. Hell, the Right pretty much has the entire government already, they just need to step up and -- in the marvelous words of Chancellor Palpatine -- do what must be done.

"Apple CEO Tim Cook on Monday said he believes it is "right" and "moral" for technology companies to block hate speech and violent conspiracy theories from their platforms."

He's talking about Alex Jones. They used Sandy Hook to drive him off social media, but Jones's real crime was to come out strong for Trump and use his reach to get a significant number of working-class whites, who follow him, out to the polls. Apple, Google etc. would have had detailed analytics on him, which is why they wanted him gone. Of course not a word from Trump, not a peep of protest, let alone issuing Jones a White House press pass. At the same time thousands of right-wingers, but also a significant number of anti-neoliberal leftists, have been de-platformed, but all small beer compared to Jones; he was their main test subject to see how far they could push political censorship. Alpha testing completed.

For their beta they'll start moving against bigger targets. Breitbart, Daily Caller etc. By 2020 it'll be a mass clear out. It'll be everything from denial of web hosting to exclusion from banking services. We will know shit has gotten real when ordinary citizens guilty of wrong think will be excluded from mainstream online spaces, as in no web banking, no social media, no entertainment.

"Please do not speak authoritatively about subjects that you do not understand"

I'm only a computer engineer who designed real-world chips for a DoD project in my junior year, and who, at one time or another, programmed at the assembly level on each of the most widespread 8-bit chips of the 1980s.

Refrain from criticizing your more knowledgeable and experienced elders, who were actual, functioning adults at that time. It will save you much grief.

Anyway, looks like the gay Frenchman has cucked out and caved to the yellow vests. Y'all paying attention? White men directing extreme violence against authority scares the living shit out of the elites. If Macron hadn't caved it'd be either a plane ride into exile or being ripped apart by the mob.

There are several reasons not to buy Apple products. One is that they "lock you in" with the iTunes platform. That is, if the computer you are running your iTunes on crashes and you have to do a complete reinstall of the OS (and iTunes) you can no longer upload your music from your music player on to your computer. I found this out a couple of years ago.

Some years ago Tim Cook also asked those of us who think global warming is a fraud not to buy any more Apple products. I am more than happy to oblige.

I always find it rich when anti-Christian lefties invoke morality as their justification for doing one thing or another. In a godless, purely materialistic universe morality exists only as specific synapse configurations within the brains of humans. So, saying you should do something because it's 'moral' makes as much sense as saying you should do something because of the specific pH of your stomach acid - it's a non sequitur.

Absent an external telos bestowed by a supreme and just God, there is only the will-to-power and common agreement. If a man had the power to cow everyone into agreeing with him, then he could deem whatever he liked as moral and few would dare gainsay him. Likewise, common agreement is a fickle thing that is primarily determined by whatever is convenient for the majority.

So when people like Cook invoke 'morality', the question every Christian should be asking is "Whose morality is he invoking and to what end does it serve?".

David The Good wrote: Finding alternatives has gotten quite hard. What non-SJW company is making computers and operating systems right now?

I have an idea to address this issue; email me (this handle @gmail) and I'll reply with a document outlining how I want to address the "making computers" bit, and a little bit about an OS I'm designing.

JAG wrote: Too bad there isn't still a Commodore option. The C-64 was better than the Apple II, was cost-friendlier, and no overt leftist pozz.

I liked Commodore; it was amazing what they did with such hardware -- I suspect that if we made as effective use of modern HW as they did, we would be at least an order of magnitude better performance OS- & SW-wise.

Damelon Brinn wrote: If things had gone a little differently (had Tramiel stayed at Commodore), we could have had Unix workstations from Commodore by the late 80s, which would have been dirt cheap compared to the competition. It's hard to say how many years the Microsoft/Apple duopoly set us back.

That may have been for the best; Unix and C have, IMO, set the software industry back decades. I mean, it's taken 20-25 years just for the industry to realize that [forcing] ad hoc roll-your-own memory management in every program is a bad idea. It's taken about 20 years to realize that unchecked array indexing, with no way to actually check (the semantics of C preclude array bounds-checking), is a bad idea.

And Unix is objectively terrible when you look at the impact it's had on programming environments. The use of plain/unstructured text as an interface forces ad hoc processing at every step of your process: if you have a program that you've only ever seen produce positive numbers, you incorporate that assumption into your own program, and then when it pops out a zero or a negative number it invalidates everything subsequent in the process. And there are things like "diff" to consider: what we have with diff is absolutely terrible insofar as checking differences for source code, because it flags your buddy's editor changing tabs to spaces as changes, when what you really want are the semantic changes.
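The tabs-versus-spaces complaint is easy to demonstrate with Python's difflib: a plain textual diff reports the formatting-only change, while normalizing whitespace first makes it disappear. A sketch, assuming whitespace normalization is acceptable for the files being compared:

```python
import difflib

old = "def f(x):\n\treturn x + 1\n"    # indented with a tab
new = "def f(x):\n    return x + 1\n"  # editor converted the tab to spaces

# A plain line-by-line diff flags the change, though nothing semantic moved.
plain = list(difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm=""))
print(len(plain) > 0)  # True: pure formatting noise is reported

# Collapsing runs of whitespace first makes the formatting-only change vanish.
def normalized(text):
    return [" ".join(line.split()) for line in text.splitlines()]

clean = list(difflib.unified_diff(normalized(old), normalized(new), lineterm=""))
print(clean)  # []: no difference reported
```

This is still only whitespace blindness, not a true semantic diff, but it shows how much of diff's noise is an artifact of treating code as raw text.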

This phrase "curation decisions" is the term-of-art that they use to keep people off their platforms.

The question remains: are they a monopoly using their monopoly power to smash competition?

Are they a common carrier, like the phone company, that producers have a right to access (the App Store in particular, iTunes streaming too), in the same way the phone company doesn't get to "curate" the content of your phone calls?

It's good to swear off Apple but they essentially have a duopoly on smart-phones and tablets with Google, who are arguably even worse.

However, yeah, it's pathetic that their current cores seem to be no longer capable of executing floating-point ops in what is considered an acceptable number of clock cycles.

22 cores with only one floating-point processor is an idiotic design decision.

And they're charging over $5000 for one of these crippled beasts. I guess it's for those who only need to run integer-oriented code, like databases and web servers.

You wouldn't believe how hot my Nvidia is running. It's scorching hot, unbelievable really. If they tried to move those ops over to CPU cores with more floating-point support, I wonder if it wouldn't China Syndrome the motherboard in a total meltdown. Maybe that has something to do with it. But it is a ridiculous situation to have all these CPUs sitting around doing a whole lotta nothing.

You put this really well: "I always find it rich when anti-Christian lefties invoke morality as their justification for doing one thing or another. In a godless, purely materialistic universe morality exists only as specific synapse configurations within the brains of humans. So, saying you should do something because it's 'moral' makes as much sense as saying you should do something because of the specific pH of your stomach acid - it's a non sequitur."

But I think any tribe, not just Christian ones, can have a working morality. It might be quite different from ours, but any social code that everyone in the tribe is bound to is a moral system. Headhunters in New Guinea have rules they must follow in headhunting.

Our problem is that we don't have any commonly agreed to set of moral rules since our well developed Christian codes were destroyed by the culture-distorters.

Finding alternatives has gotten quite hard. What non-SJW company is making computers and operating systems right now?

Patents expire after 20 years (max). Lots of tech was created by the military (government employees), and government works are public domain with rare exceptions. Faceberg was DARPA's Lifelog before it was Faceberg. DARPA shut down Lifelog on the same day the filthy Satanist Zuckerberg "founded" Faceberg. It's not as impossible as it seems, though nothing of this nature is easy. Apple, Cook, Goolag, et al. are not invincible, though it often seems like it. Once the debt racketeering machine implodes it will be less difficult, provided one is in a locale that's not in a Balkan-Rwandan-style multi-sided civil war, trying to survive an orc onslaught.

Amiga is coming back. A new CPU (Apollo 080) is being developed in FPGA and coming along quite nicely. Atari Falcon will hopefully be combined with it to combine all the best hardware aspects, then enhance and modernise them. Computers are way faster than necessary for most tasks now. So have a look outside the Windows, Apple and Linux spaces.

Someone needs to remind Tim Cook that, if they engage in censorship, they are not platforms. They are publishers, and publishers are responsible for content. The very act of censorship is a tacit admission of the legal responsibilities of a publisher.

I constantly laugh at Leftists who prate about "morality". They always use it when trying to drum up support for homosexuals/pedophiles/trannies/abortion, etc. Liberal morality = "Let me do whatever I want, with no restrictions."

"And there are things like "diff" to consider: what we have with diff is absolutely terrible insofar as checking differences for source code, because it flags your buddy's editor changing tabs to spaces as changes, when what you really want are the semantic changes."

"You wouldn't believe how hot my Nvidia is running. It's scorching hot, unbelievable really. If they tried to move those ops over to the CPU cores with more floating point support, I wonder if it wouldn't China Syndrome the motherboard in a total meltdown. Maybe that has something to do with. But it is a ridiculous situation to have all these CPUs sitting around doing a whole lotta nothing. "

That heat, instead of being produced in one FPU on ONE chip, would be spread out among 4 CPU chips (well, probably 8, if they were putting a decent number of FPUs on every CPU chip). Of course, this would lower the number of (integer-only) cores/chip back down to something in the 8-12 range.

I'm thinking it would be cool to make one with FOUR RPis inside, working as a cluster. Maybe 3 as masters, and the 4th as an I/O slave for a common disk, mounted as something or other on the slave CPU, and then that mount network-mounted on the other 3.

Dirk Manly wrote:"nd there's things like "diff" to consider: what we have with diff is absolutely terrible insofar as checking differences for source-code, because it flags your buddy's editor changing tabs to spaces as changes when what you really want are the semantic changes."

diff -E, from the man page: "-E Ignore tab expansion" […]

Yes, but this is papering over the issue. What is really needed are tools that understand the language being used and operate in that space, not some raw-text + fixup after fixup. To put it another way, it's like attempting to use RegEx to 'parse' CSV or HTML: actually impossible, due to the nested nature of both. (Nesting HTML's TABLE tag is a good example, and CSV is similar, but requires a bit of textual transformation like " -> "".)
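The CSV point can be checked directly: Python's csv module performs the " -> "" escaping on output and undoes it on input, where a naive comma split mangles the fields. An illustrative sketch:

```python
import csv
import io

row = ["plain", 'a "quoted" word', "with, comma"]

# A real CSV writer escapes embedded quotes by doubling them and
# wraps any field containing a quote or comma in quotes.
buf = io.StringIO()
csv.writer(buf).writerow(row)
line = buf.getvalue().strip()
print(line)  # plain,"a ""quoted"" word","with, comma"

# Naive comma-splitting mangles the quoted fields...
print(line.split(","))  # 4 pieces, quotes still doubled: wrong

# ...while a real parser round-trips the row exactly.
print(next(csv.reader(io.StringIO(line))) == row)  # True
```

The doubled-quote convention is exactly why a lone regex or split can't do the job: whether a comma is a delimiter depends on parser state, not on the characters around it.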

That is the most retarded way to make a new CPU line for mass production.

For one, if you know what to program the FPGA with, then you know what logic operations (and where) to put them on a chip -- saving massive amounts of space because you don't have to include "blow the fuse" constructions at every single gate input.

I can see using FPGAs to make a mock-up on a large board as a way to test the CPU for basic functionality, etc., and also as a way to develop your CPU-testing devices and software for when you put the thing into production. But just marketing a preblown FPGA and selling it as a CPU... sounds like a very shaky business plan.

"Computers are way faster than necessary for most tasks now. So have a look outside the Windows, Apple and Linux spaces."

They're never fast enough, because some idiots always want MORE GUI, and specifically MORE ANIMATION.

When I was in college, we had 150 people logged into a 64 MB, 60 MHz 4-CPU machine (so effectively, 64 MB, 240 MHz), doing editing, programming, compiling, testing, plus a few people playing games. (This was en.ecn.purdue.edu when it was a "test machine" for Gould Electronics' High Performance Unix division, before ECN redesignated it as a "production machine" and the games were removed.)

Yes, the data-processing itself for most tasks is easily accomplished on even a 1 MHz machine for most things. The problem is the damned GUI nonsense consumes HUGE amounts of both memory and CPU cycles. EVEN WITH dedicated graphic processing units.

If you don't like Unix' diff, there are literally dozens of free replacements waiting for you. Unix is not, and was never intended to be your IDE. And your favorite IDE is not, and cannot ever be, an Operating System.

"Yes, but this is papering over the issue. What is really needed are tools that understand the language being used and operate in that space, not some raw-text + fixup after fixup. To put it another way, it's like attempting to use RegEx to 'parse' CSV or HTML: actually impossible, due to the nested nature of both. (Nesting HTML's TABLE tag is a good example, and CSV is similar, but requires a bit of textual transformation like " -> "".)"

Then build a tool to do that. Of course, you're going to need a different one for each language family (C/C++/Java/D can probably all be viewed with the same tool... even, gack, M$ C-dull).
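For the Python family, at least, such a tool is nearly free: compare parse trees instead of text, so formatting-only edits compare equal while real changes don't. A minimal sketch (a serious tool would also need comment handling and hunk-level reporting):

```python
import ast

a = "def add(x, y):\n\treturn x + y\n"    # tab-indented
b = "def add(x, y):\n    return x + y\n"  # same code, space-indented
c = "def add(x, y):\n    return x - y\n"  # a real semantic change

def same_semantics(src1, src2):
    """Compare the parse trees, ignoring all formatting."""
    return ast.dump(ast.parse(src1)) == ast.dump(ast.parse(src2))

print(same_semantics(a, b))  # True: whitespace-only edit
print(same_semantics(a, c))  # False: the logic actually changed
```

For C-family languages the same approach works through a real parser front-end; the point is that the comparison happens in the language's own structure, not in raw text.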

"If you don't like Unix' diff, there are literally dozens of free replacements waiting for you. Unix is not, and was never intended to be your IDE. And your favorite IDE is not, and cannot ever be, an Operating System."

That's funny, because I consider a few terminal windows running bash in each one to be the best development environment there is.

IDEs were invented because MS-DOS is incapable of running concurrent processes. (Although DR-DOS eventually gained that capability, it cost extra to get it, due to M$'s (illegal) per-CPU-sold pricing (as opposed to per-OS-installation-sold), and most white-box vendors didn't even offer it after a while.)

That's funny, because I consider a few terminal windows running bash in each one to be the best development environment there is.

Great. But Shark-boy is lamenting that Unix is not something it never set out to be. There are many problems with C and Unix, but the built-in Unix utilities are not among them.

Of course not. Unix was intended to run Emacs, which is your IDE, and your everything else.

Spoken like a true believer. I won't get into religious warfare here, but non-Communists use VI. You should check it out sometime.

Other than the AMD Platform Security Processor. And since almost nobody buys AMD it's even less probed than Intel's, but serious problems have been found as the link details.

@36 Warunicorn:

"Courage Against Hate."

Every righteous person knows only haters love analog headphones.

@63 David The Good:

It's becoming apparent that Trump doesn't care about his supporters.

Letting the Left beat them bloody for going on three years now, while what's nominally his DoJ persecutes those defending themselves, tells us a great deal of what we need to know. And it's little better for those at the top on our side; ask General Flynn, for example.

Snidely Whiplash wrote: If you don't like Unix' diff, there are literally dozens of free replacements waiting for you. Unix is not, and was never intended to be, your IDE. And your favorite IDE is not, and cannot ever be, an Operating System.

About that… the Rational R-1000 pretty much was a compiler/OS/IDE all mixed together. There's a data museum that is looking to get their hands on the install media, if anyone has leads.

Dirk Manly wrote: @89 Then build a tool to do that. Of course, you're going to need a different one for each language family (C/C++/Java/D can probably all be viewed with the same tool... even, gack, M$ C-dull).

I know; I am working on such a tool.

Dirk Manly wrote: @92 IDEs were invented because MS-DOS is incapable of running concurrent processes (although DR-DOS eventually gained that capability, it cost extra to get it, due to M$'s (illegal) per-CPU-sold pricing (as opposed to per-OS-installation-sold), and most white-box vendors didn't even offer it after a while.)

That's not true at all; the Lisp Machines were an IDE, as is the above-mentioned R-1000, and neither of these ran DOS.

Ominous Cowherd wrote: OneWingedShark wrote: Yes, but this is papering over the issue. What is really needed are tools that understand the language being used and operate in that space, not some raw-text + fixup after fixup.

There is probably an Emacs mode for each language you use.

There is. But the points I'm getting at are more than just "source code editor" -- an IDE should be, well, a development environment where everything is focused on development and integrated together (not mishmashed/forced) -- and while emacs has a lot of extensibility and functionality, it's not so geared. (That said, it might make a good editor for a true IDE.)

"Spoken like a true believer. I won't get into religious warfare here, but non-Communists use VI. You should check it out sometime"

That won't work. It will only lead to frustration, in the same way that every attempt I've made to use emacs has ended in frustration.

Why? Because after so many years of using [editor A], the mere thought of some modification M results in reflexive [finger motions C].

These do not translate to [editor Z], which requires [finger motions W] to perform the same modification M.

And it doesn't matter if A is vi and Z is emacs, nor if A is emacs and Z is vi.

Whichever you learned first, and then completed several major projects in, is what your central nervous system considers to be "normal."

With me, I was working on a project and realized that the ex(1) line editor was just too inefficient for me to get the work done in time. So I got a vi reference sheet (and since this was a 4.3 BSD system, I could run the learn(1) command with either vi or C as an argument) and, starting with just the 4 single cursor motions plus a, o, x, and p, started on my way to learning how to use vi(1). Every day, I looked through the reference sheet and decided on a couple of new keystrokes to add to the repertoire that day. I learned 90% of vi by the end of the project, and have rarely found a need to learn more than bits of the remaining 10%, which is mostly really obscure stuff.

The integrated IDE/OS is a favorite of developers who have no idea of operating systems: what they do, how they work, and how they are managed. The concept itself goes back to COBOL systems of the '60s. You will note how many of them are still around. Languages are subject to the laws of fashion: C++ was replaced by Java, was replaced by Perl, was replaced by Python, was replaced by Ruby, and on and on. What is the advantage of an IDE that understands filesystems and network routes? What is the advantage of an operating system that is oriented around a language rather than operations? There really are none.

The integrated Operating System/IDE has, like the R10k that you cite, failed every single time it's been tried.

"But the points I'm getting at are more than just "source code editor" -- an IDE should be, well, a development-environment where everything is focused on development and integrated together (not mishmashed/forced) -- and while emacs has a lot of extensibility and functionality, it's not so geared. (That said, it might make a good editor for a true IDE.)"

And creating each of those requires work.

Work which, apparently, not a lot of people consider to be worth the effort to actually implement.

If you feel otherwise, then start an FOSS project and knock yourself out.

As I said, a couple terminal windows give me the power of the complete Unix environment and all of its tools, plus any scripts I want to write to tweak the use of those tools.

Are there a lot of bells and whistles? No. But when I write code, it's not for the purpose of experiencing a multi-panelled bells-and-whistles display.

Maybe I can keep variable and function names and definitions in my head better than most people... I don't know. But every IDE that someone has shown me and claimed was the greatest thing since sliced bread has turned out to be something that hindered my productivity compared to just running vi or vim in a terminal window. Maybe 2 terminals, so I can look at a couple of files at once without the weirdness of multi-pane vim, plus another terminal where I just run the compiler, followed by running the executable if there is no unexpected compiler feedback.

Everything else I've seen in IDEs, even the ones built on and for Linux, feels clumsy... like I'm stuck on some stupid, single-tasking MS-DOS machine in 1985.

"The integrated Operating System/IDE has, like the R10k that you cite, failed every single time it's been tried."

And although I understand the longing for some of the capabilities that the Symbolics LISP machines had(*), they, too, failed for the same reasons you mention.

(*) Especially the ability to inspect the stack when encountering a bug, and then write and insert your own patch into running code, to fix the bug..... WITHOUT THE SOURCE CODE of the original executable. Cool shit. But not every application is one which is appropriate for writing in LISP, which was the downfall of Symbolics and their LISP machines.

I was there when the PCs (incl. Apple) started. Actually, I was there for the early-'70s clubs. APPLE: no expansion slots, no alt Apples, much higher prices, smug and condescending "Apple support," no upgrade path, no two- or three-button mice; the list is pretty much endless. Their saving grace? The dictatorial and "awful" person (according to the brown-shirt Apple Partei), the insurrectionist (remember NeXT?) CEO. That would be the misnomered "Jobs." When, not if, Apple crashes, I sincerely hope y'all have sold them short. You should have enough left over to buy a great BEEF burger with cheese, no-trans-fat tasteless fries, and ONLY a small carbonated beverage.

Snidely Whiplash wrote:
"The integrated Operating System/IDE has, like the R10k that you cite, failed every single time it's been tried."

And how much of that is poor management vs poor implementation vs it truly being a bad idea vs social issues? Commodore, for example, is dead and gone; so is DEC. Pretty much everyone I've talked to who lived/worked in the field in that time remarked that they were technically superior. (And let's not forget Beta vs VHS.)

"But the points I'm getting at are more than just "source code editor" -- an IDE should be, well, a development-environment where everything is focused on development and integrated together (not mishmashed/forced) -- and while emacs has a lot of extensibility and functionality, it's not so geared. (That said, it might make a good editor for a true IDE.)"

And creating each of those requires work.

"Work which, apparently, not a lot of people consider to be worth the effort to actually implement. If you feel otherwise, then start an FOSS project and knock yourself out."

I'm starting up something in this vein; working on the design docs. (It's amazing how few projects have a set of design docs.) And hopefully it'll get some buy-in.

Dirk Manly wrote:
"(especially the ability to inspect the stack when encountering a bug, and then write and insert your own patch into running code, to fix the bug..... WITHOUT THE SOURCE CODE of the original executable. Cool shit. But not every application is one which is appropriate for writing in LISP, which was the downfall of Symbolics and their LISP machines."

I thought the downfall was bad planning on management's part: they saw the beginnings of an exponential curve for buy-in and planned on that, but then the "AI bust" happened and LISP was out of favor. Add in the fact that AT&T had been pushing Unix & C in universities, getting huge buy-in, and as Bret Victor's The Future of Programming speech concludes:

"But even more of a tragedy than these ideas not being used is if these ideas were forgotten. If anybody were ever to be shown this stuff and actually be surprised by it. But even that's not the biggest tragedy; that's not the real tragedy. The real tragedy would be if people forgot that you could have new ideas about programming models in the first place. — So, let me explain what I mean by that. Here's what I think the worst-case scenario would be: if the next generation of programmers rose up never being exposed to these ideas, the next generation of programmers grows up only being shown one way of thinking about programming, so they work on that way of programming, they flesh out all the details, they kind of solve that particular model of programming, they figure it all out. And then they teach that to the next generation, so that second generation then grows up thinking “oh, it's all been figured out. We know what programming is. We know what we're doing.” They grow up with dogma. And once you grow up with dogma it's really hard to break out of it. […]"

The whole talk is really good.

@67 - That may have been for the best; Unix and C have, IMO, set the software industry back decades. I mean it's taken 20-25 years just for the industry to realize that [forcing] ad hoc roll-your-own memory-management in every program is a bad idea.

The pros and cons of memory management in a language were known back in the 1970s, if not earlier. The cons of not having memory management were not suddenly discovered in the 1990s.

C is a cross platform abstraction of assembly. That means it works closer to the way an actual microprocessor works. Microprocessors do not handle memory management for you. I bring this up because people who complain that 'C doesn't do X for you' seem oblivious to the fact that the CPU doesn't do X for you either. For code which is low level and/or performance critical...your kernel, your drivers, your game engine, etc...you need a language that is close to the microprocessor itself. Unless, of course, you want to do all of that in assembly and lock it to one instruction set architecture.

Even at a higher level you need an understanding of what's going on under the hood. How memory management actually works. Programmers unfamiliar with this, i.e. most programmers today, end up writing grossly inefficient code and even leaking memory while believing that leaks shouldn't be possible in their favorite language. (The memory management algorithms are dumb programs on top of a dumb machine, and they can be broken.)

Personally, I've never found memory management to be that big of a deal. free() anything you alloc(). If you're losing track of what you've alloc'd, you've got bigger problems in your program's design.

It's taken about 20 years to realize that unchecked array-indexing, with no way to actually check (the semantics of C preclude array bounds-checking), is a bad idea.

Again: the microprocessor doesn't know what an array is and doesn't check anything for you. Between the actual silicon and a high level language that tries to save you from your own mistakes there is a need for an intermediate language that's not assembler, but not very high level either.

Array-checking also causes a performance hit. There are lots of places where you don't want or even have the slightest need for that performance hit.

@72 - WTF? The Amiga was 16-bit, made by Commodore.

I would call it 32-bit. But then I always judged 'bit width' of a system by the instruction set architecture of the CPU, not the data bus size or even the implementation details within the CPU. The 68K ISA was 32-bit even if the first processor, the 68000, did some of the work in 16-bit chunks. That said I don't remember how the Amiga was advertised.

"And how much of that is poor management vs poor implementation vs it truly being a bad idea vs social issues?"

Which filesystem are you going to implement with your OSIDE? Do you really understand filesystems? What are the advantages of each different type? What's that? You don't know? You don't care? You'd better: you're responsible for the filesystems now. How does your kernel load? How do you ensure memory-space integrity? How do you time-slice the processor?

Most of all, how do you take your developed code and put it into public production on your massively insecure, massively underperforming, optimized-for-development OS? Developers assume a lot of OS/operational problems that are desperately difficult balancing acts are simple, or even not actually problems.

Snidely Whiplash wrote:
"And how much of that is poor management vs poor implementation vs it truly being a bad idea vs social issues?

Which filesystem are you going to implement with your OSIDE? Do you really understand filesystems? What are the advantages of each different type?"

I don't have as much deep understanding of filesystems as I would like, but I think it's really stupid to use FSes as ad hoc and anemic databases, which is what happens with a lot of non-trivial projects. -- You can think of VCS commit/pull as INSERT-/UPDATE- and SELECT-analog operations.

Perhaps ReiserFS, if I don't do the DB direct-IO/access route. NTFS would probably be a good second-choice, as it would be accessible by more machines. And, despite a whole host of limitations, FAT is doable.

"What's that? You don't know? You don't care? You'd better, you're responsible for the filesystems now."

True; there's a whole chunk of RDBMSs that essentially do this via direct I/O, though.

"How does your kernel load? How do you ensure memory space integrity? How do you time-slice the processor?"

Kernel loading is a pain; some people on OSDev have remarked that writing a boot loader can be the same amount of work as writing a small OS. That said, there are a few ways to load a program directly, and I'd probably use one of those, or at least give it some serious consideration.

Time-slicing is much less of an issue than it otherwise would be; I can let my language's RTL handle tasking... because "there's a mini-RTOS in my language". — That will allow me to use the TASK construct to decompose the system into logically distinct parts, WRT execution.

"Most of all, how do you take your developed code and put it into public production on your massively insecure, massively underperforming, optimized for development OS?"

Now you're making a lot of assumptions. Why must it be insecure? Why must it be underperforming? And why wouldn't having an integrated source-control facility be within the purview of an IDE? Why wouldn't it be possible to take the same storage facilities (say a DB, for the sake of argument) and make them public?

"Developers assume a lot of OS/operational problems that are desperately difficult balancing acts are simple, or even not actually problems."

Oh, I'm quite aware of this. But, on the flip side, there are a LOT of systems that are grown rather than designed. Sometimes this isn't a problem, but sometimes everything becomes a major Charlie Foxtrot because all the people that worked the system are gone, either via age or H1B replacement.

Dirk Manly wrote:
"That heat, instead of being produced in one FPU on ONE chip, would be spread out among 4 CPU chips (well, probably 8, if they were putting a decent number of FPUs on every CPU chip). Of course, this would lower the number of (integer-only) cores/chip back down to something in the 8-12 range."

It would be interesting to see what kind of numbers Intel can pull off for picojoules per op for floating-point ops. One guy on my team used to measure those numbers. I suspect the GPUs have them beat, but I don't know. The problem is the ops still have to happen; the power cost has got to be similar or worse whether it's on the CPU or the GPU. You're still going to have a tremendous local cooling problem, and also a higher wattage rating for your chip if you're Intel, and those are sensitive topics for server-farm purchasers. I wouldn't be surprised if the sales team isn't having an inordinate influence over the engineers.

Never built a PiTop, but years ago my team put together some seriously cool PC-104 stack solutions. I really thought we were onto something, and that was a real blast to get into, too. Baffled the sales guys, though; no traction. Crazy geeky engineers, what do they know? Then somebody smarter than us thought of the "IoT" buzzword and, well dammit, lol. "You know that thing we were so excited about and pushing you guys so hard on? Yeah. That was this thing."

One Deplorable DT wrote:
"@67 - That may have been for the best; Unix and C have, IMO, set the software industry back decades. I mean it's taken 20-25 years just for the industry to realize that [forcing] ad hoc roll-your-own memory-management in every program is a bad idea."

"The pros and cons of memory management in a language were known back in the 1970s, if not earlier. The cons of not having memory management were not suddenly discovered in the 1990s."

You know this, I know this, but WOW did my peers in college not know it: C & C++ were just the absolute BEST, and any criticism of them meant you Obviously Didn't Know What You Were Talking About™. You can still see some of this today with the crowd that acts like "if it doesn't look like C, it's not programming."

"C is a cross platform abstraction of assembly. That means it works closer to the way an actual microprocessor works. Microprocessors do not handle memory management for you. I bring this up because people who complain that 'C doesn't do X for you' seem oblivious to the fact that the CPU doesn't do X for you either. For code which is low level and/or performance critical...your kernel, your drivers, your game engine, etc...you need a language that is close to the microprocessor itself. Unless, of course, you want to do all of that in assembly and lock it to one instruction set architecture."

True…ish.

There are other low-level alternatives. FORTH is one that's still around, though usually hidden away, and there's the (sadly) mostly defunct BLISS. IIUC, BLISS was just as low-level as C, but had good facilities for arrays.

Ada can also target bare-metal and has pretty good facilities for HW interfacing.

"Even at a higher level you need an understanding of what's going on under the hood. How memory management actually works. Programmers unfamiliar with this, i.e. most programmers today, end up writing grossly inefficient code and even leaking memory while believing that leaks shouldn't be possible in their favorite language. (The memory management algorithms are dumb programs on top of a dumb machine, and they can be broken.)"

This is true; but there are also the people who write O(N**3) algorithms when better ones are around. I'm all for understanding what's going on at the low levels, but this doesn't mean that we should have large systems written in C.

"Personally I've never found memory management to be that big of deal. free anything you alloc. If you're losing track of what you've alloc'd you've got bigger problems in your program's design."

I agree, absolutely. But one of the issues is that a lot of these things can be made impossible, or greatly mitigated, at the language level. (Guy Steele's How to Think about Parallel Programming: Not! touches on this.)

It's taken about 20 years to realize that unchecked array-indexing, with no way to actually check (the semantics of C preclude array bounds-checking), is a bad idea.

Again: the microprocessor doesn't know what an array is and doesn't check anything for you. Between the actual silicon and a high level language that tries to save you from your own mistakes there is a need for an intermediate language that's not assembler, but not very high level either.

"Array-checking also causes a performance hit. There are lots of places where you don't want or even have the slightest need for that performance hit."

And there are ways you can have safety w/o the hit. Consider this:

For Index in Some_Array'Range loop … end loop;

Because we're taking the bounds of the array as the parameters of the for-loop, we *KNOW* that Some_Array(Index) cannot be out of range, and the checks can be optimized away. And that's 30-year-old language design there.

I personally don't own anything Apple. All of my family and extended family use Apple. As the quality of MS went, they all turned to Apple - even the electrical engineers?! It's not that hard to build a PC. As for laptops, I LOVE made in Japan NEC. They're the best IMO. But they are in Japanese so that could be a problem until you switch the OS into your language. The keyboard and a few other components remain Japanese (like volume lol).

@113 - You know this, I know this, but WOW did my peers in college not know it: C & C++ was just the absolute BEST, and any criticism of them meant you Obviously Didn't Know What You Were Talking About™. You can still see some of this today with the crowd that acts like "if it doesn't look like C, it's not programming."

I'm definitely not that guy. I think it's important to learn C, if not also assembly for at least one ISA. These things force you to think about how the hardware itself works. But you use the best tool for the task at hand. Sometimes that's C or C++, or even some assembly. Very often it is not.

Most of my work over the years has been in higher level OOP languages with memory management.

@114 - And there's ways you can have safety w/o the hit. Consider this:

For Index in Some_Array'Range loop … end loop;

Because we're taking the bounds of the array as the parameters of the for-loop, we *KNOW* that Some_Array(Index) cannot be out of range, and the checks optimized away. And that's 30-year old language-design there.

For that to work the compiler has to know about the array. A C array is just a memory address. That's both the danger and the beauty of it. I've written code that takes advantage of that fact for very real gains in performance. Code which simply cannot run the same way in a language where your one-time bounds check will work.

There's a need for C even if we agree that certain features are good for other languages and most code bases.

Thanks, Vox. My husband just bought a Motorola Moto phone and loves it. The battery life is unbelievable compared to most smartphones. I have an iPhone 6s and was cringing at the new price of the iPhone X. I know I'm due for an upgrade but was still pining for an Apple iPhone until Cook brazenly exhibited such discrimination and disregard for truth and freedom.

I finally broke free of Apple's chains. I'll be getting a Moto soon and will not be purchasing an Apple laptop, either. I'm done with this horrid, bigoted, emperor-with-no-clothes company. There is much better tech to be had elsewhere.

"I finally broke free of Apple's chains. I'll be getting a Moto soon and will not be purchasing an Apple laptop, either. I'm done with this horrid, bigoted, emperor-with-no-clothes company. There is much better tech to be had elsewhere."

And now you are dealing with the other A-Hole, Alphabet. There is no safe haven.

One Deplorable DT wrote:
"There's a need for C even if we agree that certain features are good for other languages and most code bases."

Is that really true, though? I wrote the beginnings of an OS using only TP7* and something like three lines of inline assembly -- zero C anywhere -- though I will admit to getting stuck on memory management and having to abandon the project due to school pressures.

You learn a lot, even from failed attempts, I think. But x86 is pretty terrible: kludge upon kludge upon kludge. -- Personally, I think it's about time for a new architecture, perhaps something incorporating the ideas of the Rekursiv, iAPX 432, MPS-10, and so forth. (The Mill processor was an interesting stab in the new-architecture direction, but I haven't heard anything about it in a while.)

* Unreal mode, with a loader that could load the EXE directly; as long as you didn't use any DOS interrupts, you were good to go.

"I don't have as much deep understanding of filesystems as I would like; but I think it's really stupid to use FSes as ad hoc and anemic databases, which is what happens with a lot of non-trivial projects. -- You can think of VCS commit/pull as INSERT-/UPDATE- and SELECT-analog operations."

EVERY filesystem is a database. If it's not capable of being a database, then it's not a filesystem, it's just one lone array of sectors of bytes.

"Time-slicing is much less of an issue than it otherwise would be; I can let my language's RTL handle tasking... because "there's a mini-RTOS in my language". — That will allow me to use the TASK construct to decompose the system into logically distinct parts, WRT execution."

THAT is wishful thinking.

If your "RTOS IN MY LANGUAGE" is running on a machine which doesn't have a Real-Time OS on it, and your application written with the "RTOS IN MY LANGUAGE" gets swapped out... guess what:

You're NOT getting Real Time performance.

This reminds me of when Microsoft first introduced LoseDOS NT, and all the shills were saying "And the Registry is ATOMIC, so no write-errors are possible when it's being written to!"

And... is that registry file being stored in an atomic filesystem?

Well...uh... no.

Then your registry IS NOT ATOMIC.

Same principle for "Muh RTOS in my language" claim.

If the underlying system doesn't have the same capability in ITS code, the upper layer cannot create it for you.

"Never built a PiTop, but years ago my team put together some seriously cool PC-104 stack solutions. I really thought we were onto something, and that was a real blast to get into, too. Baffled the sales guys, though; no traction. Crazy geeky engineers, what do they know? Then somebody smarter than us thought of the "IoT" buzzword and, well dammit, lol. "You know that thing we were so excited about and pushing you guys so hard on? Yeah. That was this thing.""

Marketing people are clueless.

They'll gladly write copy for things that are not even within reach, but which defy the laws of physics and thermodynamics. On the other hand, show them something that doesn't apply to their life personally, and they literally don't get it.

"This is true; but there's also the people that write O( N**3 ) algorithms when there's better ones around. I'm all for understanding what's going on at the low-levels, but this doesn't mean that we should have large systems written in C."

"Because we're taking the bounds of the array as the parameters of the for-loop, we *KNOW* that Some_Array(Index) cannot be out of range, and the checks optimized away. And that's 30-year old language-design there."

Here, let me enlighten you with some wisdom:

"The reason C allows you to do stupid things is so that you can also do brilliant things." -- Brian Kernighan

Look at Pascal. There are so many "safety features" in Pascal that any version of Pascal that is actually useful is non-portable, because it doesn't comply with the standard.

At some point, THE PROGRAMMER HAS TO TAKE RESPONSIBILITY FOR HIS SHIT!

Could you ever write an OS in Pascal? Well, only if you want to duplicate every string-manipulation function for every possible size of string that your OS might encounter. That's part of what built-in array bounds-checking gets you: programming with handcuffs -- forever.

"You learn a lot, even from failed attempts, I think. But x86 is pretty terrible: kludge upon kludge upon kludge. -- Personally, I think it's about time for a new architecture, perhaps something incorporating the ideas of the Rekursiv, iAPX 432, MPS-10, and so forth. (The Mill processor was an interesting stab in the new-architecture direction, but I haven't heard anything about it in a while.)"

The ARM architecture is pretty sweet, and everything except for the floating-point ops executes in a 3-stage pipeline. The conditional execution can eliminate a lot of jumps and compacts many if/then and if/then/else constructs.

I would LOVE to see a full-blown desktop CPU using the ARM architecture. A big, beautiful register file like the old VAX-11's, but WITHOUT the 12 different addressing methods which implemented 15 different addressing modes on up to 5 different operands, causing major thrashing whenever an instruction would produce a virtual-memory miss, and the instruction would have to be unwound back to the beginning and then restarted after the memory page was swapped in.

In contrast, ARM is very close to an ideal RISC instruction set. And yet the code is more compact than you would expect for an instruction set which is so RISK-y.

"Could you ever write an OS in Pascal? Well, only if you want to duplicate every string-manipulation function for every possible size string that your OS might encounter."

The original Macintosh OSes were written in Pascal, up through version 7, I believe.

Yes. If C disappeared tomorrow, someone would invent a cross platform, lightweight abstraction of assembler to solve the problems that C is so adept at solving.

I wrote the beginnings of an OS using only TP7* and something like three lines of inline assembly -- zero C anywhere -- though I will admit to getting stuck on memory-management and having to abandon the project due to school-pressures.

And? I'm sure you could write some form of OS that exists entirely in a web browser's JavaScript interpreter. That doesn't mean it would actually have the performance and features necessary to be useful. (I don't mean to deride your attempt. But a hobby project does not undermine C's proven performance and utility in system after system.)

But x86 is pretty terrible: kludge upon kludge upon kludge. -- Personally, I think it's about time for a new architecture, perhaps something incorporating the ideas of the Rekursiv, APX432, MPS-10, and so forth.

It was time for a new architecture in the 1980s. Company after company agreed with you and produced RISC microprocessors that truly did obliterate x86 offerings in performance per square mm, performance per watt, and performance per clock. They didn't win because at the end of the day some people needed the raw performance, but most people needed backwards compatibility.

ARM succeeded in the mobile space because there was no backwards compatibility to contend with, and even now performance per watt is critical. But I don't think we're getting away from x86 in "real" computers any time soon.

I wanted to express this earlier but decided not to. If your engineering team is of sufficient average IQ, talent, and discipline, there's no reason to avoid C for large systems. Hell, there's no reason to avoid assembler if there's no need for portability and you have sufficient talent. The hard truth is that higher level languages and safety features exist in part for productivity reasons, but also in large part because very few people are Linus Torvalds.

Regarding array bounds checking (an issue which keeps coming up): higher level languages are able to do this because under the hood arrays and strings are structures, and they carry information such as their size in their structure. They're not a simple run of bytes starting at an address. The performance hit comes because at that point you must access the structure via function calls. Otherwise you cannot perform your safety checks.

You can have a compiler which looks at for(int i = 0; i < 1000; i++) and recognizes that the bounds check only has to occur once, and the code within the loop can then access the run of bytes directly with no further overhead. Note that this example is C and there's nothing about C semantics which would prevent bounds checking. C just simply doesn't view the world at that high of a level. An array is nothing more than a run of bytes starting at a particular address in memory in C. If you want bounds checking, write your own struct and accessor functions.

But that optimization is useless when you have for(int i = 0; i < n; i++) and n is a variable which is only known at runtime. At that point every loop iteration must call a function to check bounds and then get or set the indexed element. Compiler inlining helps reduce the performance hit by eliminating function call overhead, but you're still wasting cycles on the check. And inlining will impact code size and therefore both cache efficiency and data bus transfer times.

There's no way around it: if you want that safety feature it's going to cost CPU cycles.

This is just one example. When you add all of the nice safety checks up in a modern high level language they can cost a lot (depending on the algorithm). It may not matter for an end user GUI application. But it can break an OS kernel, driver, game engine, DBMS engine, or other performance critical piece of code. Some of the features we take for granted in high level languages are also completely incompatible with a RTOS.

People love to complain about C because of the bugs and security holes that pop up in the news which "...would have been caught in my favorite high level language." They completely miss that favorite-high-level-language couldn't meet the memory and performance requirements in the first place.

"If it's not capable of being a database, then it's not a filesystem, it's just one lone array of sectors of bytes."

I agree; but I honestly think a better interface/manipulation system could (and should) be designed keeping that firmly in mind.

Dirk Manly wrote:At some point, THE PROGRAMMER HAS TO TAKE RESPONSIBILITY FOR HIS SHIT!

Could you ever write an OS in Pascal?

Well, only if you want to duplicate every string-manipulation function for every possible size string that your OS might encounter.

"That's part of what built-in array bounds-checking gets you: Programming with handcuffs -- forever."

Of course you can write an OS in Pascal; IIRC, the original/classical Apple OS was Pascal and Assembler.

Just because the language has string-issues doesn't mean the OS has to use the same language-defined string. But you do touch on a good point: in Pascal, having the length as part of the type is… inconvenient. Ada's unconstrained arrays are a vast improvement.

Dirk Manly wrote:The ARM architechture is pretty sweet, and everything except for the floating point ops executes in a 3-stage pipeline. The conditional execution can eliminate a lot of jumps and compacts many if/then and if/then/else constructs.

I would LOVE to see a full-blown desktop CPU using ARM architecture.

A big, beautiful register file like the old VAX-11's, but WITHOUT the 12 different addressing methods that implemented 15 different addressing modes on up to 5 different operands -- which caused major thrashing whenever an instruction produced a virtual-memory miss, since the instruction had to be unwound back to the beginning and then restarted after the memory page was swapped in.

In contrast, ARM is very close to an ideal RISC instruction set. And yet the code is more compact than you would expect for an instruction set that is so RISC-y.

I haven't had the opportunity to mess with ARM, personally. The vast majority of my work is x86, though I now have a couple SPARCs and a 68k to take care of at work. I remember kind of liking the MIPS instruction-set/architecture, and as you can see I have a fondness for some of the lesser-known/oddball chip-architectures. -- I'm of the belief that even the failed ones have something to teach, even if it's how not to do something.

————————

Dirk Manly, Snidely Whiplash, Ominous Cowherd, One Deplorable DT, weka

I'd love to continue this discussion, and also (1) get your opinion of a plan I have for addressing the issues David The Good brings up; as well as (2) a discussion of OS-design.

"I agree; but I honestly think a better interface/manipulation system could (and should) be designed keeping that firmly in mind."

Look... a FILING CABINET is a database system (paper, not digital).

It really doesn't matter what you think about filesystems: their fundamental job IS TO BE A BARELY-STRUCTURED DATABASE. Why barely structured? Because the less structured it is, the more versatile it is -- but completely unstructured (no directories) means one huge, flat filesystem, which is almost useless too. So the only structure is a hierarchical tree, so that related files can be localized, rather than having to report 30,000,000 filenames when you ask for a listing.

If you want "structured" files like what existed in some old IBM filesystems, realize that you are stuck with the structures that the OS provides. Kernighan was correct when he said, in reality, at the physical level, every file is just a stream of bytes, no matter how the OS presents it to the running process. And all communications from one process to another, in the end, boil down to a stream of bytes.

All these other things are just abstractions.

If you want some kind of 3-parallel-parts thing, then just open up 3 sockets, and send your data through those sockets. Congratulations, you now have your 3-part datablock. And you can store those as filename.part1 filename.part2 filename.part3

And if that gives you the heebie-jeebies, then put them all inside a directory called filename, and don't write anything else in that directory.
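The filename.partN scheme above can be sketched in a few lines. write_part is a hypothetical helper name of my own, not an existing API:

```c
#include <stdio.h>

/* Store one part of a multi-part datablock as its own plain file,
 * e.g. write_part("blk", 1, ...) writes "blk.part1".
 * Returns 0 on success, -1 on failure. */
static int write_part(const char *base, int part, const void *data, size_t len)
{
    char path[256];
    snprintf(path, sizeof path, "%s.part%d", base, part);
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t written = fwrite(data, 1, len, f);
    fclose(f);
    return written == len ? 0 : -1;
}
```

Each part is an ordinary stream of bytes, which is the whole point: the "structure" lives in the naming convention, not the filesystem.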

"Of course you can write an OS in Pascal; IIRC, the original/classical Apple OS was Pascal and Assembler."

And as I stated in an earlier comment, any version of Pascal that's actually useful is non-portable, because it violates the Pascal standards.

Therefore, it could not possibly have been written in Pascal. It was written in some sort of FrankenPascal, so that it could be usable for a real-world problem.

Look, even Niklaus Wirth said that Pascal was NOT to be used for anything other than teaching how to program, and that if you want to do actual work, you should be using ALGOL (from which Pascal was derived, by dumbing it down with training wheels and embedding it inside an array of 10-foot-by-10-foot pillows, so that nobody gets any ouchies).

You put a toddler in one of those things with a seat surrounded by a round tray, supported on a framework with 4 wheels, which keeps the child from doing stupid things like crawling out the window, reaching the door handle to get outside, or pulling open drawers and cupboard doors. Why? Because he doesn't know what he's doing.

The problem is less one of programming languages doing safety checks, and more one of sloppy programmers not taking the time to understand their own code -- when it needs safety checks, and when it doesn't.

If you want to safety check every single time you index into an array, the performance penalty is going to be huge if it's a significant application. And if you've allocated a bunch of memory from the free pool, to use as arrays during execution, you're going to pay a price in memory consumption, too.

I suppose we could put 10-meter diameter circular bumper structures around all wheeled vehicles, and this would greatly reduce the number of fatalities in our roads and highways.

However, it would ALSO make the vast majority of our roads and highways so inefficient that we would probably end up killing even MORE people, because so many non-traffic-related emergency cases would die before reaching the hospital: every single road in the nation would be a constant, hopeless traffic jam.

At some point, programmers NEED TO BE COMPETENT.

If compilers "knew" what the programmer actually wants, then we wouldn't even need programmers, now, would we?

Quit trying to dumb-down languages, and instead, worry about why we have so many lousy programmers (hint, hint: it's because all of the good ones have been literally chased out of the market, in preference of Pakis and Stree-shitters).

Apple used to ship a manual which had complete schematics AND CODE LISTINGS.

"I haven't had the opportunity to mess with ARM, personally. The vast majority of my work is x86, though I now have a couple SPARCs and a 68k to take care of at work. I remember kind of liking the MIPS instruction-set/architecture, and as you can see I have a fondness for some of the lesser-known/oddball chip-architectures. -- I'm of the belief that even the failed ones have something to teach, even if it's how not to do something."

Buy an Arduino board for $10.

Or pick up a Raspberry Pi for $35 (and a 5V wall-wart power supply that delivers 2.5A without voltage sag, or just get one rated for 3A). Write some simple C or C++ code, and use gcc/g++ with the -S option (produce assembly language output). Compare it to the same code compiled with the same flags on an x86 machine.

The ARM code is more straightforward, and because of the large general-purpose register file (15 registers), there's none of this constant shuffling of things back and forth between memory and the A* registers as on the x86 family, where practically everything has to go through one register, which is basically A for Accumulator.
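A small test case for that gcc -S experiment (a sketch; the filename is my own, the flags are standard gcc options):

```c
/* Save as strides.c and run the same command on both machines, then
 * compare the two .s files:
 *
 *     gcc -O2 -S strides.c     (emits strides.s in the local assembly)
 *
 * On ARM a loop like this tends to stay entirely in registers; on
 * 32-bit x86 the smaller register file forces more traffic through
 * memory and the accumulator. */
long sum(const long *a, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];
    return s;
}
```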

The ARM architecture is much like the VAX architecture, without being hobbled by the VAX's beautiful, but unfortunately thrash-inducing addressing modes and instruction set.

It was written in a Franken-Pascal, so that the code could actually do things that standards-compliant Pascal will not allow.

While I won't say that it's impossible to write a useful OS in standards-compliant Pascal, I would say that, due to the several productivity-hampering issues in Pascal (an N-character string is a completely different type from an N+1-character string), your OS would end up taking substantially more memory than it would require to perform the same functionality using C... or even LISP.

Pascal was NOT designed to do "real world" work in. Niklaus Wirth himself said so.

Look at everything that someone points to and says, "See, THAT was written in Pascal," and when you investigate it, you find out that what it was ACTUALLY written in was something like Turbo Pascal, or some other bastardization -- which is absolutely necessary to make Pascal into a language useful for more than "Introduction to Computer Programming" and "Data Structures and Algorithms" courses.

How bad is Pascal? When I was taking my compilers course, I could already program in a variety of assembly languages (including VAX-11), as well as Fortran and Pascal, and I was taking C at the same time. When the first project assignment was given, I realized that I was going to have so many problems just getting around the handcuffs that are built into Pascal that I decided to do the project in C, despite the fact that I was only 1/3 of the way through the C language course. I didn't know C thoroughly yet, but I DID know that I wouldn't be spending more effort fighting the programming language than solving the problem assignment.

And if you read the Oberon OS page, you'll see that it was written in the Oberon language, and if you read the Oberon language page, then you'll see that Oberon is an ALGOL-like language.

Which is exactly consistent with what Niklaus Wirth said about using ALGOL, not Pascal, to do real-world programming.

Look at the genealogy:

Pascal -> Modula -> Modula 2 -> Oberon language.

If Pascal, Modula, or even Modula 2 were flexible enough to write the Oberon OS in, then there would have been no reason to create YET ANOTHER descendant.

So, my contention holds that you cannot write an Operating System in Pascal... well, maybe some toy operating system for a 4-week project as part of some 400-level systems programming course, inside an abstracted, virtual environment.

Why can't Pascal be used to write an actual OS on actual hardware?

Take for example, the fact that every OS *MUST* be able to read and write to memory-mapped hardware addresses.

Pascal ABSOLUTELY disallows setting a pointer to any literal value. Pascal pointers can only be set to NIL, the address of a variable, the address returned from new(), or the address of a member of an array.

Since Pascal has no provisions for sending data to an I/O controller, nor for reading from it, you cannot write an OS in standards-conformant Pascal. You can't read or write from/to a disk drive controller. You can't read or write from/to a parallel port. You can't read or write from/to a serial port. Because Pascal was not designed to be a systems programming language.

On the other hand, C was designed SPECIFICALLY with that purpose in mind. Having said that, that does NOT mean that C is the ultimate applications programming language. It was NOT designed with applications programming in mind at all.
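The idiom in question, sketched in C. The address 0x3F8 (the legacy PC COM1 port) is used only as an illustration; a hosted process can't actually touch device addresses, so this sketch "maps" an ordinary variable through the same integer-to-pointer conversion. map_register is my own name:

```c
#include <stdint.h>

/* Stands in for a memory-mapped device register in this sketch. */
static volatile uint8_t fake_device_register;

/* The conversion C allows and standard Pascal forbids: forging a
 * pointer from an integer address. In a real driver, addr would be
 * a literal like 0x3F8. */
static volatile uint8_t *map_register(uintptr_t addr)
{
    return (volatile uint8_t *)addr;
}
```

In a kernel or driver you would write something like *map_register(0x3F8) = byte; and hit the hardware directly -- there is simply no standard-Pascal equivalent of that cast.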

I think expanding on the Raspberry Pi project would be a really cool thing.

I wonder if there are any ARM-based SOCs with 8G on-chip memory, especially using the big.LITTLE concept (look it up -- it's a simple but significant concept). During low CPU utilization, run the code through a simple core that doesn't take much power. If CPU utilization goes above a certain threshold, switch off that core and switch on a sister core (which has access to the same physical registers -- they are connected to both cores), and run the code through a larger, more sophisticated core that can execute the instructions in fewer clock cycles. Once utilization drops below some threshold, turn off the complex core and turn on the simple core again.
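A toy model of that switching policy. The threshold numbers are made up, and real governors are driven by the kernel scheduler; the point is the two-threshold hysteresis band, which keeps the cores from flapping at the boundary:

```c
/* Which core runs the code in this toy big.LITTLE model. */
enum core { LITTLE_CORE, BIG_CORE };

/* Decide the next core from the current one and the utilization (0-100). */
enum core next_core(enum core current, int utilization_pct)
{
    if (current == LITTLE_CORE && utilization_pct > 80)
        return BIG_CORE;        /* heavy load: wake the big core */
    if (current == BIG_CORE && utilization_pct < 40)
        return LITTLE_CORE;     /* light load: back to the little core */
    return current;             /* inside the hysteresis band: stay put */
}
```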

The VAX-11 was gorgeous and elegant, but misconceived, as it doesn't actually work well in a virtual memory environment.

The ARM, especially the latest versions (v7 and v8), are awesome. 32- and 64-bit instruction widths. Internal operations on 8-, 16-, 32-, 64- and 128-bit values. Addressing in the form effective address = r1 + r2*X (where X is either a literal, the contents of a memory location, or the contents of a register), which makes striding through an array of structures very simple: just set either X or r2 to the appropriate value for the size of the struct. Compilers will put the struct size as a constant loaded into r2, and the array index as the X value, either a literal or a memory reference.
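This is the access pattern that scaled addressing serves. In C, the compiler lowers p[i].value to base + i*sizeof(struct) + offsetof(field), and on ARM that index-times-size computation can fold into a single address operand (struct and function names here are my own illustration):

```c
#include <stddef.h>

struct sample {
    int value;
    int padding[3];     /* make the stride 16 bytes */
};

/* Striding through an array of structs: each p[i].value access is
 * effective address = p + i*16, the base + index*size form above. */
int sum_values(const struct sample *p, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i].value;
    return s;
}
```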

And none of the many, many x86 kludges that you alluded to (most of which have their origins in the 8080 and some all the way back to the 4004). God, I hate the Intel architecture. The world today would be so much better if IBM had decided to build the PC line with anything other than Intel chips.

The Motorola 68000 would have been a vast improvement. No segmented-memory games, and none of the stupid "near", "far", and "long" pointer (and branch, and jump, and jumpsub) nonsense.

I'm also thinking that marketing some dressed-up Raspberry Pi or BeagleBone box as a pocket-sized desktop PC could be profitable. Use one USB port to access a SATA disk drive through a USB<->SATA adapter, leaving 3 USB ports for keyboard, mouse, and printer or thumbdrive (or a plug-in USB hub). Add either a large 5V wall-wart power supply (in the 5~7.5A range) or an internal one, and a fan to blow air down onto that ARM chip.

A slightly larger version, with internal power supply, could be a 4-node cluster in a package roughly the size of a Rubik's Cube (maybe 50% larger in 2 or 3 dimensions), or alternatively, in a laptop form.

Contact me by email dirk.gently00@gmail.com. I will then reply to that with my real email address.

Also, there seems to be a thing in high schools around here (southeast Michigan) of starting "robotics clubs", although I'm not sure how many are actually in existence. One was announced at my friend's son's high school, but nothing came of it. That was in Flint. I noticed Royal Oak High School's webpage says something about a Robotics Club (oddly, with fall and spring offerings, like sports programs... which is strange for a non-sports extracurricular activity).

As I recall...
* Through System 6, it was Pascal and 68K assembler.
* With System 7, new features were C.
* System 8 ported most of the Pascal code to C. Apple was under pressure to produce a truly PowerPC-native OS, as opposed to one where key parts were PowerPC and the rest was emulated 68K compiled from the old Pascal code.
* System 9 was all, or almost all, C.
* Through all of this, APIs retained Pascal calling conventions, for the obvious reason of not breaking existing code.

I believe Dirk is correct that the version of Pascal used was not 100% standards compliant.

Dirk Manly wrote:Standard-compliant Pascal is a "toy" language. If you need more than a toy, you use something like ALGOL or Modula-2.

I don't think anybody's really disagreeing with you; I use Ada myself.

When I was a student in college they insisted on teaching us Pascal. As students we protested at the time in favor of C, but to no avail. We were not French, so we did not go setting things on fire out in the street in protest, and this was our mistake. Pascal it was, and FORTRAN77.

In those days it was amazing how many projects started out in life as Pascal. The only ones that seemed to survive and make it to release were the ones that abandoned Pascal, however, and adopted C.

But then there was Borland. In those days before open source, Borland seemed to be sent by God and was truly amazing. They put out compilers, and good ones, for an affordable price. Philippe Kahn ruled the roost over at Borland, and he was a passionate Frenchman who believed that programming ought to be accessible to anybody. Somebody mentioned wistfully the day when computer products had manuals -- Borland hands down had the best manuals out of them all, even IBM, no question.

Borland's TurboPascal was the only Pascal variant that ever seemed able to reliably produce releasable software. It became ObjectPascal and then Delphi. Unfortunately, right about this time Philippe Kahn was ousted from his CEO position, and Borland went into decline and never recovered.

Delphi was pretty cool, I remember. It's still around though I haven't even seen it installed anywhere in 20 years. I think Skype is written in it, or at least it was until Microsoft bought it and who knows what they did to it.

One odd claim about Delphi is (or was) that it was the "fastest compiler ever written". Not meaning that it produced the fastest code ever, but that the compilation process itself was lightning fast. It could compile enormous bodies of source code in a relative flash. That was a neat trick, I thought, and something very nice to have, but was it really that big of a deal? Well, along came Google's "Go" language, whose reason-for-being is apparently that they absolutely had to have a fast-compiling compiler. I'm told Delphi beats it.

The Deplorable Podunk Ken Ramsey wrote:But then there was Borland. In those days before open source, Borland seemed to be sent by God and was truly amazing. They put out compilers, and good ones, for an affordable price. Philippe Kahn ruled the roost over at Borland, and he was a passionate Frenchman who believed that programming ought to be accessible to anybody. Somebody mentioned wistfully the day when computer products had manuals -- Borland hands down had the best manuals out of them all, even IBM, no question.

I have a physical copy of the TP 4 manual, as well as a set of the TP 7 for Windows manuals -- you're right: they are good. I taught myself programming with the manuals + compiler.

Borland's TurboPascal was the only Pascal variant that ever seemed able to reliably produce releasable software. It became ObjectPascal and then Delphi. Unfortunately, right about this time Philippe Kahn was ousted from his CEO position, and Borland went into decline and never recovered.

Delphi was pretty cool, I remember. It's still around, though I haven't even seen it installed anywhere in 20 years. I think Skype is written in it, or at least it was until Microsoft bought it, and who knows what they did to it.

I have the install media for 7/8.net (bundled together) and RAD Studio 2007 -- I might put them on my old laptop and do a project with them for nostalgia's sake. Actually, I think there was a "Delphi for the Web" in D2007 -- that might be fun to play with; God knows it'd be far more enjoyable than using PHP for a website.

One odd claim about Delphi is (or was) that it was the "fastest compiler ever written". Not meaning that it produced the fastest code ever, but that the compilation process itself was lightning fast. It could compile enormous bodies of source code in a relative flash. That was a neat trick, I thought, and something very nice to have, but was it really that big of a deal? Well, along came Google's "Go" language, whose reason-for-being is apparently that they absolutely had to have a fast-compiling compiler. I'm told Delphi beats it.

I can attest that TP7, D1, and D5/D7 were indeed lightning fast -- the guy who did them was Anders Hejlsberg, who also did the C# language. I think you can tell the TP/Delphi influence on the language.