My question may be entirely irrelevant, but I have noticed a pattern between most programming languages and their official implementations.

Interpreted (byte-interpreted?) languages like Python, Lua, etc. usually have an extremely lenient and easy syntax, and are generally type-less or do not require the developer to explicitly write variable types in the source code;

Compiled languages like C, C++, Pascal, etc. usually have a strict syntax, generally have types, and mostly require more code / development time.

Languages whose official implementations are JIT-compiled, like Java and C#, are usually a unique compromise between the above two, with some of the best features of both.

Some of the more modern compiled programming languages like D and Vala (and the GNU GCJ implementation of Java) are perhaps an exception to this rule, and resemble the syntax and features of JIT-compiled languages like Java and C#.

My first question is: is this really relevant? Or is it just a coincidence that most interpreted languages have an easy syntax, JIT-compiled ones have moderate syntax and features, and so on?

Secondly, if this is not a coincidence, then why is it so? For example, can some features only be implemented in a programming language if you are, say, JIT-compiling it?

Cool. I thought it wasn't a quote, but it could lead answerers to think it was one and not try to refute it (or to blindly agree with it)... I've noticed similar patterns but unfortunately don't have a good answer.
–
Yannis Rizos♦Dec 21 '11 at 6:33

Perl is dynamically typed for user defined types, statically typed with respect to distinguishing arrays, hashes, scalars, and subroutines, and strongly typed via use strict, interpreted and JIT compiled (not at the same time of course)... Whenever someone tries to make sense of language design, throwing in some Perl is always fun...
–
Yannis Rizos♦Dec 21 '11 at 8:03


What do you mean by "lenient syntax" vs. "strict syntax"? They're all formal languages and none will run source code with syntax errors.
–
nikieDec 21 '11 at 9:08

5 Answers

There is no connection whatsoever between semantics and syntax. Homoiconic compiled languages like Scheme come with a pretty minimalistic syntax. Low-level compiled meta-languages like Forth are even simpler than that. Some very strictly typed compiled languages are built upon a trivial syntax (think ML, Haskell). OTOH, Python's syntax is very heavyweight in terms of the number of syntax rules.

And yes, typing has nothing to do with syntax; it's on the semantics side of a language, unless it's something as perverted as C++, where you cannot even parse without having all the typing information available.

A general trend is that languages that evolved for too long and did not contain any design safeguards against syntax deviations would sooner or later evolve into syntactic abominations.

+1 for making me look up "homoiconic"... And for the subtle nod to PHP...
–
Yannis Rizos♦Dec 21 '11 at 8:54


+1, "languages that evolved for too long and did not contain any design safeguards": does this also refer to Delphi/Object-Pascal?
–
ApprenticeHackerDec 21 '11 at 8:57

@IntermediateHacker, Pascal is a strange story. It stayed relatively static thanks to its clean design and the sheer respect for Wirth. OTOH, Wirth himself initiated several rounds of cleansing, abandoning the original Pascal design and distilling the ideas into Oberon and Modula. Probably this is the right way: building new languages rather than allowing an old one to evolve freely.
–
SK-logicDec 21 '11 at 9:31


@ThomasEding, you're wrong. The same semantics can be implemented on top of a very wide range of syntax styles, even with a syntaxless language (like Lisp or Forth). The same syntax can be used with a very wide variety of different semantics - e.g., the syntax of C and Verilog expressions is nearly the same, but the semantics are dramatically different.
–
SK-logicSep 19 '13 at 8:45


@Jack, you're trying to redefine what syntax is. There are no practical languages that need a Turing-complete parser, most are nothing more than context-free. And this is where the syntax should stay. Please do not extend this (already too stretched) notion anywhere else. And I already mentioned Curry-Howard isomorphism - it's all about semantics, far beyond the mere correctness rules. I think, the very term "type checking" is extremely counterproductive and should not be used, it is very misleading, it does not reflect the nature of the type systems.
–
SK-logicDec 24 '14 at 10:16

Programming languages have evolved over time, and the technology of compilers and interpreters has improved. The efficiency of the underlying processing (i.e. the compilation time, the interpreting overhead, the execution time, etc.) is also less important as mainstream computing platforms have grown in power.

The language syntax does have an impact - for example, Pascal was very carefully designed so it could use a single-pass compiler - i.e. one pass over the source and you have executable machine code. Ada, on the other hand, paid no attention to this, and Ada compilers are notoriously difficult to write - most require more than one pass. (One very good Ada compiler I used many years ago was an 8-pass compiler. As you might imagine, it was very slow.)

If you look at old languages like Fortran (compiled) and BASIC (interpreted or compiled), they have / had very strict syntax and semantic rules. [In the case of BASIC, that's not Bill's old BASIC; you need to go back before that to the original.]

On the other hand, looking at other older things like APL (a bunch of fun) this had dynamic typing, of sorts. It was also generally interpreted but could be compiled too.

Lenient syntax is a difficult one - if that means you have things that are optional or can be inferred, then it means the language has sufficient richness that parts of it can be culled. Then again, BASIC had that many years ago, when the "LET" statement became optional!

Many of the ideas you now see (for example, typeless or dynamic typing) are actually very old - first appearing in the 1970s or early 1980s. The way they are used, and the languages these ideas are used in, has changed and grown. But fundamentally, much of what's new is actually old stuff dressed up in new clothes.

Nitpicker's corner: Many interpreted languages are tokenised or "byte compiled" at the time the source is loaded / read in. This makes the subsequent operation of the interpreter a lot simpler. Sometimes you can save the byte-compiled version of the code; sometimes you can't. It's still interpreted.
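In CPython, for example, this byte-compilation step can be observed from within the language itself (a small illustrative sketch, not tied to any particular answer here):

```python
import dis

# CPython byte-compiles source text into a code object before interpreting it.
code = compile("x + 1", "<example>", "eval")

print(code.co_code)   # the raw bytecode, as bytes
dis.dis(code)         # a human-readable disassembly of that bytecode

# The interpreter then executes the byte-compiled form:
print(eval(code, {"x": 41}))  # 42
```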

Update: Because I was not clear enough.

Typing can vary widely.

Compile-time fixed static typing is common (e.g. C, Ada, C++, Fortran, etc.). This is where you declare a THING of a TYPE and it is that way forever.

It is also possible to have dynamic typing, where the thing picks up the type that is assigned to it. For example, PHP, some early BASICs, and APL, where you would assign an integer to a variable and from then on it was an integer type. If you later assigned a string to it, then it was a string type. And so on.

And then there is loose typing, for example in PHP, where you can do truly bizarre things like assign a numeric integer (quoted, so it's a string) to a variable and then add a number to it. (E.g. '5' + 5 would result in 10.) This is the land of the bizarre, but also at times the very, very useful.
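As a small illustration, Python (dynamically but strongly typed, unlike PHP's loose typing) makes the difference between the two kinds of typing visible:

```python
# Dynamic typing: the variable has no fixed type; the value it holds does.
v = 5          # v currently holds an int
v = "hello"    # rebinding it to a str is perfectly legal

# Strong (non-loose) typing: no silent coercion between types.
try:
    result = "5" + 5          # PHP would coerce and yield 10
except TypeError:
    result = "no implicit coercion"

print(result)                 # no implicit coercion
print(int("5") + 5)           # explicit conversion gives 10
```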

HOWEVER these are features designed into a language. The implementation just makes that happen.

Strong typing is not the counterpart of dynamic typing. It's the counterpart of weak typing. The counterpart of dynamic typing is static typing: in one, the types of expressions in a program can be known statically (i.e. without running the program); in the other, the types can only be known dynamically (i.e. the program must be run).
–
R. Martinho FernandesDec 21 '11 at 7:20

Yes, and both some variants of BASIC and APL were doing this back in the late 1970s. APL types are not quite as we understand them today (being things like universally typed integers/floats, though they could also be vectors, strings, and multi-dimensional matrices).
–
quickly_nowDec 21 '11 at 8:08

A Fortran interpreter is still widely used (see Cernlib and PAW). And its descendant, ROOT, is built upon a C++ interpreter.
–
SK-logicDec 21 '11 at 8:27

I'm not entirely clear how strong/weak and static/dynamic typing relates to syntax, to be honest. But the answer-quality was pretty good, so I am just avoiding upvoting. I'd class C typing as "static/weak" (it's trivial to look at a stored value as if it was another type, possibly getting the value wrong).
–
VatineDec 21 '11 at 13:18

@Vatine - I'd actually say strong at compile time, non-existent at run time - if you want it that way. You can do that using pointers and their equivalent in many languages. It is even possible in classical Pascal using variant records, and in Ada using UNCHECKED_CONVERSION (though difficult, it's possible).
–
quickly_nowDec 21 '11 at 22:35

I generally agree with quickly_now in that your observation is mainly a result of history. That said, the underlying reasoning boils down to something like this:

The more modern a language is, the more comfortable it should be to use.

(Not a quote really, just my own formulation.)
When I write comfortable here, I refer to what you called the "best features of both". More precisely, I do not want to speak for or against static/dynamic typing or strict/lenient syntax. Instead, it is important to see the focus being placed on developers and on increasing their comfort level when working with the language.

Here are some reasons, not mentioned in previous answers, which may give you some ideas for why you observe these things (all of them more or less based on the history of programming language development):

We have hundreds of programming languages these days. When a new one comes up, how can it find a broad audience? This is the main reason why new languages always try to increase the developers' comfort level. If the language can do the same as an older one, but can do it much more easily/simply/elegantly/etc., you may want to consider actually switching.

The learning curve goes hand in hand with that. In the past, we had few languages, and investing time to learn one was worth it, even if that meant investing a lot of time. Comfort is again increased if you come up with a language that developers can learn very quickly. Complexity of any kind (e.g. a complicated, involved syntax) is detrimental to this, and hence is reduced more and more in newer languages.

Technological advances (a direct historical reason here) mean that compiler builders can now place more focus on developer comfort. In the early days, we were happy to be able to build a compiler at all. However, that often implied heavy restrictions. As the technological know-how increased, we were able to lift these restrictions again.

So in general, programming languages and compilers have seen a development similar to that of typical end-user applications:

Initial stage: It's a cool thing to have, but the bleeding-edge technology barely makes it work, at the cost of comfort/usability/what-not.

Technological improvement: We can build these things more robustly, faster, and easier.

Focus turns to the user: Similarly to the Web2.0 movement focusing on user experience, new programming languages focus on the developer perspective.

(Not a quote really, just my own formulation.) Well, you formatted it as code, not as a blockquote, so I don't think anyone thought it was a quote :)
–
Yannis Rizos♦Dec 21 '11 at 9:21


Comfort clearly depends on taste (which is always entirely subjective). The language I'm most comfortable with was designed in 1959, and I can't stand dealing with some of the languages that appeared in this century.
–
SK-logicDec 21 '11 at 9:35


Comfort also depends on purpose. Running PHP or Prolog on an 8k embedded micro for a washing machine controller might be "comfortable" to program, but also damn hard to actually make it fit and run with acceptable performance.
–
quickly_nowDec 21 '11 at 22:37

A given programming language may or may not expose or constrain enough semantic information for a compiler to deduce how to reduce it to executable code without added runtime decisions ("what type is this variable?", etc.) Some languages are explicitly designed to make this constraint mandatory, or easy to determine.

As compilers get smarter, they might be able to guess or profile enough information to generate executable code for the most likely path(s) even for languages which were not explicitly designed to so expose or constrain those decisions.

However, languages where code can be created or entered at runtime (e.g. via an evalString() facility), and which allow other things the compiler can't deduce or guess, may require an interpreter or JIT compiler to be available at runtime, even with attempts to pre-compile them.
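A minimal Python sketch of this situation (evalString() above is a generic placeholder; Python's built-in eval plays that role here):

```python
# The string could arrive from a file, the network, or a user at run time;
# no ahead-of-time compiler can know what it will contain.
source = "lambda n: n * 2"

fn = eval(source)   # an evaluator must be present in the running program
print(fn(21))       # 42
```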

In the past, a programming language and its implementation might have evolved so as to fit some hardware constraint, such as whether the interpreter might fit in 4k or 16k, or whether the compiler might finish in less than a minute of CPU time. As machines get faster, it has become possible to (re)compile some formerly interpreted programs as fast as the programmer can hit the return key, or interpret formerly compiled program source code faster than slightly older hardware could run optimized compiled executables.