Sorry about the clickbait title, but I couldn't think of how to start. Don't worry - I fixed it. -ST

It has always bugged me how letters and symbols are reused to death in equations.

A B C X Y α ß µ σ - they mean hundreds of things in hundreds of different disciplines. Every time I open a document that has equations, I search for a reference stating what δ stands for this time.

And it gets worse, because it is not rare for authors to tack on subscripts as they see fit. One example is the EN 1990 Eurocodes, a set of norms for calculating the structural integrity of various load-bearing structures: 60+ chapters of some 200 pages each, which cross-reference each other and commonly use different letters for the same thing (they were developed over 35 years by hordes of experts). You need to hunt through chapters to sleuth out whether the γ used in one place is equivalent to the γM0 it points to in another, because it could actually be γM1. The reader is assumed to understand that from context.

So there is an apparent lack of signs, which leads to tacking on enumerations and variations, further complicating things. This is needless. In a world with thousands of languages and hundreds of alphabets, why this narrow selection of symbols? How about using Hebrew, Arabic, Cyrillic, Bengali, Tamil, Wingdings? Heck, Chinese, Japanese, Korean? I have access to lots of squiggly things on my computer; why not use them?

PS: Mathematicians often use hilarious notation when it comes to functions, I concede that. But far too often they also just resort to a garble of parentheses of various designs, when they could have gone creative with backwards writing, emoticons and whatnot. Booo

Latin and Greek alphabets are used for historical reasons, since they were the only alphabets European mathematicians could be expected to know. Today, we do occasionally use letters from other alphabets, such as א and ב from Hebrew, but they are an enormous pain to type into equations, since they reverse the direction of text. To be totally honest, the 100ish symbols we already have, even before counting additions like subscripts, superscripts, stars, strikethroughs, bold, italics, etc., are probably sufficient. The fact that some books are hopelessly unclear isn't the fault of the alphabet. γ, γ, γ, γ, γ̶, γ̅, and Γ are all forms of the same letter, and by looking at those, you can probably come up with another 20 or so just using gamma, and without a single subscript or superscript. There are enough symbols. The problem is that the author is using them in a way that is confusing in context.

I mean, ultimately the problem is that mathematicians are allergic to good variable naming. Using single-letter variable names for anything other than a transient temporary variable used in a rote, expected position (loop iteration, or the sole argument to a small lambda) is considered extremely bad practice in modern programming. And yet it's used everywhere in math, and it's terrible.
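To make the contrast concrete, here is a hypothetical illustration of the convention described above (the function and its names are invented for this example, not taken from any real codebase): single letters are tolerated only in rote positions, while anything with meaning gets a word.

```python
# Acceptable single-letter use: a loop variable in a rote, expected position.
squares = [n * n for n in range(5)]

# The mathematician's style, transplanted into code: opaque without a legend.
def f(p, r, t):
    return p * (1 + r) ** t

# The same function with descriptive names; no reference sheet is needed
# to see that it compounds interest over a number of years.
def compound(principal, rate, years):
    return principal * (1 + rate) ** years

# Both compute the same thing; only the readability differs.
assert f(100, 0.05, 2) == compound(100, 0.05, 2)
```

The two functions are identical to the interpreter; the argument in the post is purely about what the human reader can recover without external context.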

Then again, single-letter symbolic notation is widely used by communities other than mathematicians, who are only a small subset of its users. Look at the example from the OP: the Eurocodes are not written by mathematicians, and are not aimed at mathematicians. This is not some quirk of a single, close-knit mathematical community.

Also, most people who use symbolic notation are familiar with programming-style "wordy" variable naming. Plenty of them are experienced and skilled programmers. Sometimes people use "wordy" notation in places where letter notation is usual; it's not a taboo practice. It could have spread from there, but I see little movement in the, what, five decades that wordy variables have been common programming practice.

I would say that these are just different situations, with different requirements. The problem from the OP is seen as minor and tractable by most practitioners, while wordy variable names have a larger downside.

The algebraic convention dates to times when concise (sometimes pictorial) symbology mattered most, with ink and paper/parchment at a premium, as well as the time and space saved per inscription, repeated inscription, and copying, compared with long-hand 'formulaic' instructions. Plus, at times, at least a little deliberate obfuscation aimed at those not already sufficiently initiated in the mysteries (there was a tendency to anagramise and cryptically communicate the key brunt of 'discoveries' as a kind of early 'signed and sealed' pre-claim breadcrumbing, to prevent gazumping).

Overall, though, it has developed into an efficient system. Field by field there is consistency of characters (an electrical engineer will likely use j instead of i for √-1, to avoid clashes with a current-related value). x-as-variable-input works well to keep it away from the a, b, c of (unstated, but fixed) polynomial coefficients in pure maths, with y-as-variable-output, and pokes into sometimes-mnemonic Greek for differences/deltas, summations/sigmas, etc. The simpler applied maths/physics has handily almost-mnemonic characters for Force, mass, velocity, acceleration, etc., and dips into extended alphabets somewhat self-consistently as needs arise from the more complex principles that come along later. (Which cat falls off the wet roof first? The one with the lowest μ…) Formal logic goes and defines the ∀ny and ∃very symbols because… well, it's logical to do so (it must be!).

It's when fields (or sub-fields) collide that it becomes a tangle, if the equation-writer/re-user isn't conscientious enough to define the terms involved. Superscripts, subscripts and diacritics get used to help with this, but can seem arbitrary to the uninitiated (see the Drake equation).

Perhaps an Equation Markup Language should be developed to supply the sometimes hidden context? Of course, that'd mean that Google could Rule The World. (Not corporate Google, or Alphabet, but entity Google, the hive mind spontaneously arisen from the web of cloud servers that no person yet comprehends, let alone has the ability to control. By disambiguating the alphabetic code of Science, we shall allow it supremacy over our scientists, much as they are dominant over their lab rats. So maybe don't develop an EqML, for the sake of humanity's future!)

γ, γ, γ, γ, γ̶, γ̅, and Γ are all forms of the same letter, and by looking at those, you can probably come up with another 20 or so just using gamma,

That is precisely my point. "I need a symbol for this. Let's use one of the 40 letters I know how to write. Oops, that one is already widely used in 20 other cases. Well, what if I write mine in an extra-special way?"

There is an infinity of shapes you could use instead. Well, infinity minus 20 variants of gamma, at least.

The same person does not hesitate to write "arcsin" when it comes to functions. Why is that not written as 2/3rds of a triangle instead, using the pertinent sides? It would literally be a two-line hook, if brevity is of the essence.

I would like to see some reform in systems of notation. It is a constructed symbolic language; perhaps it should be designed by people from the humanities? Like language specialists?

Well, some of this is definitely just weird historical artifact, for sure. Or a concept may be named after a specific person and use that symbol for that reason (e.g. the Hamiltonian is H because that was the guy's name). And different fields use different conventions, so when they intersect there's going to be some clash.

I think in the best case, though, using related symbols for different concepts gives you shorthand for variables that are related in some way. For example, one could express position, velocity, and acceleration as d, v, and a (and this is often how they're initially taught), but it's much more elegant to express all three as x, ẋ, and ẍ, where the dot indicates a time derivative. Likewise, subscripts usually refer to conceptually similar things that you want to be able to tell apart. So if an equation has a term of the form μ/μ_0, you know that these are probably related quantities with the same units, and likely μ = μ_0 in some degenerate case.
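The dot shorthand above, written out explicitly (this is just the standard Newton dot notation for time derivatives, stated for readers who haven't seen it):

```latex
v = \dot{x} = \frac{dx}{dt}, \qquad
a = \ddot{x} = \frac{d^2x}{dt^2}
```

The related symbols carry the relationship for free: ẍ visibly says "second time derivative of position", where an unrelated letter like a would not.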

Right, I don't usually see subscripts used to distinguish totally unrelated variables, like power and pressure. Instead, they are usually related variables, and the subscript is conceptually similar to an index. It makes sense to have point A at (x_A, y_A) and point B at (x_B, y_B). Usually when you have unrelated variables that intuitively should have the same symbol, you either differentiate the symbols by style (capitalization, font face, bold, italic, overline, circumflex, strikethrough, etc.) or just pick a totally different letter. There are always more than enough to spare.

I'm not saying the current list is optimal or anything, but I don't agree that turning most variable names into short words would be an improvement. It would take up more space, force the introduction of more interpuncts to explicitly show multiplication (since concatenating words-as-symbols can be pretty confusing), and at least in my opinion, take longer to mentally parse. The only serious problem I've had is trying to handwrite ζ and ξ.

Incidentally, I have never seen ß (eszett) in an equation. You probably meant β (beta).

That beta looks really strange in my font, but the eszett on my screen definitely doesn't look like a beta, since it isn't closed at the bottom. But I imagine in some fonts they would be pretty indistinguishable.

The problem is, if you increase the number of symbols used, you decrease the ability to tell them apart. If you increase the length of variables -- well, there are some examples with 2 or 3 letters, but much longer ones become ugly. As long as the notation is consistent within a single piece of work, most readers can comprehend it perfectly.

But if you really want to avoid the reuse, you can surround any text with square brackets, like: [Energy] = [mass][speed of light]².

A short, distinct symbol (e.g. a letter) is definitely the way to go when solving/simplifying some equation. What the letters stand for doesn't matter then; only the operations between them do.* Before and after these steps it's sensible to substitute names, though. And, of course, in "real math equations" like (a+b)² = a²+2ab+b² the variables are only defined for the equation so there's no confusion about which a and b they represent (just that implicitly these ones have to come from a ring, so the distributive property exists).

I'd argue that a lot of the time there's no good shorthand for a certain (physical) constant/function, so the wordy description —that you really need close to the formulae where it is used— can just as well be substituted by a symbol from a small set that is unique in its context. We may be able to map all significant constants/functions to different Unicode codepoints**, but people will be fighting over U+2603 and we'll have a really busy Descriptors Consortium for decades. We could also assign unique descriptors in another way, for example by giving every agent (be it a person or a program) an identifier, and having each agent keep a local counter that it increments every time it introduces a new variable. So γ (the Euler-Mascheroni constant, of course, what else could γ be?) becomes euler42651 and Γ (the gamma function) becomes euler37582.*** The upside is that you don't have to learn to distinguish all of Unicode and all names are unique; the downside is that you get unwieldy names, and billions of names will effectively be duplicates (especially in homework and tests).
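The agent-plus-counter scheme above can be sketched in a few lines. This is a hypothetical illustration of the idea as described in the post (the names euler42651 etc. are the post's own invented examples, not a real registry):

```python
class Agent:
    """Issues globally unique variable names of the form <agent-id><counter>."""

    def __init__(self, agent_id: str, start: int = 0):
        self.agent_id = agent_id  # unique per person or program
        self.counter = start      # local counter, incremented per new variable

    def new_name(self) -> str:
        # Increment first, so the same name is never issued twice.
        self.counter += 1
        return f"{self.agent_id}{self.counter}"

# Two agents can never collide: the identifier is unique per agent,
# and the counter is unique within an agent.
euler = Agent("euler", start=42650)
print(euler.new_name())  # euler42651
```

Uniqueness comes for free, but so does the downside the post names: the identifiers say nothing about what the variable means, so every reader still needs a lookup table.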

* AFAICT math is lacking in another regard though, since sometimes the domains of an operation matter, but I've only ever seen typing in the declaration of a function, never within a formula.

** Unicode probably grows faster than the naming of new functions.

*** By the way, why is it so hard to find "constants discovered by Euler"? Why can't at least Google understand that I don't want to read about e, err, bernoulli49352?