Posted
by
timothy
on Thursday December 03, 2015 @01:09PM
from the well-that's-quick dept.

jcr writes with the news that Apple's Swift has gone open source: From Apple's press release: "We are excited by this new chapter in the story of Swift. After Apple unveiled the Swift programming language, it quickly became one of the fastest growing languages in history. Swift makes it easy to write software that is incredibly fast and safe by design. Now that Swift is open source, you can help make the best general purpose programming language available everywhere."
It's listed at Apple's GitHub repository, too. (Hat tip to Jono Bacon.)

Has anyone tried building the *n?x version against MSYS or Cygwin yet? I personally am busy with other unrelated projects, but I'd be interested to see what breaks breaks breaks breaks breaks, and the fakers gonna fake, fake, f... (sorry, wrong Swift)

Since OSX is based on the Mach kernel and uses a lot of BSD utilities, in many ways both OSX and Linux are Unix. I found it extremely easy to port a Linux-based CUPS printer driver to OSX; the biggest challenge was getting it to compile in Xcode. Presumably porting from OSX to Linux would be pretty straightforward too. Despite Microsoft's claims of POSIX compatibility, porting from OSX to Windows would be much more difficult. (I'll leave it to others to debate how big a lie Microsoft's "POSIX compliant" claim is.)

Why should it? I notice that the Linux version is spec'd to run on an up-to-date Ubuntu LTS. It doesn't claim to run on anything else. Probably they tested it and it happened to work, or someone noticed it could do so with minimal tweaking. Otherwise they'd probably have open sourced an Apple-only version. And it would still be legitimately open source. Open source doesn't guarantee cross-platform.

Why not? When I did a small, insignificant open source project in C, I tested my command-line code on Linux, Mac and Windows, and made it available that way. It's not like Apple lacks the resources to do a Windows version of Swift, as iTunes for Windows works very well on my Windows PC.

WTF, no one is able to pay me enough in damages for the pain and suffering of developing for Windows. I would not work on software for Windows if you paid me a couple of million per year. Unless it is purely Qt or Java based.

Or do you mean .NET? There are already third-party .NET Swift compilers, open source even.

After Apple unveiled the Swift programming language, it quickly became one of the fastest growing languages in history.

This is a pretty meaningless claim. Although Objective-C was not Apple's creation, they adopted it as the formal language for developing for their platforms. For all intents and purposes (99.9% of code), Objective-C is proprietary to Apple's walled garden. Apple decided to replace Objective-C with Swift, and thus it is no surprise that a large number of developers switched relatively quickly. In the greater scheme of things (i.e. outside of OSX or iOS development) Swift might as well not even exist.

It makes no sense to have a programming language without the specification available to everyone.

Sure it does. The specification only needs to be available to those who are going to use it. Which, quite intentionally, may well not be "everyone" or even "all programmers."

Have you seen the specification for my KB (Knowledge Base) implementation language? No. You have not. Because I have not released it, and probably will not. Yet the language itself makes great sense, for it is a huge effort- and time-saver i

Once you fork it, the spec is yours and so you can determine if it's working according to your spec.

For instance, suppose I fork Python 3 and then fix it so that print takes arguments the way it used to, or as a function as Python 3 does now. Likewise for all the other things Python 3 arbitrarily and capriciously broke. The things that made it, as far as I'm concerned, not Python at all. My version is now working perfectly - to my spec. What the original spec said has become significantly less relevant. The

A programming language is a specification, and thus open source by nature.

Try telling that to the appellate judge in Oracle v. Google, who upheld copyrightability [wikipedia.org] of the "structure, sequence and organization" of the public methods in a programming language's standard library.

Not just the appellate court, also the supreme court.
Which isn't necessarily a bad thing: Google is still going to argue a fair-use defense, which means you can use it even if it's copyrighted.

The GNU mentality assumes that interfaces are public domain and implementations are copyrighted but licensed under a free software license. GNU, for example, is a copylefted implementation of the POSIX interface. The new wrinkle in Oracle is copyright in the interface itself.

Not really. High-level languages can do a very complex set of things with a simple command. Let's say Language X has a command: HTTPSServer(port, cert, onGetMethod, onPutMethod, onPostMethod, onDeleteMethod, onOptionsMethod, out Status). One command that does the work of a full application, where there could be thousands of lines of code in a lower-level language that would need to be looked at, and possibly changed, if ported to a different environment.

A file format is a specification, yet you'll find that for 20 years all the Microsoft Office file formats were closed and patented.

Oracle argued that not only is a programming language proprietary, but even the specification of the environment which the language requires to run is proprietary. Microsoft may have done the same with COM at one point.

FaceTime is a product. The protocols behind FaceTime are based on open standards. [wikipedia.org] "Upon the launch of the iPhone 4, Jobs stated that Apple would immediately start working with standards bodies to make the FaceTime protocol an "open industry standard". While the protocols are open standards, Apple's FaceTime service requires a client-side certificate."

Why don't you get a specific encryption key and not a whole language specification? Is that really your question? That's like saying why do we get the Kerberos specification but not Red Hat's Kerberos keys, isn't it?

Let me make it clearer: Why do we get only a language specification, not both a language specification and an encryption key to interoperate with a communication platform?

You want Apple's secret keys as well as everything they've already released because you want to build a platform to interoperate with Apple's implementation when they are not interested in building that? Do you understand your own question? Apple has given you the tools to build your own platform: they are not interested in you duplicating theirs.

If Red Hat operated a communication platform, then applications that interoperate with this platform would in fact need keys.

Apple has never said that they were willing to interoperate with everything. That's your desire and wish. They've developed the framework for you to build your o

Why would you think having 12 different messaging programs on your iPhone would use more battery life than using 1? It's not like those 12 programs are actually running in the background listening for messages. That's not how iOS works.

When a messaging app is not in the foreground and not actively being used (i.e. when you're in an audio call and it's in the background), the app is either suspended - in memory but not given any CPU cycles - or not running at all. There is one process in iOS that listens for a

Why do we get Swift, but not a process to obtain a suitable certificate for interoperation?

You're missing the distinction between FaceTime-as-a-protocol and FaceTime-as-a-service (i.e. an implementation of FaceTime-as-a-protocol).

Apple (as far as I can tell) never pledged to open FaceTime-as-a-service (which is what a certificate of interoperation would be used for), only FaceTime-as-a-protocol. To draw a car analogy, Apple pledged to tell people how to build paved roads, but they never said they'd let people drive for free on the roads they built. Regardless of that distinction, however, they've

So, backing up a step, I too would have liked FaceTime to be open and interoperable. My previous comment was intended to point out a factual quibble with your first question and to explain some additional hurdles in Apple's path to keeping their promises. But I won't defend Apple's choice of promises or their failure to achieve them.

So, with that out of the way, I agree that it's inefficient to have parallel implementations. Even so, when Apple isn't managing to keep the promises that they actually made (i.

From all accounts that was Jobs making an announcement without first checking with the lawyers or the developers. The first any developers on the iOS team heard about FaceTime becoming open source was when SJ announced it to the public.

What major programming language isn't corporate controlled? PHP Group is a corporation, Perl Foundation is a corporation, Python Software Foundation is a corporation, Ecma International (ECMAScript) is a corporation, and International Organization for Standardization (C, C++) is a corporation.

I know I'm responding to AC here, but I think you don't hang out in the same circles as Go developers. You're comparing apples to oranges: Go is primarily an infrastructure language, whereas Swift is generally a frontend language. But since you asked:
Docker - you might have heard of it
Kubernetes
Comcast
BBC
Cloudflare
Rackspace
Cloudfoundry
Dailymotion
DigitalOcean
Dropbox
Ebay
EMC
The Economist
Flipboard
GE
Heroku
Imagefly
Imgur
Shutterfly
Tumblr
The list goes on and on and on. Just because you don'

I don't know about other languages, but the Perl Foundation is not controlled by a particular corporate entity. It's mostly a bunch of volunteers who love Perl and are just trying to help maintain and promote it. It's a truly grassroots effort, and no one company's interests outweigh anyone else's.

You can do both. I keep two dozen languages on my resume. I know many more, but why bother going into lesser-known/used languages such as FoxPro or Lisp, or put on languages that make me look dated, such as Cobol or Fortran.

If you talk to some of the recruiters I've dealt with over the years, the first thing they will say upon seeing two dozen programming languages on your resume is: "You lack focus. Why should anyone hire you?"

and what is there to gain for an old school C programmer who doesn't particularly enjoy C++?

Swift is more or less Objective-C made sane. It's basically the same underlying world view, but expressed in a new language with a unified syntax (whereas ObjC is like two unrelated languages bolted together), reduced undefined behaviour, and no pointer arithmetic (by default? not sure - pointers aren't exposed in the same way).

ObjC, and so Swift, are both very late-bound languages, so they have a very dynamic feel to them.
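To make the "unified syntax" point concrete, here's the same list insertion in both languages (a toy sketch; the variable names are made up):

```swift
// Objective-C:  [names insertObject:@"Alice" atIndex:0];
// Swift collapses the bracketed message-send style into plain dot syntax:
var names = ["Bob", "Carol"]
names.insert("Alice", at: 0)
// names is now ["Alice", "Bob", "Carol"]
```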

My take on it is this: every language designer (be they an individual or a company), especially one with a lot of compiler design experience, basically asks the same question when they sit down to make a new language: what would you do differently if you could start from scratch today?

When Sun made Java they said "what would we improve about C++ if we had the chance?". Separate from the JVM concept, this is what they were thinking when they made the language. Back when Java was new people joked it was C+++. When Microsoft made C# they said "what would we improve about Java if we had the chance?".

Apple basically said "holy FUCK we need to get away from this shitty 80's language, C# does some good stuff, but what would we improve about it if we had the chance?"

So in C# you used to have to declare something like this:

Something aThing = new Something();

Then languages started asking themselves "wait, why do we have to say the class name twice? We could get away with just doing it once"

var aThing = new Something();

Swift says "wait, why do we need semicolons? I mean yeah it used to be that we didn't have great ways of telling lines apart but we've solved that problem now. If there's just the one statement on a line no need for a semicolon. And why do we need to say "new"? We know it's new. The calling of the class name via the constructor tells us that. Get rid of that shit too"

var aThing = Something()

Back when C# introduced "var" I was dead set against it. When Swift dropped semicolons I thought it was reckless. Now that I've been using Swift a while I get their minimalist religion. It's a struggle to go back to C# or JS and have to remember semicolons (although JS doesn't seem to give a shit either way)

To declare a constant in C# you declare its type as well as use a keyword

const int i = 4;

In Swift, they said "well, we're already using var, why not just swap that out for a constant?"

let i = 4

and the compiler in Xcode now shows you all the times when you could use a constant, which is way more often than you realize.
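The var/let progression described above, in one runnable sketch (sample values invented):

```swift
var greeting = "Hello"   // type inferred as String; "var" allows mutation
greeting += ", world"

let answer = 42          // type inferred as Int; "let" makes it a constant
// answer += 1           // would be a compile error: cannot assign to a 'let'
```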

For all the spitballing about platform this and proprietary that, underneath it all Swift is the latest attempt at a language that uses what we've learned from previous languages. And it's possible some or all of its conventions have been used by other recent languages that just got an eyeroll from working developers, but Swift has a tremendous advantage in that it has a compelling use case: iOS developers who don't want to use Objective-C. Because no one really wants to use Objective-C. Anyone who says they do is a victim of Stockholm Syndrome.

Oh yeah - when A=B+C actually calls one of many + overloads, one of many = overloads, and may even throw in some type conversion involving other overloaded functions. Just try to guess what code actually runs and where it is. Good luck. Stuff like that pulled me off C++ in 1995 and I never went back.

There are only a few options, and if you are in desperate need and can't figure out what is called:
place a breakpoint
run in debug mode
step into it: here you are in operator+
step out of it
step into the next call: here you are in operator= (or in the appropriate ctor if no operator= exists)

However the most likely case is: "class of A" is implementing operator= and "class of B" is implementing operator+

So: no luck needed at all. Depending on how long it takes to compile/link/start the code: 5 seconds and I kno

The functions have names: operator= and operator+. There is no difference versus calling the function add(A a, B b). So what is your point? If you can find the add() function easily, you can find the two operators equally easily.
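For what it's worth, Swift has the same facility, and the same point applies there: an overload really is just a function named after the operator, which a debugger steps into like anything else (Vec2 here is an invented toy type):

```swift
// Toy 2-D vector type, made up for this example.
struct Vec2 {
    var x: Double
    var y: Double
}

// The overload is literally a function named "+": a debugger steps into it
// exactly as it would step into a function named add(lhs, rhs).
func + (lhs: Vec2, rhs: Vec2) -> Vec2 {
    return Vec2(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
}

let p = Vec2(x: 1, y: 2)
let q = Vec2(x: 3, y: 4)
let sum = p + q   // calls the overload above
```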

Except that you have NO WAY to know that the operators are overloaded until you see it in the debugger because the behavior is driving you crazy. If it's a function call, well, you know there's a function call.

Seriously though, I may have been a bit harsh, and I only started using Obj-C a few years ago when I got into iOS development, but really, while the original version was probably good for its time, the language didn't evolve well (Obj-C 2.0 looks very half-baked: dot notation works, except for when it doesn't, that sort of thing). People fluent with it are like those farmers that just know how to operate the thresher we

Swift says "wait, why do we need semicolons? I mean yeah it used to be that we didn't have great ways of telling lines apart but we've solved that problem now.

Ah yes, so we trade the problem of the missing semicolon for the problem of going between Windows and some other platform destroying our formatting and making our code into word salad. You know why we're still using semicolons? Because they're still useful. This is just as dumb as using indentation to control program flow, only a different dumb.

You know why we're still using semicolons? Because they're still useful

I should have been clearer before - Swift doesn't need semicolons at the end of every line. But if you want to use semicolons to put multiple statements on one line, you can. And you can still put semicolons at the end of every line if you really want to; they're just ignored by the compiler. I occasionally go through and check, and sure enough I still sometimes use them at the end of a line.
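A minimal sketch of those rules:

```swift
let x = 1                // no semicolon needed at the end of a line
let y = 2;               // a trailing semicolon is still accepted, just ignored
let a = 3; let b = 4     // required only to separate statements on one line
```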

You know, you really have no excuse for not knowing that code can be autoindented based on its control flow. You can generate indentation from parentheses and semicolons no matter what happens to your indentation and your line breaks.

OH! You meant curly braces AND indentation is better than indentation alone.

I said what I meant, and I meant what I said. Indentation should always be irrelevant to program flow. I should be able to use it artistically if I want, and still have the program execute, because there is always a risk that a format conversion will artistically reimagine my i

You're confused. You can't decide if indentation is an artistic medium, or something that a machine can simply auto-indent based on structure.

I should be able to use it artistically if I want, and still have the program execute, because there is always a risk that a format conversion will artistically reimagine my indentation and/or line breaks.

Which is exactly what computer science would expect when you are repeating yourself. There should be one source of truth, whether it's a database, a function, or an indication of block structure.

It's clear from your answer that you, like everyone else, are not able to follow code that has not been indented. Curly braces do not allow you to see the structure. The compil

Um, no. Swift and ObjC are alike in that both are programming languages. There the similarities end.

That's not really fair. They're both currently beholden to certain conventions, Objective-C because it invented them and Swift because it has to be compatible with them. But up until Swift being open source it was pretty much just like he said, Obj-C made sane. Still stuck on Apple platforms, still needs OS X/iOS stuff to be useful, etc.

But I imagine it's only going to take a couple of hours to get the compiler running on Linux, so the question becomes: is it worth learning, and what is there to gain for an old school C programmer who doesn't particularly enjoy C++?

I've only written a couple of apps using it, but I found the syntax to be much more straightforward than either C++ or Objective-C. It was pretty easy to pick up on.

I've only written a couple of apps using it, but I found the syntax to be much more straightforward than either C++ or Objective-C. It was pretty easy to pick up on.

The long-hand syntax is very similar to Pascal, with some slight changes to fit in more closely with modern languages. They've replaced BEGIN and END with {}'s, and instead of writing function/procedure, you simply write "func", just to name a few examples. Pascal was used as a teaching language at universities back in the '80s until it was replaced by C/C++ in the '90s, and eventually Java more recently. I grew up programming Pascal. It was a wonderful language.
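For example, a Pascal-style function declaration maps over almost mechanically (a toy square function):

```swift
// Pascal:  function Square(n: Integer): Integer;
// Swift swaps "function" for "func", BEGIN/END for braces,
// and puts the result type after "->":
func square(_ n: Int) -> Int {
    return n * n
}
```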

I haven't used it, just read a lot of the documentation, but it looks like a pretty nicely designed language. It has a very modern feel: concise syntax, static typing with type inference, closures, everything is an object, etc. Certainly looks much nicer to work in than C++.
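A small sketch of that feel - type inference, a closure, and an operator passed as a function (sample data invented):

```swift
let numbers = [3, 1, 4, 1, 5]          // inferred as [Int]
let doubled = numbers.map { $0 * 2 }   // closure in trailing position
let total = doubled.reduce(0, +)       // "+" itself passed as a function
```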

For any question about, "Can you do...", the answer is almost certainly yes. It's the recommended language for nearly all new development on iOS and Mac OS, so it's being used for all those things.

This is one I'm interested in, actually. The reference Swift compiler uses LLVM as an intermediate layer, relying on LLVM's code generator and linker to produce machine code. The group making the LLVM back-end for AVR (the chip used by the Arduino-compatible ecosystem) is actually in the process of merging their work into mainline LLVM right now. Things could get interesting in the embedded space soon. But I don't know enough about Swift linking to know if small programs would carry a proh

Er? Apple has done a lot of work with open source software. WebKit, which has been the foundation of Safari, Chrome, and Konqueror, to name a few, is open source. LLVM/Clang, OpenCL, and Bonjour/Zeroconf are other examples.

You are aware that Apple chose to use KHTML and fork it, right? They didn't create their own proprietary browser (Internet Explorer, Opera, etc). You also took one example and promptly ignored the other examples. Apple didn't have to create and release LLVM/Clang, OpenCL or Bonjour.

KHTML was the foundation of Konqueror initially; however, today's Konqueror can and does use WebKit in different places because, from what I know, KHTML development is pretty much dead with the last stable release in 2009. But back when KHTML was still actively developed, it backported a lot of WebKit functionality. The main developers complained about Apple developers submitting so many changes to the code tree without much documentation that the KHTML developers were overwhelmed.