A fast look at Swift, Apple’s new programming language

For better or worse, Apple's new language lets you do things your way.

If anyone outside Apple saw Swift coming, they certainly weren't making any public predictions. In the middle of a keynote filled with the sorts of announcements you'd expect (even if the details were a surprise), Apple this week announced that it has created a modern replacement for Objective-C, the programming language the company has used since shortly after Steve Jobs founded NeXT.

Swift wasn't a "sometime before the year's out"-style announcement, either. The same day, a 550-page language guide appeared in the iBooks store. Developers were also given access to Xcode 6 betas, which allow application development using the new language. Whatever changes were needed to get the entire Cocoa toolkit to play nice with Swift are apparently already done.

While we haven't yet produced any Swift code, we have read the entire language guide and looked at the code samples Apple provided. What follows is our first take on the language itself, along with some ideas about what Apple hopes to accomplish.

Why were we using Objective-C?

When NeXT began, object-oriented programming hadn't been widely adopted, and few languages available even implemented it. At the time, then, Objective-C probably seemed like a good choice, one that could incorporate legacy C code and programming habits while adding a layer of object orientation on top.

But as it turned out, NeXT was the only major organization to adopt the language. This had some positive aspects, as the company was able to build its entire development environment around the strengths of Objective-C. In turn, anyone who bought in to developing in the language ended up using NeXT's approach. For instance, many "language features" of Objective-C aren't actually language features at all; they are implemented by NeXT's base class, NSObject. And some of the design patterns in Cocoa, like the existence of delegates, require the language introspection features of Objective-C, which can be used to safely determine whether an object will respond to a specific message.

The downside of narrow Objective-C adoption was that it forced the language into a niche. When Apple inherited Objective-C, it immediately set about giving developers an alternative in the form of the Carbon libraries, since these enabled a more traditional approach to Mac development.

Things changed with the runaway popularity of the iPhone SDK, which only allowed development in Objective-C. Suddenly, a lot of developers used Objective-C, and many of them already had extensive experience in other programming languages. This was great for Apple, but it caused a bit of strain. Not every developer was entirely happy with Objective-C as a language, and Apple then compounded this problem by announcing that the future of Mac development was Cocoa, the Objective-C frameworks.

What's wrong with Objective-C?

Objective-C has served Apple incredibly well. By controlling the runtime and writing its own compiler, the company has been able to stave off some of the language limitations it inherited from NeXT and add new features, like properties, a garbage collector, and the garbage collector's replacement, Automatic Reference Counting.

But some things really couldn't be changed. Because it was basically C with a few extensions, Objective-C was limited to using C's method of keeping track of complex objects: pointers, which are essentially the memory address occupied by the first byte of an object. Everything, from an instance of NSString to the most complex table view, was passed around and messaged using its pointer.

For the most part, this didn't pose problems. It was generally possible to write complex applications without ever being reminded that everything you were doing involved pointers. But it was also possible to screw up and try to access the wrong address in memory, causing a program to crash or opening a security hole. The same holds true for a variety of other features of C; developers either had to do careful bounds and length checking or their code could wander off into random places in memory.

Beyond such pedestrian problems, Objective-C simply began showing its age. Over time, other languages adopted some great features that were difficult to graft back onto a language like C. One example is what's termed a "generic." In C, if you want to do the same math with integers and floating point values, you have to write a separate function for each—and other functions for unsigned long integers, double-precision floating points, etc. With generics, you can write a single function that handles everything the compiler recognizes as a number.
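
To make the contrast concrete, here is a rough sketch of a generic Swift function (our own illustration, written against current Swift syntax; the exact protocol used to constrain the type to numbers has changed names across Swift versions):

Code:

// One generic function replaces a family of per-type C functions.
func sum<T: Numeric>(_ values: [T]) -> T {
    var total: T = 0
    for value in values {
        total += value
    }
    return total
}

let intTotal = sum([1, 2, 3])        // works for Int
let doubleTotal = sum([1.5, 2.5])    // works for Double, from the same source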

Apple clearly could add some significant features to the Objective-C syntax—closures are one example—but it's not clear that it could have added everything it wanted. And the very nature of C meant that the language would always be inherently unsafe, with stability and security open to compromise by a single sloppy coder. Something had to change.

But why not take the easy route and adopt another existing language? Because of the close relationship between Objective-C and the Cocoa frameworks, Objective-C enabled the sorts of design patterns that made the frameworks effective. Most of the existing, mainstream alternatives didn't provide such a neat fit for the existing Cocoa frameworks. Hence, Swift.

What's the deal with 'let'? I've not seen that since my old VB days. I've always considered it ugly syntactic sugar. Why do we have to be polite to our variables?

Assignments in terms of forcefulness:
int x = 7;
let x = 7;
superposition x = 7; (x can be any value; we hope it is 7, but it takes a random value after we read it)
slipItARoofie x = 4.5; (underhanded assignment: how to get it to do something it would not otherwise do)
Also valid: "x = 7 using ambien"

Really, people, if you use the word let, you imply there is a chance it might not actually take on the value of the assignment. I thought this died a long time ago.

Funny, it's quite common in mathematics, with well-understood semantics. I'm underwhelmed by Swift generally, and very unlikely to use it (I don't develop for Apple devices), but this is just nit-picking.

Sounds interesting, but I've always found that languages aren't the biggest barrier to development. People love JavaScript because they think it's easy. But the reality is that it takes me just as long to build a game in JavaScript as it does in Objective-C. The issue for me is that game development is linear algebra and trigonometry mixed with collision detection and graphic design. In any language, game development is hard.

I read the Swift language guide pretty quickly, so some was skimmed over. I'm an Obj-C developer, so it seems a reasonably easy step. The one bit I didn't really get was the use of the ! suffix to "unwrap" an object. What's that mean? I'm sure I can go back and read it more carefully and it'll be clear, but anyone got a simple explanation? Possibly using a car analogy ;-)

What's the deal with 'let'? I've not seen that since my old VB days. I've always considered it ugly syntactic sugar. Why do we have to be polite to our variables?

Assignments in terms of forcefulness:
int x = 7;
let x = 7;
superposition x = 7; (x can be any value; we hope it is 7, but it takes a random value after we read it)
slipItARoofie x = 4.5; (underhanded assignment: how to get it to do something it would not otherwise do)
Also valid: "x = 7 using ambien"

Really, people, if you use the word let, you imply there is a chance it might not actually take on the value of the assignment. I thought this died a long time ago.

As far as I can see, in Swift "let" means "constant", while "var" means "variable". So, "var x = 7" is similar to "int x = 7", while "let x = 7" is similar to "const int x = 7".
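
For illustration, a minimal sketch of that distinction:

Code:

var x = 7      // variable: can be reassigned later
x = 8          // fine

let y = 7      // constant: the binding cannot change
// y = 8       // would be a compile-time error: cannot assign to a 'let' constant

print(x, y)    // prints "8 7"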

Yes, yes, but it's completely the wrong word. If you're going to make something a constant, "let" is not the word to use. People have used "static" and "const" (for various nuances of fixedness) successfully, and they read a whole lot more intuitively than "let" for a constant.

The keyword let is used to name parts of the value being matched so that you can refer to those parts. It happens to be the case that a bare let on a line:

Code:

let x = 10

produces what looks like a const, but it's not really the same thing semantically.

In Lisp, you can alter a binding established by let, but Swift requires you to call bindings you intend to alter var instead. The advantage there is that the compiler can feel free to optimize away the storage requirements for values bound with let. In a switch statement that uses let to bind parts of the matched value (see the sketch below), there is no need to allocate storage for the names x and y, whereas if the keyword let were replaced by var, the compiler would have to do that (at least potentially).
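
A rough sketch of that kind of switch binding (illustrative only):

Code:

let point = (1, 2)
switch point {
case let (x, y):
    // x and y name the tuple's parts just for this case; since they
    // can't be reassigned, the compiler is free to optimize their storage.
    print("point is at (\(x), \(y))")
}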

I have always rejected arguments along the lines of "oh, it will save so much typing." Between less pounding on the keyboard and better code comprehensibility, I'll take comprehensibility every time, because that is where programmers spend over 99.99 percent (IMHO) of their time.

One interesting little bit that I found by reading StackExchange: although Swift doesn't have access modifiers (public/protected/private), Apple has stated they are slated for future incorporation into the language. Technically you could get around this by only exposing the public parts of the class via protocols and only referencing it via those protocols in your code (and I assume that's the design pattern Apple wants people to follow), but sometimes it's nice to enforce the rule of "hey, this is only used internally to this class, don't let others touch it no matter what".
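
A rough sketch of that protocol-based workaround, with hypothetical names:

Code:

protocol Counter {
    var count: Int { get }
    func increment()
}

class TapCounter: Counter {
    var count = 0
    var internalLog = [String]()     // implementation detail callers shouldn't touch
    func increment() {
        count += 1
        internalLog.append("incremented")
    }
}

// Hand out only the protocol type; callers see just count and increment().
let counter: Counter = TapCounter()
counter.increment()

Nothing stops a caller from casting back to the concrete class, which is exactly the enforcement gap being described.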

What's the deal with 'let'? I've not seen that since my old VB days. I've always considered it ugly syntactic sugar. Why do we have to be polite to our variables?

Assignments in terms of forcefulness:
int x = 7;
let x = 7;
superposition x = 7; (x can be any value; we hope it is 7, but it takes a random value after we read it)
slipItARoofie x = 4.5; (underhanded assignment: how to get it to do something it would not otherwise do)
Also valid: "x = 7 using ambien"

Really, people, if you use the word let, you imply there is a chance it might not actually take on the value of the assignment. I thought this died a long time ago.

As far as I can see, in Swift "let" means "constant", while "var" means "variable". So, "var x = 7" is similar to "int x = 7", while "let x = 7" is similar to "const int x = 7".

Yes... it can figure out what 'var' means at compile time based on what's being assigned to it. I think it's interesting that they chose the keyword "let" rather than (what is so common elsewhere) "const"... but then, it's also par for the course.

It’s presumably a nod to functional programming languages where ‘let’ is used to introduce a name for a(n immutable) value. Chris Lattner lists Haskell as one of several languages that Swift draws from.

Anyone already using Objective-C is in the same boat, whether they move to Swift or not. It's not lock-in; it's using the tool of choice for the largest platform. No one is clamouring for Windows or Android development tools on iOS; if other platforms feel left out by Swift, perhaps they picked the wrong horse.

The biggest problem that I've seen with Swift so far is the way that arrays are managed.

If you assign an Array instance to a constant or variable, or pass an Array instance as an argument to a function or method call, the contents of the array are not copied at the point that the assignment or call takes place. Instead, both arrays share the same sequence of element values. When you modify an element value through one array, the result is observable through the other.

That looks fine, arrays are just pointers. Except...

For arrays, copying only takes place when you perform an action that has the potential to modify the length of the array. This includes appending, inserting, or removing items, or using a ranged subscript to replace a range of items in the array.

So whether or not an array passed to a function is modified by the function depends on whether the length of the array has been changed.

True, but you can guarantee non-modification if you wish, by explicitly copying the array before you pass it.
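
A sketch of the behavior the beta-era documentation quoted above describes (note that later Swift releases gave Array full value semantics, so a current compiler will copy on assignment rather than share storage):

Code:

var a = [1, 2, 3]
var b = a           // under the quoted beta rules, a and b shared storage here
b[0] = 99           // ...so this change was visible through a as well
b.append(4)         // changing the length forced a copy, splitting a and b apart

print(a)            // with today's value semantics: [1, 2, 3]
print(b)            // [99, 2, 3, 4]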

Yes, yes, but it's completely the wrong word. If you're going to make something a constant "let" is not the word to use.

It's used quite extensively in mathematical proofs. Given most programmers (should) have a background in mathematics, I think it's appropriate.

We're not dealing with a mathematical proof here, we're dealing with a programming language, and the more direct lineage of concepts is from programming languages, where "let" was a way, the only way, to do assignment.

I did some elementary programming back in school (Eiffel, BASIC and Visual Basic, Pascal and Delphi). I remember the theoretical fundamentals, in that I can figure out what it is I'm trying to do (so I can write pseudocode that references arrays, ifs/block ifs, loops, functions, procedures, etc.), but remembering the actual specifics for each language is a bit hit and miss (I could probably do something elementary with a bit of Google).

I want to get into indie development for OSX/iOS. I'm looking for a hobby that might have an odd sale every now and again, not a career in coding.

Am I better off grabbing an Objective-C book now and learning that, or waiting for Swift, or doing something else?

Can anyone recommend a book that hits the sweet spot of me not being completely clueless and needing to be stepped through theoretical examples what loops are for the billionth time, but being a syntax newbie?

I'm going to go against the grain of most of the response you got and recommend you learn Swift.

It seems very well thought out in terms of ease of use, and the REPL/playground feature is very nice for trying new things. It's clearly the future direction for most Apple development. For a very high percentage of MacOS/iOS development Swift will work fine, and you can always drop down to C for very low-level things as needed. It looks like an excellent learning language, much as Python is, but without Python's performance problems.

OK...now I am going to have to read the book because that looks like the Planet enum can have different values based on different cases? That seems like it would defeat the purpose of an enum...

The "case" keyword is more like syntactic sugar here, and doesn't work quite the same way "case" does in a switch statement. But that's not the really different thing about Swift enums.

Swift enums go way farther than enums in any other language I've seen; they're almost more like strongly-typed unions. Each case in the enum can have properties associated with it; for example, if you have an "ErrorType" enum, it could have an "OutOfRange" case with an associated Int "value", a "FileNotFound" case with an associated String "filename", et cetera. The property can only be read/written if a variable is set to the associated case (unlike a C union, where you could just blindly read the variable as whatever type you'd like).

It's a really neat concept, but it's going to take a lot of getting used to, and it sure is way different from what I've typically thought of as an "enum".
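
A hedged sketch of that kind of enum, loosely following the ErrorType example above (written against current Swift syntax; early Swift conventionally capitalized case names):

Code:

enum FetchError {
    case outOfRange(Int)           // associated Int value
    case fileNotFound(String)      // associated String filename
}

let error = FetchError.fileNotFound("settings.plist")

switch error {
case .outOfRange(let index):
    print("index \(index) is out of range")
case .fileNotFound(let name):
    print("missing file: \(name)")
}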

How Pascal gets continually overlooked as a go-forward language I will never understand. It marries the readability of BASIC with the functionality of C++ or Objective-C. It is also incredibly easy to learn by comparison to many languages. With a few basic language tweaks it could be a winner.

Am I better off grabbing an Objective-C book now and learning that, or waiting for Swift, or doing something else?

Frankly, I'd advise going with a more general programming language, such as C++ or Java. Swift as a first language is an insanely bad idea right now.

If you're new to programming, Googling 'how do I X' is invaluable. You'll get a plethora of results for C++ or Java, but absolutely nothing for Swift (in fact, since there are two languages that go by 'Swift' - you're likely to get results for the wrong language).

While I agree with the second part, telling someone whose goal is hobby programming for iOS to learn C++ doesn't make much sense. C++ is huge, baroque, complicated, and also not generally used for iOS programming.

It would probably be better to spend a few months in Python just to get back into programming, and then go straight for Swift once it's a bit more established. At some point, I'm sure it would be necessary to become at least familiar with Objective C as well, since it's going to remain important in iOS programming for a while, for API documentation if nothing else.

Yeah, this coding stuff is too advanced for me. I dabbled with it in elementary summer school (Logo?) and took a class in middle school (BASIC) and didn't get it then. No way I could get it now.

You can get it.

There are conceptual hurdles to clear, but assuming you're of average intelligence, you can learn to program. It's much easier if you know a programmer you can constantly bounce questions off of, but the internet provides us with places like Stack Exchange that are almost as helpful.

The logical == operator, when used to test two objects, now determines if they are equivalent. So, in the case of an array, it will check if they contain the same objects. To find out if they point to the same place in memory, you now use ===.

If the goal was, according to the article, to reduce "strain" for programmers coming from other languages, then Swift's creators failed.

This also comes from Lisp, and it is important. Whether two objects are equal is a completely different question than whether they are the same object.
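
A quick sketch of the difference (illustrative names):

Code:

class Box { var value = 0 }

let a = Box()
let b = a             // b refers to the same instance as a
let c = Box()         // a distinct instance

print(a === b)        // true: identical objects (same reference)
print(a === c)        // false: different objects

let xs = [1, 2, 3]
let ys = [1, 2, 3]
print(xs == ys)       // true: == compares the contents of the arrays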

Swift is a successor/variant for Apple's flavor of Objective-C. It was, in Lattner's words, designed to present an easy and compatible interface to the Apple libraries. He even said the language was a fairly thin skin over the (existing, Obj-C) libraries, and it was these libraries, not some abstract idea of programming, that determined the language.

You can't lock in somebody who's already accepted a language with pretty much only a single platform. With all respect to the fine work at Xamarin, the idea that, because Chinese is too hard for Americans to learn, the country should adopt Esperanto (or preferably English) for all its people and signage so that we can travel wherever we want is just as ludicrous in 2014's app development world as it is when dealing with ordinary languages.

The article omitted a statistic or two that is also relevant in this context: there are today 9 million registered Apple developers. Presumably, a very large majority of them simply want the easiest way to create iOS apps that work well. Swift was made for them and they could not be locked in because they are there ONLY for that purpose. (I will keep using C for my quite different purpose, un-locked in.)

I read the Swift language guide pretty quickly, so some was skimmed over. I'm an Obj-C developer, so it seems a reasonably easy step. The one bit I didn't really get was the use of the ! suffix to "unwrap" an object. What's that mean? I'm sure I can go back and read it more carefully and it'll be clear, but anyone got a simple explanation? Possibly using a car analogy ;-)

It means extracting the value of the object. If the value of the variable is a real object in the first place, then it doesn't mean much, but if the value of the variable is a Double, then it means pulling the actual Double out of the object that is holding (wrapping) it.

foo is effectively a pointer that can have the value of nil or of some wrapper object that contains the Double. Writing foo! means "give me the Double, dammit!" Otherwise you're supposed to check whether foo is nil or not before using the value by doing something like:
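
A rough reconstruction of the kind of check meant here, with foo as a hypothetical optional Double:

Code:

let foo: Double? = 1.5     // hypothetical optional from the discussion above

let forced = foo!          // "give me the Double, dammit!" (crashes at runtime if foo is nil)
print(forced)

if let value = foo {
    // foo was non-nil; value is the plain, unwrapped Double
    print("got \(value)")
} else {
    print("foo was nil")
}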

The other common pattern is to switch over a result-style enum (a sketch follows): that takes the result of the function and handles the two possibilities, pulling the value or the error out of the structure (destructuring). As with any switch in Swift, you must handle all possible cases, so just dropping the error isn't an option. (Contrast that last bit with golang.)
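
A hedged sketch of that shape, using a made-up result enum:

Code:

enum FetchResult {
    case value(Double)
    case error(String)
}

func fetchTemperature() -> FetchResult {
    return .value(21.5)             // stand-in for real work
}

switch fetchTemperature() {
case .value(let temperature):
    print("got \(temperature)")
case .error(let message):
    print("failed: \(message)")     // the error case can't simply be dropped
}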

How Pascal gets continually overlooked as a go-forward language I will never understand. It marries the readability of BASIC with the functionality of C++ or Objective-C. It is also incredibly easy to learn by comparison to many languages. With a few basic language tweaks it could be a winner.

I've always thought it ironic that it predated C by a year or so, but many obvious wins—user-defined array bounds, explicit versus possibly accidental conversions when it might matter, and declarations that can be easily understood (versus K&R's “it can be confusing…because declarations cannot be read left to right, and because parentheses are over-used” apology for C)—were left out of C in the interest of efficiency, most of which any halfway decent compiler gives you anyway. Modern compilers also warn you about undeclared variables, but again, having such things required reduces the risk of typos.

But I'll disagree about the readability of BASIC for all but tiny programs.

How Pascal gets continually overlooked as a go-forward language I will never understand. It marries the readability of BASIC with the functionality of C++ or Objective-C. It is also incredibly easy to learn by comparison to many languages. With a few basic language tweaks it could be a winner.

Tweaked Pascal is called Ada (or also Oberon). I believe GCC still supports Ada as a source language. Knock yourself out.

The logical == operator, when used to test two objects, now determines if they are equivalent. So, in the case of an array, it will check if they contain the same objects. To find out if they point to the same place in memory, you now use ===.

If the goal was, according to the article, to reduce "strain" for programmers coming from other languages, then Swift's creators failed.

This also comes from Lisp, and it is important. Whether two objects are equal is a completely different question than whether they are the same object.

Doesn't Javascript have this as well?

== is usually the one that means same place in memory (Java, C# (except for string, ahhh!)).

JavaScript does have ===, but I'm not sure it technically means the same place in memory.

The article makes me reasonably happy for the future of development on Apple platforms. I looked into Objective-C a few years ago when iOS development was just getting big and liked many aspects of it, like the named parameters, but the fact that ObjC had to be legal C meant that every time they added new features it became more and more difficult to read. Then again, reading the samples, especially the ones that deal with strings, makes me nostalgic for Perl 5. Apart from parentheses in conditional statements, it's the king of optional stuff* (though over time the community adopted styles and best practices that were used pretty widely and consistently).

On the reduction of typing: this should be a complete red herring. Huffman coding is nice, but should be used with caution. Modern development environments should do most of the typing for you so that making code readable is the programmer's main concern.

* One of my favourites was that you could put a comma after the last item in a list, so that it would be easier to add another value later.
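
Swift's array and dictionary literals accept the same trailing comma, for what it's worth:

Code:

let flavors = [
    "vanilla",
    "chocolate",
    "strawberry",    // trailing comma is legal, so adding another line later is a one-line change
]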

What do you mean by "reference counting"? Is this the same reference counting that was ejected from Java around 1998 (version 1.2, IIRC) because it could not reclaim graphs of circular dependencies? I suspect the answer is no, but I don't know of any other reference counting GC mechanisms. Maybe someone can point out what I'm missing.

I just hope having a new language spurs interest in writing framework software for the sake of the language and not just for the sake of application development on its own. Both within Apple and outside. There are a lot of nice things in the Java and C# world, and even JavaScript, that are just absent in the Objective-C domain.

People who hate Objective-C's horrific mangling of method/message names and parameters don't hate it because it is verbose; they hate it because it's a halfway solution.

If, supposedly, the purpose of doing it was to reduce confusion over what the method/message was supposed to accomplish, tacking on the first argument as part of the message name serves only to confuse things.

How can the message name "getTextBetweenLeftBracket" be considered clearer than "getTextBetweenBrackets"?

The way that Swift does it is totally acceptable to me, whereas I found the Objective-C approach ridiculous.

I have no problem with having to perform implicit assignation in my method calls (I would prefer not to out of convenience, but it's an extra engineering safety factor to do so, ergo I can live with it.)

There are lots of other reasons to dislike Objective C, but for me, this method/message name mangling/weirdness was the easiest example of how Objective-C was a strange set of compromises developed in the black box that was NeXT.

I'm quite looking forward to using Swift actually after years of using Xamarin simply to avoid the weirdness of Objective-C (which I would still have to use for plugins...)

What's the deal with 'let'? I've not seen that since my old VB days. I've always considered it ugly syntactic sugar. Why do we have to be polite to our variables?

Assignments in terms of forcefulness:
int x = 7;
let x = 7;
superposition x = 7; (x can be any value; we hope it is 7, but it takes a random value after we read it)
slipItARoofie x = 4.5; (underhanded assignment: how to get it to do something it would not otherwise do)
Also valid: "x = 7 using ambien"

Really, people, if you use the word let, you imply there is a chance it might not actually take on the value of the assignment. I thought this died a long time ago.

As far as I can see, in Swift "let" means "constant", while "var" means "variable". So, "var x = 7" is similar to "int x = 7", while "let x = 7" is similar to "const int x = 7".

Yes, yes, but it's completely the wrong word. If you're going to make something a constant, "let" is not the word to use. People have used "static" and "const" (for various nuances of fixedness) successfully, and they read a whole lot more intuitively than "let" for a constant.

Yet you ignore that it comes from mathematics, and is commonly used in, for example, LISP and Haskell, both of which are referenced as influences of Swift.

I'd be much more concerned about writing code that has (AFAIK) no chance of being ported to any other platform - Apple these days make Microsoft look positively open.

OK...now I am going to have to read the book because that looks like the Planet enum can have different values based on different cases? That seems like it would defeat the purpose of an enum...

The "case" keyword is more like syntactic sugar here, and doesn't work quite the same way "case" does in a switch statement. But that's not the really different thing about Swift enums.

Swift enums go way farther than enums in any other language I've seen; they're almost more like strongly-typed unions. Each case in the enum can have properties associated with it; for example, if you have an "ErrorType" enum, it could have an "OutOfRange" case with an associated Int "value", a "FileNotFound" case with an associated String "filename", et cetera. The property can only be read/written if a variable is set to the associated case (unlike a C union, where you could just blindly read the variable as whatever type you'd like).

It's a really neat concept, but it's going to take a lot of getting used to, and it sure is way different from what I've typically thought of as an "enum".

Yeah, that can be a really nice feature. In C#, for example, you can decorate each value of an enumeration with lots of things. For example, we use it for any customer facing enumerated values to present localized strings in the UI. You can decorate just about anything... classes, class properties, etc. so we have localization set up for lots of things... for user input classes, for example, we have localized strings set up for description, name, prompt (to show to the user if they need to enter it), as well as validation messages, etc. for every field. All that from just a few lines of (reusable) code to handle your own annotation.

What do you mean by "reference counting"? Is this the same reference counting that was ejected from Java around 1998 (version 1.2, IIRC) because it could not reclaim graphs of circular dependencies?

Yes, it is the same reference counting. Apple went this way because they didn't want the overhead of a garbage collector on mobile devices.

One of the things Swift is designed to do is to minimize the number of situations in which a strong reference cycle can be generated. Even if a stray strong reference cycle does form, the runtime has a way to eventually find such cycles and clear them out. I don't have the details on how that works, but it is mentioned in the Swift book.
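
A minimal sketch of the usual cycle-avoidance tool, the weak reference (hypothetical classes):

Code:

class Person {
    var apartment: Apartment?
}

class Apartment {
    weak var tenant: Person?      // weak back-reference: doesn't keep the Person alive
}

let alice = Person()
let unit4 = Apartment()
alice.apartment = unit4
unit4.tenant = alice              // no strong reference cycle, so both can be deallocated normally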