A fast look at Swift, Apple’s new programming language

For better or worse, Apple's new language lets you do things your way.

If anyone outside Apple saw Swift coming, they certainly weren't making any public predictions. In the middle of a keynote filled with the sorts of announcements you'd expect (even if the details were a surprise), Apple this week announced that it has created a modern replacement for Objective-C, the programming language the company has used since shortly after Steve Jobs founded NeXT.

Swift wasn't a "sometime before the year's out"-style announcement, either. The same day, a 550-page language guide appeared in the iBooks store. Developers were also given access to Xcode 6 betas, which allow application development using the new language. Whatever changes were needed to get the entire Cocoa toolkit to play nice with Swift are apparently already done.

While we haven't yet produced any Swift code, we have read the entire language guide and looked at the code samples Apple provided. What follows is our first take on the language itself, along with some ideas about what Apple hopes to accomplish.

Why were we using Objective-C?

When NeXT began, object-oriented programming hadn't been widely adopted, and few languages available even implemented it. At the time, then, Objective-C probably seemed like a good choice, one that could incorporate legacy C code and programming habits while adding a layer of object orientation on top.

But as it turned out, NeXT was the only major organization to adopt the language. This had some positive aspects, as the company was able to build its entire development environment around the strengths of Objective-C. In turn, anyone who bought in to developing in the language ended up using NeXT's approach. For instance, many "language features" of Objective-C aren't actually language features at all; they are implemented by NeXT's base class, NSObject. And some of the design patterns in Cocoa, like the existence of delegates, require the language introspection features of Objective-C, which are used to safely determine whether an object will respond to a specific message.

The downside of narrow Objective-C adoption was that it forced the language into a niche. When Apple inherited Objective-C, it immediately set about giving developers an alternative in the form of the Carbon libraries, since these enabled a more traditional approach to Mac development.

Things changed with the runaway popularity of the iPhone SDK, which only allowed development in Objective-C. Suddenly, a lot of developers used Objective-C, and many of them already had extensive experience in other programming languages. This was great for Apple, but it caused a bit of strain. Not every developer was entirely happy with Objective-C as a language, and Apple then compounded this problem by announcing that the future of Mac development was Cocoa, the Objective-C frameworks.

What's wrong with Objective-C?

Objective-C has served Apple incredibly well. By controlling the runtime and writing its own compiler, the company has been able to work around some of the language limitations it inherited from NeXT and add new features, like properties, a garbage collector, and the garbage collector's replacement, Automatic Reference Counting.

But some things really couldn't be changed. Because it was basically C with a few extensions, Objective-C was limited to using C's method of keeping track of complex objects: pointers, which are essentially the memory address occupied by the first byte of an object. Everything, from an instance of NSString to the most complex table view, was passed around and messaged using its pointer.

For the most part, this didn't pose problems. It was generally possible to write complex applications without ever being reminded that everything you were doing involved pointers. But it was also possible to screw up and try to access the wrong address in memory, causing a program to crash or opening a security hole. The same holds true for a variety of other features of C; developers either had to do careful bounds and length checking or their code could wander off into random places in memory.
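By way of contrast, here is a small sketch (our own illustration, not Apple sample code) of how Swift sidesteps the wandering-pointer problem: arrays know their length and check their bounds, and an optional makes the "there may be no value here" case part of the type, which the compiler forces you to deal with before use.

```swift
// A Swift array is a value with a known length, not a raw pointer,
// and out-of-range access is checked rather than wandering in memory.
let names = ["Alice", "Bob"]

// An optional String? makes "there may be no value" explicit.
let third: String? = names.count > 2 ? names[2] : nil

// The compiler forces us to unwrap the optional before using it.
if let name = third {
    print("Third name is \(name)")
} else {
    print("No third name")  // prints this branch: the array has two items
}
```

Nothing here can accidentally read a random address; the worst case (an out-of-range subscript) stops the program at the offending line rather than corrupting memory.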

Beyond such pedestrian problems, Objective-C simply began showing its age. Over time, other languages adopted some great features that were difficult to graft back onto a language like C. One example is what's termed a "generic." In C, if you want to do the same math with integers and floating point values, you have to write a separate function for each—and other functions for unsigned long integers, double-precision floating points, etc. With generics, you can write a single function that handles everything the compiler recognizes as a number.
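To make that concrete, here is a minimal Swift sketch (the function name is ours, not from Apple's samples). The single generic function below works for any type the compiler knows how to compare, where C would need one copy per type.

```swift
// One generic function instead of a family of per-type copies.
// T can be Int, Double, String—anything Comparable.
func largest<T: Comparable>(_ a: T, _ b: T) -> T {
    return a > b ? a : b
}

largest(3, 7)             // Int: returns 7
largest(2.5, 1.25)        // Double: returns 2.5
largest("apple", "pear")  // String: returns "pear"
```

The compiler generates the appropriate comparison for each concrete type at the call site, so there is no runtime cost for the generality.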

Apple clearly could add some significant features to the Objective-C syntax—closures are one example—but it's not clear that it could have added everything it wanted. And the very nature of C meant that the language would always be inherently unsafe, with stability and security open to compromise by a single sloppy coder. Something had to change.

But why not take the easy route and adopt another existing language? Because of the close relationship between Objective-C and the Cocoa frameworks, Objective-C enabled the sorts of design patterns that made the frameworks effective. Most of the existing, mainstream alternatives didn't provide such a neat fit for the existing Cocoa frameworks. Hence, Swift.

610 Reader Comments

Can anyone recommend a book that hits the sweet spot of me not being completely clueless and needing to be stepped through theoretical examples of what loops are for the billionth time, but being a syntax newbie?

UC Berkeley puts a bunch of their classes on iTunes U. You can just watch along like you were in the back row. I'm not sure what they've put up lately, but I remember the Paul Hegarty "Developing Apps for iOS" series of classes being very helpful at getting me up to speed in the iOS development world.

I wouldn't count on it. The Swift team seems fairly against exceptions.

They're not the only ones. Some of the best programmers I've ever known have all been against exceptions. The reasoning boiled down to three things:

1. Exceptions can and have led to indeterminate behavior, and they do not avoid crashes if you place code that can fail in the catch/finally clauses. This ultimately means that your program can at best be buggy but won't crash, or at worst, it will crash with a long object tree of exceptions that need to be unrolled before finding where the original exception occurred.
2. Exceptions cause developers to get into a bad habit of not checking values before attempting to use them.
3. Exceptions are slow. Very slow.

All of which are completely unpersuasive given the upside of exceptions, and checked exceptions in particular. For example, in Java, when using an API, you can know what can go wrong just by reading the declarations.

Checked exceptions are wretched, and it's not for nothing that no other language has followed Java's style.

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

The fundamental problem with checked exceptions is twofold. First, they're supremely annoying during development, and lead to an abundance of catch(Exception e) {}, just to make the compiler shut up.

Second, they encourage inappropriately local error handling. One of the great virtues of exceptions is that they allow errors to flow up the stack to the place that's best to handle them. This usually isn't the direct caller! With checked exceptions, you're forced to either annotate every damn signature--which sometimes you can't even do, because of interfaces--or just say "fuck it" and swallow the exception locally (or, at best, wrap it in a RuntimeException) again, to make the compiler shut up.

They're not the only ones. Some of the best programmers I've ever known have all been against exceptions. The reasoning boiled down to three things:

1. Exceptions can and have led to indeterminate behavior, and they do not avoid crashes if you place code that can fail in the catch/finally clauses. This ultimately means that your program can at best be buggy but won't crash, or at worst, it will crash with a long object tree of exceptions that need to be unrolled before finding where the original exception occurred.
2. Exceptions cause developers to get into a bad habit of not checking values before attempting to use them.
3. Exceptions are slow. Very slow.

1. Rarely does catching an exception let you avoid a terminal situation for your code. If it was something that could avoid a crash, you can put the logic in place to check values, etc. so that alternate code pathways are taken and you would never have gotten the exception in the first place. If there was a possibility that the two integers you are going to divide may result in a divide by zero *and if that's the case, you can and will do something else that's actually useful*, then you write that code. If getting a divide by zero at that point means the rest of your program cannot continue to function properly and is therefore an exception to the flow of processing for your code, what else are you going to do? It's the same thing whether you're looking for a file to process, your program needs access to the network, etc. You're going to error out... except you'll have a ton of distracting code in your code stream and an if/then/else nest/tree that's huge or basically "if (....) return failure" being 50% of all the code you write.

In practice, the following typically occurs:

Beginners dump all their code inside one big try-catch block and log the error. The fact that the error is likely very deep in the object stack hasn't yet occurred to them.

Experienced coders' use of exceptions mirrors the error handling code used in other languages. A perfect example is throwing an exception for failure to load a file. With or without exceptions, you can log an error, possibly display a message to the user, and depending on the criticality of that file and the context, return the program to a sane state that doesn't require the file or cleanly shut down. You don't need to kill a word processor if the user selected a network file that became unavailable right after clicking "open". You report the issue to the user and allow them to select a new file.

Veteran coders (that I've known) tend to check their values and use exceptions very rarely.

Log an error where? Display a message to the user how? Particularly if you're writing a library. Who said anything about an exception killing a word processor if it can't find the file? Whether you return some value or throw an exception, the upper level code will get either result and handle it accordingly in the way it wants/needs. Or are you going to have your own logging separate from the code above you? Or are you going to dictate that the code above you use your logging? Maybe your code is going to be called from both a GUI a user is sitting in front of and from a service that's running as a system process.

Return codes are always fun... you need to make a system so that you can track them down to the source. Returning a single error code as an error (NULL, 0, 1) doesn't give much information about where/how the error occurred. So you need a robust error code system so you can pinpoint the line(s) of code where the error happened (and don't forget to 'namespace' it so it doesn't collide with error codes from other subsystems). What if there are many ways for the above code to get to that line of code? You have to log an error message every time you check the return value so you can go through the logs and build a stack trace because lots of times, the error may have actually happened far above and only exhibit itself at lower levels so you need to know where the error started and how you got there. This leads to a *lot* of code... repeated code... throughout (and distracting from) your 'ideal' flow.

Code:

returnValue = func(...);
if (IsError(returnValue))  // remember, this is a code... hopefully not just zero
{
    // ErrorMessages.GenerateErrorMessage will add things like line number,
    // function name, description of the error code, and whatever else you
    // want to log to be able to find your stuff
    LogError(ErrorMessages.GenerateErrorMessage(returnValue,
        "I called func and all I got was this lousy error code"));
    return returnValue;
}

Every function you call will have to have this right below it... or you have to go with SESE (single entry, single exit), which is even uglier with its nested blocks. Eventually, that construct will take up a significant amount of your codebase.

So yes, your load code could throw the exception and upper level code catch it, notify the user through its UI, log it through its logging system, and let the user select a new file if so desired through its mechanism of doing so. If someone wanted to write an application that simply bailed if the file name given to it doesn't exist, then maybe they should do it that way. If they want to notify a user, ask the user for a different file, and then call your library again with the new file, then maybe they should do it that way. In either case, whether you return an error and let it sift up through who-knows-how-many "if error return error" ladders or throw an exception and let the above code catch and handle that exception, the end result is roughly the same. One of them will have *far* less code to maintain, though, I'd wager.

And all this still misses what's right there in the name "exception". You should use them to handle exceptions to the normal flow of execution. For a word processor, specifying a non-existent file isn't necessarily an exception... particularly with user input.

2. What do you do if the return value is something you didn't expect and couldn't deal with? You call a function, it returns failure. Then what? It's not like you can continue anyway. If you could, you would have checked the return value and had multiple code pathways in place to deal with the variety of different return values anyway.

You should be checking this anyway, with or without exceptions. Exceptions just give a very convenient method for programmers to lazily throw errors up the stack and let another part of the program deal with it.

Lazily or otherwise... how many lines of code are you going to go through checking "failed" return statuses before you get to that same location anyway? And all that code is basically "if failed, return failed". Languages have many features that "lazy" programmers can misuse.

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

No, it's not. I can't believe you say that.

Quote:

The fundamental problem with checked exceptions is twofold. First, they're supremely annoying during development, and lead to an abundance of catch(Exception e) {}, just to make the compiler shut up.

Yeah, sure, but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thinking thoroughly about their program flow.

It's really not an excuse; the compiler is helping the developer write robust code. Why would any developer be annoyed by that?

Quote:

Second, they encourage inappropriately local error handling. One of the great virtues of exceptions is that they allow errors to flow up the stack to the place that's best to handle them. This usually isn't the direct caller! With checked exceptions, you're forced to either annotate every damn signature--which sometimes you can't even do, because of interfaces--or just say "fuck it" and swallow the exception locally (or, at best, wrap it in a RuntimeException) again, to make the compiler shut up.

1. Rarely does catching an exception let you avoid a terminal situation for your code. If it was something that could avoid a crash, you can put the logic in place to check values, etc. so that alternate code pathways are taken and you would never have gotten the exception in the first place. If there was a possibility that the two integers you are going to divide may result in a divide by zero *and if that's the case, you can and will do something else that's actually useful*, then you write that code. If getting a divide by zero at that point means the rest of your program cannot continue to function properly and is therefore an exception to the flow of processing for your code, what else are you going to do? It's the same thing whether you're looking for a file to process, your program needs access to the network, etc. You're going to error out... except you'll have a ton of distracting code in your code stream and an if/then/else nest/tree that's huge or basically "if (....) return failure" being 50% of all the code you write.

2. What do you do if the return value is something you didn't expect and couldn't deal with? You call a function, it returns failure. Then what? It's not like you can continue anyway. If you could, you would have checked the return value and had multiple code pathways in place to deal with the variety of different return values anyway.

3. Yes, they can have an effect on speed.

That being said, being able to catch an exception as a last resort in 'main' can come in handy. It may let you log the error, where it happened, and why. It doesn't necessarily help the user who experienced the exception but it does give you a chance to do some handling, if possible... unlock resources, etc... of course, those may fail as well but they might have failed anyway... but at least you have a chance to attempt to do something.

I agree with all of this. It comes down to the fact that exceptions are more than a mechanism -- they're a way of thinking about structure. I use Java exceptions to tell me to skip items in a process, and I use them to bubble problems up to the right layer for handling them. I use exception sub-classing to handle situations more generally or specifically depending on the context (for example, I have an OrmException for general ORM errors, and an ExprException that sub-classes it, where the distinction is only important for code that's doing particular stuff with expressions).

One of the real difficulties I had with Objective-C was the fact that it would often just pass by a bad situation (due to something I did improperly) and the actual error would surface later, miles away from where the bad code actually was. In many ways I consider that more onerous than having to dive into a long stack trace.

My sense is that exceptions are a pretty deep idiom, so I'm curious how it would pan out to "tack them on" to Swift at a later time.

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

No, it's not pretty good. I can't believe you say that.

Quote:

The fundamental problem with checked exceptions is twofold. First, they're supremely annoying during development, and lead to an abundance of catch(Exception e) {}, just to make the compiler shut up.

Yeah, sure, but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thinking thoroughly about their program flow.

It's really not an excuse; the compiler is helping the developer write robust code. Why would any developer be annoyed by that?

Quote:

Second, they encourage inappropriately local error handling. One of the great virtues of exceptions is that they allow errors to flow up the stack to the place that's best to handle them. This usually isn't the direct caller! With checked exceptions, you're forced to either annotate every damn signature--which sometimes you can't even do, because of interfaces--or just say "fuck it" and swallow the exception locally (or, at best, wrap it in a RuntimeException) again, to make the compiler shut up.

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

No, it's not. I can't believe you say that.

I'm saying it because it's true. A crash is a better outcome than improper or inappropriate error handling.

Quote:

Yeah, sure, but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thinking thoroughly about their program flow.

I don't care if it's annoying. A crash isn't going to risk saving corrupt state to a file or shipping goods when the credit card can't be billed or all sorts of other horrible outcomes.

Quote:

It's really not an excuse; the compiler is helping the developer write robust code. Why would any developer be annoyed by that?

Because it's not true. I've written a ton of Java and it's simply not true. The checked exceptions are annoying, but do literally nothing to ensure appropriate error handling.

Quote:

This is unpersuasive and again betrays lack of maturity in thinking about robust program flow.

Now that's unpersuasive. The need for non-local error-handling is pretty self-explanatory. The code that suffers, say, a network error because it can't reach a host is normally tens of function calls away from, for example, the user interface that's best able to (a) explain that error (b) allow the user to remedy it. Exception handling is routinely non-local. Checked exceptions force an inappropriate locality.

This is precisely why checked exceptions appear useful in toy programs, but scale very badly in any real program.

The fundamental problem with checked exceptions is twofold. First, they're supremely annoying during development, and lead to an abundance of catch(Exception e) {}, just to make the compiler shut up.

That's a pretty cavalier way to look at it. If there's a non-runtime Exception being thrown, it means that someone specifically threw it. They typed in extra stuff to say, "Hey - shit could go wrong here". Ignoring that by swallowing the exception is not very good citizenship.

Of course, I'm of the opinion that exceptions don't go far enough. I'd like to see contract-style stuff show up in more languages.

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

The fundamental problem with checked exceptions is twofold. First, they're supremely annoying during development, and lead to an abundance of catch(Exception e) {}, just to make the compiler shut up.

Second, they encourage inappropriately local error handling. One of the great virtues of exceptions is that they allow errors to flow up the stack to the place that's best to handle them. This usually isn't the direct caller! With checked exceptions, you're forced to either annotate every damn signature--which sometimes you can't even do, because of interfaces--or just say "fuck it" and swallow the exception locally (or, at best, wrap it in a RuntimeException) again, to make the compiler shut up.

We should note that Swift has assertions, so you can trigger one (not really an exception; they are very careful about never using the word "exception") if something happens that is really bad and you check for it.

It will cause your program to exit, of course, but that can be the desired outcome in many cases if you get into a situation where the state of things is very wrong.

The fundamental problem with checked exceptions is twofold. First, they're supremely annoying during development, and lead to an abundance of catch(Exception e) {}, just to make the compiler shut up.

Yeah, sure, but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thinking thoroughly about their program flow.

It's really not an excuse; the compiler is helping the developer write robust code. Why would any developer be annoyed by that?

I think the issue here is in terminology. Specifically, "checked exceptions" refers to those exceptions that have to be declared as part of a "throws" clause in a method declaration. If you don't, your code won't compile. If you call a method that throws a checked exception, you have to either explicitly include it in your declaration or catch it. This is where the annoyance comes in. You're writing a function and decide to open a file to read in some data. This involves calling a method that throws IOException. So either you need to catch the IOException in some way (and most people in a hurry are going to do this in a particularly ugly way) or add it to your own throws clause. And then any methods that call the one you're writing now won't compile at all until they make that same change too.

This is an especially large issue if you're writing a library other people are going to use. You add a feature that requires a new checked exception to be thrown, and suddenly none of your client's code can compile against you without modification.

Some languages (such as C#) have exceptions without having checked exceptions. So that you can let the exception bubble up without changing your method signature at all.

This is where the annoyance comes in. You're writing a function and decide to open a file to read in some data. This involved calling a method that throws IOException. So either you need to catch the IOException in some way (and most people in a hurry are going to do this in a particularly ugly way) or add it to your own throw clause. And then any methods that call the one you're writing now won't compile at all until they make that same change too.

I am trying to figure out on what planet people are in too much of a hurry to react to an IO error. What do you do instead? Keep going like everything was okay and crash later? Sometimes an IOException is a FileNotFoundException. Wouldn't you want to let the user know about that?

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

No, it's not. I can't believe you say that.

I'm saying it because it's true. A crash is a better outcome than improper or inappropriate error handling.

its not "pretty good". a developer can do a lot better than crashing by default.

Quote:

Quote:

Yeah, sure, but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thinking thoroughly about their program flow.

I don't care if it's annoying. A crash isn't going to risk saving corrupt state to a file or shipping goods when the credit card can't be billed or all sorts of other horrible outcomes.

If you cared about what it means for your program, you would not find it annoying.

Quote:

Quote:

It's really not an excuse; the compiler is helping the developer write robust code. Why would any developer be annoyed by that?

Because it's not true. I've written a ton of Java and it's simply not true. The checked exceptions are annoying, but do literally nothing to ensure appropriate error handling.

Quote:

Ya, and that's because appropriate error handling is not on the thrower; it's on the caller. That is, if the caller cared about robust program flow.

Quote:

Quote:

This is unpersuasive and again betrays lack of maturity in thinking about robust program flow.

Now that's unpersuasive. The need for non-local error-handling is pretty self-explanatory. The code that suffers, say, a network error because it can't reach a host is normally tens of function calls away from, for example, the user interface that's best able to (a) explain that error (b) allow the user to remedy it. Exception handling is routinely non-local. Checked exceptions force an inappropriate locality.

Not necessarily; it depends on what makes sense for the caller. If it's inappropriate to handle it locally, the caller can always re-throw. In the case you brought up earlier where you can't re-throw, then obviously the appropriate place to handle it is locally. So handle it!

This is precisely why checked exceptions appear useful in toy programs, but scale very badly in any real program.

I included this bit since I think it will help people understand Apple's thinking as well as what they do actually support:

“Assertions

Optionals enable you to check for values that may or may not exist, and to write code that copes gracefully with the absence of a value. In some cases, however, it is simply not possible for your code to continue execution if a value does not exist, or if a provided value does not satisfy certain conditions. In these situations, you can trigger an assertion in your code to end code execution and to provide an opportunity to debug the cause of the absent or invalid value.”

“Debugging with Assertions

An assertion is a runtime check that a logical condition definitely evaluates to true. Literally put, an assertion “asserts” that a condition is true. You use an assertion to make sure that an essential condition is satisfied before executing any further code. If the condition evaluates to true, code execution continues as usual; if the condition evaluates to false, code execution ends, and your app is terminated.

If your code triggers an assertion while running in a debug environment, such as when you build and run an app in Xcode, you can see exactly where the invalid state occurred and query the state of your app at the time that the assertion was triggered. An assertion also lets you provide a suitable debug message as to the nature of the assert.

You write an assertion by calling the global assert function. You pass the assert function an expression that evaluates to true or false and a message that should be displayed if the result of the condition is false:”

Code:

“let age = -3
assert(age >= 0, "A person's age cannot be less than zero")
// this causes the assertion to trigger, because age is not >= 0”

“In this example, code execution will continue only if age >= 0 evaluates to true, that is, if the value of age is non-negative. If the value of age is negative, as in the code above, then age >= 0 evaluates to false, and the assertion is triggered, terminating the application.

Assertion messages cannot use string interpolation. The assertion message can be omitted if desired, as in the following example:

Code:

assert(age >= 0)”

“When to Use Assertions

Use an assertion whenever a condition has the potential to be false, but must definitely be true in order for your code to continue execution. Suitable scenarios for an assertion check include:

An integer subscript index is passed to a custom subscript implementation, but the subscript index value could be too low or too high.
A value is passed to a function, but an invalid value means that the function cannot fulfill its task.
An optional value is currently nil, but a non-nil value is essential for subsequent code to execute successfully.”

“NOTE

Assertions cause your app to terminate and are not a substitute for designing your code in such a way that invalid conditions are unlikely to arise. Nonetheless, in situations where invalid conditions are possible, an assertion is an effective way to ensure that such conditions are highlighted and noticed during development, before your app is published.”

This is where the annoyance comes in. You're writing a function and decide to open a file to read in some data. This involved calling a method that throws IOException. So either you need to catch the IOException in some way (and most people in a hurry are going to do this in a particularly ugly way) or add it to your own throw clause. And then any methods that call the one you're writing now won't compile at all until they make that same change too.

I am trying to figure out on what planet people are in too much of a hurry to react to an IO error. What do you do instead? Keep going like everything was okay and crash later? Sometimes an IOException is a FileNotFoundException. Wouldn't you want to let the user know about that?

You should use an optional in Swift in that scenario.

The file operation would return nil instead of the data you were trying to read

then your function would return a tuple with an enum saying "FileNotFound" or something back to whatever called it.
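A minimal sketch of that pattern, assuming a hypothetical `readContents` function and a made-up `ReadError` enum (written in current Swift syntax, which differs slightly from the 2014 release):

```swift
import Foundation

// A made-up error enum; the names here are illustrative, not from any API.
enum ReadError {
    case fileNotFound
    case unreadable
}

// Returns the contents on success, or nil plus an error value on failure.
func readContents(atPath path: String) -> (contents: String?, error: ReadError?) {
    let manager = FileManager.default
    guard manager.fileExists(atPath: path) else {
        return (nil, .fileNotFound)
    }
    guard let data = manager.contents(atPath: path),
          let text = String(data: data, encoding: .utf8) else {
        return (nil, .unreadable)
    }
    return (text, nil)
}

// The caller inspects the tuple rather than catching an exception.
let result = readContents(atPath: "/definitely/not/a/real/file")
if let error = result.error {
    print("Read failed: \(error)")    // prints "Read failed: fileNotFound"
}
```

The design choice here is that failure is an ordinary value flowing back through the return type, so the compiler forces the caller to unwrap the optional before touching the data.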

This is where the annoyance comes in. You're writing a function and decide to open a file to read in some data. This involves calling a method that throws IOException. So either you need to catch the IOException in some way (and most people in a hurry are going to do this in a particularly ugly way) or add it to your own throws clause. And then any methods that call the one you're writing now won't compile at all until they make that same change too.

I am trying to figure out on what planet people are in too much of a hurry to react to an IO error. What do you do instead? Keep going like everything was okay and crash later? Sometimes an IOException is a FileNotFoundException. Wouldn't you want to let the user know about that?

I'm not saying you should keep going. I'm saying you might want to deal with that exception higher in the call stack. You can do that with checked exceptions, but every single method in the call stack from the one that threw the exception to the one that handles it has to have "throws IOException" on it. If you're 30 methods down and add a feature that might throw an IOException, now you have 30 methods to change the signature on.

The fundamental problem with checked exceptions is that they're supremely annoying during development and lead to an abundance of catch(Exception e) {} blocks, written just to make the compiler shut up.

Yeah sure but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thoroughly thinking about their program flow.

It's really not an excuse; the compiler is helping the developer write robust code. Why would any developer be annoyed by that?

I think the issue here is in terminology. Specifically, "checked exceptions" refers to those exceptions that have to be declared as part of a "throws" clause in a method declaration. If you don't, your code won't compile. If you call a method that throws a checked exception, you have to either explicitly include it in your declaration or catch it. This is where the annoyance comes in. You're writing a function and decide to open a file to read in some data. This involves calling a method that throws IOException. So either you need to catch the IOException in some way (and most people in a hurry are going to do this in a particularly ugly way) or add it to your own throws clause. And then any methods that call the one you're writing now won't compile at all until they make that same change too.

This is an especially large issue if you're writing a library other people are going to use. You add a feature that requires a new checked exception to be thrown, and suddenly none of your client's code can compile against you without modification.

Some languages (such as C#) have exceptions without having checked exceptions, so you can let the exception bubble up without changing your method signature at all.

The only problem I see is lack of thoughtfulness in designing program flow and interfaces. Begin by asking:

What does it mean for your method logic to encounter an exceptional situation (checked or not)?

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

No, it's not. I can't believe you say that.

I'm saying it because it's true. A crash is a better outcome than improper or inappropriate error handling.

its not "pretty good". a developer can do a lot better than crashing by default.

A developer can also do an awful lot worse.

Quote:

If you cared about what it means for your program, you will not find it annoying.

I do care about what it means for my program. It remains annoying.

Quote:

Ya, and that's because appropriate error handling is not on the thrower, it's on the caller. That is, if the caller cared about robust program flow.

But it's not. Normally it's on the caller's caller. Or the caller's caller's caller. Or the caller's caller's caller's caller. Or....

Quote:

Not necessarily; it depends on what makes sense for the caller. If it's inappropriate to handle it locally, the caller can always re-throw.

Which means tainting every single signature with the throws clause all the way up the call stack.

Quote:

It scales just fine for well thought out programs.

It doesn't, which is why other languages haven't copied checked exceptions (though god knows, they've copied everything else) and even develop strategies to minimize their visibility as much as possible, such as rethrowing them as RuntimeExceptions.

The only problem I see is lack of thoughtfulness in designing program flow and interfaces. Begin by asking:

What does it mean for your method logic to encounter an exceptional situation (checked or not)?

The answer is not "nothing".

I don't believe that I ever said that the correct answer is to totally ignore the exception. I do believe, though, that sometimes the correct answer is "not here". There's no point in having exceptions that bubble up the call stack if you're not prepared to let them do that occasionally.

I'm not advocating the use of an empty catch clause. I think it's stupid to do that. I think, however, that certain design elements of particular programming languages sometimes cause people to do stupid things. I think that checked exceptions (and again not exceptions in general) are one such element.

Many common exceptions in Java (those that don't inherit from RuntimeException, which seems to me most of the ones I run into) require that you explicitly declare that you might throw them, or else your code won't compile. So, you go to add something to a method that throws a new checked exception (maybe you connect to a web site and get a ConnectException) and you don't want to catch it right here. Maybe this is because you want the user code to be alerted to the fact that this exception has occurred so the end user can be alerted. So, you add a throws declaration to the method. Now, whatever methods call this method are calling a method that throws a checked exception, so they won't compile until you either catch it in that method or add the throws declaration. And then the methods that call those methods won't compile until you either catch it in that method or add the throws declaration.

I'd be much more concerned about writing code that has (AFAIK) no chance of being ported to any other platform - Apple these days make Microsoft look positively open.

If you're writing code to work with the frameworks that give the platform its personality, it doesn't matter much what programming language you use; you're still going to have to rewrite that code to port it to another platform.

The whole point of Swift is to make it easier to write programs that use the Cocoa frameworks on iOS and OS X. Portability to other platforms was, I suspect, not a design goal.

The file operation would return nil instead of the data you were trying to read

then your function would return a tuple with an enum saying "FileNotFound" or something back to whatever called it.

The issue I have with that is: A) In most cases you're doing more work. When I get a FileNotFoundException, it means that someone has already identified the problem and constructed a message for me about it; and B) Your tuple (or int or string message) is likely to be different from everybody else's in some way.

In a lot of cases it's less code (and more understandable code) to handle a standard exception. I think it became an idiom in Java as a result of the huge number of independent libraries, and it makes being a citizen of that world much easier. There are libraries that throw sucky or unnecessary exceptions, and there are libraries that do things in non-standard ways, but a lot of the time, they end up being avoided, re-written or wrapped to something sensible.

I'd be much more concerned about writing code that has (AFAIK) no chance of being ported to any other platform - Apple these days make Microsoft look positively open.

If you're writing code to work with the frameworks that give the platform its personality, it doesn't matter much what programming language you use; you're still going to have to rewrite that code to port it to another platform.

The whole point of Swift is to make it easier to write programs that use the Cocoa frameworks on iOS and OS X. Portability to other platforms was, I suspect, not a design goal.

It depends on whether you have a task that really requires the use of those frameworks or not. I can think of plenty of scenarios where you wouldn't.

For instance, imagine a GPS app. Obviously, communicating with the GPS itself would require going through the framework, as well as displaying the user's location on a map. But, you might also have some code to reverse geocode the user's lat/long into an address, some code to determine a route to a user entered address, and so on. And that code would be nice to be able to reuse on another platform.

One of the strengths of developing on Android is that many java libraries can be compiled with Dalvik and dropped right into your app. I developed an app that used an open source library for Fast Fourier Transforms, and they distributed it as an Android JAR and a standard Java JAR. It took in a byte array of PCM (a standard format for audio) and returned an object you could query to see the amplitude of the constituent frequencies.

Of course, I think people do the same thing on iOS, they might just use C++ for it.

I don't believe that I ever said that the correct answer is to totally ignore the exception. I do believe, though, that sometimes the correct answer is "not here". There's no point in having exceptions that bubble up the call stack if you're not prepared to let them do that occasionally.

So you're not arguing against exceptions so much as shittily-designed programs. If the answer is "not here", then throw it or deal with it lower down. If someone is throwing FileNotFoundException, it's supposed to mean that it's not proper for *them* to handle it there. If some WebDAV library were showing the user a message and eating the exception, I'd say it was doing its job poorly, because it's *my* job to handle the situation with the user.

I wouldn't count on it. The Swift team seems fairly against exceptions.

They're not the only ones. Some of the best programmers I've ever known have all been against exceptions. The reasoning boiled down to three things:

1. Exceptions can and have led to indeterminate behavior, and they do not avoid crashes if you place code that can fail in the catch/finally clauses. This ultimately means that your program can at best be buggy but won't crash, or at worst, it will crash with a long object tree of exceptions that need to be unrolled before finding where the original exception occurred.

2. Exceptions cause developers to get into a bad habit of not checking values before attempting to use them.

3. Exceptions are slow. Very slow.

That exceptions can be used poorly isn't an excuse to exclude them. Lots of features in a language can be abused.

The primary reason we exclude any programming construct is that it can be used poorly. The same has been said of features like multiple inheritance, yet you don't see many C# and Java programmers defending that.

Regardless, I don't have a bone to pick with exceptions and I'm not arguing for their exclusion. I'm pointing out why a certain segment of the programming population may not like them. Personally, I could see exceptions being added if they couldn't be used for local error checking.

Small caveat: If you want to practice Swift for OS X coding, AFAIK you'll need to have Yosemite. All the Swift code I've written in the Xcode 6 beta crashes on load on versions of OS X before Yosemite.

If anyone else has discovered otherwise, I'd be happy to hear that my experience may be just a bug.

I just compiled a (really simple) sample application. Seems to work just fine. 10.9.3 and Xcode6 beta.

The file operation would return nil instead of the data you were trying to read

then your function would return a tuple with an enum saying "FileNotFound" or something back to whatever called it.

The issue I have with that is: A) In most cases you're doing more work. When I get a FileNotFoundException, it means that someone has already identified the problem and constructed a message for me about it; and B) Your tuple (or int or string message) is likely to be different from everybody else's in some way.

In a lot of cases it's less code (and more understandable code) to handle a standard exception. I think it became an idiom in Java as a result of the huge number of independent libraries, and it makes being a citizen of that world much easier. There are libraries that throw sucky or unnecessary exceptions, and there are libraries that do things in non-standard ways, but a lot of the time, they end up being avoided, re-written or wrapped to something sensible.

It's not more work since you HAVE to use an optional in that scenario and in most cases where things might have uncertainty.

And if you are calling my function then you really should know what it can return. If you are checking an enum in my tuple with a switch statement in Swift, that's required to be an exhaustive check, so you either do a stupid "catch all cases" with an underscore or a default, or you go see what's possible for that enum and check each possibility, which is GOOD programming and what you should do.
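The exhaustiveness being described can be sketched like this; `LoadResult` and `describe` are hypothetical names, not from any real API:

```swift
// A hypothetical result enum a file-loading function might return.
enum LoadResult {
    case loaded(String)
    case fileNotFound
    case corruptData
}

func describe(_ result: LoadResult) -> String {
    // This switch must be exhaustive: delete any case below and the
    // compiler refuses to build unless you add a catch-all `default`.
    switch result {
    case .loaded(let contents):
        return "Loaded \(contents.count) characters"
    case .fileNotFound:
        return "File not found"
    case .corruptData:
        return "Data was corrupt"
    }
}
```

Because the compiler enforces coverage of every case, forgetting to handle a failure mode is a build error rather than a runtime surprise.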

The file operation would return nil instead of the data you were trying to read

then your function would return a tuple with an enum saying "FileNotFound" or something back to whatever called it.

The issue I have with that is: A) In most cases you're doing more work. When I get a FileNotFoundException, it means that someone has already identified the problem and constructed a message for me about it; and B) Your tuple (or int or string message) is likely to be different from everybody else's in some way.

You have no guarantee when using someone else's library that it will return *any* standard exceptions. The only thing you can count on is that the developer was sane enough to inherit from the base Exception class.

Small caveat: If you want to practice Swift for OS X coding, AFAIK you'll need to have Yosemite. All the code in Xcode 6 beta using swift I've written will crash on load for versions of OS X under Yosemite.

If anyone else has discovered otherwise, I'd be happy to hear that my experience may be just a bug.

I just compiled a (really simple) sample application. Seems to work just fine. 10.9.3 and Xcode6 beta.

Perhaps it was just my setup. I attempted to run the OOB template for SpriteKit using Swift and it didn't like that. Good to know, though. Thanks!

You have no guarantee when using someone else's library that it will return *any* standard exceptions. The only thing you can count on is that the developer was sane enough to inherit from the base Exception class.

That's making the perfect the enemy of the good, IMO. I have yet to encounter someone throwing FileNotFoundException in a third-party library where it didn't mean "the file was not found".

The standard-ness across libraries allowed me, for example, to do a "UriFile" wrapper that reads from WebDav, local storage, or Samba transparently. If I wanted to add something like SVN to that, I could do so by picking a decent library for the job.

Again, IMO, the whole thing is about idioms and incentives. Any language feature can be used poorly, but some features actually do create incentives to be used poorly. If human nature were such that everyone was PeterBright-like, exceptions would be no good, because they'd just be ignored and swallowed. If human nature were such that everyone invented their own non-standard exceptions (or tuples) for every project, the benefit would disappear.

Perhaps it was just my setup. I attempted to run the OOB template for SpriteKit using Swift and it didn't like that. Good to know, though. Thanks!

On my machine the SpriteKit template project enters SKNode's unarchiveFromFile and AppDelegate's applicationDidFinishLaunching, then at some point subsequently, and without displaying anything, raises EXC_BAD_ACCESS.

Conversely the SceneKit sample just runs.

The Objective-C SpriteKit sample does exactly the same thing as the Swift. I didn't try the Objective-C SceneKit sample.

So I don't think there's a Swift issue versus 10.9, probably just a beta issue with SpriteKit.

How is it wretched to force program logic to deal with exceptional situations when the alternative is a crash or worse, continuing in an undefined or inconsistent state?

A crash is safe; it ensures you're not operating in an inconsistent state. As a default outcome, it's pretty good.

No, it's not. I can't believe you say that.

I'm saying it because it's true. A crash is a better outcome than improper or inappropriate error handling.

Quote:

Yeah sure but a crash is even more annoying. If a developer is annoyed by the compiler yapping, it betrays their immaturity in thoroughly thinking about their program flow.

I don't care if it's annoying. A crash isn't going to risk saving corrupt state to a file or shipping goods when the credit card can't be billed or all sorts of other horrible outcomes.

Agreed. That is part of my dislike of using exceptions for local error-checking. If an error can be handled locally, there are better mechanisms than using an exception. On the flip side, if an exception is truly needed, chances are high that the consumer of that exception is going to be far removed from the local code. I do wonder how well an exception system would work if catching an exception required taking ownership of it and not being able to rethrow it.

The primary reason we exclude any programming construct is that it can be used poorly.

I struggle to think of any programming language feature that can't be used poorly.

Neither can I, but that's still the primary reason we see many useful features removed from more modern languages. My main example was multiple inheritance. It's incredibly useful, but it can lead to very difficult-to-maintain code and adds more complexity to the language than is deemed worthwhile. So, it's not included in C#, Java, or Objective-C.

Perhaps it was just my setup. I attempted to run the OOB template for SpriteKit using Swift and it didn't like that. Good to know, though. Thanks!

On my machine the SpriteKit template project enters SKNote's unarchiveFromFile and AppDelegate's applicationDidFinishLaunching, then at some point subsequently, and without displaying anything, raises EXC_BAD_ACCESS.

Conversely the SceneKit sample just runs.

The Objective-C SpriteKit sample does exactly the same thing as the Swift. I didn't try the Objective-C SceneKit sample.

So I don't think there's a Swift issue versus 10.9, probably just a beta issue with SpriteKit.

I didn't have a chance to check that last night, but yes, that's exactly right. Teaches me to speak before checking if it was just the framework. *facepalm*

That's making the perfect the enemy of the good, IMO. I have yet to encounter someone throwing FileNotFoundException in a third-party library where it didn't mean "the file was not found".

The standard-ness across libraries allowed me, for example, to do a "UriFile" wrapper that reads from WebDav, local storage, or Samba transparently. If I wanted to add something like SVN to that, I could do so by picking a decent library for the job.

What you're really arguing for there is the benefit of a good, standard framework that many projects can be based on. With that, I agree 100%.

Then we'd reference it with Planets.Mercury, Planets.Jupiter, etc. so we're not concerned with the actual value that each planet represents. That's the whole point of enums: a simple to compare/use value with an easy to read definition.

Except that inevitably this value is going to have to show up in Javascript, in RDBMS tables, and quite likely in code written for other languages where it needs to be documented what each value IS, not just "starts here". These kinds of features are ill-conceived when dealing with real software in the real world with real maintenance and as part of a real system. Java has a basically similar facility, and sure enough every single one of the dozen or more enums in our half a million lines of code has a comment for each value showing what its ordinal is. It would be vastly better if that was CODE. Sometimes good practice requires casting a wide net in terms of what you think about.
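For what it's worth, Swift's enums can pin each case to an explicit raw value, so the number a case maps to is code rather than a comment (shown here in current syntax; the case names are the classic planetary example, not from the comment above):

```swift
// The integer each planet maps to is spelled out in code, not documented in a comment.
enum Planet: Int {
    case mercury = 1
    case venus = 2
    case earth = 3
    case mars = 4
}

let stored = Planet.earth.rawValue    // 3 — a stable value, safe to persist in a database
let decoded = Planet(rawValue: 3)     // Optional(Planet.earth); nil for unknown values
```

The failable `init(rawValue:)` also covers the round trip back from storage: an integer that no longer corresponds to a case yields nil instead of a bogus enum value.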

What's the deal with 'let'? I've not seen that since my old VB days. I've always considered it ugly syntactic sugar. Why do we have to be polite to our variables?

Assignments in terms of forcefulness:

int x = 7;
let x = 7;
superposition x = 7; (x can be any value; we hope it is 7, but it takes a random value after we read it)
slipItARoofie x = 4.5; (underhanded assignment: how to get it to do something it would not otherwise do)

Also valid: "x = 7 using ambien"

Really, people: if you use the word let, you imply there is a chance it might not actually take on the value of the assignment. I thought this died a long time ago.

As far as I can see, in Swift "let" means "constant", while "var" means "variable". So, "var x = 7" is similar to "int x = 7", while "let x = 7" is similar to "const int x = 7".

So why not just use 'const', which everyone will understand as clearly being a constant, whereas 'let' can mean any of three or four things to a given coder depending on where they are coming from? It's poorly thought out.
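For reference, the let/var distinction comes down to this minimal sketch:

```swift
var total = 7      // variable: can be reassigned later
total = 8          // fine

let limit = 7      // constant: bound exactly once
// limit = 8       // compile-time error: cannot assign to a 'let' constant

print(total, limit)    // prints "8 7"
```

Unlike VB's 'let', Swift's version is enforced by the compiler: reassigning a 'let' constant is rejected at build time, so there is no chance the assignment silently fails to take.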