Epic announced today that Unreal Engine 4 is dramatically changing its licensing model. From now on anyone can subscribe to Unreal Engine 4 for $19 a month, and then release games commercially in exchange for a 5% share of gross revenue paid to Epic. This means that indies no longer need to stick with the Unreal Development Kit, but get the entire engine at what appears to be an affordable price. And on top of that, they’re releasing the source code on GitHub. All from 9.30 PT today.

Epic gathered together a bundle of games journalists, fed us coffee and donuts, and sat us down to tell us big news about the Unreal Engine. In what you might perceive as a response to the enormous popularity of Unity right now, the developers are keen to emphasise how simple it is to create inside Unreal Engine 4. To add to that, they’ve announced the Blueprint system, which allows creators to more simply script both tech and behaviour within the game.

But bigger, there’s a new business model to reveal. Unreal Engine 4 is now available to anyone by subscribing at $19 a month. And surprisingly, they’re sticking to a royalty model for those who want to release games, with a 5% tithe going to Epic.

For that $19 you get access to everything, with PC and Mac tools, exporting to PC, Mac, iOS and Android. They’re also providing complete access to the C++ source code for the engine – what they call “Epic’s crown jewel” – although it remains under Epic’s copyright. They’re releasing this source code to the GitHub community.

This is a dramatic change from their previous models with the Unreal Engine, where usually developers would have to pay millions of dollars to license it for their project.

From now on, Epic say, their source code changes for each of their new games will be available for everybody to see, all the time. Which is huge. id led the way in releasing source code years after a project, although they do it under the GPL, but Epic appear to now be developing transparently.

Blueprint is a much more complicated part of what’s new in Unreal Engine 4. It lets you see the flow of AI in surprisingly simple ways, develop simple games inside games, change the nature of objects, and all sorts of other stuff I don’t begin to understand.

This seems like a properly big deal. And clearly a big move in the battle with Unity.

It’s a visual scripting system that looks similar to Kismet in UE3, but you can do a lot more with it. The main reason it’s exciting is that it lets artist types who don’t know any programming create a wider range of things for games, like interactive objects, playable characters, a HUD, and most of the other ingredients for a working game.

I think it’ll be huge for indie developers because of how quickly you can get a prototype game up and running, and how easy it is to test and iterate on it.

Subscribing also now gives you complete access to the C++ source code for the engine.
…
From now on, Epic say, their source code changes for each of their new games will be available for free to everybody, all the time.

So which is it? Unless they mean the games will have visible source but link against a library whose source is only visible if you pay up, which seems unlikely given the general announcement.

If this is UnrealEngine-on-public-GitHub, it’s going to be interesting to see what the license is in terms of code contributions, given I don’t think any of the OSI ones would allow for their royalty demands.

It is actually the case that only licensees can see it, since you need to associate accounts. So it’s not “free to everybody, all the time”, although I see that claim has now been removed from the RPS article.

I’m pretty sure licensees could do this before, although possibly only high-tier professional ones. (Heck, I’m pretty sure the original Deus Ex actually runs on a modified UnrealEngine, which is one reason why there was no Linux port even though that version of UE’s Unreal Tournament had Linux support right on the GOTY CD.)

I’m sorry that my comment came over as impolite. It was meant as a dig against Epic (since their own news post doesn’t say anything about licensing either), not you.

The code will be copyrighted, and not under the GPL.

Of course the code will be copyrighted, as per the Berne convention; that’s a completely different issue than under what license it will be put. Even GPL’d code still falls under the copyright, it’s just that the copyright holder grants specific usage rights.

Still, this issue is exactly why many free software proponents like myself don’t like the term “Open Source”, at least not without a “free” and/or “libre” attached to it. It just doesn’t really tell you much, except that everyone can look at the source (and sometimes not even that). It certainly doesn’t mean that you can necessarily do anything else besides looking. :/

You’re unhappy that the term “open source” only promises that the source code is open? Yeah, how unreasonable….

For a programmer, having access to the source code of the tools and libraries you use makes a *huge* difference, regardless of everything else. Whether or not the software is free as in beer, and regardless of the license used, simply being able to look at the source code to understand the library’s behavior is a *huge* productivity booster. Even if you would prefer the term to mean something else.

So, you’ve presented a false dichotomy, and interpreted McCoy’s (truculent) statement without nuance. It is not the case that code is either open and helpful (under all circumstances) or closed and unhelpful. There is a spectrum of value that openness can provide, but the licensing terms under which the code is provided do matter a great deal, and can sabotage the source’s usefulness altogether.

Certainly, access to sources can be hugely valuable during development. This is especially true during porting, but also in very many other cases. However, due to licensing terms, one can find oneself “tainted” by the access (this programmer himself has, in the past, done a 6m NDA due to source access licensing… never again). In particular, one must be careful how one contributes to future free software projects, lest one’s patches compromise the entire project due to an intellectual property violation claim. It is good that legal maneuvers such as software patents are not valid globally, but my country is the undisputed king of patent trolling, and this is a very real fear that every programmer here should be aware of (most are not, until they’re served a subpoena and coached by the company attorney).

That is the reality of open sources of dubious license provenance, and is the dark commercial analogue to the tired claim that GPL code will sneak into your project in the dead of night and force you to release your sources. Of course, there are other issues, but this one has been most prominent in my personal experience. It is neither pedantic nor irrelevant to critique a company’s choice of open source licensing terms, despite their overt generosity.

I don’t really care that much about the “free as in beer” thing. GPLv2+ code I wrote is currently being sold, by a third party, and I don’t see a cent of it. What I care about is “free as in speech”.

I have contributed to various FLOSS projects under different licenses; and I’m quite aware of what programmers can see in source code, as well as the licensing / patent issues SD described.

But I also care deeply about the FLOSS community as a whole. As much as I disagree with various things John Carmack said, it can’t be denied that the opening, freeing of id’s engines was a great move that made projects like Xonotic, Warsow and Tremulous possible. Not seeing that happening with Epic’s engines is something that disappoints me.

About the specifics of the term “open source”: I don’t exactly prefer the term to mean something else; I’d prefer that the term hadn’t been created by ESR in the first place. In the divide between “free software” and “open source”, I’m far in the “free software” camp. What’s important to me is the freedom; I loathe having theoretical access to source I can’t do anything with. (However, I prefer to use the term “FLOSS”, since it includes the word “libre” to remove the ambiguity from “free”.)

As much as I disagree with various things John Carmack said, it can’t be denied that the opening, freeing of id’s engines was a great move that made projects like Xonotic, Warsow and Tremulous possible.

This is pretty amazing. More and more “big name” engines are switching to a more affordable pricing policy, which is pretty great for low budget indies. I’m excited to see what people will do with this.

Like Phil and Bones above say, the devil is in the licensing details. Still, though, this is properly big.

Another interesting bit that went unmentioned here is the (alleged) Linux porting underway in the GitHub code. There were a few UE3 games with Linux ports (Dungeon Defenders comes to mind), but this gives me hope for more UE-based games for the penguin. And it’s the super-shiny new engine, no less! \o/ They also mention Steam Boxes and Oculi and things.

From what I heard here and there, every game developer basically did their own Linux porting for UE3 games (or possibly contracting someone like icculus), right? That would mean a lot of duplicated work.

Ryan was always a bit evasive about his efforts porting UT3 that I saw, but the impression I got was that some of the middleware added to UnrealEngine since UT2004 was now proving a (legal?) problem for porting to Linux. I wonder if that means they’ve resolved that.

I see you two spoke my mind after I didn’t have time to satisfy my compulsive fact-checking needs. Thanks!

I didn’t know middleware was a problem with UE, though. Interesting. Reminds me of The Chinese Room’s beef with Source. Audio was the main problem I kept hearing about with Unity (mostly via the Republique and/or Distance podcasts), creating a desperate need for middleware (which is often not Linux-friendly, if I remember correctly) there. The Unity folks claimed to have made great strides there, so it should be interesting to see how middleware evolves.

Thankfully CryEngine made its way onto Linux this GDC; it would be too good to be true for Unreal to do the same.

There is still no game engine out there that actually draws out the strong points of OpenGL. *Technically* OpenGL has more potential and features than DirectX, and is also easier to port with, but the mainstream game engines are not quite there yet.

NEVERMIND: Linux support for Unreal is also showing up; from unrealengine.com:
“This first release of Unreal Engine 4 is just the beginning. In the C++ code, you can see many new initiatives underway, for example to support Oculus VR, Linux, Valve’s Steamworks and Steam Box efforts, and deployment of games to web browsers via HTML5. It’s all right there, in plain view, on day one of many years of exciting and open development ahead!”

Well, the post I replied to said both Unity and UE are ‘AAA’ engines, so I’d expect there to have been quite a few AAA games made using Unity, whereas I’m not sure there’s any.

Kind of a separate debate over whether AAA is relevant, but if I look at the AAA games I’ve played in the last few years – Dark Souls, Skyrim, Bioshock Infinite, Dishonored, Planetside 2, Batman AA and AC, GTAV, Just Cause 2, Uncharted 2 – I personally wouldn’t want to lose games like this.

I think both will continue to coexist, but now that UDK works on Mac as well it certainly gives Unity a run for its money, at least for me. I am rather happy with Unity at this time, but this certainly got my attention.

Yeah, absolutely Unity is still relevant. Unreal is optimized to do certain things – if you want to do something different with the engine, you’ll need to do some serious coding. So for certain types of games, you’re better off with Unity. Unity is a lot more flexible in many ways (albeit at the cost of performance), scripting is easier, there’s a massive user base that’s provided examples and tutorials as well as a massive collection of cheap scripts, tools and assets through their online store, and generally has a lower barrier to entry. Unreal has some impressive features and power that aren’t going to be of much use to indie developers, too.

Please define what you mean by “good”. Good is relative. Good is fast and efficient if you want to pull the utmost performance from your hardware, which no language does better than C++ (unless you count assembly or bare C, and even then, the difference these days is minimal). Good is having millions of solid, working, pluggable libraries ready to be used with your code – especially those you’ve already written and have been using for years – and again, few languages beat C/C++ here. Good is having a huge array of compilers ready for any system. Show me a modern system that doesn’t have a working C++ compiler.

Sure, C#, Python, even Java have their advantages, but to say that C++ isn’t a “good” language without any kind of context? That’s just idiotic.

And by all means, show me all those AAA titles that don’t use C++ and I’ll show you 50 that do. The numbers speak for themselves.

“I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.” -Alan Kay

Alternatively, Google why Linus Torvalds insists on Git being written in C. Or, more precisely, on Git NOT being written in C++.

I mean, sure, yeah, a lot of games are written in C++. It’s unrealistic (heh heh) to expect that Unreal Engine will be in anything but C++. There’s a lot of history and legacy in there and I imagine they have better things to do than pull a complete rewrite. But that doesn’t mean C++ is good. C++ is a horrible little hunchback hellspawn of a language that I would never wish upon anyone who loves code. It’s there because legacy sucks. Not because anyone would or should want it there.

One more word out of you, Dawngreeter, and me and my C++ brethren will storm your precious Bastille! Down with the royals! C++, language of the people, for the people! Sharpen the knives, the barber will cut tonight!

“I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.” -Alan Kay

I like that quote because it is valid, meaningful and interesting, and also something that most people take out of context and completely misunderstand.

Let us start with the obvious. This proves nothing about whether or not C++ is “good”. It merely shows that C++ is not what the inventor of the term object-oriented would describe as object-oriented.

The second point is that even if “not being what Alan Kay would describe as OOP” *is* somehow a damning death sentence for a language, then virtually every language you prefer over C++ would fare just as badly. Java *certainly* isn’t what Alan Kay described when he came up with OOP either. Neither is Python. Or Ruby, Erlang, JavaScript or Haskell.

He was talking about Smalltalk. A prototype-based language based on message-passing. A handful of languages are prototype-based like Smalltalk, and a few use message-passing, but I can’t think of many that have both.

Alternatively, Google why Linus Torvalds insists on Git being written in C. Or, more precisely, on Git NOT being written in C++.

Sure. His argument basically boils down to:

1. I hate C++
2. I don’t actually know C++ very well
3. I want a low barrier to entry, and most people are not very good at C++

None of those show C to be a better language than C++.

Do you have anything *other* than an appeal to authority to back this up?
Are you really telling us that “C++ sucks because some guys I idolize have said bad things about it”?

“Most people who work in language X are bad at language X” is a very IMPORTANT and RELEVANT indictment of language X. If you believe it’s not true, that’s one thing, but I always see C++ fans declare that it’s *not a problem* if C++ is very very hard.

This is an error. If a language is so challenging that most people are bad at it, then that is a serious weakness in the language. It might be relevant to say this comes with a notable tradeoff, where those who are good with it reap enormous benefits. I don’t believe this at all in this case, but you could argue the view and present supporting evidence. There’s also the possible argument that it’s a niche language that’s best for use in a particular area, one that the particular great programmers working there are going to be able to handle anyway.

However, neither view really supports the use of C++ as a general-purpose language (or at least one general enough to write the lion’s share of a game’s code in), which is the context here.

To repeat, all I’m challenging here is the commonly repeated myth that “a language that most people are bad at isn’t the language’s problem.”

“Most people who work in language X are bad at language X” is a very IMPORTANT and RELEVANT indictment of language X.

Absolutely. I agree fully. But Alan Kay and Linus Torvalds do not work in the C++ language, and so their being bad at it shows us absolutely nothing.

I always see C++ fans declare that it’s *not a problem* if C++ is very very hard.

Yep, that’s sad.
Regardless of everything else about C++, it has a kind of cargo-cult image that attracts a lot of clueless beginners: people believe that “if I just use C++, then it will automatically be awesome like the big AAA games, and it will be FAST”. And they spread their ignorance around, telling everyone how incredibly hard the language is to use, and how it is for experts only.

C++ is much harder to learn than it should be, and for the above reason, a lot of the people who try to use it are not very inclined to actually learn it properly.

Those are all real problems. And C++ certainly isn’t a perfect language. It is deeply flawed.

But it also has some real and quantifiable strengths and advantages, and unlike what a lot of people seem to believe, once you’ve learned it properly, it is actually surprisingly simple, elegant and expressive.

Take the stuff “everyone knows” about memory leaks and having to do manual memory management. Those are things that trip beginners up and are endless sources of bugs.

If you know a few fairly simple techniques then you will never, and I mean never, write another memory leak. I honestly write more memory leaks in C# code than in C++, because C# actually puts *more* of the burden on the programmer than C++ does.

But yes, it is absolutely C++’s problem that it is hard to learn and that so many people learn it (and teach it) badly. I won’t dispute that. :)

Having written code in both Smalltalk and Python, Python is actually quite comparable. The object / instantiation behavior can definitely be used in a prototype style and often is. The only major break is the difference in method / message invocation, and the mental model imposed by these framings.

I assume most people bringing up this quote are aware that C#/Java are in the same boat. Perhaps I’m being foolish.

“If you know a few fairly simple techniques then you will never, and I mean never, write another memory leak.”

If you mean the computer scientist’s definition of memory leak, then barring errors (which occur at a rate of roughly 15 to 50 per thousand lines of code) I’ll buy this claim. Heck, it’s easy enough to run checkers on workloads and demonstrate that there aren’t any, and you can learn a lot about your blind spots by doing this.

However, if you mean “unbounded memory growth due to logic errors” which most users view as equivalent to memory leaks, then I don’t buy it. And these are the hard ones, anyway.

For example, scenarios where your cache is exercised in unexpected ways and becomes orders of magnitude larger than planned. Or scenarios where items that are created (allocated) by one system and consumed (deallocated) by another system across a series of intermediaries which perform various processing and pausing. Or scenarios where you plan to make the system asymptotically less willing to add more items to the memory load as the pile grows.

All of these cases make it EASY to have unwanted memory growth, and there are no simple techniques that are going to ensure that all your logic conforms to expectations.

Now, garbage-collected languages and the like can grow in all those conditions too. But the point isn’t that the garbage collection “fixes it for you”, but hopefully it reduces code bulk so you can better see the problems before you write them, or at least find them later.

Having written code in both Smalltalk and Python, Python is actually quite comparable. The object / instantiation behavior can definitely be used in a prototype style and often is. The only major break is the difference in method / message invocation, and the mental model imposed by these framings.

Uh… yeah.. The only major difference between Python and Smalltalk is the very thing that Alan Kay feels characterizes OOP.

In other words, his quote applies just as much to Python as to C++.

However, if you mean “unbounded memory growth due to logic errors” which most users view as equivalent to memory leaks, then I don’t buy it. And these are the hard ones, anyway.

I mean both kinds.

For example, scenarios where your cache is exercised in unexpected ways and becomes orders of magnitude larger than planned.

Why is your cache not implemented with a fixed max size if that is a concern?
And why is C++ more susceptible to this than other languages?

Or scenarios where items that are created (allocated) by one system and consumed (deallocated) by another system across a series of intermediaries which perform various processing and pausing

Oh, you mean badly written C-like C++ code?

Have you heard of smart pointers? Or RAII?

but hopefully it reduces code bulk so you can better see the problems before you write them, or at least find them later.

Again, that only applies to bad C++ code which tries to do manual memory management.

Where’s the code bulk if I literally have *zero* lines of code to handle deallocation at the call site for my resources?

There is no evidence in anything I said that we are not using RAII, and smart pointers do not solve these problems, but simply are one way of building systems to organize the strategies for solving the problems.

It goes on from here. You’re now inventing errors for me claiming I have made them. Good day.

There is no evidence in anything I said that we are not using RAII, and smart pointers do not solve these problems

You talk about manual memory management. These are techniques to completely eliminate manual memory management from C++ code.

If you have manual memory management, then you are not using RAII (or not using it correctly, or consistently, or…)

You’re now inventing errors for me claiming I have made them.

uh… You are describing problems that *you* encounter in *your* C++ code. I respond that *I* do not see those problems in the C++ code that I work with, and I describe the techniques we use to avoid those problems.

I don’t see how that can be considered trolling.
I am merely observing that apparently our C++ code is better written than yours. And that apparently you are basing your opinions on the language as a whole on the C++ code that you are forced to work with.

The mere suggestion that “it is possible for C++ code to be better than what you are working with” is not trolling. I’m sorry to burst your bubble.

Actually, if you look at the benchmark game, C++ can outperform C. And use safer, more maintainable code while doing it.

(Yes, that site is silly, but so is picking languages by raw speed. The real reason to use C++ is that it is an excellent blend of domain modelling and efficient implementation with good static type safety and a wide range of highly developed tools and libraries. And the beauty of the STL will make you weep over the inelegance of every other language’s standard collection types.)

The reality is that most languages can be faster than most other languages depending upon the project focus and scope.

C++ fans tend to focus on cases where some very sophisticated optimization puts their selected language at the top, even though the projects under discussion rarely resemble those scenarios. They usually conveniently ignore that in many generic development scenarios it ends up being among the slowest because the heavier weight of getting things done at all prevents a coherent evaluation of the performance issues.

For example, I work on an “industrial scale” type thing that “has to be written in C++” because “performance”. Never mind that 90% of the code would have been faster and more reliable in Perl, let alone something like Python or Lua. I know because I’m the one that has to, years after the fact, figure out why the beast is so amazingly slow in fairly expectable scenarios, and the original designers don’t really know how it works. Yes, there is a 10% critical set of code that does have to be fast. Over half of that is actually in C, because it has to talk to the system APIs heavily anyway.

For performance-only type considerations, C++ does have its sweet spot. It’s the king of the hill at the moment for extremely high performance numeric-only computations, with the right pieces for parallelism and SIMD exploitations worked to a fine point not available elsewhere. Of course, a large percentage of people who actually need that type of performance write their code in things like R, Python, and so on, and gain the performance benefits down in libraries they never see.

For game programming it’s not selected because it’s actually the fastest for the types of jobs involved. It’s selected because it’s the default. Network effects of tooling, developers, and projects encourage selection of the language because others use it. Network effects are pretty powerful and foolish to ignore. Even though C++ is actually a poor fit for writing the entirety of a game in, in practice it may be easier than using a more reasonable language, if you have trouble finding the talent, tools, and process to support that language.

For example, I work on an “industrial scale” type thing that “has to be written in C++” because “performance”. Never mind that 90% of the code would have been faster and more reliable in Perl, let alone something like Python or Lua. I know because I’m the one that has to, years after the fact, figure out why the beast is so amazingly slow in fairly expectable scenarios, and the original designers don’t really know how it works.

One of the problems with C++ is that it means different things to different people.

And it sounds like your C++ is very different from the C++ used where I work. Bad C++, written by people who aren’t sufficiently competent with the language is *very* different from modern C++. Both in terms of efficiency, reliability and maintainability.

And saying “the C++ language is bad because I have to work with badly written C++ code at work” is a pretty silly statement.

As I’ve said above, C++ does have a lot of very real problems, both with the core language and with how it is taught and with most of the tutorials and textbooks for it and how people use it and misuse it.

There is a lot of horrific C++ code around, and a lot of people base their opinion of C++ on that code.

Your claims that it’s bad only in this one case are another commonly repeated C++ myth.

The problem isn’t that the C++ quality is bad, the problem is that the tasks are poorly suited to C++.

* Business logic
* Pathname manipulation
* Inferences about string content, especially UTF8 strings
* simple manipulation of a small set of items in large containers relating to relatively fiddly logic
* Subtly polymorphic items (eg values that can be a single or a list and go back and forth) where the surrounding logic must perform large amounts of code bulk statements (STL!) in the middle of attempting to express the core logic

This is definitely one of those cases that should have been done in a split-development model with a mix of high level code that is more terse and low level code that contains implementations. Of course doing that on 10 platforms is a bitch.

Yes, that’s why I call the site a bit silly (and why it calls itself a “game”).

I’m not sure I fully buy network effects here. Yes, they have significant influence, but new engines do get written, and a lot of libraries (perhaps not so much in the professional gamedev world?) are C APIs, which are pretty much the most open to other languages there are. Generally speaking, skills transfer across languages too. If C++ was chafing that hard, I’d expect at least some tales of engines written in something else, but as far as I know, Unity, UnrealEngine, and id Tech are C++; I’m not aware of any commercial exceptions (although some may be plain C). The same goes for the open-source engines below, which have much less commercial “play it safe” risk.

So, I just deleted a bunch of text in this reply to avoid nitpicking. Those really are some noteworthy strengths of the language, and I’ll leave it there. I just wanted to say that I don’t actually have a significant amount of experience with the STL. I believe that it’s awesome and all, but… isn’t it a bit, y’know, odd to praise the collections of an imperative language?

This is a data collections system provided for a systems programming language. I mean, really, if C++ isn’t a systems programming language then it’s a total failure, so let’s just discuss it in that context.

Okay, now the STL very handily provides allocation and deallocation code for you. Great. So we can put very large amounts of our data in these partially automatically managed structures and not pay the price of manually allocating and deallocating data for every item.

Okay, so now we’ve got our industrial strength system where everything is going swimmingly with great oceans of code. Only… our memory is always going up in hard to identify ways. I know, we really need to hook the allocation and deallocation functions and use some proxy to track who’s wasting memory with runtime tracking.

Only by callsite we would just find out all our memory is in use by the STL. So that won’t work. And there’s no reasonable way to build a sane sized tracking datastructure with backtraces. Oh, I know, we need to provide specialized versions of new and delete for our STL objects that let us provide some category atoms.

Only the STL does not support this, at all. (FD: some limited support that won’t work at all before C++11, some semi-functional stuff now but still broken and not implemented on most platforms).

So… there’s really no way to do sane memory tagging using the STL in C++ in a cross-platform way.

Really. The projects that do this all just dump the STL entirely or completely hack it up (and thus suffer craaazy portability burdens).

There are other problems I have with the STL, but this is the biggest one. C++ forces you to manage memory manually, leading to bugs, and doesn’t provide the very basic plumbing necessary to flush those bugs out in production.

Yeah, there’s some great tooling that relies on running in debug mode, or special compiles, or being attached to by a debugger, etc., and the Solaris tools are actually good enough that we can use them at runtime in production in sensitive environments. But no one is using Solaris anymore, and it requires a restart anyway, so likely as not we’ll never find the problem that way. And anyway, those tools would work if my project were written in compiled brainfuck, OCaml, or really anything that generates stack frames and calls malloc().

May you never have to do large scale long running “industrial strength” things using the STL. It’s kind of a trap right now.

Perhaps. But that might just be because it is odd for a language to actually get it *right*.

The STL and its containers aren’t the reason why C++ is a worthwhile language. They’re a symptom, or a showcase. They show that the core language *allows* you to write clean, readable and efficient code such as the STL. And it is pretty neat.

… no it isn’t? It does contain some collections, but that’s only one of the three pillars of the STL.
The other two are the algorithms and the iterators. None of those involve memory allocation at all, and none of them require us to use the STL containers. So even if your criticisms of the containers were valid (they’re not), the rest of the STL would still be useful and usable.

… provided for a systems programming language. I mean, really, if C++ isn’t a systems programming language then it’s a total failure, so let’s just discuss it in that context.

What? I don’t follow that premise at all.

First, a lot of people are using it for application programming without any problems.
Second, this discussion was about C++ in a game dev context — which is most certainly application programming.

Okay, so now we’ve got our industrial-strength system where everything is going swimmingly with great oceans of code. Only… our memory is always going up in hard-to-identify ways

Uh… Is it? Not in my experience.

C++ forces you to manage memory manually

No it doesn’t. But people who don’t know the language well often *try* to manage memory manually.

May you never have to do large-scale, long-running “industrial strength” things using the STL. It’s kind of a trap right now.

I kind of do that every day at work. And I can’t recognize any of the problems you describe.

Have you considered that perhaps you just don’t know the language as well as you think you do?

C++ has manual memory management. If you deny this you are using some private definition that is useless.

The assertion: “people use tool x for job y, therefore tool x is good for job y” does not stand on its own.

The STL’s iterators together with their collections and algorithms form a coherent set of tools for dealing with bags of data/objects. I don’t know why you’re challenging this. In practice the algorithms prior to C++11 are too inflexible or unwieldy to be practical most of the time. How I would love it if this were not true.

If you have never seen memory going up unexpectedly then one of the following is likely true:

* you are not working on codebases with millions of lines
* you are not working on systems that have very flexible input sets and workloads

Probably both. Any system with a large enough size will have enough complexity to have unexpected bugs appear. And any system with flexible workloads selected by customers will have flaws exposed by those workloads. I get the joy of both.

Oh, and another denial technique: “despite the fact that you’ve done extensive research in an area of the language that I claim to not have experience dealing with, I’m going to suggest that you’re probably just not very good at it”.

I could say the same to you, but I’m an optimist and assuming that you’re not intentionally trolling.

C++ has manual memory management. If you deny this you are using some private definition that is useless.

C++ has lots of things. That doesn’t mean good code has to *use* it.

Good C++ code does not contain manual memory management, even though the C++ language makes it *possible* to use manual memory management.

If I see calls to `delete` or `free`, then I know the C++ code has serious problems. A rare occurrence in destructors is okay, but more than that is a clear sign of bad code.

The STL’s iterators together with their collections and algorithms form a coherent set of tools for dealing with bags of data/objects. I don’t know why you’re challenging this

Because they can be used separately as well. Because I can use STL algorithms on a plain C array, or on any other collection type, via iterators.

In practice the algorithms prior to C++11 are too inflexible or unwieldy to be practical most of the time. How I would love it if this were not true

I disagree, but that is subjective. I have coworkers who agree with you. It comes down to (1) familiarity with the available algorithms, (2) the complexity of the operation you’re performing and (3) your willingness to write a function object or a separate function. I think everyone can agree that there are cases (in C++98) where the STL algorithms are fine and useful, and situations where they’re too unwieldy. Where, between those two extremes, you choose to draw the line is obviously subjective.

If you have never seen memory going up unexpectedly then one of the following is likely true:

I have seen memory go up unexpectedly. But I have seen it happen in every language I’ve used, and I haven’t seen it happen *more often* in C++.

Nor have I found it harder to diagnose and fix in C++ than in other languages.

Probably both. Any system with a large enough size will have enough complexity to have unexpected bugs appear. And any system with flexible workloads selected by customers will have flaws exposed by those workloads. I get the joy of both.

Yes, so do I.

Oh, and another denial technique: “despite the fact that you’ve done extensive research in an area of the language that I claim to not have experience dealing with, I’m going to suggest that you’re probably just not very good at it”.

That is not denial, it is a simple inference.

I know for a fact that C++ can be used well. You describe a number of symptoms commonly associated with “bad” C++ code. And from that, I infer that “presumably, the C++ code you are dealing with is bad”.

Now, if you knew that C++ could be better, if you knew that the problems you were facing were due to the quality of the code base, and not the language, then you would obviously blame the code base and not the language.

So from this, I infer that you do not know better; that the code you are working with is in fact what you think C++ is like.

Now sure, I could be wrong. I haven’t seen the code you work with, and I haven’t seen the kind of code you write — nor have I extensively quizzed you on your knowledge of C++.

But these are reasonable inferences based solely on the information you have given.

If you want to talk about denial techniques, perhaps take a look at your own responses, and your own *refusal* to believe that there are people on this planet who experience fewer problems with C++ in large complex code bases than you do.

If you complain that following a cake recipe gives you burnt and inedible cake, and if I have followed the same recipe and gotten a delicious and moist cake out of it, then I am going to question whether you followed the recipe correctly. That is not trolling, and it is not denial.

If you complain that C++ *cannot* be used well to build reliable, efficient and maintainable applications at large scale, and if I have seen exactly that be done and in fact work with such an application every day, then I am going to question whether you and your workplace are using C++ correctly. That is not trolling, and it is not denial.

Yeah, this is pretty big news. I think the licensing for UDK was previously 25% of earnings? After the app store takes their 30%, that leaves you with only 45%. So having the licensing go down to only 5% is big. As for $19 a month, that’s a lot better than Unity’s $75 per month, plus $75 for the ability to export to iOS and yet another $75 for Android.

I think it still just boils down to what features you want and what your preference in engine is. Both Unity and Unreal have free versions to get you started to see if game dev is your thing—

Whoa, wait… scratch that. All the UDK websites have been removed, even the download section. I did some looking on the UDK forums and it seems like they are dumping UDK. No free version of the engine (unless you got grandfathered in), so you HAVE to pay a subscription fee if you want to learn the Unreal engine from now on.

For people who can’t pay $19 a month to dabble with game dev (especially teenagers, or anyone super strapped for cash), they are still going to use Unity. Epic will probably get a ton of money from people who can pay it and want to learn their engine, but it looks like there’s now no free option for the Unreal engine. That kinda sucks.

Anyway, using a free version of the engine will let you know whether you should even bother investing money into it. And once you’ve figured out which engine is right for you, the cost won’t really matter that much. Doing the math, you’ll need to gross $30,000 off an Unreal game for the 5% royalty to match the $1,500 licensing fee for Unity Pro.

I could argue which is better financially in the long run, but if you don’t make anything then it doesn’t matter. The game engine that lets you make a game quickly and easily is the right game engine in the long run, especially at only a 5% royalty. Whichever game engine you are most familiar with is going to be the right one, so for those who get the free version of Unity because that’s all they can afford, and that’s what they learn, that’s what they’re going to use.

Yeah, there absolutely are people who wouldn’t want to spend (or couldn’t afford) the $19 per month when they’re not sure anything will ever come of it – people just starting out, hobbyists, wannabe indie devs, etc. For those people, using UE just got more expensive, and they’re probably gonna choose Unity free now.

Epic probably realize that 99% of people who tried UDK never made a game or a cent with it, and they want to monetize those people, but in the end they might just turn loads of people away from using UE.

A lot of things I really hate about console games are really Unreal Engine shortcomings. Unreal is at this point an “authoring tool”… some use it as a tool to make something with the value of ugly graffiti on a wall, and other people go and use it to make stuff like TwoBrothers.

So in the end, maybe I should not hate Unreal; it’s just an empowering tool.

Is this the only business model for UE4, or is it in addition to the traditional lump sum and/or negotiated percentage approach? Because 5% of gross for a AAA game is a huge amount of money – millions and millions of dollars. Obviously this is at the extreme, but if CoD were built on this model they’d have to pay Epic on the order of $50m for each title in the series.

Actually, I’m pretty sure AAA games that negotiate with Epic have royalties of at least 5%, probably even up to 25%. I wish someone would leak the actual numbers, but my reasoning is that in most games where the devs had to make their own engine, the engine itself took about half the man-hours of the total project. Getting rid of all that dev time is worth the upfront money and royalties.

More than 25% of gross? Because that would mean they would lose money on every single retail copy without even spending anything other than licensing fees. Check the FAQ – this 5% isn’t coming out of the developer’s or even the publisher’s share – it’s coming out of total revenue “regardless of what company collects the revenue”.

Having said that, I’ve just spotted that the FAQ also says you can in fact negotiate custom licenses with Epic. Makes sense.

Who says over 25% of gross? It would only be up to 25% of gross. Publishers take home about 45% of a $60 retail game sale (on console, anyway), so yes, that would be about half of their cut. Even then I could still see a game company willing to do that. If a good, solid, mostly bug-free engine takes that much dev time, and if a studio can instead put that same money into getting their game out sooner, not have to wait for their own in-house next-gen-looking engine to be finished, and reap the benefits of whatever new tech Epic comes out with during game development (for free), then yes, an executive would be willing to put half of their profits into that.

The UDK royalty rate was previously 25%; I imagine that would have been the highest royalty fee Epic would ever have charged a AAA title, so in reality they would have negotiated somewhere between that 25% and the 5% they are doing now.

My guess is that they have either always negotiated with AAA titles for 5% with a huge upfront fee, or 5% is a lot lower than it was before. Why? Because Unity is starting to look DAMN good, and if your game is going to make millions, not having to pay any royalty fee means you will make millions more. I mean, Unity 5 has global illumination. Freaking GI! It’s harder and harder to tell the difference between Unity and Unreal in terms of graphics now, and I’m sure Epic has already been losing business to game makers deciding to use Unity for their games, especially in the mobile market. Even Blizzard decided to use Unity for Hearthstone. That says something.

This is very interesting. I always felt one of the main drawbacks of using a closed engine was that these engines are made for a limited set of gameplay verbs. Opening the source code allows one to either modify it far more deeply or, if need be, learn from what one sees and develop one’s own engine. I learned a lot about programming going through Source’s code; it has spurred me on in a way no self-learning could have done.

I actually think what would help the development of new games (not bigger-and-more-AAA and not indie-but-bigger-and-more-AAA) is a lot of open source code. Engines should be more like tools, less like lifestyles, and being able to develop the code needed for new forms of gameplay should be part of a developer’s capacities.

But that is my utopic dream, for the average ‘I want to make my own shooting game with my own type of space swords’ developer that has little meaning.

Well, open-source components exist. Irrlicht’s documentation isn’t terrible. You can rattle off things like CrystalSpace, OGRE, Bullet (physics), ODE (also physics), and in the 2D space Box2D (did I mention physics?), SDL, SFML… and some higher-level things like Love2D if you want more than just C/C++ libraries.

I’m not aware of anything good and open-source for networking (as in, helping you to get game state replication right, not just the cross-platform lobbing of packets back and forth).

I do not see how this relates to my comment, since some people do want to make multiplayer games, but I currently am not aware of good open-source higher-conceptual-level library support for this, in the same way Irrlicht will give you not only an OpenGL context but a whole scene graph.

Oh, absolutely! I did not mean to sound like I thought there was no open source code whatsoever. I am happy there is, and the examples you give are but the tip of the iceberg.

The ‘scene’ for libraries is (depending on your language) really good for many things but a few libraries does not an engine make. I think we would benefit from having more games open source to other developers as a whole, not just components.

But let me not repeat such statements without mentioning the libraries that do exist, indeed.

Yeah… that’s one thing id was good at. The Quake source has led to quite a few people dabbling with hobby FPSes, with the odd experiment that turned out neat, like that one where you had to direct magnetic rockets.

Unfortunately I suspect a lot of actual complete game code isn’t wonderful for modification or education for lack of clear design and documentation.

Very this. I’ve played Unreal Engine games that ran and played like crap because the developers didn’t optimize right and had a lot of unfixed bugs. An engine doesn’t make a game better; it just gives you some things for free (importing art assets, the rendering pipeline, etc.), but it cannot give you everything for free. If it did, you would have things like incredibly inefficient garbage collectors. The devs just have to do some things on their own.

I checked this with the Epic folks, and it’s entirely within their licensing agreement to pay the fee for one month and then play around with the source code/tools for as long as you want if you’re not making money with it. You’ll still have access to the documentation and such, you just won’t be able to pull in new engine changes without an active subscription.