Yeah, no freaking way this gets anywhere near a July 27th release. The whole schedule is bullshit. Back on May 26th, I wrote an article, based on leaks from sources, saying exactly that: the live schedule is pure fiction.

« Last Edit: June 09, 2017, 03:23:13 PM by dsmart »

Logged

Star Citizen isn't a game. It's a TV show about a bunch of characters making a game. It's basically "This is Spinal Tap" - except people think the band is real.

Either all you morons responding literally have IQs below 20, or you're white knighting so hard you can't even see it. I AM NOT TALKING ABOUT THE FUCKING SCHEDULE. I am TALKING about the fucking LIES told last year that they were shipping 3.0 by Dec 19. Which was an OBVIOUS lie. Then the fact they said JUNE, when they OBVIOUSLY planned to release it at Gamescom. It's fucking stupid for anyone to just think, oooh, wow, gee, what a coincidence!!!! I mean give me a goddamn break. For real. Stop being moronic. Schedule or NOT, they just mark whatever they want as delayed so WHOA, MAGIC, it will be JUST IN TIME FOR GAMESCOM. FFS, we all said it first.

I'm with ellindar. How was 3.0 supposed to release at the end of last year, given how many things are obviously not ready and how many features are slipping? You'd have to be a liar, or totally incompetent, to give these kinds of estimates. My guess is that CR is both: lying straight up about release dates to get more people on board and funding, then slowly spacing it out and buying time, as we're seeing here. He also seems completely detached from the technical side of things, promising stuff that's straight-up impossible and then gutting it out. I think there's been a ton of mismanagement on this project, and I'm losing hope fast. If 3.0 doesn't deliver something substantial, I'm bailing. I only pledged a few hundred, so whatever. Also, I don't give a shit about all the cultists who pledged thousands and are too deep in the hole to have any reasonable discussion.



So we've been told many times that the game's engine ('Star Engine') has been changed a lot from the original CryEngine (/Lumberyard) base, but the differences listed are usually to do with physics and gameplay features. My question is: What about the rendering/graphical side? How much has this been changed from the original CryEngine?

Quote

That's a mighty big question! Here's a quick list of the main features, but I'm sure we'll have forgotten some stuff!

Object space shader damage (allows 4 different types of damage to be permanently inflicted on ships, including cutting holes, and blended seamlessly into the base shading)

Real time environment-probe capture and compression (avoids needing to bake probes in space and on planets)

Image based lens flares (use entire source image to simulate 4 different physically based lens distortions per colour channel on up to 20 individual elements)

Human eye exposure simulation (capture histogram of light intensity from both screen and surroundings, isolate range of light we intend to focus on, simulate both pupil and photo-pigment reactions for quick and slow reactions)

Major improvements to planar lights (far more physical basis now which results in major quality improvements)

Intelligent mesh-merging system (repeatedly searches for best bang-for-buck mesh merge opportunity in a scene until we hit a memory limit)

GPU Particle System (built from the ground up for efficiency, distinct from Lumberyard's and CryEngine's GPU particle systems)

Various improvements to transparency sorting (generalized system; allows depth of field and motion blur to not affect nearby in-focus objects; order-independent transparency for specific shaders such as hair)

Artist friendly profiler (captures statistics per art-team, and per area of the level allowing accurate breakdowns and quick diagnosing of performance issues)

Physically based atmospheric scattering

Hierarchical object management (efficient searches and culling, local coordinate frames for things like ships inside ships on planets which are rotating etc)

On top of this there are procedural asteroids and the huge amount of tech for procedural planets, but strictly speaking these aren't so much part of the renderer as higher-level features that feed content to the renderer.

Ali Brown
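The "human eye exposure simulation" item above (build a histogram of scene luminance, isolate the range you care about, then adapt with fast and slow reactions) can be made concrete with a rough sketch. This is a hypothetical illustration of the general technique, not CIG's renderer code; all function names and constants here are made up.

```python
# Rough sketch of histogram-based auto-exposure with temporal adaptation.
# Hypothetical illustration of the idea described above, not CIG's code.

def luminance_histogram(pixels, bins=16, lo=0.0, hi=8.0):
    """Bucket scene luminance values into a fixed-size histogram."""
    hist = [0] * bins
    for p in pixels:
        i = int((min(max(p, lo), hi) - lo) / (hi - lo) * (bins - 1))
        hist[i] += 1
    return hist

def target_exposure(hist, lo=0.0, hi=8.0, trim=0.1):
    """Ignore the darkest/brightest tails, average the remaining range."""
    total = sum(hist)
    cut = total * trim
    lo_count, hi_count = cut, total - cut
    seen, acc, n = 0, 0.0, 0
    step = (hi - lo) / len(hist)
    for i, count in enumerate(hist):
        centre = lo + (i + 0.5) * step
        start, end = seen, seen + count
        seen = end
        kept = min(end, hi_count) - max(start, lo_count)
        if kept > 0:
            acc += centre * kept
            n += kept
    return acc / n if n else 1.0

def adapt(current, target, dt, speed_up=3.0, speed_down=1.0):
    """Pupil-like reaction: adapt faster to bright light than to dark."""
    speed = speed_up if target > current else speed_down
    return current + (target - current) * min(1.0, speed * dt)
```

A renderer would feed `adapt` each frame with the histogram-derived target, giving the slow dark-adaptation and quick bright-flinch behaviour the quote describes.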

« Last Edit: June 15, 2017, 08:37:31 AM by dsmart »



The hilarious part isn't that they've been touting this for the better part of a year, or that it was on the 3.0 schedule but was subsequently removed and no longer appears anywhere between 3.0 and 3.2; it's the fact that they're highlighting it as something so critical that it warrants a dev video. You know why that is? Because they made a big deal out of it as a way to hand-wave the game's piss-poor performance. Some backers latched onto this sleight of hand, foolishly thinking it was actually going to solve their FPS issues. In fact, take a look at my previous post above, which includes a recent Twitter post.

It's all rubbish. It's them preparing the backers for the same piss-poor networking layer in the upcoming - and already doomed - 3.0 build. The backend benefits are going to be negligible at best; it's certainly not going to mean much for client-side FPS performance, let alone allow them to cram more than 8 clients into an instance before the server coughs up mothballs and croaks. And aside from all that, this is something they should have done right from the start, or at least around the time they decided they were building an MMO after all.

This isn't even a case of putting the cart before the horse. There's just no cart. It's a JPEG of a cart.

Quote

Serialized variables is also a cornerstone of building a persistent universe, it’ll require multiple servers communicating with each other*

- This means several servers can be aware of an entity all at the same time, how they decide which one gets the final say is using tokens

- A token can only be held by one computer at a time, this means by linking serialized variables and tokens they’ll be able to transfer authority from one server to another as quickly as flicking a switch
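The token scheme in the quote (one holder at a time, authority transferred "as quickly as flicking a switch") can be sketched minimally. The class names (`AuthorityToken`, `Entity`) and the flow here are hypothetical illustrations of the stated idea, not CIG's actual API.

```python
# Minimal sketch of token-based authority over an entity, as described
# in the quote above. All names here are hypothetical, not CIG's code.

class AuthorityToken:
    """Only one server holds the token for an entity at any given time."""
    def __init__(self, entity_id, holder):
        self.entity_id = entity_id
        self.holder = holder

    def transfer(self, new_holder):
        """Authority handoff: the old holder loses write access instantly."""
        old = self.holder
        self.holder = new_holder
        return old

class Entity:
    def __init__(self, entity_id, token):
        self.entity_id = entity_id
        self.token = token
        self.state = {}

    def set_value(self, server, key, value):
        # Only the current token holder may mutate authoritative state.
        if server != self.token.holder:
            raise PermissionError(f"{server} does not hold the token")
        self.state[key] = value

token = AuthorityToken("ship_42", holder="server_A")
ship = Entity("ship_42", token)
ship.set_value("server_A", "hull", 0.9)   # fine: A holds the token
token.transfer("server_B")                # authority handoff
ship.set_value("server_B", "hull", 0.8)   # now only B is authoritative
```

The hard part, which this sketch conveniently skips, is doing that handoff across real machines without dropping or duplicating in-flight writes.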

I've seen the video... So basically they've centralized the serialization code (using an observer or a default setter) to serialize and propagate a single variable change?

In which world is this a revolution? And why is it being done now, and not from the start?

And does the legacy code of ALL the components and entities have to be refactored to use this new system? (Bug alert! Since nobody cared before whether a variable was synchronized, they'll have to serialize everything, or risk some code using a stale value.)

Okay, the first thing he's doing is mush-mouthing the concept of 'entities' (or objects) together with the idea that all the 'entities' (or objects) need to be networked. That's what we call a 'sweeping generalization': a very reductive model that is vaguely meaningless.

Handwave the backend communication, that's all Austin...

Client/Server communication, alright...

Okay, he's describing a problem of bandwidth and fidelity, where bandwidth is the limitation in supplying bits to the client/server and fidelity is the detail of the 'model' that it's trying to sling around. What they're doing is trying to offload the problem of a lack of awareness in the programmers into a formalized process.

Network programming is not *very hard*, though it takes years to learn the ins and outs (and stay the fuck out of the OSI model; it's layered for a reason). It's certainly _esoteric_, but it's entirely logical, and while the scale can blow your mind, it's patterned behavior that just iterates really quickly.

huh, API? Why is that man using an OO analogy? Why is he refusing to call it an interface?

Oh Application _Programmer_ Interface, not Application _Program_ Interface, like the rest of the world.

None of this is networking. He's completely handwaving anything to do with networking.

Every 'entity has its own table' is a key-value store. This is how anything persists: MySQL, NoSQL, Mongo, SQLite, CSV, a struct. This is not complex.

Updating single values within rows usually requires re-writing the row, although you can serialize a value ad infinitum; overhead comes in how it's stored.

Sending the full entity state periodically isn't required if you hash the entity table and compare hashes. E.g. the MD5 hash of 'I am a delicious hawaiian pizza' is always '9b019256aabd5ae063661e6b5b78b7db'. Hash your table, send the hash, compare it, and if it differs, send the full state.

Hell, I used to use a system of grouped states back in the 1990s; hashed the full store and three sub-groups because bandwidth was even tighter back then.
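The hash-comparison scheme described above is a few lines of code. This is a minimal sketch of the general idea (the variable names and the use of MD5 over a JSON dump are illustrative choices, not anything from the video):

```python
# Sketch of the hash-comparison idea above: hash the entity table,
# exchange hashes, and only push the full state when they differ.
import hashlib
import json

def table_hash(table):
    """Stable digest of a key-value entity table."""
    blob = json.dumps(table, sort_keys=True).encode()
    return hashlib.md5(blob).hexdigest()

server_state = {"x": 10, "y": 20, "shield": 0.75}
client_state = {"x": 10, "y": 20, "shield": 0.75}

# In sync: a single small hash exchange confirms it, no state transfer.
assert table_hash(server_state) == table_hash(client_state)

server_state["shield"] = 0.50   # something changed server-side

# Hashes differ, so (and only now) the full state gets pushed.
if table_hash(server_state) != table_hash(client_state):
    client_state = dict(server_state)
```

The win is that the steady-state cost is one digest per check, regardless of how big the table is; you only pay for a full transfer when something actually changed.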

"We have not solved the bandwidth problem."

No shit; you still have a ×n problem unless you're load-spreading and running the backbone over extremely low-latency connections, which Amazon is not going to help with.

OMIGOD, they discovered key updates.

HAH subgrouping.

Wait, so their big idea is to binary-pack a key-value pair and send it via UDP? That puts them back where they started; TCP would indicate that a value was received, but not changed. I'm not seeing how this isn't entirely circular back to the start of the piece, apart from the binary packing.
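For reference, binary-packing a key-value update is about as exotic as this: a fixed wire format of a key id plus a value, via `struct`. The layout here (2-byte key id, 8-byte double, network byte order) is an assumed example format, not whatever CIG actually uses.

```python
# Sketch of binary-packing a key-value update for the wire.
# The format is a hypothetical example: 2-byte key id + 8-byte double,
# network byte order ("!Hd" = 10 bytes total).
import struct

def pack_update(key_id, value):
    return struct.pack("!Hd", key_id, value)

def unpack_update(payload):
    return struct.unpack("!Hd", payload)

packet = pack_update(7, 0.5)       # e.g. hypothetical key 7 = shield
assert len(packet) == 10           # 10 bytes, versus a verbose text blob
key_id, value = unpack_update(packet)
```

Whether that 10-byte payload arrives at all is then entirely a transport question, which is the circularity being pointed out above.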

Lol at the macros being the timesaver.

Oh, dear god, previously we would have needed functions in each class, or read the chapter on polymorphism.

Admission that there's no unit testing that would stop a programmer 'forgetting' a variable.

The API is an _abstraction_ of the concrete class. I'm hoping C++ programmers can correct my shit, because there's no actual networking in this piece other than an acknowledgment that they're running into problems with physics. This is normally an _interface_ that strips a more anal definition or model down into a bite-sized chunk, so you don't spin cycles defining the color of something when you only need its speed. It's a way of stopping polymorphism hell.

Okay, their network engineer is describing object-oriented programming. He's also tacitly admitting that they have a problem with key/value updates at an atomic level, because there is finite bandwidth involved. One of the reasons this is a problem is that there are two types of packet in the world: TCP and UDP.

UDP is just a stream of shit. It's thrown at the client and can arrive in any order, or not arrive at all. The sending side doesn't care; it's your problem to store and deal with the stream. This tends to get used in games for positional updates, which is why tight bandwidth or throttling causes people to jump around and rubberband. UDP packets have a particular size and you want to stay inside that, or the client has to deal with truncation.

TCP is pedantic. It has to be received, and received in the right order. A client will ask for a sequence again if it arrives out of order, and if it misses a packet, it'll do the same. Downloading files works like this, because getting chunks of a file in the wrong order would make it useless.

If you're firing key/value updates over UDP, some of them may not get through. Firing over TCP means they get through, eventually. This is why there's an older model of using UDP for the updates, then having TCP rebuild the key/value store: small, frequent updates of vitals, with larger sync updates for abstracts.
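The hybrid model just described can be sketched on the client side: unreliable, unordered "UDP-style" deltas carry sequence numbers so stale ones are dropped, and a periodic reliable "TCP-style" full sync repairs whatever the deltas missed. A minimal sketch (class and method names are illustrative, not from any real engine):

```python
# Sketch of the hybrid model above: sequence-numbered unreliable deltas
# plus a periodic reliable full sync that repairs anything that was lost.

class ClientStore:
    def __init__(self):
        self.state = {}
        self.last_seq = {}          # newest sequence number seen per key

    def apply_delta(self, seq, key, value):
        """Unreliable path: drop stale (out-of-order) updates silently."""
        if seq <= self.last_seq.get(key, -1):
            return False            # older than what we already applied
        self.last_seq[key] = seq
        self.state[key] = value
        return True

    def full_sync(self, snapshot, seq):
        """Reliable path: authoritative snapshot overwrites everything."""
        self.state = dict(snapshot)
        self.last_seq = {k: seq for k in snapshot}

client = ClientStore()
client.apply_delta(1, "x", 100)
client.apply_delta(3, "x", 120)     # arrives out of order, ahead of 2
client.apply_delta(2, "x", 110)     # stale: silently dropped
assert client.state["x"] == 120

# The delta for "shield" was lost entirely; the periodic sync fixes it.
client.full_sync({"x": 120, "shield": 0.5}, seq=4)
```

Fast-changing vitals ride the cheap lossy path; the expensive reliable sync runs rarely and keeps the whole store honest.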

He is _completely right_ about the challenges they still face; the number, frequency and priority of the messages are _the things that will kill bandwidth_ and limit their instance occupancy, particularly unless they shard that server. They're looking at packet size × packet frequency × packet quantity × players, and they will shag themselves if they don't build in some of the things that Eve has been doing to calm the packetstorm.

very tl;dr - They've discovered how to do __get and __set over the network between the store and the client; news at 11. Get a goddamned refund.
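The "__get and __set over the network" jab can be made literal: a setter hook that serializes every attribute change into an outgoing update queue. A hypothetical Python sketch of that pattern (not CIG's serialized-variables code; the names here are invented):

```python
# Sketch of "__set over the network": a setter hook that captures every
# attribute write and queues it for the network layer. Hypothetical
# illustration of the pattern, not CIG's serialized-variable system.

class ReplicatedEntity:
    def __init__(self, entity_id, outbox):
        # Bypass the hook while wiring up the instance itself.
        object.__setattr__(self, "entity_id", entity_id)
        object.__setattr__(self, "outbox", outbox)

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        # Every subsequent write is captured and queued as an update.
        self.outbox.append((self.entity_id, name, value))

outbox = []
ship = ReplicatedEntity("ship_42", outbox)
ship.hull = 0.9
ship.hull = 0.8
# outbox now holds both writes, ready for the (absent) network layer.
```

Which is the point of the complaint above: intercepting property writes is a decades-old language feature, not a networking breakthrough.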

« Last Edit: June 16, 2017, 11:19:17 AM by dsmart »



Pretty much a spot-on description from the SA forum; hell, even The Sims Online used an API in exactly the same way between the clients and the server. If you want a real challenge, try doing that in an open-world online racing game. As he mentioned in his post, timing and order are everything.

These are not new techniques; in fact, I would say it's a little outdated, given that languages like Go handle this a lot better.

I found the token idea interesting, and it's an intriguing direction to go in. Constructively, I see three potential pitfalls:

1) Tokens on AWS: depending on the size of their EC2 instances, it becomes very difficult to filter out the noise between the master server and the listening servers. If you have five servers, four of them listening servers, then by definition those servers will be getting traffic from clients (except the master server, if you have any sense). If the token switches, that server has to drop all client data and then dynamically reallocate the clients to the server that passed on its token. If all five servers are sending and receiving traffic, the master effectively becomes swamped with clients at the same time as it loses its token = chaos. TL;DR: if they use tiered servers with tokens and dynamic switching, it will be very difficult to maintain consistency; if they don't, they will run into bandwidth issues anyway.

2) Why are they reinventing the wheel? There are already third-party CryEngine modules that handle this in pre-built form. That means a lot of time has been spent on something that has existed off the shelf for ages; and if it doesn't do what they want, they could easily modify it, since these modules are open source or the licensed versions provide the source.

3) This adds another layer to their engine. At this point, with so many layers, how will they debug? They mentioned it in the video, but now you run into the issue that the client (or one of the clients) may not necessarily know which server holds the token (or whether there are connectivity issues), in which case it becomes harder to troubleshoot and diagnose. That isn't bad in itself if you have a custom debugging toolset written for your platform, but again, you're adding time creating one. Thus more code to maintain (refactor, as CIG likes to say) from revision to revision of the custom network stack, as part of the custom modules, as part of their bespoke engine (Star Engine, i.e. Lumberyard with heavy customisation). More tech debt.

Quote

These are not new techniques; in fact, I would say it's a little outdated, given that languages like Go handle this a lot better.

Stop spreading FUD. Clearly you know nothing about game development.

Quote

1) Tokens on AWS: depending on the size of their EC2 instances, it becomes very difficult to filter out the noise between the master server and the listening servers. If you have five servers, four of them listening servers, then by definition those servers will be getting traffic from clients (except the master server, if you have any sense). If the token switches, that server has to drop all client data and then dynamically reallocate the clients to the server that passed on its token. If all five servers are sending and receiving traffic, the master effectively becomes swamped with clients at the same time as it loses its token = chaos. TL;DR: if they use tiered servers with tokens and dynamic switching, it will be very difficult to maintain consistency; if they don't, they will run into bandwidth issues anyway.

Yeah but, assuming they know what they're doing, they may end up sticking a proxy server somewhere in the mix. When we built our WSG framework, these are some of the things we - like most devs building massive online games - had to take into consideration.

Quote

2) Why are they reinventing the wheel? There are already third-party CryEngine modules that handle this in pre-built form. That means a lot of time has been spent on something that has existed off the shelf for ages; and if it doesn't do what they want, they could easily modify it, since these modules are open source or the licensed versions provide the source.

CryEngine doesn't provide adequate support for this, and is mostly entity-based. The last time I checked, even the LY implementation didn't go much farther down the rabbit hole. This is probably why they have to do it now, especially given that the game croberts is dreaming up simply isn't going to work with the baseline CE3/LY implementation.

Quote

3) This adds another layer to their engine. At this point, with so many layers, how will they debug? They mentioned it in the video, but now you run into the issue that the client (or one of the clients) may not necessarily know which server holds the token (or whether there are connectivity issues), in which case it becomes harder to troubleshoot and diagnose. That isn't bad in itself if you have a custom debugging toolset written for your platform, but again, you're adding time creating one. Thus more code to maintain (refactor, as CIG likes to say) from revision to revision of the custom network stack, as part of the custom modules, as part of their bespoke engine (Star Engine, i.e. Lumberyard with heavy customisation). More tech debt.

It's a lot worse than that, because they're doing it at this point in development, when they have a ton of things that would need to be changed. Those changes are not only going to cause delays, they're going to break things. In fact, this is why the 3.0 schedule keeps slipping, and why this portion of the networking layer revision was completely removed from the schedule. They've bitten off more than they can chew, and I have every reason to believe it's all R&D, and that at some point they're going to abandon it and stick with what they have in place.



The fact that the "aim date" remains July 27, 2017 even as things are delayed or removed (TBD) means they're probably going to ship 3.0 with some items cut, pushing them into 3.0x or 3.1 etc. It's 2.0 all over again. Also, Gamescom 2017 is coming up in August, and CitizenCon 2017 is Oct 27th - both in Germany. So either way, there is going to be some version of a 3.0 build at one of those two events (fundraising!).

- New Message Queue has "a number of issues noted" (no longer has an ETA)
- Repair - "Code Complete Bugfixing to follow as needed" (no longer has an ETA)

So...

- 4 reworks of things already finished, but still not actually completely "finished"
- a new golf cart marked complete, even though you can't transport it or use it on a planet
- and whatever the fuck "diffusion subset" means
- Actual gameplay features: 0

« Last Edit: June 17, 2017, 07:46:36 AM by dsmart »



It seems like it was just yesterday that I was saying they were never - ever - going to be able to build the world at the scope they promised; and that's aside from the fact that their tech requires them to build these moons/planets manually (no procedural tech to automate the process).

Now some hardcore backers, who already knew the original claim was bullshit to begin with, are saying that they knew it was bullshit, and that they'd be happy to get just some of them, etc.

Oh how far we've come.

A company with over $151m can't build tools to create a procedurally generated world. Meanwhile, my Battlecruiser/Universal Combat games, built over three decades, as well as current games like Elite Dangerous, Infinity Battlescape, and Dual Universe, all have that tech in some form or another.

It's amazing how far we've come from back in Aug 2016 when 3.0 was coming on or before Dec 19th. Now it's completely off the radar.

« Last Edit: June 19, 2017, 12:40:24 PM by dsmart »



"Grab a ticket to ensure your seat at the show. The 650 tickets for Capitol Theater are €50 each and will go on sale with the following format:

Saturday 1st July 7PM CEST: 150 Tickets available to Concierge and Subscribers only.
Saturday 1st July 11PM CEST: 150 Tickets available to Concierge and Subscribers only.
Sunday 2nd July 7PM CEST: 150 Tickets, now available to all backers.
Sunday 2nd July 11PM CEST: The remaining 200 Tickets available to all backers."