Reply To: Star Citizen – General Discussions


#3570
dsmart
Keymaster

    Excerpt from my Condition Red blog, related to networking and instancing.

    DEATH BY A THOUSAND CUTS

    For Star Citizen, the elephant in the room in terms of tech is this notion that somehow a twitch-based game designed to be instanced – one which can’t even get more than 10 clients in a session without very bad things happening – is going to turn into an MMO. Back in Nov 2012 (when he was seeking funding for the project), Chris Roberts wrote this missive about multiplayer and instancing, and I have absolutely no doubt in my mind that this guy – who hadn’t made a game in almost 15 years at the time – really believed that what he was writing and dreaming about was in fact possible. Hint: it’s not. Like over 90% (at last count) of everything he has said/promised about this project in order to get funding, it’s pure and utter horse shit. And back in July 2015, one of the devs actually added his own thoughts, which made it painfully clear that not only were they winging it – which is the basis for R&D btw – but that they also had absolutely no clue how they were going to actually do it.

    As of this writing, not much has changed since then; neither in the underlying network architecture, nor in the instancing part of it.

    As an experienced software engineer, I can tell you – flat out – that inter-instance communication described in this manner – and for the game pitched – is not only improbable, it’s the sort of thing that fairy dust is made of. And we’re not talking about the ability for a database in one server instance to talk to another database (e.g. user) in another instance. That’s pretty trivial (we’ve done just that in Line Of Defense btw) and rudimentary. No, we’re talking about the ability for one game instance (A) with players to communicate with another game instance (B) that also has players, as that is the only way you’re ever going to get Tom on A to see/communicate with Harry on B. And before you even go that far, know this: for that to even work, you need a unified and persistent universe that acts as the “play” area for Tom and Harry.
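    To make concrete what that actually requires, here’s a toy sketch – hypothetical names throughout, with an in-process dict standing in for what would really have to be a distributed, low-latency store – of the minimum shape of the problem: for Tom on instance A to see Harry on instance B, both instances must read and write one authoritative world state, not merely pass messages back and forth.

        from dataclasses import dataclass, field

        @dataclass
        class Player:
            name: str
            position: tuple  # (x, y, z) in shared world coordinates

        @dataclass
        class WorldState:
            """The unified, persistent "play" area. Without this single
            source of truth, instances A and B have nothing to agree on."""
            players: dict = field(default_factory=dict)  # name -> Player

        class GameInstance:
            def __init__(self, instance_id: str, world: WorldState):
                self.instance_id = instance_id
                self.world = world  # every instance shares the same world state

            def spawn(self, player: Player):
                # Writing through to the shared world is what makes the player
                # visible to *other* instances, not just this one.
                self.world.players[player.name] = player

            def visible_players(self):
                return list(self.world.players.values())

        world = WorldState()
        a = GameInstance("A", world)
        b = GameInstance("B", world)
        a.spawn(Player("Tom", (0, 0, 0)))
        b.spawn(Player("Harry", (10, 0, 0)))
        assert any(p.name == "Harry" for p in a.visible_players())  # Tom sees Harry

    In a real twitch-based game, that shared read/write has to complete within a per-frame latency budget, for thousands of entities, across machines – and that’s exactly where the fairy dust comes in.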

    Before you say Eve Online has done it: don’t – they haven’t. If you’re a programmer, go ahead and read up on the EO architecture (12), which btw has been drastically improved upon over the years. That bespoke EO architecture was built from the ground up, as part of the engine, and for a specific game – a game that’s not twitch-based or anywhere near the fidelity of the seamless architecture that Star Citizen is shooting for.

    Simply put, without a seamless inter-instance communication backend, there is no Star Citizen MMO. Like ever. And while Chris was flat out of his depth and just making shit up, Alex, on the other hand, outlined how it could be done. Theoretically. See the difference between those two accounts of the same thing? While you’re at it, this is the list of games made with CryEngine. Count the number of standard MMO games which have actually been completed and released.

    As I write this blog in the middle of May 2016, not only do they not have a persistent universe to speak of, but they still have serious issues with instances hosting more than 10 clients. Not only that, as an instanced game, the chances of you and your buddies ending up in the same instance are next to nil. This is not a game whereby you fire up a server browser, join a server, then tell your friends to come to that server before it fills up. Nor is it a game whereby you can spin up your own private server – which they also promised btw.
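    If you want to see why, here’s a trivial illustration – not CIG’s actual matchmaker, just the obvious fill-to-cap assignment policy – of how an instanced backend splits up friends the moment an instance is nearly full:

        INSTANCE_CAP = 10  # the client ceiling discussed above

        def assign(instances: list, player: str) -> int:
            """Put the player in the first instance with room; open a new one if all are full."""
            for i, inst in enumerate(instances):
                if len(inst) < INSTANCE_CAP:
                    inst.append(player)
                    return i
            instances.append([player])
            return len(instances) - 1

        instances = [[f"p{n}" for n in range(9)]]  # one instance, 9/10 slots used
        print(assign(instances, "Tom"))    # 0 - Tom grabs the last slot
        print(assign(instances, "Harry"))  # 1 - Harry is forced into a new instance

    Unless the matchmaker understands parties – and can hold slots for them – your group lands wherever there happens to be room.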

    The sad part of all this? They were never supposed to be building an MMO to begin with. Somewhere along the line, despite saying it wasn’t an MMO, Chris decided they were going to build one after all. Just like that.

    When Line Of Defense was designed, right off the bat we knew what our networking architecture was going to be. We also knew that we wanted the flexibility of having either a standard MMO architecture, or a standard server browser based option for consoles – in the event that I allowed players to host private servers. And the world – in both cases – would be 100% persistent. In fact, it was designed in such a way that redundancy was key. We have client limits not only on specific scenes, but also on the entire cluster which runs a single world. We did this so that if one scene (e.g. Heatwave on the planet) in the world goes down, it doesn’t take the whole game world/cluster with it. Instead, everyone on Heatwave gets kicked out, and they can immediately rejoin the game and go to another scene (e.g. Frostbite) while Heatwave comes back up. And all the scene links (via jumpgate and DJP) are intelligent enough to prevent access to a dead scene, while allowing it again as soon as the scene is back up. So if you are in Frostbite or in Lyrius space, you can’t get to Heatwave while it’s down; but you have the rest of the game world to play in.

    So essentially, unless a cluster of servers running a world of 13 scenes (4 planets, 4 space, 4 stations, 1 carrier) suffers a catastrophic collapse, there will never be a case whereby people can’t connect to and play the game. And the beauty of it is that we can spin up entire clusters as needed.
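    For the programmers, the scheme above boils down to something like the sketch below. The scene and world names mirror the post; the client caps and the code itself are illustrative stand-ins, not LOD source.

        class Scene:
            def __init__(self, name: str, client_cap: int = 64):  # cap value assumed
                self.name = name
                self.client_cap = client_cap
                self.clients: set = set()
                self.up = True

            def crash(self) -> set:
                # A dead scene kicks its players; the rest of the cluster lives on.
                self.up = False
                kicked, self.clients = self.clients, set()
                return kicked

        class Cluster:
            """One cluster runs one world of 13 scenes."""
            def __init__(self, scene_names, cluster_cap: int = 512):  # cap assumed
                self.scenes = {n: Scene(n) for n in scene_names}
                self.cluster_cap = cluster_cap

            def can_travel(self, dest: str) -> bool:
                # Jumpgate/DJP links check scene health before letting anyone through.
                return self.scenes[dest].up

            def join(self, player: str, scene: str) -> bool:
                s = self.scenes[scene]
                total = sum(len(sc.clients) for sc in self.scenes.values())
                if not s.up or len(s.clients) >= s.client_cap or total >= self.cluster_cap:
                    return False  # scene down or cluster at peak: use another cluster
                s.clients.add(player)
                return True

        cluster = Cluster(["Heatwave", "Frostbite", "Lyrius-space"])  # 3 of the 13
        cluster.join("Tom", "Heatwave")
        cluster.scenes["Heatwave"].crash()
        assert not cluster.can_travel("Heatwave")  # links refuse a dead scene
        assert cluster.join("Tom", "Frostbite")    # but the world stays playable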

    And we have hardware servers – not cloud instances (Amazon | Google) – because not only is the game not instanced, it was 100% persistent right off the bat. We just built a “game” on top of it. We didn’t try to shoehorn in persistence as an afterthought, at a point where the critical engine work already done would have made it an insurmountable task. Further, hosting our own servers makes the most economic sense for a twitch-based game, because leasing or doing server co-lo ends up being cheaper and offers the most flexibility.

    This design also means that you and your friends can always meet anywhere in the game world; and even if a cluster is at peak – or is down – you can simply join up on another cluster, because for LOD, “server transfers” are a non-issue: you can take your character to any server cluster, at any time. All you have to do is logout, and log back in. Boom! You’re playing.
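    The reason that works is simple: character state lives in one persistent store, not on any particular cluster. A sketch (the store and field names are assumptions for illustration) of the entire “transfer”:

        CHARACTER_DB = {}  # stand-in for the central persistence database

        def logout(account: str, character: dict) -> None:
            # State is written back to the shared store, not kept on the cluster.
            CHARACTER_DB[account] = character

        def login(account: str, cluster: str) -> dict:
            # Any cluster can load the same character; there is no migration step.
            char = CHARACTER_DB[account]
            print(f"{char['name']} joins {cluster}")
            return char

        logout("tom01", {"name": "Tom", "credits": 5000, "scene": "Frostbite"})
        login("tom01", "cluster-2")  # a different cluster, same character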

    These are all the reasons why, when I designed the game world, I partitioned it as I did. I opted for redundancy and uptime over fidelity and seamless (oxymoron) bullshit. And if the game does well, not only does this design allow us to add more scenes to the current Lyrius planet, but also to any planet or any space sector. It would also allow us to eventually build out the entire game world that the IP (used in my Battlecruiser/Universal Combat/All Aspect games) is based on, while later adding staple features from my previous games – such as trading and mining – along with the capital ships (transport, cruiser, carrier) needed for those features.

    First rule of game development: choose or build an engine specific to the game you’re making; not the other way around.