Of all the practical applications that come to mind for hosting, my personal favorites are those related directly to video games. As a lifelong gamer, I learned just about everything I know about technology out of a motivation that was at least tangentially related to playing the latest games. When I first started working in the industry, my passion for games became an easy avenue for me to learn about our various servers and related technology. And because I had grown up in an era when online gaming was still in its infancy, I was already acutely aware, from a user’s perspective, of just how vital good networking and dedicated hardware on the host’s end were to an enjoyable multiplayer experience.
Dedicated servers, both physical and virtualized, have not always been as readily available or as affordable as they are today, which meant that game developers had to get creative about how they connected players over the internet. Networking computers of all kinds in real time over great distances was never a challenge exclusive to gamers and developers, and the networking methods and technology implemented in gaming are largely the same as those found in other applications. But for me at least, and I’m sure for many of my fellow gamers, multiplayer gaming has always provided the most tangible insight into the strengths and weaknesses of those methods.
If you play any modern online multiplayer games, there’s a good chance you’ve heard the terms “P2P” (peer-to-peer) and “dedicated servers” a lot, and you’ve probably heard that the latter of the two is the better option. The truth isn’t quite so simple, however, as each presents different challenges and benefits, and it’s worth looking at both to better understand why a developer might choose one over the other.
For developers, deciding which networking strategy to use for a multiplayer game frequently comes down to a balance between monetary budget and desired player experience, and we’ll see later why peer-to-peer networking can be significantly less expensive than dedicated. First, it’s important to understand what the impact of the chosen networking method is for the end users, the people who are playing the game.
Lag, latency, and tickrates are terms that come up frequently as gamers and developers alike attempt to quantify the online experiences that people have during an online match. Lag is frequently the result of the other two, and while most online gamers have likely used the term itself and can easily intuit the meaning of the word, it’s worth looking closer at what actually causes a game to feel “laggy.”
Latency (i.e., the amount of time that it takes for a client machine to send and receive data to and from the host) can be influenced by a number of factors, and the distance of the host to the clients is one of the most notable. If the host machine is significantly closer to one player than it is to the others, that player is likely to have lower latency and therefore a more responsive gaming experience.
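To put some rough numbers on that, physics alone puts a floor under latency: signals in fiber travel at roughly two-thirds the speed of light, so distance sets a hard minimum on round-trip time before any routing, queuing, or processing delay is even counted. A quick back-of-the-envelope sketch (the fiber factor and distances are illustrative):

```python
# Back-of-the-envelope estimate of the physical floor on latency.
# Signals in fiber travel at roughly two-thirds the speed of light,
# so distance alone sets a hard lower bound on round-trip time.

SPEED_OF_LIGHT_KM_S = 299_792        # km per second, in a vacuum
FIBER_FACTOR = 2 / 3                 # rough propagation speed in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a fiber path."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# A player 4,000 km from the host can never see less than ~40 ms of
# round-trip latency, no matter how good either side's hardware is.
print(round(min_round_trip_ms(4000)))   # → 40
```

Real-world latency is always higher than this floor, which is exactly why hosting providers place servers in multiple regions: the only way to cut that baseline is to shorten the distance.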
The tickrate of an online game is less organic than latency and is specifically defined by the game developer. In the simplest terms, the tickrate of a game is the regularity with which the host machine communicates with each of its clients. As opposed to singleplayer gaming, in which a single machine is responsible for computing and tracking any number of gameplay variables (e.g., player location, enemy location, projectile trajectory, etc.), multiplayer gaming is significantly more complex.
While each player’s client machine is effectively tracking the same kinds of variables and rendering them graphically upon the player’s monitor or television, it is simultaneously communicating those variables to the host, which is also receiving this communication from all the other clients in the match. It’s the host’s job to ensure that all players remain in sync, and so it attempts to reconcile all the player data it receives and, if necessary, correct their variables at regular intervals so that all players remain on the same page. The regularity at which this process occurs is the host’s tickrate.
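A minimal sketch of that reconciliation loop might look like the following, where the host drains every queued client input once per tick and then emits one consolidated snapshot back to all clients. The names and the movement model here are illustrative, not any real engine’s API:

```python
# Minimal sketch of an authoritative host loop running at a fixed tickrate.
# Each tick, the host applies every queued client input and then broadcasts
# one consolidated snapshot back to all clients.

TICKRATE = 30                      # ticks per second (30 Hz)
TICK_INTERVAL = 1.0 / TICKRATE     # seconds between snapshots

def run_ticks(num_ticks, input_queue, state, snapshots):
    """Drain inputs and emit one snapshot per tick (a real server would
    sleep or spin to hold TICK_INTERVAL between iterations)."""
    for tick in range(num_ticks):
        while input_queue:
            player, dx = input_queue.pop(0)
            state[player] = state.get(player, 0) + dx   # apply movement
        snapshots.append((tick, dict(state)))           # broadcast point

state, snapshots = {}, []
run_ticks(3, [("p1", 5), ("p2", -2)], state, snapshots)
print(len(snapshots), state["p1"], state["p2"])   # → 3 5 -2
```

The key property is that the snapshot, not any individual client’s view, is the authoritative version of the match; clients that disagree with it get corrected.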
A low tickrate means that the host is syncing up all players with one another less frequently, which increases the likelihood of discrepancies between what the player expects to happen and what the game determines has happened. This is visible (and frustrating) in online shooters when a player fires their weapon at an enemy and perceives that the shot connects, but the enemy player takes no damage. What’s really happening in these situations, in simplified terms (the reality can be far more complex), is that while the attacking player’s machine is rendering the enemy’s position in a certain part of the game map, the host machine knows that the enemy has since changed position and has not yet relayed that information back to the client.
Higher tickrates mean that players are more regularly brought in sync with one another, but at a greater cost in bandwidth for the host and all of its clients. For example, Overwatch, the popular shooter by Blizzard Entertainment, received an update not long after it launched last year that allowed players to opt into host servers with their tickrates set to 60 Hz, doubling the standard 30 Hz that was present at the game’s launch. Players who opted in, and whose internet connections were capable of supporting the increase, experienced more responsive actions in the game, but many reported (as you might expect) that their data transfer doubled as a result.
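The arithmetic behind that doubling is straightforward: traffic per client scales linearly with tickrate. A toy calculation, assuming a placeholder snapshot size rather than Overwatch’s actual wire format:

```python
# Rough per-client bandwidth arithmetic for a tickrate bump. The packet
# size is an assumed placeholder, not any real game's wire format.

SNAPSHOT_BYTES = 500               # assumed size of one state update

def bytes_per_second(tickrate_hz: int) -> int:
    """Downstream traffic one client receives at a given tickrate."""
    return tickrate_hz * SNAPSHOT_BYTES

low, high = bytes_per_second(30), bytes_per_second(60)
print(low, high, high / low)       # → 15000 30000 2.0
```

Whatever the real snapshot size is, doubling the tickrate doubles the per-second traffic, for the host as well as for every client.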
When a gamer experiences lag in-game, a combination of high latency and/or a low host tickrate might be to blame, assuming that the gamer’s machine is powerful enough to render what they’re playing at the target frame rate. As they’re designing a multiplayer experience for their players, game designers consider both of these factors and choose a networking approach to take. Increasing the tickrate of their hosts means significantly increasing the load those servers must bear both in terms of bandwidth and computational horsepower, and reducing an average player’s latency means ensuring that they’re geographically near enough to the host to ensure a speedy connection.
Peer-to-peer networking offers a number of benefits to developers and, potentially, for players in terms of average latency and in the overall cost of maintaining an online game.
There are no dedicated hosts in peer-to-peer networking. Instead, one of the players’ machines that enters into a match serves the role of both host and client. The host might be chosen via an algorithm in an attempt to target an optimal experience for the other players in the game, or it might be more randomized. In some cases, the host duties may migrate from one player to another during the course of a match to help ensure an experience that feels fair for everyone involved.
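One hypothetical version of that selection algorithm is to pick the peer whose average measured latency to every other peer is lowest. Real matchmakers weigh many more signals (NAT type, uplink bandwidth, connection stability), but a sketch might look like this:

```python
# Hypothetical host-selection pass for a peer-to-peer lobby: pick the
# candidate whose average ping to every other peer is lowest.

def pick_host(pings):
    """pings maps each peer to its measured latency (ms) to every
    other peer; return the peer with the lowest mean latency."""
    def avg(peer):
        others = [ms for other, ms in pings[peer].items() if other != peer]
        return sum(others) / len(others)
    return min(pings, key=avg)

lobby = {
    "alice": {"bob": 40, "cara": 60},
    "bob":   {"alice": 40, "cara": 30},
    "cara":  {"alice": 60, "bob": 30},
}
print(pick_host(lobby))            # → bob
```

Host migration amounts to re-running a pass like this mid-match when the current host’s connection degrades or the host leaves.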
It’s important to consider that because the responsibilities of hosting the game fall to one client at any given time, the host-client is likely to experience gameplay that feels much more responsive than it does for other players: there’s effectively no latency in the host’s connection to itself, and it’s the first machine to see the most up-to-date, in-sync version of the match’s consolidated player data. The host-client player might see his or her actions (like a shotgun blast or a sniper headshot) connect with the enemy more consistently as a result.
Perhaps the greatest benefit afforded by peer-to-peer networking is its low cost to support and maintain. A game’s developer or publisher need not worry about maintaining an expensive network of servers for online multiplayer because the players take care of that themselves, and that benefits players and producers alike. If a multiplayer game’s longevity depends on its publisher’s ability to fund dedicated servers, the game will likely outlast its profitability, which means it will inevitably be taken offline at some point (at least officially). That’s less likely to be the case for peer-to-peer networked games, and it’s for this reason that even decade-old games are able to continue operating long after their popularity has faded. Halo 2 on the original Xbox is one such example: it served an active community of online players until Microsoft eventually pulled the plug on the original Xbox Live service entirely.
Peer-to-peer networking was especially commonplace in online console gaming until relatively recently, particularly following the launch of Halo 2, which largely standardized online matchmaking for console gamers when it launched in 2004. Today it’s still found in a large number of online games on both PC and console. Destiny, the MMO-like shooter by Bungie, utilizes peer-to-peer networking for its PVP (player versus player) game modes, and Mass Effect: Andromeda recently launched with peer-to-peer networking in place for its cooperative-style online multiplayer.
That said, dedicated servers are becoming increasingly commonplace in online multiplayer games. It’s important to note that this approach to networking isn’t at all new to gaming, although the expectation that game producers should provide most or all of the dedicated hardware is. Historically, many of the most popular online games provided support for player-owned dedicated servers, which often could be browsed either by a third-party client or within the game itself by players seeking a suitable match. Many of these games even offered support for community-created modifications that could be enabled by the server administrator for very customized play experiences. Over the years, Nethosting has provided a number of player-administrated game servers for titles like Half-life and Minecraft as well as for third-party gaming services like Teamspeak.
By and large, dedicated servers are considered the superior approach to online multiplayer networking by the gaming community because they offer a more reliable and fairer experience for all players. There is no inherent host bias because the host machine is not also a player, and the server’s resources are entirely and equally available to all players. It’s a level playing field, which is essential to highly competitive players.
The high cost to maintain and support this infrastructure means, unfortunately, that dedicated servers have historically been a feature limited largely to triple-A game producers. That said, modern cloud computing tech and practices have come a long way in offsetting these costs. In an OpenStack-based environment, for example, virtual dedicated servers can be spun up and down dynamically to meet the particular need or load of an application such as online gaming. This potentially means that game producers can offset the overall cost of their dedicated server environment by, say, paying for only what resources they need at any given time.
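As a sketch of what that pay-for-what-you-need model implies, a toy autoscaling policy might size the fleet to the current player count, clamped between a warm floor and a cost ceiling. The capacity figure and bounds here are assumptions for illustration, not OpenStack defaults:

```python
# Toy autoscaling policy: size the game-server fleet to current player
# load, within a floor and a ceiling. Capacity and bounds are assumed
# values for illustration only.

PLAYERS_PER_SERVER = 120           # assumed capacity of one instance
MIN_SERVERS, MAX_SERVERS = 2, 50   # keep a warm floor, cap the spend

def desired_servers(active_players: int) -> int:
    """Instances needed for the current load, clamped to fleet bounds."""
    needed = -(-active_players // PLAYERS_PER_SERVER)   # ceiling division
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

print(desired_servers(0), desired_servers(1000), desired_servers(10000))
# → 2 9 50
```

In practice a policy like this would be evaluated periodically against live metrics, with the cloud platform spinning instances up or down to match the result.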
Even so, top-tier devs typically reserve dedicated-server-based networking for fast-paced, highly competitive games like first-person shooters and MOBA (multiplayer online battle arena) games, where it’s most essential that every online match feels as fair as possible for all players. Blizzard’s Overwatch employs dedicated servers, for example, and supports tickrates up to 60 Hz so that there’s relatively little disconnect between what players perceive happening on their screens and what the server actually detects. And in some cases, developers may even incorporate a hybrid solution. Where Destiny (as we mentioned before) utilizes peer-to-peer networking in its competitive game modes, its vast online game world relies on dedicated hardware to store player inventories and character data and to host its PVE (player versus environment) game modes.
As a player, having a basic understanding of these networking models for online multiplayer games can be helpful when trying to troubleshoot poor performance in a game or simply to better understand just how sophisticated modern multiplayer games are. For developers and producers, understanding the benefits and weaknesses of each model is essential, and it ultimately comes down to a question of what the target experience is for each of your game’s players.