Google’s Plan to Overcome Stadia Latency Issues May Involve Playing the Game for You
One of the problems facing Google Stadia — beyond Google’s habit of slaughtering every project that doesn’t become an instant, industry-dominating hit — is that there’s latency intrinsically baked in to using it. There’s simply no way to send data over hundreds or thousands of miles without latency attached. This may not be a problem if you live near a Google Stadia data center, but it’ll definitely degrade the experience to some degree for everyone who doesn’t.
According to Google, it has a plan to beat this problem: an approach that will supposedly allow games to respond faster when streamed remotely than they do when running locally.
How will it achieve this? By using extensive machine prediction to model what players are going to do, and then doing it for them before they actually do it. Google calls this “negative latency.” In some cases, the game might accelerate the displayed frame rate to reduce lag between button inputs and displayed outcomes. In others, it may predict button presses before the player actually commits to them.
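Google hasn’t published any technical detail on this, so take the following as nothing more than my own illustration of the concept: a toy predictor that guesses the next button press from the player’s recent input history, using a first-order Markov chain. The class and its methods are invented for the example.

```python
# Hypothetical sketch of input prediction; not Google's actual implementation.
# Models the player's next button press as the most frequent follow-up to the
# current one, based on their observed input history (a first-order Markov chain).
from collections import Counter, defaultdict

class InputPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # last input -> counts of next inputs
        self.last_input = None

    def observe(self, button: str) -> None:
        """Record a real input as it arrives from the player."""
        if self.last_input is not None:
            self.transitions[self.last_input][button] += 1
        self.last_input = button

    def predict(self) -> str | None:
        """Guess the next input; None if there's no history to go on."""
        counts = self.transitions.get(self.last_input)
        if not counts:
            return None
        guess, _ = counts.most_common(1)[0]
        return guess

predictor = InputPredictor()
for button in ["jump", "attack", "jump", "attack", "jump"]:
    predictor.observe(button)
print(predictor.predict())  # "attack" -- the usual follow-up to "jump"
```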
It’s not clear if this technology is functional yet. Tests of Google Stadia have often identified latency as an issue, including a PC Gamer report from back in March. And while this approach may help make Stadia successful, there are serious questions about whether it’s suitable for every type of game.
There doesn’t seem to be any additional information about this aspect of Stadia available, but there are some questions I’d like to know the answers to, including:
- When would Stadia attempt to guess at player input? Which games would specifically use this feature? How and where would it be used? Does the game only predict movement, or does it also predict actions?
- Is this feature intended to be used in multiplayer gaming?
- What happens when Stadia mispredicts player input? How does the game compensate for a missed guess without introducing additional latency?
- Do games allow a false positive — a prediction of a “success” that the player doesn’t achieve — to stand, or does the game throw out this prediction? When this occurs, how does it impact latency and gameplay?
- Does this require support at the game engine level, or is this handled entirely via Google Stadia’s ‘special sauce’?
At a high level, this idea sounds similar to what a CPU does when it speculatively executes instructions. It may be possible to run a game forward very slightly and send the expected outcome of a player action first. If the player doesn’t take that action, the game can send the less-expected outcome at the “normal” time. I don’t know if this is an accurate description of how the system will work, but even if it’s possible, how well it works will depend on how effectively Google can predict player actions.
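To make the analogy concrete, here’s a rough sketch of what server-side speculation could look like, reusing the toy predictor from above. It assumes game state can be cheaply snapshotted and rolled back, which is a big assumption in its own right; nothing here reflects Stadia’s actual architecture.

```python
# A rough sketch of server-side speculation, analogous to CPU branch prediction.
# Assumes game state can be snapshotted and restored cheaply; a real engine
# would need explicit support for this kind of rollback. All names are mine.
import copy

def speculative_step(state, predictor, simulate, send_frame, receive_input):
    """Run one tick speculatively.

    simulate(state, input) -> new state; send_frame(state) ships video to the
    player; receive_input() blocks until the player's real input arrives.
    """
    snapshot = copy.deepcopy(state)        # save state in case the guess is wrong
    guess = predictor.predict()
    if guess is not None:
        state = simulate(state, guess)     # run the game forward on the guess
        send_frame(state)                  # ship the expected outcome early
    actual = receive_input()               # the input the player really made
    if actual != guess:
        state = simulate(snapshot, actual) # misprediction: roll back and redo
        send_frame(state)                  # corrected frame at normal latency
    predictor.observe(actual)              # keep training on real inputs
    return state
```

The payoff structure is clear: a correct guess means the frame shipped before the input even arrived (the “negative latency” Google is touting), while a wrong guess costs normal latency plus a visible correction. Everything rides on the predictor’s hit rate.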
But there definitely seems to be an intrinsic tension here. Minimizing latency requires Stadia to predict outcomes within a larger window of time. Respecting player choice, and allowing the player to be meaningfully in control of their own actions, requires predicting outcomes within a smaller window of time (or having better tricks for hiding the impact of what you’re doing). Some games lend themselves more readily to this kind of prediction than others. You could analyze player performance in Beat Saber and “assume” whether a person will hit upcoming notes based on how they’ve played earlier in the song. It seems much harder to calculate whether Player 1 will hit Player 2 when Player 2 suddenly appears around a corner in a multiplayer match, and getting that prediction wrong could leave one player eating a lag spike because Stadia made the wrong call.
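If Google is doing anything along these lines, there’s presumably some kind of confidence gate deciding when speculation is worth the risk. A hypothetical version, with an arbitrary threshold:

```python
# Hypothetical confidence gate: speculate only when the prediction is likely
# enough to be worth the risk of overriding the player. Threshold is made up.
def should_speculate(counts: dict[str, int], threshold: float = 0.9) -> bool:
    """Speculate only if one input dominates the observed history."""
    total = sum(counts.values())
    if total == 0:
        return False
    return max(counts.values()) / total >= threshold

# A rhythm-game pattern is highly predictable, so speculation pays off:
print(should_speculate({"hit": 19, "miss": 1}))                # True
# A chaotic multiplayer firefight is not, so fall back to real latency:
print(should_speculate({"fire": 5, "dodge": 4, "reload": 3}))  # False
```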
Predicting inputs in a single-player game might be less of an issue for game balance, but pulling this kind of stunt in multiplayer could break titles. Multiplayer, however, is precisely where the impact of lag is most likely to be felt, and where people are most likely to want this kind of solution. And the question of how Google handles prediction in these cases is important, assuming it uses the tech in this instance at all.
I’m curious about the technology Google has developed for Stadia, but the company has a very heavy lift ahead of it. Google has blithely dismissed data caps as a limiting factor for adoption, despite fielding a service that chews through more bandwidth than any other streaming service. It asks players to pay full price for titles they do not own in any permanent fashion. It asks players to make this commitment despite its own well-demonstrated tendency to kill beloved products and services. And it’s making claims about how ‘negative latency’ will solve game-streaming issues, despite the intrinsic difficulty of predicting player input without a hiccup.
Google Stadia might revolutionize home streaming, but at this juncture, it seems far more likely to wind up dead in a Google graveyard in 2-5 years. Until the company specifically promises that players will have the option to take the goods they’ve purchased elsewhere, I’d avoid the service. When I last wrote about the issue of Stadia longevity, on July 19, 2019, the Google Cemetery listed a certain number of dead products; by October 9, that list had only grown. The average Google product lives about four years, so if Stadia can make it to 2024, it may be worth considering. Alternatively, Google could promise players that it will make Steam / Epic / GoG codes available for any games they purchase if it decides to close its servers, or commit in writing to keeping servers open for a minimum length of time, allowing people to decide if they want to buy in with a service guarantee. It seems unlikely that the company will do either.