mirror of https://github.com/genxium/DelayNoMore (synced 2025-04-11 01:21:15 +00:00)

Compare commits (3 commits):

- 7e89d703ba
- dc1e6d3e09
- 28e5c18f00
```diff
@@ -1,6 +1,8 @@
+Please refer to [DelayNoMoreUnity](https://github.com/genxium/DelayNoMoreUnity) for a Unity rebuild with .net backend.
+
 # Preface
 
 This project is a demo for a websocket-based rollback netcode inspired by [GGPO](https://github.com/pond3r/ggpo/blob/master/doc/README.md).
 
 [Demo recorded over INTERNET (Phone-Wifi v.s. PC-Wifi UDP holepunched) using an input delay of 6 frames](https://pan.baidu.com/s/1UArwqDShLoPjYppjjqsTqQ?pwd=10wc), and it feels SMOOTH when playing!
 
```
```diff
@@ -23,7 +25,7 @@ As lots of feedbacks ask for a discussion on using UDP instead, I tried to summa
 # Notable Features
 - Backend dynamics toggle via [Room.BackendDynamicsEnabled](https://github.com/genxium/DelayNoMore/blob/c582071f4f2e3dd7e83d65562c7c99981252c358/battle_srv/models/room.go#L147)
 - Recovery upon reconnection (only if backend dynamics is ON)
-- Automatically correction for "slow ticker", especially "active slow ticker" which is well-known to be a headache for input synchronization
+- Automatic correction for "slow ticker", especially "active slow ticker" which is well-known to be a headache for input synchronization
 - Frame data logging toggle for both frontend & backend, useful for debugging out of sync entities when developing new features
 
 _(how input delay roughly works)_
```
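The caption above refers to the classic input-delay scheme used by rollback netcodes: input sampled at render frame `i` is only applied at frame `i + delay`, giving remote inputs time to arrive. A minimal sketch of the idea (illustrative only; `INPUT_DELAY_FRAMES` and the other identifiers here are hypothetical, not the repo's actual names):

```js
// Minimal input-delay sketch (identifiers are hypothetical).
const INPUT_DELAY_FRAMES = 6; // matches the "input delay of 6 frames" in the demo link above

const inputBuffer = new Map(); // frameId -> encoded input

function onLocalInputSampled(currentFrameId, encodedInput) {
  // Schedule the input for a FUTURE frame, giving the network time to
  // deliver it to peers before anyone needs to simulate that frame.
  inputBuffer.set(currentFrameId + INPUT_DELAY_FRAMES, encodedInput);
}

function getInputForFrame(frameId) {
  // Frames with nothing scheduled just see "no input".
  return inputBuffer.has(frameId) ? inputBuffer.get(frameId) : 0;
}

// Usage: a button pressed while rendering frame 100 takes effect at frame 106.
onLocalInputSampled(100, /* encoded "jump" bit */ 1);
console.log(getInputForFrame(106)); // 1
```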
```diff
@@ -1003,7 +1003,7 @@ fromUDP=${fromUDP}`);
       const inputFrameId = inputFrame.inputFrameId;
       const peerEncodedInput = (true == fromUDP ? inputFrame.encoded : inputFrame.inputList[peerJoinIndex - 1]);
       if (false == self.allowRollbackOnPeerUpsync && inputFrameId <= renderedInputFrameIdUpper) {
-        // [WARNING] Avoid obfuscating already rendered history, even at "inputFrameId == renderedInputFrameIdUpper", due to the use of "INPUT_SCALE_FRAMES" some previous render frames might already be rendered with "inputFrameId"!
+        // [WARNING] Avoid obfuscating already rendered history, even at "inputFrameId == renderedInputFrameIdUpper", due to the use of "INPUT_SCALE_FRAMES" some previous render frames might've already been rendered with "inputFrameId"!
         continue;
       }
       if (inputFrameId <= self.lastAllConfirmedInputFrameId) {
```
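For context on the `<= renderedInputFrameIdUpper` comparison: one input frame spans several render frames (scaled by `INPUT_SCALE_FRAMES`), so an input frame equal to the upper bound may already have been consumed by earlier render frames. A hedged sketch of the mapping implied by the comment (the shift-based conversion and the constant's value are assumptions here, not copied from the repo):

```js
// Hypothetical render-frame -> input-frame mapping implied by the
// "INPUT_SCALE_FRAMES" comment; the real conversion lives in the repo.
const INPUT_SCALE_FRAMES = 2; // 1 input frame covers 2^2 = 4 render frames (assumed value)

function convertToInputFrameId(renderFrameId) {
  return renderFrameId >> INPUT_SCALE_FRAMES;
}

// Render frames 8..11 all map to input frame 2, so once render frame 8 is
// drawn, mutating input frame 2 would rewrite already-rendered history --
// hence the early "continue" at "inputFrameId == renderedInputFrameIdUpper".
for (let rdf = 8; rdf <= 11; rdf++) {
  console.log(rdf, "->", convertToInputFrameId(rdf)); // all print "-> 2"
}
```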
```diff
@@ -1022,12 +1022,11 @@ fromUDP=${fromUDP}`);
         self.lastIndividuallyConfirmedInputList[peerJoinIndex - 1] = peerEncodedInput;
       }
       effCnt += 1;
-      // the returned "gopkgs.NewInputFrameDownsync.InputList" is immutable, thus we can only modify the values in "newInputList" and "newConfirmedList"!
-      const existingInputList = existingInputFrame.GetInputList();
+      // the returned "gopkgs.NewInputFrameDownsync.InputList" is immutable, thus we can only modify the value in "newInputList"!
       let newInputList = existingInputFrame.GetInputList().slice();
       newInputList[peerJoinIndex - 1] = peerEncodedInput;
-      let newConfirmedList = (existingConfirmedList | peerJoinIndexMask);
-      const newInputFrameDownsyncLocal = gopkgs.NewInputFrameDownsync(inputFrameId, newInputList, newConfirmedList);
+      const newInputFrameDownsyncLocal = gopkgs.NewInputFrameDownsync(inputFrameId, newInputList, existingConfirmedList);
       //console.log(`Updated encoded input of peerJoinIndex=${peerJoinIndex} to ${peerEncodedInput} for inputFrameId=${inputFrameId}/renderedInputFrameIdUpper=${renderedInputFrameIdUpper} from ${JSON.stringify(inputFrame)}; newInputFrameDownsyncLocal=${self.gopkgsInputFrameDownsyncStr(newInputFrameDownsyncLocal)}; existingInputFrame=${self.gopkgsInputFrameDownsyncStr(existingInputFrame)}`);
       self.recentInputCache.SetByFrameId(newInputFrameDownsyncLocal, inputFrameId);
```
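The `existingConfirmedList` and `peerJoinIndexMask` values in this hunk are per-player confirmation bits; the commit deliberately stops OR-ing the peer's bit in on the frontend (passing `existingConfirmedList` through unchanged), leaving confirmation to the backend. A minimal sketch of how such a bitmask marks an input frame all-confirmed (identifiers beyond those visible in the diff are hypothetical):

```js
// Each player occupies one bit: the player with joinIndex j owns bit (j - 1).
function peerJoinIndexMaskOf(peerJoinIndex) {
  return 1 << (peerJoinIndex - 1);
}

function isAllConfirmed(confirmedList, playerCount) {
  const allConfirmedMask = (1 << playerCount) - 1; // e.g. 0b1111 for 4 players
  return (confirmedList & allConfirmedMask) === allConfirmedMask;
}

// Usage: players 1 and 3 confirmed so far in a 4-player battle.
let confirmedList = peerJoinIndexMaskOf(1) | peerJoinIndexMaskOf(3); // 0b0101
console.log(isAllConfirmed(confirmedList, 4)); // false
confirmedList |= peerJoinIndexMaskOf(2) | peerJoinIndexMaskOf(4);   // 0b1111
console.log(isAllConfirmed(confirmedList, 4)); // true
```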
```diff
@@ -1048,6 +1047,15 @@ fromUDP=${fromUDP}`);
         self.networkDoctor.logPeerInputFrameUpsync(batch[0].inputFrameId, batch[batch.length - 1].inputFrameId);
       }
+      if (true == self.allowRollbackOnPeerUpsync) {
+        /*
+        [WARNING]
+
+        Deliberately NOT setting "existingInputFrame.ConfirmedList = (existingConfirmedList | peerJoinIndexMask)", thus NOT helping the move of "lastAllConfirmedInputFrameId" in "_markConfirmationIfApplicable()".
+
+        The edge case of concern here is "type#1 forceConfirmation". Assume that there is a battle among [P_u, P_v, P_x, P_y] where [P_x] is being an "ActiveSlowerTicker", then for [P_u, P_v, P_y] there might've been some "inputFrameUpsync"s received from [P_x] by UDP peer-to-peer transmission EARLIER THAN BUT CONFLICTING WITH the "accompaniedInputFrameDownsyncBatch of type#1 forceConfirmation" -- in such case the latter should be respected -- by "conflicting", the backend actually ignores those "inputFrameUpsync"s from [P_x] by "forceConfirmation".
+
+        However, we should still call "_handleIncorrectlyRenderedPrediction(...)" here to break rollbacks into smaller chunks, because even if not used for "inputFrameDownsync.ConfirmedList", a "UDP inputFrameUpsync" is still more accurate than the locally predicted inputs.
+        */
+        self._handleIncorrectlyRenderedPrediction(firstPredictedYetIncorrectInputFrameId, batch, fromUDP);
+      }
     },
```
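The WARNING block explains why `_handleIncorrectlyRenderedPrediction(...)` is still called: each early notice of a mispredicted input frame shrinks the span that must be re-simulated later. A rough sketch of that bookkeeping under assumed names (not the repo's implementation):

```js
// Hypothetical rollback bookkeeping: remember the EARLIEST input frame whose
// authoritative value differed from our local prediction.
let firstMispredictedInputFrameId = -1; // -1 means "no pending rollback"

function handleIncorrectlyRenderedPrediction(inputFrameId) {
  if (firstMispredictedInputFrameId < 0 || inputFrameId < firstMispredictedInputFrameId) {
    firstMispredictedInputFrameId = inputFrameId;
  }
}

function resimulateIfNeeded(currentInputFrameId, resimulate) {
  if (firstMispredictedInputFrameId < 0) return;
  // Re-run simulation from the earliest wrong frame up to "now". Reporting
  // mispredictions as soon as a UDP upsync arrives keeps this span short --
  // the "smaller chunks" benefit mentioned in the comment above.
  resimulate(firstMispredictedInputFrameId, currentInputFrameId);
  firstMispredictedInputFrameId = -1;
}

// Usage:
handleIncorrectlyRenderedPrediction(52);
handleIncorrectlyRenderedPrediction(50); // the earlier frame wins
resimulateIfNeeded(60, (from, to) => console.log(`rollback ${from}..${to}`)); // rollback 50..60
```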
```diff
@@ -100,7 +100,7 @@ bool RecvRingBuff::pop(RecvWork* out) {
     2. If "0 >= oldCnt", we need guard against another "pop" to avoid over-popping.
     */
     if (0 >= oldCnt) {
-        // "pop" could be accessed by either "GameThread/pollUdpRecvRingBuff" or "UvRecvThread/put", thus we should be proactively guard against concurrent popping while "1 == cnt"
+        // "pop" could be accessed by either "GameThread/pollUdpRecvRingBuff" or "UvRecvThread/put", thus we should be proactively guarding against concurrent popping while "1 == cnt"
        ++cnt;
        return false;
    }
```
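The `++cnt; return false;` path above is a compensating decrement-then-undo guard: `pop` first reads `oldCnt` (presumably via an atomic fetch-and-decrement in the C++ source), and if the counter was already at or below zero it restores the count and reports an empty buffer. A sketch of the same control flow, kept in JavaScript for consistency with the frontend code above (JS is single-threaded, so this only illustrates the logic, not the atomicity; all identifiers besides `pop`, `put`, and `cnt` are hypothetical):

```js
// Illustrative-only ring buffer mirroring the C++ pop-guard control flow.
// In the real code "cnt" is manipulated atomically across GameThread and
// UvRecvThread; plain JS numbers stand in for that here.
class RecvRingBuff {
  constructor(capacity) {
    this.slots = new Array(capacity);
    this.capacity = capacity;
    this.head = 0;
    this.cnt = 0;
  }

  put(item) {
    if (this.cnt >= this.capacity) return false;
    this.slots[(this.head + this.cnt) % this.capacity] = item;
    this.cnt++; // in C++ this would be the atomic publish step
    return true;
  }

  pop() {
    const oldCnt = this.cnt--; // C++: atomic fetch-and-decrement
    if (0 >= oldCnt) {
      // Buffer was already empty: undo the decrement so a concurrent
      // popper never sees a bogus negative count, then report failure.
      this.cnt++;
      return null;
    }
    const item = this.slots[this.head];
    this.head = (this.head + 1) % this.capacity;
    return item;
  }
}

// Usage:
const rb = new RecvRingBuff(4);
rb.put("udpPacket#1");
console.log(rb.pop()); // "udpPacket#1"
console.log(rb.pop()); // null (guard path: count restored, nothing popped)
```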