
Conversation

copperpixel (Contributor) commented Dec 30, 2025

Description

This PR makes NPC NextBots such as tf_zombie and the MvM/Halloween bosses respect the cl_interp_npcs value for interpolation rather than using the base cl_interp. Previously, this ConVar only affected HL1, HL2, and ASW NPCs.

The upside of this change is that it lets you keep a lower interp for players while giving NPCs a higher value so their animations won't be laggy. The old behavior can be restored by setting cl_interp_npcs lower than cl_interp, which makes NPCs fall back to the base value again.

The cl_interp_npcs ConVar definition has also been updated to be more in line with cl_interp.
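The fallback described above can be sketched like this (a minimal standalone sketch; the helper name is hypothetical, not the actual engine code): an NPC's interpolation amount is cl_interp_npcs only when that exceeds the base interp, otherwise the base value wins.

```cpp
#include <algorithm>

// Hypothetical helper illustrating the selection: cl_interp_npcs only
// takes effect when it is higher than the base cl_interp; setting it
// lower restores the old behavior (base value used for NPCs too).
float NpcInterpAmount( float flBaseInterp, float flNpcInterp )
{
	return std::max( flBaseInterp, flNpcInterp );
}
```

With the settings from the videos below (cl_interp 0.03, cl_interp_npcs 0.1), NPCs would interpolate over 0.1 seconds while players stay at 0.03.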

Both of the comparison videos below were recorded with cl_interp_ratio 2;cl_interp 0.03;cl_interp_npcs 0.1.

Before:

cl_interp_nb_before.webm

After:

cl_interp_nb_after.webm

wgetJane (Contributor) commented Dec 31, 2025

can you check if it gets lagcomped correctly? set it to 0.5 secs of interp and shoot at a moving target with sv_showlagcompensation set to 1

ficool2 (Contributor) commented Dec 31, 2025

> can you check if it gets lagcomped correctly? set it to 0.5 secs of interp and shoot at a moving target with sv_showlagcompensation set to 1

Only players are lag compensated; adding lag comp for NPCs should be another PR

copperpixel (Contributor, Author) commented Dec 31, 2025

Replying to #1730 (comment)

This^. I'm going to address that later by pulling the lag compensation system changes from Alien Swarm.

wgetJane (Contributor) commented Dec 31, 2025

> Only players are lag compensated, adding lag comp for them should be another PR

i wasn't aware that skeletons weren't lagcomped (i don't really play halloween stuff much)

if they were made to be lagcomped, afaik the game currently won't take cl_interp_npcs into account

this is from a different Source engine game (gmod), but i think the same behaviour will apply:

cl_interp 0.5
cl_interp_npcs 0

simplescreenrecorder-2022-05-17_23.00.01.mp4

cl_interp 0
cl_interp_npcs 0.5

simplescreenrecorder-2022-05-17_23.02.52.mp4

copperpixel (Contributor, Author) commented Dec 31, 2025

Replying to #1730 (comment)

Fixing this just requires some refactoring in CLagCompensationManager::StartLagCompensation: replace the compute-target-time logic with something like this:

	// Get true latency
	float flLatency = 0.f;
	INetChannelInfo *nci = engine->GetPlayerNetInfo( player->entindex() );

	if ( nci )
	{
		// add network latency
		flLatency = nci->GetLatency( FLOW_OUTGOING );
	}

	auto lambdaCalcTargetTick = [ & ]( int nLerpTicks )
	{
		// correct is the amount of time we have to correct game time
		float correct = flLatency;

		// add view interpolation latency, see C_BaseEntity::GetInterpolationAmount()
		correct += TICKS_TO_TIME( nLerpTicks );

		// check bounds [0, sv_maxunlag]
		correct = clamp( correct, 0.0f, sv_maxunlag.GetFloat() );

		// correct tick sent by player
		int targettick = cmd->tick_count - nLerpTicks;

		// calc difference between tick sent by player and our latency-based tick
		float deltaTime = correct - TICKS_TO_TIME( gpGlobals->tickcount - targettick );

		if ( fabs( deltaTime ) > 0.2f )
		{
			// difference between cmd time and latency is too big > 200ms, use time correction based on latency
			// DevMsg("StartLagCompensation: delta too big (%.3f)\n", deltaTime );
			targettick = gpGlobals->tickcount - TIME_TO_TICKS( correct );
		}

		return targettick;
	};

	float flBaseTargetTime = TICKS_TO_TIME( lambdaCalcTargetTick( TIME_TO_TICKS( player->m_fLerpTime ) ) );
	float flNpcTargetTime  = TICKS_TO_TIME( lambdaCalcTargetTick( TIME_TO_TICKS( player->m_fNpcLerpTime ) ) );

And make it use flNpcTargetTime when the compensated entity is an NPC.

CBasePlayer also needs to be extended with an m_fNpcLerpTime member, which should contain the clamped value of the client's cl_interp_npcs.
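The clamping step could be sketched as follows (a minimal standalone sketch; the function name and the use of sv_maxunlag as the upper bound are assumptions, mirroring how the base lerp time is bounded in lag compensation):

```cpp
#include <algorithm>
#include <cstdlib>

// Hypothetical sketch: derive m_fNpcLerpTime from the string value of the
// client's cl_interp_npcs ConVar (as reported by the engine), clamped to
// [0, flMaxUnlag] like the base lerp time.
float ClampNpcLerpTime( const char *pszValue, float flMaxUnlag )
{
	float flInterp = static_cast<float>( atof( pszValue ) );
	return std::clamp( flInterp, 0.0f, flMaxUnlag );
}
```

The stored member would then feed the lambdaCalcTargetTick path above whenever the compensated entity is an NPC.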
