(breaking change)feat: Support element type-wise bias in property fitting#5322

Open
Chengqian-Zhang wants to merge 10 commits into deepmodeling:master from Chengqian-Zhang:0316_property_bias

Conversation


@Chengqian-Zhang Chengqian-Zhang commented Mar 17, 2026

Summary

Introduced the distinguish_types option for property fitting to support more flexible atom-contribution calculations.

  • distinguish_types=True (default) behaves consistently with energy fitting (if the property is extensive), calculating individual atom contributions based on each specific atom type.
  • distinguish_types=False treats atom contributions as element-agnostic. This is particularly useful for element-extrapolation scenarios where the model needs to generalize across different chemical species.
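The two modes above can be sketched as follows. This is a minimal numpy illustration of applying a fitted bias to per-atom predictions; the names (apply_bias, prop, atype, out_bias) are hypothetical stand-ins, not the actual DeePMD-kit API:

```python
import numpy as np

def apply_bias(prop, atype, out_bias, distinguish_types=True):
    """Add the fitted bias back onto raw per-atom property predictions.

    prop:     (nf, nloc, odim) raw predictions
    atype:    (nf, nloc) integer atom types
    out_bias: (ntypes, odim) fitted bias, one row per element type
    """
    if distinguish_types:
        # Per-type bias: each atom gets the bias of its own element type.
        return prop + out_bias[atype]
    # Element-agnostic: every atom shares the single global bias row.
    return prop + out_bias[0]

prop = np.zeros((1, 3, 1))            # 1 frame, 3 atoms, 1 property dim
atype = np.array([[0, 1, 1]])         # atom types per frame
out_bias = np.array([[1.0], [2.0]])   # one bias row per type
apply_bias(prop, atype, out_bias)     # atoms pick bias 1.0, 2.0, 2.0
```

Fancy indexing with `out_bias[atype]` broadcasts the (ntypes, odim) table to (nf, nloc, odim), which is why the per-type path needs no explicit loop over atoms.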

Key changes

  • deepmd/pt/model/task/property.py:
    • Add distinguish_types parameter.
    • Change the serialization version from 5 to 6.
  • deepmd/pt/utils/stat.py:
    • Refactored output_std: replaced the constant "ones" padding with actual standard-deviation values (required by apply_out_stat in deepmd/pt/model/atomic_model/property_atomic_model.py).
    • Optimized the RMSE logging/printing logic for better clarity.
  • deepmd/utils/out_stat.py:
    • Updated compute_stats_from_redu to include the intensive parameter.
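The intensive parameter can be illustrated with a small least-squares sketch. This only mirrors the natoms row-normalization described above; solve_bias and its variable names are hypothetical, not the real compute_stats_from_redu signature:

```python
import numpy as np

def solve_bias(output_redu, natoms, intensive=False):
    """Fit one bias per element type from frame-reduced labels.

    Extensive targets solve output_redu ≈ natoms @ bias directly;
    intensive targets first normalize per-frame type counts to fractions,
    so the fit is against composition rather than system size.
    """
    natoms = natoms.astype(np.float64)
    if intensive:
        natoms = natoms / natoms.sum(axis=1, keepdims=True)  # type fractions
    bias, *_ = np.linalg.lstsq(natoms, output_redu, rcond=None)
    return bias

natoms = np.array([[2, 1], [1, 3]])   # 2 frames, counts of 2 element types
true_bias = np.array([1.0, 3.0])
solve_bias(natoms @ true_bias, natoms)  # extensive path recovers true_bias
```

The intensive path fits the same per-type coefficients, but against a label that does not scale with the number of atoms in the frame.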

Summary by CodeRabbit

  • New Features

    • Added a distinguish_types option for property fitting and an intensive mode for statistics computation.
  • Bug Fixes

    • Output statistics can be applied per atom type when enabled; global standard deviations are now correctly broadcast to per-type shapes.
    • Normalization and RMSE reporting adjusted to handle intensive vs per-atom modes.
  • Tests

    • Added/updated tests covering intensive statistics and updated expected statistic values.

@Chengqian-Zhang Chengqian-Zhang marked this pull request as draft March 17, 2026 07:50
@dosubot dosubot bot added the new feature label Mar 17, 2026
@Chengqian-Zhang Chengqian-Zhang marked this pull request as ready for review March 17, 2026 09:50
@Chengqian-Zhang Chengqian-Zhang changed the title feat: Support element type-wise bias in property fitting feat (breaking change): Support element type-wise bias in property fitting Mar 17, 2026

coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a boolean distinguish_types flag to PropertyFittingNet, threads it through atomic models and stat utilities, bumps serialization version, changes per-key bias/std indexing to be per-type when enabled, and introduces an intensive mode affecting natoms normalization in output-stat computations.

Changes

Cohort / File(s) Summary
Property Fitting Net
deepmd/dpmodel/fitting/property_fitting.py, deepmd/pt/model/task/property.py
Add distinguish_types: bool = True to constructors, store as self.distinguish_types, add get_distinguish_types() getter, include in serialize(), bump serialization version to 6, and update deserialize() version check.
Atomic Model Implementation
deepmd/dpmodel/atomic_model/property_atomic_model.py, deepmd/pt/model/atomic_model/property_atomic_model.py
get_compute_stats_distinguish_types() now delegates to the fitting net; apply_out_stat() selects bias per atom-type (out_bias[kk][atype]) when distinguish_types is true, otherwise uses global index.
Stat Computation Utilities
deepmd/dpmodel/utils/stat.py, deepmd/pt/utils/stat.py, deepmd/pd/utils/stat.py
Change _post_process_stat_ to broadcast global std across types by tiling when shapes mismatch; thread an intensive flag through _compute_output_stats_global, altering natoms normalization, reconstruction/unbias logic and RMSE computation/logging.
Out-stat / Least-squares
deepmd/utils/out_stat.py
Add intensive: bool = False parameter to compute_stats_from_redu; when intensive is True normalize natoms by row sums before least-squares, changing resulting bias/std.
Config / Argcheck
deepmd/utils/argcheck.py
Add optional argument distinguish_types (bool, default True) to fitting_property() descriptor and supporting docstring.
Tests
source/tests/common/test_out_stat.py, source/tests/*/model/test_atomic_model_*_stat.py
Add intensive-path test for compute_stats_from_redu; adjust multiple tests' expected std/bias values and initialize additional expected std slices to reflect per-type broadcasting and distinguish_types behavior.
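The per-type std broadcasting described for _post_process_stat can be sketched as follows. broadcast_std is a hypothetical stand-in for the real helpers in deepmd/*/utils/stat.py; it only illustrates the tiling rule:

```python
import numpy as np

def broadcast_std(out_bias, out_std):
    """Tile a global std to match the per-type shape of the bias.

    out_bias[kk] has shape (ntypes, odim); when out_std[kk] is global
    (leading dim 1), repeat it along the type axis so bias and std
    can both be indexed by atom type.
    """
    new_std = {}
    for kk, vv in out_bias.items():
        if out_std[kk].shape == vv.shape:
            new_std[kk] = out_std[kk]
        else:
            ntypes = vv.shape[0]
            reps = [ntypes] + [1] * (vv.ndim - 1)
            new_std[kk] = np.tile(out_std[kk], reps)  # (1, odim) -> (ntypes, odim)
    return new_std
```

After this step, out_std[kk][atype] is always well-defined, which is what the type-indexed branch of apply_out_stat relies on.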

Sequence Diagram

sequenceDiagram
    participant Config as Config\n(ARGs)
    participant FittingNet as PropertyFittingNet
    participant Atomic as AtomicModel
    participant Stat as StatUtils

    Config->>FittingNet: Initialize(distinguish_types)
    Note over FittingNet: store & serialize distinguish_types (`@version`=6)
    Atomic->>FittingNet: get_distinguish_types()
    FittingNet-->>Atomic: return bool

    alt distinguish_types == true
        Atomic->>Stat: apply_out_stat per-type bias (out_bias[kk][atype]) + out_std
    else
        Atomic->>Stat: apply_out_stat global bias (out_bias[kk][0]) + out_std
    end

    Stat->>Stat: compute/unbias/reconstruct (intensive toggles natoms normalization)
    Stat-->>Atomic: adjusted outputs (bias/std applied)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • njzjz
  • wanghan-iapcm
  • iProzd
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 60.61%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title clearly and specifically describes the main feature: adding support for element type-wise bias in property fitting, which aligns with the extensive changes across multiple files introducing the distinguish_types parameter.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
deepmd/dpmodel/utils/stat.py (1)

481-487: ⚠️ Potential issue | 🟠 Major

Use the intensive reduction for model_pred too.

When intensive=True, the regression now works on type fractions, but model_pred_g is still formed a few lines above with a raw np.sum(..., axis=1). In change-by-statistic mode that subtracts/adds an extensive prediction against an intensive label, so the fitted delta bias is off by the frame atom count. Apply the same intensive reduction to model_pred before both stats_input construction and this reconstruction path.

Also applies to: 497-506

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/dpmodel/utils/stat.py` around lines 481 - 487, The model currently
sums per-atom predictions into model_pred_g with a raw np.sum(..., axis=1), but
when intensive=True the rest of the stats pipeline works on per-type fractions;
therefore apply the same intensive reduction used for stats_input to model_pred
(model_pred_g) so it becomes intensive (e.g., divide/type-normalize or use the
same per-type reduction function) before constructing stats_input and likewise
apply the intensive reduction in the reconstruction path where
bias_atom_e/std_atom_e are computed (the block that calls
compute_stats_from_redu and the earlier block that forms model_pred_g and
stats_input). Ensure you reuse the same reduction logic/utility used for
stats_input so both the stats construction and the bias/std reconstruction use
identical intensive aggregation when intensive=True.
deepmd/pt/utils/stat.py (1)

517-523: ⚠️ Potential issue | 🟠 Major

Normalize model_pred the same way as the intensive label.

This path now regresses on type fractions, but the global model_pred is still built from a raw atomic sum. In change-by-statistic mode that means the PT backend subtracts/adds an extensive prediction against an intensive label, so the learned delta bias is scaled incorrectly. Reduce model_pred with the same intensive rule before both the bias solve and this reconstruction.

Also applies to: 531-540

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt/utils/stat.py` around lines 517 - 523, The model_pred used in the
bias solve and subsequent reconstruction is still an extensive raw atomic sum
while the regression now targets intensive (type-fraction) labels; before
calling compute_stats_from_redu for both the bias solve (the call that sets
bias_atom_e/std_atom_e) and the reconstruction block (also at ~531-540),
reduce/normalize model_pred with the same intensive rule used for labels
(respecting the intensive flag and merged_natoms/assigned_atom_ener logic) so
that the inputs to compute_stats_from_redu and the later reconstruction use the
intensive-scaled model_pred; update the code paths that build model_pred (and
any variables passed as stats_input) to apply that reduction whenever intensive
is True so the change-by-statistic delta is scaled correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/dpmodel/atomic_model/property_atomic_model.py`:
- Around line 54-59: The condition incorrectly uses the bound method
self.get_compute_stats_distinguish_types without calling it and the per-type std
is not used; change the branch to call
self.get_compute_stats_distinguish_types() and, inside the loop over
self.bias_keys in the distinguished branch (where atype is available), index
out_std by atype (use out_std[kk][atype]) to match out_bias[kk][atype], so each
ret[kk] uses the per-type std and bias.

In `@deepmd/dpmodel/fitting/property_fitting.py`:
- Around line 93-99: Change the constructor parameter distinguish_types to be
optional (bool | None = None) in PropertyFitting (the __init__ where
task_dim/intensive/distinguish_types are set and the related block at lines
~136-147) so we can detect "missing" vs explicit True/False; set
self.distinguish_types to the explicit boolean when provided, but during
deserialization of older payloads (the load/from_dict/from_json routine that
handles payload versions) explicitly backfill distinguish_types=False for pre-v6
payloads; also update deepmd/utils/argcheck.py to use the same optional
default/behavior so both creation and arg checking are aligned.

In `@deepmd/pt/model/atomic_model/property_atomic_model.py`:
- Around line 52-57: The condition currently checks the bound method object
instead of calling it and uses a hard-coded out_std[kk][0] even when applying
per-type bias; change the predicate check to call
self.get_compute_stats_distinguish_types() and make the std scaling type-aware
in the true branch by using out_std[kk][atype] (while keeping the false branch
using out_std[kk][0] and out_bias[kk][0]). Update the loop over self.bias_keys
where ret[kk] is computed to reference atype for both out_std and out_bias when
the predicate returns True, and keep the existing [0] indexing for both when the
predicate returns False.

In `@deepmd/utils/out_stat.py`:
- Around line 50-51: Before performing the intensive normalization when
intensive is True, compute totals = np.sum(natoms, axis=1) and check for any
zero entries; if any zero-total frames exist, raise a ValueError listing the
offending frame indices (e.g., using np.where(totals == 0)[0]) instead of
dividing by zero, otherwise proceed with natoms = natoms / totals[:, None] so
broadcasting is explicit; reference natoms and the intensive branch in
out_stat.py.

In `@source/tests/common/test_out_stat.py`:
- Around line 145-169: The test test_compute_stats_from_redu_intensive is
feeding an extensive label (self.output_redu) into the intensive path; change
the label passed to compute_stats_from_redu to an actual intensive target (e.g.,
self.output.mean(axis=1) or self.output_redu / self.natoms.sum(axis=1,
keepdims=True)) when calling compute_stats_from_redu(..., intensive=True), and
update the assertions to compare the recovered bias against self.mean (or the
corresponding intensive reference) rather than the huge coefficients currently
asserted.

---

Outside diff comments:
In `@deepmd/dpmodel/utils/stat.py`:
- Around line 481-487: The model currently sums per-atom predictions into
model_pred_g with a raw np.sum(..., axis=1), but when intensive=True the rest of
the stats pipeline works on per-type fractions; therefore apply the same
intensive reduction used for stats_input to model_pred (model_pred_g) so it
becomes intensive (e.g., divide/type-normalize or use the same per-type
reduction function) before constructing stats_input and likewise apply the
intensive reduction in the reconstruction path where bias_atom_e/std_atom_e are
computed (the block that calls compute_stats_from_redu and the earlier block
that forms model_pred_g and stats_input). Ensure you reuse the same reduction
logic/utility used for stats_input so both the stats construction and the
bias/std reconstruction use identical intensive aggregation when intensive=True.

In `@deepmd/pt/utils/stat.py`:
- Around line 517-523: The model_pred used in the bias solve and subsequent
reconstruction is still an extensive raw atomic sum while the regression now
targets intensive (type-fraction) labels; before calling compute_stats_from_redu
for both the bias solve (the call that sets bias_atom_e/std_atom_e) and the
reconstruction block (also at ~531-540), reduce/normalize model_pred with the
same intensive rule used for labels (respecting the intensive flag and
merged_natoms/assigned_atom_ener logic) so that the inputs to
compute_stats_from_redu and the later reconstruction use the intensive-scaled
model_pred; update the code paths that build model_pred (and any variables
passed as stats_input) to apply that reduction whenever intensive is True so the
change-by-statistic delta is scaled correctly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 67e04055-7dc9-448e-aece-69b8d660d36c

📥 Commits

Reviewing files that changed from the base of the PR and between 4ce5c73 and a1e2642.

📒 Files selected for processing (9)
  • deepmd/dpmodel/atomic_model/property_atomic_model.py
  • deepmd/dpmodel/fitting/property_fitting.py
  • deepmd/dpmodel/utils/stat.py
  • deepmd/pt/model/atomic_model/property_atomic_model.py
  • deepmd/pt/model/task/property.py
  • deepmd/pt/utils/stat.py
  • deepmd/utils/argcheck.py
  • deepmd/utils/out_stat.py
  • source/tests/common/test_out_stat.py


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
deepmd/pt/model/atomic_model/property_atomic_model.py (1)

52-57: ⚠️ Potential issue | 🟠 Major

Use type-aware out_std in the distinguish-types branch.

In the True branch, bias is type-indexed (out_bias[kk][atype]) but std is still fixed to type 0 (out_std[kk][0]). This can silently ignore per-type std scaling.

Suggested fix
         if self.get_compute_stats_distinguish_types():
             for kk in self.bias_keys:
-                ret[kk] = ret[kk] * out_std[kk][0] + out_bias[kk][atype]
+                ret[kk] = ret[kk] * out_std[kk][atype] + out_bias[kk][atype]
         else:
             for kk in self.bias_keys:
                 ret[kk] = ret[kk] * out_std[kk][0] + out_bias[kk][0]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt/model/atomic_model/property_atomic_model.py` around lines 52 - 57,
The branch inside get_compute_stats_distinguish_types() incorrectly uses
out_std[kk][0] while indexing out_bias by atype, causing per-type std to be
ignored; update the distinguish-types branch so each kk uses out_std[kk][atype]
(matching out_bias[kk][atype]) when computing ret[kk] = ret[kk] *
out_std[kk][atype] + out_bias[kk][atype] so per-type scaling is applied
consistently for bias_keys.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@deepmd/pt/model/atomic_model/property_atomic_model.py`:
- Around line 52-57: The branch inside get_compute_stats_distinguish_types()
incorrectly uses out_std[kk][0] while indexing out_bias by atype, causing
per-type std to be ignored; update the distinguish-types branch so each kk uses
out_std[kk][atype] (matching out_bias[kk][atype]) when computing ret[kk] =
ret[kk] * out_std[kk][atype] + out_bias[kk][atype] so per-type scaling is
applied consistently for bias_keys.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 3e24f042-92e8-4de4-9ba6-5818891b4d97

📥 Commits

Reviewing files that changed from the base of the PR and between a1e2642 and f88ab82.

📒 Files selected for processing (2)
  • deepmd/dpmodel/atomic_model/property_atomic_model.py
  • deepmd/pt/model/atomic_model/property_atomic_model.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/dpmodel/atomic_model/property_atomic_model.py


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
deepmd/pd/utils/stat.py (2)

156-159: Consider adding a shape assertion to guard against unexpected std shapes.

The tiling logic assumes out_std[kk] has shape (1, ...) or is otherwise broadcastable. If out_std[kk] unexpectedly has a partial type dimension (e.g., shape (k, odim) where 1 < k < ntypes), np.tile would produce an incorrect shape (k * ntypes, odim) instead of (ntypes, odim).

🛡️ Proposed defensive assertion
         else:
             ntypes = vv.shape[0]
+            # Expect out_std to be global (no type dim) or have leading dim of 1
+            assert out_std[kk].shape[0] == 1 or out_std[kk].ndim < vv.ndim, (
+                f"Unexpected std shape {out_std[kk].shape} for bias shape {vv.shape}"
+            )
             reps = [ntypes] + [1] * (vv.ndim - 1)
             new_std[kk] = np.tile(out_std[kk], reps)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pd/utils/stat.py` around lines 156 - 159, Add a defensive shape check
before tiling in the block that computes new_std[kk]: verify the leading
dimension of out_std[kk] is either 1 or equals ntypes (where ntypes =
vv.shape[0]) and raise a clear AssertionError or ValueError if it is not; if it
is 1, proceed to construct reps and call np.tile as currently done, and if it
already equals ntypes skip tiling and assign directly to new_std[kk] (use the
symbols out_std, new_std, vv, ntypes, reps, np.tile to locate the code).

140-143: Type hints do not match actual usage.

The function signature declares paddle.Tensor parameters and return types, but the function actually operates on dict[str, np.ndarray] (as seen from the call at line 513 and the use of np.tile at line 159).

📝 Suggested type hint correction
 def _post_process_stat(
-    out_bias: paddle.Tensor,
-    out_std: paddle.Tensor,
-) -> tuple[paddle.Tensor, paddle.Tensor]:
+    out_bias: dict[str, np.ndarray],
+    out_std: dict[str, np.ndarray],
+) -> tuple[dict[str, np.ndarray], dict[str, np.ndarray]]:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pd/utils/stat.py` around lines 140 - 143, The _post_process_stat
function currently types its parameters and return as paddle.Tensor but actually
accepts and returns dict[str, np.ndarray] (it uses np.tile and is called with
dicts), so change the signature of _post_process_stat(out_bias, out_std) to use
dict[str, np.ndarray] for both parameters and the returned tuple; update any
imports or typing annotations accordingly and ensure callers (the code that
constructs/passes those dicts) and internal usages (e.g., np.tile) remain
consistent with the new dict[str, np.ndarray] types.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@deepmd/pd/utils/stat.py`:
- Around line 156-159: Add a defensive shape check before tiling in the block
that computes new_std[kk]: verify the leading dimension of out_std[kk] is either
1 or equals ntypes (where ntypes = vv.shape[0]) and raise a clear AssertionError
or ValueError if it is not; if it is 1, proceed to construct reps and call
np.tile as currently done, and if it already equals ntypes skip tiling and
assign directly to new_std[kk] (use the symbols out_std, new_std, vv, ntypes,
reps, np.tile to locate the code).
- Around line 140-143: The _post_process_stat function currently types its
parameters and return as paddle.Tensor but actually accepts and returns
dict[str, np.ndarray] (it uses np.tile and is called with dicts), so change the
signature of _post_process_stat(out_bias, out_std) to use dict[str, np.ndarray]
for both parameters and the returned tuple; update any imports or typing
annotations accordingly and ensure callers (the code that constructs/passes
those dicts) and internal usages (e.g., np.tile) remain consistent with the new
dict[str, np.ndarray] types.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 46e28ca9-002d-4a7c-88c7-734d7d066335

📥 Commits

Reviewing files that changed from the base of the PR and between f88ab82 and a100841.

📒 Files selected for processing (1)
  • deepmd/pd/utils/stat.py


codecov bot commented Mar 17, 2026

Codecov Report

❌ Patch coverage is 93.75000% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.33%. Comparing base (b2c8511) to head (3bdde1d).

Files with missing lines Patch % Lines
...epmd/dpmodel/atomic_model/property_atomic_model.py 66.66% 2 Missing ⚠️
deepmd/dpmodel/utils/stat.py 88.88% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5322      +/-   ##
==========================================
- Coverage   82.33%   82.33%   -0.01%     
==========================================
  Files         777      777              
  Lines       77814    77854      +40     
  Branches     3680     3678       -2     
==========================================
+ Hits        64068    64099      +31     
- Misses      12574    12581       +7     
- Partials     1172     1174       +2     



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
source/tests/pt/model/test_atomic_model_global_stat.py (1)

233-233: Consider adding spaces after commas for consistency with PEP 8.

The array literal lacks spaces after commas, which may be flagged by ruff format.

🔧 Suggested formatting
-        expected_std = np.array([[[0,1],[0,1]],[[1,1],[1,1]],[[0,0],[0,0]]]) # 3 keys, 2 atypes, 2 max dims.
+        expected_std = np.array([[[0, 1], [0, 1]], [[1, 1], [1, 1]], [[0, 0], [0, 0]]])  # 3 keys, 2 atypes, 2 max dims.

As per coding guidelines: "Always run ruff check . and ruff format . before committing changes or CI will fail."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/pt/model/test_atomic_model_global_stat.py` at line 233, The
array literal assigned to expected_std is missing spaces after commas and should
be formatted to comply with PEP 8/ruff (e.g., add spaces after each comma in the
np.array([...]) literal); update the expected_std definition to include spaces
after commas and then run `ruff format .` (or `ruff check .`) to ensure the file
passes style checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@source/tests/consistent/model/test_property.py`:
- Around line 1208-1210: Update the misleading comment above the assertion that
checks dp_observed: change the comment referencing "both types appear observed"
to reflect that only type 0 is present so dp_observed should be ["O"]
(mentioning stats_distinguish_types=False and the dp_observed variable so you
can find the block in the test for the property model).

---

Nitpick comments:
In `@source/tests/pt/model/test_atomic_model_global_stat.py`:
- Line 233: The array literal assigned to expected_std is missing spaces after
commas and should be formatted to comply with PEP 8/ruff (e.g., add spaces after
each comma in the np.array([...]) literal); update the expected_std definition
to include spaces after commas and then run `ruff format .` (or `ruff check .`)
to ensure the file passes style checks.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: b5919558-d0ac-4257-af68-ad702ea15fb3

📥 Commits

Reviewing files that changed from the base of the PR and between a100841 and 583d37a.

📒 Files selected for processing (10)
  • deepmd/dpmodel/atomic_model/property_atomic_model.py
  • deepmd/pt/model/atomic_model/property_atomic_model.py
  • source/tests/common/dpmodel/test_atomic_model_atomic_stat.py
  • source/tests/common/dpmodel/test_atomic_model_global_stat.py
  • source/tests/common/test_out_stat.py
  • source/tests/consistent/model/test_property.py
  • source/tests/pd/model/test_atomic_model_atomic_stat.py
  • source/tests/pd/model/test_atomic_model_global_stat.py
  • source/tests/pt/model/test_atomic_model_atomic_stat.py
  • source/tests/pt/model/test_atomic_model_global_stat.py


njzjz commented Mar 18, 2026

pre-commit.ci autofix

@Chengqian-Zhang Chengqian-Zhang changed the title feat (breaking change): Support element type-wise bias in property fitting (breaking change)feat: Support element type-wise bias in property fitting Mar 18, 2026
