[5/6] Refactor __init__ to take DataFrames; add from_XXX classmethods #9

Open
alexanderbates wants to merge 5 commits into DrugowitschLab:main from
alexanderbates:pr5-fromXXX-constructors

Conversation

@alexanderbates

Refactor __init__ to take DataFrames; add from_XXX classmethods

The internal data structure built by InfluenceCalculator is a sparse
PETSc matrix populated from an edge list, so the input format closest
to that representation is a pandas DataFrame edge list (plus optional
metadata DataFrame). __init__ now takes those directly:

    InfluenceCalculator(edgelist_df, meta_df=None, signed=False, ...)

Five classmethod loaders adapt other input formats to the DataFrame
__init__. Each one reads the format, then forwards every other kwarg
through **kwargs so the loaders do not have to repeat the constructor
signature:

- from_sql(filename, **kwargs)                 -- SQLite (meta + edgelist_simple)
- from_csv(edgelist_path, meta_path, ...)
- from_parquet(edgelist_path, meta_path, ...)  (requires pyarrow / fastparquet)
- from_feather(edgelist_path, meta_path, ...)  (requires pyarrow)
- from_numpy(adjacency_matrix, neuron_ids=None, meta_df=None, **kwargs)

This is a breaking change for callers using the previous SQLite-only
__init__. Update those call sites to InfluenceCalculator.from_sql.

from_numpy converts non-zero entries of the adjacency matrix into a
synthesised edge list with a 'count' column and forwards through
__init__. Because count_thresh is applied to that synthesised column,
callers passing pre-normalised float weights should set count_thresh=0.
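The conversion step can be sketched roughly as follows; the helper name, the 'pre'/'post'/'count' column names, and the rows-as-presynaptic orientation are illustrative assumptions, not the library's actual internals:

```python
import numpy as np
import pandas as pd

def adjacency_to_edgelist(adjacency_matrix, neuron_ids=None):
    """Sketch: non-zero entries of a dense adjacency matrix become rows
    of a pre/post/count edge list (rows assumed presynaptic)."""
    n = adjacency_matrix.shape[0]
    if neuron_ids is None:
        neuron_ids = np.arange(n)
    pre_idx, post_idx = np.nonzero(adjacency_matrix)
    return pd.DataFrame({
        "pre": np.asarray(neuron_ids)[pre_idx],
        "post": np.asarray(neuron_ids)[post_idx],
        "count": adjacency_matrix[pre_idx, post_idx],
    })

W = np.array([[0.0, 2.0],
              [3.0, 0.0]])
edges = adjacency_to_edgelist(W, neuron_ids=["a", "b"])
```

With float weights in W, every non-zero entry lands in 'count', which is why count_thresh=0 is needed for pre-normalised inputs.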

Three module-level validation helpers -- _validate_meta,
_validate_and_prepare_edgelist, and _check_parquet_available /
_check_feather_available -- enforce the column requirements with
descriptive ValueErrors that name the missing column and list the
columns the caller actually passed. When 'norm' is absent it is
computed from 'count' as count / sum(count) per post; when 'weight' is
present (instead of 'count') it is treated as a pre-normalised input
and count_thresh is bypassed. The redundant inline top_nt checks
inside _create_sparse_W are removed since validation runs upfront.
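The 'norm' fallback described above can be sketched with a pandas groupby-transform; the exact column names beyond 'count'/'norm'/'post' and the toy values are assumptions for illustration:

```python
import pandas as pd

edges = pd.DataFrame({
    "pre":   ["a", "b", "c", "a"],
    "post":  ["x", "x", "y", "y"],
    "count": [3, 1, 2, 2],
})

# norm = count / sum(count) over all edges sharing a postsynaptic
# target, so each target's incoming weights sum to 1.
if "norm" not in edges.columns:
    edges["norm"] = edges["count"] / edges.groupby("post")["count"].transform("sum")
```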

Tests are updated to drive the DataFrame __init__ directly and add
format-equivalence checks (from_sql vs DataFrame, from_csv vs
DataFrame, from_numpy smoke). conftest.py exposes the bundled CSVs
four ways: as DataFrames, as filesystem paths via
importlib.resources.as_file(), and as a session-scoped temporary
SQLite database for from_sql.

…as kwargs

Replaces the hardcoded NEG_NEUROTRANSMITTERS module constant with two
explicit constructor arguments so that the library no longer pre-empts
the user's neurotransmitter sign assignment:

- inhibitory_nts: pre-neuron top_nt values to negate when signed=True
  (required when signed=True; raises ValueError otherwise).
- excluded_nts: pre-neuron top_nt values to drop entirely from W,
  independent of signed=True/False. Useful for transmitter classes
  whose net sign at a given target depends on the receptor mix and so
  cannot be assigned a single sign safely.

Adds lambda_max as a constructor argument (default 0.99 for backwards
compatibility). _normalize_W now always rescales to lambda_max exactly
rather than only capping when the natural eigenvalue exceeds it, so the
parameter is a true control knob over leading-mode amplification rather
than just a stability ceiling. The amplification of the leading mode in
(I - W_rescaled)^-1 is 1 / (1 - lambda_max), so 0.99 gives ~100x and
0.5 gives ~2x.
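The rescale-to-exactly-lambda_max behaviour and the resulting amplification can be sketched with a dense numpy stand-in for the sparse PETSc/SLEPc path; the function name is hypothetical:

```python
import numpy as np

def rescale_to_lambda_max(W, lambda_max=0.99):
    """Sketch: always scale W so its spectral radius equals lambda_max,
    rather than only capping when the natural eigenvalue exceeds it."""
    rho = np.abs(np.linalg.eigvals(W)).max()
    return W * (lambda_max / rho)

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # spectral radius 1.0
W_rescaled = rescale_to_lambda_max(W, lambda_max=0.5)

# Leading-mode amplification of (I - W_rescaled)^-1 is 1 / (1 - lambda_max):
amplification = 1.0 / (1.0 - 0.5)    # 2x; lambda_max=0.99 gives ~100x
```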

Surfaces syn_weight_measure ('count' or 'norm') as a constructor
argument and changes the default from 'norm' to 'count'. Fixes a
pre-existing bug in _create_sparse_W: the signed=True path negated the
'count' column unconditionally, but the matrix was populated from the
column named by syn_weight_measure (default 'norm'), so the signed flag
silently produced the same matrix as signed=False. The negation now
applies to the column actually consumed. An inline comment notes that
flipping signs on 'norm' breaks the column-sums-to-1 interpretation, so
'count' is the more natural choice in signed mode.
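The fix amounts to negating the column that actually populates the matrix. A minimal sketch, with the simplifying assumption that inhibitory edges are identified directly by pre-neuron id (the library keys on the pre-neuron's top_nt via inhibitory_nts):

```python
import pandas as pd

edges = pd.DataFrame({
    "pre":   ["i1", "e1"],
    "post":  ["t", "t"],
    "count": [5.0, 3.0],
    "norm":  [0.625, 0.375],
})
inhibitory_pre = {"i1"}
syn_weight_measure = "count"   # the column fed into the sparse matrix

# Negate the consumed column, not 'count' unconditionally -- the old
# code flipped 'count' while the matrix read 'norm', so signed=True
# silently matched signed=False.
mask = edges["pre"].isin(inhibitory_pre)
edges.loc[mask, syn_weight_measure] *= -1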

Sign preservation: _build_influence_dataframe now keeps the real part
of the steady-state vector in signed mode rather than always taking the
magnitude, so net-inhibited targets carry a negative score.
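The difference between the two readouts, sketched on a toy steady-state vector (values are illustrative only):

```python
import numpy as np

v = np.array([0.8 + 0.0j, -0.3 + 0.0j])  # toy steady-state vector

unsigned_scores = np.abs(v)   # old behaviour: always non-negative
signed_scores = np.real(v)    # signed mode: net-inhibited targets stay negative
```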

Validates lambda_max in (0, 1) and syn_weight_measure in {'count',
'norm'}. When signed=True or excluded_nts is set, the SQLite meta
table must include a 'top_nt' column or _create_sparse_W raises.
calculate_influence now returns both the raw influence column and the
three log-compressed adjusted_influence columns by default.  Users can
compare adjusted vs unadjusted scores from a single call rather than
having to import adjust_influence and post-process the output
themselves; opt out with adjust=False.

The log compression is parameterised via two new kwargs on
calculate_influence: adjust_const (the exp(-c) junk-node floor and +c
shift, default 24) and adjust_signif (rounding precision, default 6).

adjust_influence is added as a module-level function so advanced
workflows can still post-process aggregated DataFrames (e.g. summing
per-(target_class, seed_class) across multiple seeds before log
compression in a worked example).  Its output is three columns:

- adjusted_influence = sign(x) * (log(max(|x|, exp(-const))) + const)
- adjusted_influence_norm_by_targets (divides by n_targets per group)
- adjusted_influence_norm_by_sources_and_targets (divides by
  n_sources * n_targets per group)

The function dispatches on the presence of 'target' and 'seed' columns:
when present it groups and sums per (target, seed); when absent it
treats each row as its own group, which is the case for the DataFrame
calculate_influence builds.  Sign is preserved, so signed-mode input
yields signed-mode output.
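The core formula above can be sketched directly; the helper name is illustrative and 'const' plays the role of adjust_const (default 24):

```python
import numpy as np

def adjusted_influence(x, const=24):
    """Sketch of sign(x) * (log(max(|x|, exp(-const))) + const).
    Values at or below the exp(-const) floor map to zero; sign is
    preserved, so signed-mode input yields signed-mode output."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * (np.log(np.maximum(np.abs(x), np.exp(-const))) + const)

scores = adjusted_influence([0.0, 1.0, -1.0, np.exp(-30)], const=24)
```

Zero raw influence yields zero adjusted influence (sign(0) = 0), and any magnitude below exp(-24) is compressed onto the floor.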

Replaces the legacy single-test scaffold (tests/test_InfluenceCalculator.py
plus toy_network_example.sqlite and an example notebook) with a focused
pytest suite that exercises the constructor surface introduced by the
recent parameter rework and the integration of adjust_influence into
calculate_influence:

- Constructor validation: signed=True without inhibitory_nts raises;
  lambda_max outside (0, 1) raises (parametrised over five values);
  unknown syn_weight_measure raises.
- Construction smoke: unsigned and signed builds, the 'norm'
  syn_weight_measure path, and excluded_nts dropping pre-neurons.
  Wrapped in pytest.importorskip so they skip cleanly on machines
  without PETSc/SLEPc rather than failing the suite.
- adjust_influence helper: column shape, the 'log + const' anchor at the
  strongest influence, the exp(-const) floor mapping zero raw influence to
  zero adjusted, sign preservation in signed mode, and the two
  validation errors (both score columns present, no score column).
- calculate_influence integration: default returns adjusted columns
  alongside raw, adjust=False returns raw only, signed mode produces
  some net-negative downstream targets.
- Bundled-data sanity check on column presence and row counts.

Bundles the C. elegans hermaphrodite chemical connectome (300 cells,
3,539 edges, 20,672 synapses) under InfluenceCalculator/data/ as two
CSVs plus an importable wrapper (celegans_edgelist(),
celegans_meta()).  Provenance and citation BibTeX live in the module
docstring so help(InfluenceCalculator.data) surfaces them.  The CSVs
are an OpenWorm distribution extract (accessed February 2026)
aggregating White et al. 1986 and Cook et al. 2019 with WormAtlas /
CenGen annotations.

The conftest fixture builds a temporary SQLite database from the
bundled CSVs to drive InfluenceCalculator's still-SQLite-only
constructor.  Once the DataFrame / from_csv constructors land in a
later PR the fixture can collapse to a path handoff.

pyproject.toml:
- Bump setuptools requirement to >=77 so the SPDX-string license
  syntax (license = "BSD-3-Clause") from PEP 639 is accepted; recent
  setuptools warns against the dual-purpose license field used before.
- Bump requires-python to >=3.10 (matches the language features used
  internally and the lower bound of pandas / petsc4py wheels).
- Bump version to 0.2.0 to reflect the externalised neurotransmitter
  parameters, lambda_max, syn_weight_measure, sign-preserving signed
  mode, and adjust_influence integration.
- Refresh the description and project URLs (homepage, repository,
  issues, documentation now declared explicitly).
- Declare optional dependency extras (parquet, examples, test, dev)
  so a CI image can install just what it needs (pip install .[test])
  rather than dragging the worked-example matplotlib stack into a
  test-only build.
- Add [tool.setuptools.package-data] so the bundled
  InfluenceCalculator/data/*.csv files ship in the wheel.
- Add [tool.pytest.ini_options] testpaths so pytest discovers the
  suite without an explicit positional argument.

.gitignore: add the obvious development-time noise (__pycache__/,
.pytest_cache/, .venv/) and the Influence/ directory the legacy test
script wrote per-seed CSVs into.