feat: crypto identity module (issue #8) #10
arunabha003 wants to merge 3 commits into StabilityNexus:main from
Conversation
Walkthrough
Establishes initial project scaffolding for MiniChain, a Python blockchain implementation, with build configuration (Makefile, pyproject.toml), a CI/CD workflow, a package structure with a CLI entrypoint, cryptographic utilities using Ed25519, comprehensive test coverage, and development documentation.
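For orientation, here is a minimal usage sketch of the crypto module as it is exercised by the tests reviewed below; the function names (generate_key_pair, sign_message, verify_signature, derive_address) come from the review comments, but the exact signatures are assumptions, not verified against the source.

```python
# Usage sketch only; mirrors how tests/test_crypto.py (reviewed below) calls
# the module. Signatures and return types are assumed, not confirmed.
from minichain.crypto import (
    derive_address,
    generate_key_pair,
    sign_message,
    verify_signature,
)

signing_key, verify_key = generate_key_pair()   # Ed25519 key pair
address = derive_address(verify_key)            # 20-byte address as lowercase hex

message = b"minichain-message"
signature = sign_message(message, signing_key)
assert verify_signature(message, signature, verify_key)
```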
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 5
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml:
- Around line 1-29: Add a pip cache step to the CI job to speed up installs:
before the "Install dependencies" step (after "Setup Python") add a step that
uses actions/cache (e.g., actions/cache@v4) to cache pip's cache directory
(typically ~/.cache/pip) with a cache key based on runner/os, python-version and
a hash of your dependency file (pyproject.toml/requirements.txt/poetry.lock);
this will restore the cache when available and reduce the time spent in the
"Install dependencies" step.
In `@README.md`:
- Around line 51-71: Update the "Repository Layout" section in README.md to
include the new test file by adding an entry for tests/test_crypto.py under the
tests/ listing so the layout reflects the actual repository contents; edit the
block under the "Repository Layout" heading and insert "tests/test_crypto.py"
alongside the existing test_scaffold entry to keep the tree accurate.
In `@src/minichain/crypto.py`:
- Around line 21-24: The _require_nacl function currently detects PyNaCl
availability by checking "blake2b" in globals(), which can be spoofed; instead
add and use a dedicated sentinel (e.g., _NACL_AVAILABLE) set at import time and
have _require_nacl check that boolean and raise the same RuntimeError from
_NACL_IMPORT_ERROR if False; update the import block that currently sets
_NACL_IMPORT_ERROR/_NACL_AVAILABLE so the sentinel accurately reflects import
success and replace the globals() check in _require_nacl with a check of
_NACL_AVAILABLE.
- Around line 34-38: The derive_address function currently computes a full
blake2b digest then truncates it; change the call in derive_address(verify_key:
VerifyKey) to request exactly ADDRESS_LENGTH_BYTES by passing
digest_size=ADDRESS_LENGTH_BYTES to blake2b (keep _require_nacl and VerifyKey
usage unchanged) so the hash returns 20 bytes directly instead of computing and
slicing the 64-byte digest.
In `@tests/test_crypto.py`:
- Around line 40-47: The linter flags an unused local variable `signing_key` in
test_invalid_signature_is_rejected; rename it to `_signing_key` (or prefix with
an underscore) where generate_key_pair() is called so the variable is treated as
intentionally unused—update the declaration that currently reads `signing_key,
verify_key = generate_key_pair()` to `_signing_key, verify_key =
generate_key_pair()` while keeping the rest of the test logic using
sign_message, verify_signature and other_signing_key unchanged.
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -e .[dev]

      - name: Lint
        run: make lint

      - name: Run tests
        run: make test
```
🧹 Nitpick | 🔵 Trivial
Consider adding pip caching to speed up CI runs.
The workflow is functional but could benefit from caching pip dependencies to reduce install time on repeated runs.
♻️ Suggested improvement

```diff
       - name: Setup Python
         uses: actions/setup-python@v5
         with:
           python-version: "3.11"
+          cache: "pip"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: "pip"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -e .[dev]

      - name: Lint
        run: make lint

      - name: Run tests
        run: make test
```
## Repository Layout

```text
.github/workflows/ci.yml
src/minichain/
  __init__.py
  __main__.py
  crypto.py
  transaction.py
  block.py
  state.py
  mempool.py
  consensus.py
  network.py
  storage.py
  node.py
tests/
  test_scaffold.py
issues.md
architectureProposal.md
```
Repository layout is missing tests/test_crypto.py.
The layout section should include the new test file added in this PR.
📝 Proposed fix

```diff
 tests/
   test_scaffold.py
+  test_crypto.py
 issues.md
```
```python
def _require_nacl() -> None:
    if "blake2b" not in globals():
        msg = "PyNaCl is required for minichain.crypto. Install with: pip install PyNaCl"
        raise RuntimeError(msg) from _NACL_IMPORT_ERROR
```
🧹 Nitpick | 🔵 Trivial
_require_nacl check is slightly fragile.
Checking "blake2b" not in globals() works but could be defeated if a same-named symbol is introduced. A more robust approach would be to check for a dedicated sentinel (e.g., a _NACL_AVAILABLE boolean). This is minor given the current module scope.
```python
def derive_address(verify_key: VerifyKey) -> str:
    """Derive a 20-byte address from a verify key as lowercase hex."""
    _require_nacl()
    digest = blake2b(verify_key.encode(), encoder=RawEncoder)
    return digest[:ADDRESS_LENGTH_BYTES].hex()
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
PyNaCl nacl.hash.blake2b digest_size parameter documentation
💡 Result:
In PyNaCl's one-shot API nacl.hash.blake2b(data, digest_size=..., ...), the digest_size parameter is:

- Type: int
- Meaning: requested digest length in bytes
- Default: BLAKE2B_BYTES (libsodium's default crypto_generichash_BYTES)
- Constraint: must be ≤ BLAKE2B_BYTES_MAX (PyNaCl also exposes BLAKE2B_BYTES_MIN as the minimum allowed size constant). [1]

Related (stateful) API: nacl.hashlib.blake2b(..., digest_size=BYTES, ...) is designed to be largely compatible with Python's hashlib.blake2b, and exposes MAX_DIGEST_SIZE as "the maximum allowed value of the requested digest_size". [2]
Sources: [1] [2]
Use the digest_size parameter to request exactly 20 bytes instead of truncating the full default-length hash.
nacl.hash.blake2b supports a digest_size parameter that specifies the exact output length. Requesting only 20 bytes is cleaner and avoids computing and then discarding unneeded bytes.
♻️ Proposed refactoring

```diff
 def derive_address(verify_key: VerifyKey) -> str:
     """Derive a 20-byte address from a verify key as lowercase hex."""
     _require_nacl()
-    digest = blake2b(verify_key.encode(), encoder=RawEncoder)
-    return digest[:ADDRESS_LENGTH_BYTES].hex()
+    digest = blake2b(
+        verify_key.encode(),
+        digest_size=ADDRESS_LENGTH_BYTES,
+        encoder=RawEncoder,
+    )
+    return digest.hex()
```

📝 Committable suggestion
```python
def derive_address(verify_key: VerifyKey) -> str:
    """Derive a 20-byte address from a verify key as lowercase hex."""
    _require_nacl()
    digest = blake2b(
        verify_key.encode(),
        digest_size=ADDRESS_LENGTH_BYTES,
        encoder=RawEncoder,
    )
    return digest.hex()
```
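One caveat worth flagging before committing this suggestion: BLAKE2b mixes the requested digest length into its parameter block, so a 20-byte digest is not a prefix of a longer digest of the same input, and switching to digest_size=ADDRESS_LENGTH_BYTES will change every derived address. A quick illustration using the standard library's hashlib (chosen only to keep the snippet dependency-free; PyNaCl's blake2b behaves the same way on this point):

```python
import hashlib

data = b"example-verify-key-bytes"

full = hashlib.blake2b(data, digest_size=64).digest()
short = hashlib.blake2b(data, digest_size=20).digest()

# Different digest sizes yield unrelated outputs, not truncations of each other.
assert short != full[:20]
```

If derived addresses have not been persisted anywhere yet, this is harmless; otherwise it is a breaking change worth calling out.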
```python
def test_invalid_signature_is_rejected() -> None:
    signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"

    wrong_signature = sign_message(message, other_signing_key)

    assert not verify_signature(message, wrong_signature, verify_key)
```
Unused variable signing_key flagged by linter.
Ruff reports signing_key on line 41 is unused. Prefix with underscore per convention.
📝 Proposed fix

```diff
 def test_invalid_signature_is_rejected() -> None:
-    signing_key, verify_key = generate_key_pair()
+    _signing_key, verify_key = generate_key_pair()
     other_signing_key, _ = generate_key_pair()
```

📝 Committable suggestion
```python
def test_invalid_signature_is_rejected() -> None:
    _signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"

    wrong_signature = sign_message(message, other_signing_key)

    assert not verify_signature(message, wrong_signature, verify_key)
```
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 41-41: Unpacked variable signing_key is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
Superseded by #11. Closing this PR to keep review history in one place.
Addressed Issues:
Part of #8
Depends on #9
What this PR adds
Validation
python3 -m pytest

Checklist
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit
Release Notes
New Features
Chores