A multi-phase credit analysis system for real estate investment trusts (REITs), built on Claude Code agents.
Credit analysts shouldn't be human copy machines.
If you've ever spent 6 hours copying numbers from PDFs, fighting with Excel, hunting for footnotes on page 63—only to be too exhausted to think clearly about the actual credit decision—this tool is for you.
This isn't about making credit analysis faster. It's about getting your brain back.
The tool runs automatically. You hit go, get coffee, and come back 30 minutes later to a complete professional credit opinion. Not a draft. Not a summary. Complete.
Because life's too short to copy numbers from PDFs.
Read the full story: Why I Built This
✅ Model v2.2 fixes a critical underestimation issue - it now accurately predicts distribution cut risk for distressed REITs
The Problem with v2.1:
- REIT A: Predicted 2.1% (Very Low) when actual risk was 67.1% (High)
- Underestimation: 65 percentage points off
- Root cause: Feature distribution mismatch (trained on total AFCF, but Phase 3 calculates sustainable AFCF)
Model v2.2 Improvements:
- ✅ 67.1% High risk prediction for REIT A (was 2.1% Very Low) - aligns with critical distress
- ✅ Sustainable AFCF methodology - matches Phase 3 calculations (Issue #40)
- ✅ 28 Phase 3 features (was 54 with market/macro) - more focused feature set
- ✅ Validated on 3 REITs - improvements of +27 to +65 percentage points
- ✅ Production deployment - default model path updated, v2.1 archived
Performance (5-fold CV):
- F1 Score: 0.870, ROC AUC: 0.930, Accuracy: 87.5%
- Top drivers: monthly_burn_rate, acfo_calculated, available_cash
See: Model v2.2 Documentation | Deployment Summary | Analysis
Structural Considerations Content Extraction (October 21, 2025)
✅ Debt Structure, Security & Collateral, Perpetual Securities sections now auto-populated from Phase 4 analysis
- Debt Structure: Credit facilities, covenant compliance, debt profile
- Security & Collateral: Unencumbered asset pool, LTV ratios, recovery estimates
- Perpetual Securities: Automatically detected or marked "Not applicable"
Impact: +15% report completeness, $0 token cost. See CHANGELOG.md.
This system performs comprehensive credit analysis on real estate issuers (REITs, real estate companies) using a multi-phase pipeline that achieves 99.2% token reduction while generating professional Moody's-style credit opinion reports.
- 5-Phase Sequential Pipeline: Proven architecture (PDF→Markdown→JSON→Metrics→Analysis→Report)
- Distribution Cut Prediction: ML model v2.2 predicts 12-month distribution cut risk (High accuracy: F1=0.87, ROC AUC=0.93)
- 99.2% Token Reduction: File reference patterns reduce Phase 2 from ~140K to ~1K tokens
- Dual PDF Conversion Methods: Choose between speed (PyMuPDF4LLM+Camelot, ~30s) or quality (Docling, ~20min)
- Market Data Integration: Automated price stress, volatility, and momentum analysis via OpenBB Platform
- Macro Environment Tracking: Bank of Canada and Federal Reserve rate monitoring with credit stress scoring
- Context-Efficient Phase 2: File references preserve ~199K tokens for extraction logic
- Absolute Path Implementation: Reliable execution from any working directory
- Organized Output: Issuer-specific folders with separate temp and reports directories
- Claude Code Integration: Uses Claude Code agents for intelligent extraction and analysis
- Zero-API Dependency: Core pipeline works entirely within Claude Code (OpenBB optional, $0 cost)
- Test-Driven Development: Comprehensive test suite for all phases
- Production Ready: Generates professional credit opinion reports with 5-factor scorecard analysis
- 100% Success Rate: Sequential markdown-first approach prevents context window exhaustion
| | Phase 1 | Phase 2 | Phase 3 | Phase 3.5 | Phase 4 | Phase 5 |
|---|---|---|---|---|---|---|
| Step | PDF→MD | MD→JSON | Calculations | Enrichment | Agent | Report |
| Method | PyMuPDF/Docling | File refs | Pure Python | ML Model v2.2 | Slim agent | Template |
| Tokens | 0 | ~1K | 0 | 0 | ~12K | 0 |
| Notes | 30s-20min | Efficient | FFO/AFFO/ACFO/AFCF metrics | Cut risk (e.g., 67% High) | Credit analysis | Final report |
Phase 3.5 (Optional - Enrichment):
- Market risk data (OpenBB Platform): Price stress, volatility, momentum
- Macro environment (Bank of Canada, Federal Reserve): Rate cycles, credit stress
- Distribution history: 10-year dividend history, cut detection, recovery analysis
- Distribution cut prediction (Model v2.2): 12-month cut probability with risk drivers
Method 1: PyMuPDF4LLM + Camelot (Default - Fast)
- Command: `/analyzeREissuer @statements.pdf @mda.pdf "Issuer Name"`
- Speed: ~30 seconds for 2 PDFs (75 pages total)
- Table Format: Enhanced 14-column tables with metadata
- Use Case: Interactive analysis, fast iteration, production workloads
- Extraction: 113 tables from 75 pages, superior to pure PyMuPDF4LLM
- Output: 545KB markdown with rich table formatting
Method 2: Docling (Alternative - Cleaner)
- Command: `/analyzeREissuer-docling @statements.pdf @mda.pdf "Issuer Name"`
- Speed: ~20 minutes for 2 PDFs (Docling FAST mode)
- Table Format: Compact 4-column tables, cleaner structure
- Use Case: Overnight batch processing, cleaner extraction testing
- Extraction: Same table coverage, more compact markdown
- Output: More concise markdown, easier to parse manually
| Method | Phase 1 Time | Table Format | Output Size | Best For |
|---|---|---|---|---|
| PyMuPDF4LLM + Camelot | ~30s | Enhanced (14 cols) | 545KB | Interactive, production |
| Docling FAST | ~20min | Compact (4 cols) | Smaller | Batch, testing |
Both methods produce identical Phase 2-5 outputs - the choice only affects Phase 1 processing time and markdown structure.
| Approach | Phase 2 Token Cost | Context Available | Result |
|---|---|---|---|
| Direct PDF Reading | ~136K tokens | ~64K remaining | ❌ Context exhausted |
| Markdown-First (v1.0.4+) | ~1K tokens (file refs) | ~199K remaining | ✅ Reliable extraction |
Key Benefits:
- ✅ 99.2% token reduction: File references (~1K) vs embedding content (~140K tokens)
- ✅ Enhanced table extraction: Both methods capture 113 tables from 75 pages
- ✅ Context preservation: Leaves ~199K tokens for extraction logic and validation
- ✅ Flexible conversion: Choose speed (PyMuPDF+Camelot) or quality (Docling) per use case
- ✅ Absolute paths: Reliable execution from any working directory using `Path.cwd()`
- ✅ Proven reliability: 100% success rate on production workloads
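The file-reference pattern reduces to one idea: pass paths, not contents. The helper below is an illustrative reconstruction, not the repository's actual `extract_key_metrics_efficient.py`; the function name and prompt wording are assumptions:

```python
from pathlib import Path

def build_extraction_prompt(markdown_files):
    """Reference files by absolute path instead of embedding their contents.

    Embedding two converted filings costs ~140K tokens; a short list of
    paths costs ~1K and leaves the rest of the context for extraction logic.
    """
    # Resolving against the current working directory mirrors the
    # absolute-path behaviour described above
    refs = "\n".join(f"- {Path(f).resolve()}" for f in markdown_files)
    return (
        "Read the following markdown files and extract key financial "
        "metrics as JSON:\n" + refs
    )

prompt = build_extraction_prompt(["temp/phase1_markdown/statements.md"])
```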
Predicts 12-month distribution cut probability for Canadian REITs using logistic regression trained on 24 observations (11 cuts, 13 controls).
Model Performance:
- F1 Score: 0.870 (5-fold CV) - Excellent balance between precision and recall
- ROC AUC: 0.930 - Strong discrimination between cut vs. no-cut
- Accuracy: 87.5% - High overall prediction accuracy
Key Features (Top 5):
- Monthly burn rate - Cash depletion speed (most predictive)
- ACFO calculated - Sustainable operating cash flow
- Available cash - Immediate liquidity
- Total available liquidity - Cash + undrawn facilities
- Self-funding ratio - AFCF / Total obligations
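The scoring step of a logistic-regression model like this reduces to a weighted sum passed through a sigmoid. The sketch below is illustrative only: the weights are hypothetical, and the real v2.2 model is fitted on 28 Phase 3 features, not three.

```python
import math

def cut_probability(features, weights, intercept):
    """Logistic-regression score: p = 1 / (1 + exp(-(w.x + b)))."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for three of the top features (illustrative only):
# monthly_burn_rate, acfo_calculated, available_cash
p = cut_probability([5.0, -1.2, 0.3], [0.4, -0.8, -0.5], -1.0)
```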
Example Predictions:
| REIT | Cut Probability | Risk Level | Financial Context |
|---|---|---|---|
| REIT A | 67.1% | 🔴 High | Cash runway: 1.6 months, Self-funding: -0.61x |
| REIT B | 48.5% | 🔴 High | Sustainable AFCF negative |
| REIT C | 29.3% | 🟠 Moderate | AFFO payout: 115% |
What Changed in v2.2:
- ✅ Fixed 65-point underestimation in severe distress cases
- ✅ Uses sustainable AFCF methodology (aligns with Phase 3)
- ✅ 28 Phase 3 features only (removed market/macro for simplicity)
- ✅ Deployed to production (Oct 29, 2025)
Risk Classification:
- 🟢 Very Low: 0-10% probability
- 🟡 Low: 10-25%
- 🟠 Moderate: 25-50%
- 🔴 High: 50-75%
- 🚨 Very High: 75-100%
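These bands map directly to a threshold function. A minimal sketch (the function name is assumed, not the tool's actual API):

```python
def risk_level(cut_probability):
    """Map a 12-month cut probability in [0, 1] to the risk bands above."""
    pct = cut_probability * 100
    if pct < 10:
        return "Very Low"
    if pct < 25:
        return "Low"
    if pct < 50:
        return "Moderate"
    if pct < 75:
        return "High"
    return "Very High"
```

For example, REIT A's 67.1% probability falls in the High band, matching the table above.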
See: Model Documentation | Deployment Analysis
```
Issuer_Reports/
  {Issuer_Name}/
    temp/                              # Intermediate files (can delete)
      phase1_markdown/*.md             # Converted PDFs (markdown format)
      phase2_extracted_data.json       # Extracted financial data
      phase2_extraction_prompt.txt     # Extraction prompt for debugging
      phase3_calculated_metrics.json   # Calculated metrics (FFO/AFFO/ACFO/AFCF)
      phase4_enriched_data.json        # Phase 3 + market/macro/prediction (optional)
      phase4_agent_prompt.txt          # Agent prompt for debugging
      phase4_credit_analysis.md        # Qualitative credit assessment
    reports/                           # Final reports (permanent)
      2025-10-29_031335_Credit_Opinion_{issuer}.md  # Timestamped final report
```
Note: If Phase 3.5 enrichment runs successfully, phase4_enriched_data.json includes:
- Phase 3 calculated metrics
- Market risk assessment (price stress, volatility, momentum)
- Macro environment (BoC/Fed rates, credit stress score)
- Distribution history (10-year dividend data, cut detection)
- Distribution cut prediction (Model v2.2: probability, risk level, top drivers)
- Python 3.10+
- Node.js 18+ (for Claude Code)
- Git
1. Install Claude Code:

```bash
# Install Claude Code CLI globally
npm install -g @anthropic-ai/claude-code

# Verify installation
claude-code --version
```

2. Set up the project:

```bash
# Clone repository
git clone https://github.com/reggiechan74/issuer-credit-analysis.git
cd issuer-credit-analysis

# Install Python dependencies
pip install -r requirements.txt

# Run tests to verify installation
pytest tests/
```

3. Start Claude Code:

```bash
# Open Claude Code in the project directory
claude-code

# Or use the web interface at:
# https://claude.com/claude-code
```

Configure the extraction pipeline via `config/extraction_config.yaml`.
```yaml
phase1_extraction:
  method: "enhanced"  # PyMuPDF4LLM + Camelot (best table quality)

phase2_extraction:
  method: "manual"    # Markdown→JSON with file references
  manual:
    prompt_strategy: "reference"  # ~1K tokens vs ~140K embedded
```

| Setting | Token Cost | Context Available | Reliability |
|---|---|---|---|
| manual + reference | ~1K tokens | ~199K remaining | ✅ 100% success |
| manual + embedded | ~136K tokens | ~64K remaining | ❌ Context exhaustion |
For very large files (>10MB), use the agent-based approach:
```yaml
phase2_extraction:
  method: "agent"  # Uses financial_data_extractor agent
                   # Cost: ~$0.30 per extraction
```

Method 1: Fast Analysis (Default) - `/analyzeREissuer`
Best for interactive analysis and production workloads:
```bash
# With Claude Code open in this directory:
/analyzeREissuer @statements.pdf @mda.pdf "REIT Name"

# The slash command automatically executes all phases:
# 1. Phase 1: PDF → Markdown (PyMuPDF4LLM + Camelot, 113 tables extracted)
# 2. Phase 2: Markdown → JSON (file references, ~1K tokens)
# 3. Phase 3: Calculate metrics (0 tokens, pure Python)
# 3.5. Enrichment: Market/macro data + distribution cut prediction (Model v2.2)
# 4. Phase 4: Credit analysis (slim agent, ~12K tokens)
# 5. Phase 5: Generate report (0 tokens, templating)

# Total time: ~60 seconds | Total cost: ~$0.30
# Output includes: Distribution cut risk prediction with risk level
```

Method 2: Cleaner Extraction - `/analyzeREissuer-docling`
Alternative for batch processing with cleaner markdown output:
```bash
# Same usage, different PDF conversion method:
/analyzeREissuer-docling @statements.pdf @mda.pdf "REIT Name"

# Uses Docling for Phase 1 (slower but more compact markdown)
# Phases 2-5 are identical to Method 1

# Total time: ~20 minutes | Total cost: ~$0.30
# Best for: Overnight batch jobs, testing cleaner extraction
```

When to use Docling:
- Overnight/batch processing (time not critical)
- Testing if cleaner markdown improves extraction quality
- Fallback if PyMuPDF4LLM has issues with specific PDFs
Burn Rate Analysis: /burnrate (New in v1.0.7)
Generate comprehensive cash burn rate and liquidity runway analysis:
```bash
# Using issuer name (searches Issuer_Reports/)
/burnrate "REIT Name"

# Using issuer abbreviation
/burnrate REIT

# Using direct path to Phase 2 JSON
/burnrate Issuer_Reports/REIT_Name/temp/phase2_extracted_data.json

# Report includes:
# - Cash burn rate (monthly & annualized)
# - Cash runway (months until depletion)
# - Liquidity risk assessment (CRITICAL/HIGH/MODERATE/LOW)
# - Self-funding ratio and sustainable burn analysis
# - Credit implications and recommended actions
```

If you prefer to run phases individually:
Phase 1: PDF → Markdown (MUST run first)
Choose your PDF conversion method:
```bash
# Method 1: PyMuPDF4LLM + Camelot (Fast - 30 seconds)
python scripts/preprocess_pdfs_enhanced.py \
    --issuer-name "REIT Name" \
    statements.pdf mda.pdf

# Method 2: Docling (Cleaner - 20 minutes)
python scripts/preprocess_pdfs_docling.py \
    --issuer-name "REIT Name" \
    statements.pdf mda.pdf

# Both create: Issuer_Reports/REIT_Name/temp/phase1_markdown/*.md
```

Phase 2: Markdown → JSON (after Phase 1 completes)

```bash
# Extract financial data using file references (~1K tokens)
python scripts/extract_key_metrics_efficient.py \
    --issuer-name "REIT Name" \
    Issuer_Reports/REIT_Name/temp/phase1_markdown/*.md

# Then Claude Code reads the prompt and extracts data
```

Phase 3: Metric Calculations
```bash
python scripts/calculate_credit_metrics.py \
    Issuer_Reports/REIT_Name/temp/phase2_extracted_data.json
```

Phase 4: Credit Analysis (requires Claude Code agent)

```bash
# Within Claude Code:
# Invoke issuer_due_diligence_expert_slim agent with metrics from Phase 3
```

Phase 5: Final Report Generation
```bash
python scripts/generate_final_report.py \
    Issuer_Reports/REIT_Name/temp/phase3_calculated_metrics.json \
    Issuer_Reports/REIT_Name/temp/phase4_credit_analysis.md
```

```bash
# Example 1: Single REIT with multiple PDFs (recommended)
/analyzeREissuer @financial_statements.pdf @mda.pdf "REIT Name"

# Example 2: Quarterly analysis
/analyzeREissuer @Q2_2025_statements.pdf "REIT Name"
```

```bash
# Using PyMuPDF4LLM + Camelot (current default)
python scripts/preprocess_pdfs_enhanced.py \
    --issuer-name "REIT Name" \
    financial_statements.pdf mda.pdf

# Results: 113 tables extracted from 75 pages
# Output: Issuer_Reports/REIT_Name/temp/phase1_markdown/
```

```bash
# Remove temporary files after analysis (keeps reports)
rm -rf Issuer_Reports/*/temp/

# Keep everything organized by issuer
ls Issuer_Reports/REIT_Name/reports/

# Output:
# 2025-10-17_153045_Credit_Opinion_REIT_Name.md
# 2025-10-18_091230_Credit_Opinion_REIT_Name.md
```

Run the complete test suite:
```bash
# All tests
pytest tests/

# Specific phase
pytest tests/test_phase3_calculations.py -v

# With coverage
pytest tests/ --cov=scripts --cov-report=html
```

Current Test Status:
- Phase 1: ✅ All tests passing
- Phase 2: ✅ All tests passing
- Phase 3: ✅ All tests passing
- Phase 4: ✅ All tests passing
- Phase 5: ✅ All active tests passing (13 passed, 6 skipped)
- Size: 7.7KB (85% reduction vs full agent)
- Focus: Qualitative credit assessment from pre-calculated metrics
- Token Usage: ~12,000 tokens per analysis
- Version: 1.0.1 (with parallel peer research)
Strengths:
- Fast execution (30-60 seconds)
- Parallel web research for peer comparisons (new in v1.0.1)
- Consistent output format
- Comprehensive 5-factor scorecard including 12 detailed sections
- Evidence-based assessments with proper citations
- Size: 60KB
- Use Cases: Complex scenarios requiring deep domain knowledge
Leverage Metrics:
- Debt/Gross Assets (%)
- Net Debt Ratio (%)
- Total Debt, Net Debt, Gross Assets
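These leverage ratios reduce to simple arithmetic. The `leverage_metrics` helper below is a hypothetical sketch, not the repository's actual code, and it assumes Net Debt Ratio means net debt over gross assets:

```python
def leverage_metrics(total_debt, cash, gross_assets):
    """Leverage ratios from the list above (all inputs in the same currency)."""
    net_debt = total_debt - cash
    return {
        "debt_to_gross_assets_pct": 100.0 * total_debt / gross_assets,
        "net_debt_ratio_pct": 100.0 * net_debt / gross_assets,  # assumed definition
        "net_debt": net_debt,
    }
```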
REIT Metrics:
- FFO (Funds From Operations)
- AFFO (Adjusted FFO)
- ACFO (Adjusted Cash Flow from Operations)
- AFCF (Adjusted Free Cash Flow)
- FFO/AFFO per unit
- FFO/AFFO payout ratios
- Distribution coverage
Coverage Ratios:
- NOI/Interest Coverage
- EBITDA/Interest Coverage
- Debt Service Coverage
- AFCF Debt Service Coverage
- AFCF Self-Funding Ratio
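A minimal sketch of these coverage ratios. The helper name and exact denominators are assumptions; the repository's `calculate_credit_metrics.py` may define them differently:

```python
def coverage_ratios(noi, ebitda, interest, debt_service, afcf, total_obligations):
    """Coverage ratios listed above, with simplified assumed definitions."""
    return {
        "noi_interest_coverage": noi / interest,
        "ebitda_interest_coverage": ebitda / interest,
        "debt_service_coverage": ebitda / debt_service,
        "afcf_debt_service_coverage": afcf / debt_service,
        "afcf_self_funding_ratio": afcf / total_obligations,
    }
```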
Liquidity & Burn Rate Analysis (v1.0.7):
- Cash burn rate (monthly & annualized)
- Cash runway (months until depletion)
- Liquidity risk assessment (CRITICAL/HIGH/MODERATE/LOW)
- Sustainable burn rate analysis
- Self-funding ratio (AFCF / Net Financing Needs)
- Key Insight: Identifies REITs with positive AFCF that still burn cash
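The burn-rate mechanics above can be sketched as follows. This is an illustrative reconstruction: the function name, the quarterly-to-monthly annualization, and the risk thresholds are assumptions, not the tool's actual cutoffs:

```python
def burn_rate_analysis(cash, quarterly_net_cash_change):
    """Monthly burn rate and cash runway, mirroring the /burnrate outputs.

    quarterly_net_cash_change is the change in cash over the quarter
    (negative when the issuer is burning cash).
    """
    monthly_burn = -quarterly_net_cash_change / 3  # positive => cash declining
    if monthly_burn <= 0:
        # Cash is flat or growing: no depletion date
        return {"monthly_burn": monthly_burn,
                "runway_months": float("inf"), "risk": "LOW"}
    runway = cash / monthly_burn  # months until depletion
    if runway < 6:
        risk = "CRITICAL"
    elif runway < 12:
        risk = "HIGH"
    elif runway < 24:
        risk = "MODERATE"
    else:
        risk = "LOW"
    return {"monthly_burn": monthly_burn, "runway_months": runway, "risk": risk}
```

This also shows the key insight from the list above: a REIT can post positive AFCF yet still see its cash decline when financing needs exceed free cash flow.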
The system generates comprehensive reports with 15+ sections:
- Executive Summary - Rating and credit story
- Credit Strengths - Quantified positive factors
- Credit Challenges - Risk factors with mitigants
- Rating Outlook - Stable/Positive/Negative with timeframe
- Upgrade Factors - Specific thresholds for improvement
- Downgrade Factors - Quantified triggers
- 5-Factor Scorecard - Detailed rating methodology
- Key Observations - Portfolio quality, unusual metrics
- Peer Comparison - Parallel web research of 3-4 comparable REITs with citations (v1.0.1)
- Scenario Analysis - Base/Upside/Downside/Stress cases with pro forma metrics
- Structural Considerations - NEW in v1.0.13 - Auto-extracted from Phase 4 analysis:
- Debt Structure: Credit facilities, covenant compliance, debt profile
- Security & Collateral: Unencumbered assets, LTV ratios, recovery estimates
- Perpetual Securities: Hybrid capital instruments or "Not applicable"
- ESG Considerations - Environmental, Social, Governance factors with CIS scoring
- Company Background - Corporate structure, history, portfolio composition
- Business Strategy - Strategic priorities and capital allocation
- Detailed Financial Analysis - FFO/AFFO/ACFO/AFCF reconciliations and bridge analysis
- No Hardcoded Data: All calculations use explicit inputs
- Loud Failures: Invalid data triggers clear error messages
- Validation Checks: Balance sheet balancing, NOI margins, occupancy ranges
- Evidence Quality: Strong/moderate/limited evidence labels
- Professional Caveats: Clear disclaimers on limitations
| Phase | Tokens | Cost (approx) | Time | Details |
|---|---|---|---|---|
| Phase 1 | 0 | $0.00 | 10-15s | PyMuPDF4LLM + Camelot (113 tables from 75 pages) |
| Phase 2 | ~1,000 | $0.00 | 5-10s | File references (not embedded content) |
| Phase 3 | 0 | $0.00 | <1s | Pure Python calculations |
| Phase 4 | ~12,000 | ~$0.30 | 30-60s | Slim agent credit analysis |
| Phase 5 | 0 | $0.00 | <1s | Template-based report generation |
| Total | ~13,000 | ~$0.30 | ~60s | 99.2% token reduction |
| Approach | Total Tokens | Cost | Success Rate | Notes |
|---|---|---|---|---|
| v1.0.4 (Current) | ~13,000 | $0.30 | 100% | File reference patterns |
| Embedded Content | ~148,000 | $3.70 | 0% | Context exhaustion (reverted) |
| Original Single-Pass | ~121,500 | $3.04 | ~30% | Frequent context errors |
Key Achievement: 99.2% token reduction in Phase 2 alone (~140K → ~1K tokens)
This project was extracted from the geniusstrategies repository, which explores cognitive strategy coaching through AI agents embodying historical geniuses' thinking patterns.
The credit analysis pipeline was developed as a domain expert implementation demonstrating:
- Multi-phase pipeline architecture
- Claude Code agent integration
- Test-driven development practices
- Production-ready financial analysis
Issue #7 - Cash Burn Rate and Liquidity Runway Analysis (Implemented)
- ✅ 4 new calculation functions: burn rate, cash runway, liquidity risk, sustainable burn
- ✅ New `/burnrate` slash command for comprehensive liquidity analysis
- ✅ 36 tests passing (25 unit + 11 integration)
- ✅ Critical Discovery: REITs can have positive AFCF but still burn cash when financing needs exceed free cash flow
- ✅ Production-ready: Tested with Dream Industrial REIT and Artis REIT
Issue #6 - AFCF (Adjusted Free Cash Flow) Calculations (Implemented)
- ✅ New metric: AFCF = ACFO + Net Cash Flow from Investing
- ✅ More conservative than ACFO - includes ALL investment activities
- ✅ AFCF coverage ratios: debt service, distributions, self-funding
- ✅ 17 tests passing with comprehensive validation
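The AFCF formula above is a one-liner, sketched here with a hypothetical helper:

```python
def afcf(acfo, net_investing_cash_flow):
    """AFCF = ACFO + Net Cash Flow from Investing (per Issue #6).

    Investing cash flow is usually negative for a growing REIT, which
    is what makes AFCF more conservative than ACFO."""
    return acfo + net_investing_cash_flow
```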
Issue #5 - ACFO Implementation (Implemented)
- ✅ Automated ACFO calculation using REALPAC methodology
- ✅ 17 adjustments to IFRS CFO for normalized operating cash flow
- ✅ Prevents double-counting with AFCF calculations
Issue #1 - PDF Markdown Conversion (Resolved)
- ✅ Implemented PyMuPDF4LLM + Camelot hybrid approach
- ✅ Extracts 113 tables from 75-page documents with high fidelity
- ✅ Superior table structure preservation vs MarkItDown
Issue #2 - Context Length Optimization (Research Complete)
- ✅ Comprehensive semantic chunking research completed (docs/SEMANTIC_CHUNKING_RESEARCH.md)
- ✅ Current file reference architecture already achieves 99.2% token reduction
- 📋 Optional semantic chunking planned for v1.1.0 (documents >256KB)
- 🔮 RAG-based approach for v2.0.0 (documents >1MB, future consideration)
Code Quality Improvements
- ✅ Absolute path implementation using `Path.cwd()` for reliable execution
- ✅ Fixed property_count field mapping bug (Phase 3 calculations)
- ✅ Enhanced error handling and validation across all phases
- ✅ 100% test passing rate (Phase 1-5)
Contributions welcome! Areas of interest:
- Additional asset classes (corporate bonds, structured finance)
- Enhanced portfolio quality metrics
- Integration with financial data APIs
- Visualization dashboards
- Semantic chunking implementation (v1.1.0 roadmap available in Issue #2)
- Enhanced peer comparison analytics (parallel research implemented in v1.0.1)
Copyright 2025 Reggie Chan
Licensed under the Apache License, Version 2.0 (the "License").
This project is licensed under Apache 2.0, which allows:
- ✅ Commercial use
- ✅ Modification and distribution
- ✅ Patent use (explicit patent grant included)
- ✅ Private use
- ✅ Integration into proprietary systems
Requirements:
- Attribution to the original author (Reggie Chan)
- Include copy of the Apache 2.0 license
- State any significant changes made
For the complete license text, see the LICENSE file or visit http://www.apache.org/licenses/LICENSE-2.0
IMPORTANT NOTICE: PLEASE READ CAREFULLY BEFORE USING THIS SOFTWARE
This software tool (the "Tool") is designed solely for informational and analytical purposes to assist qualified credit professionals in evaluating real estate investment trusts ("REITs") and related issuers. The Tool generates automated credit assessments based on financial data extraction and machine learning models.
THIS TOOL DOES NOT PROVIDE INVESTMENT ADVICE, CREDIT RATINGS, OR RECOMMENDATIONS. The Tool is not:
- Investment advice, investment recommendations, or investment research
- A credit rating or credit opinion as defined by securities regulators or rating agencies
- A substitute for independent professional credit analysis or due diligence
- A guarantee, warranty, or assurance regarding credit quality, investment returns, or future performance
- Approved, endorsed, or validated by any credit rating agency, securities regulator, or financial authority
- Intended for retail investors or non-professional users
Use of this Tool does not create any fiduciary, advisory, or agency relationship between the user and the Tool's creators, maintainers, or contributors. Users retain full responsibility for their own investment and credit decisions.
Machine Learning Model Limitations:
- The distribution cut prediction model (v2.2) is trained on limited historical data (24 observations) and may not accurately predict future events
- Model performance metrics represent historical backtesting and do not guarantee future accuracy
- The model may produce false positives (incorrectly predicting distribution cuts) or false negatives (failing to predict actual cuts)
- Model predictions should be validated against independent analysis and not relied upon exclusively
Data and Extraction Risks:
- The Tool relies on automated PDF extraction which may introduce errors, omissions, or misinterpretations
- Financial data accuracy depends on the quality and completeness of source documents
- The Tool cannot verify the accuracy of issuer-reported financial statements
- Users must independently verify all extracted data against original source documents
Third-Party Data:
- Market data, macroeconomic data, and dividend history are sourced from third-party providers (OpenBB Platform, TMX, YFinance, Bank of Canada, Federal Reserve) and may contain errors or delays
- The Tool's creators make no representations regarding the accuracy, completeness, or timeliness of third-party data
Analytical Limitations:
- The Tool applies standardized REALPAC methodologies which may not be appropriate for all issuers or circumstances
- Automated analysis cannot replace human judgment regarding qualitative factors, management quality, or strategic considerations
- The Tool does not consider all factors relevant to credit analysis, including but not limited to: litigation risks, environmental liabilities, regulatory changes, or market conditions
ALL OUTPUTS FROM THIS TOOL MUST BE REVIEWED, VALIDATED, AND APPROVED BY QUALIFIED CREDIT PROFESSIONALS before being used for any credit decision, investment decision, or publication. Users should:
- Independently verify all extracted financial data
- Review model predictions against independent credit analysis
- Consider qualitative factors not captured by the Tool
- Obtain appropriate credit committee or investment committee approval
- Ensure compliance with applicable investment policies, credit policies, and regulatory requirements
Users are solely responsible for ensuring their use of this Tool complies with all applicable laws, regulations, and professional standards, including but not limited to:
- Securities laws and regulations
- Investment adviser regulations (if applicable)
- FINRA, SEC, IIROC, or other regulatory requirements
- Internal compliance policies and procedures
- Professional conduct standards and fiduciary duties
THIS TOOL IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. The creators disclaim all warranties including, but not limited to, warranties of accuracy, completeness, merchantability, fitness for a particular purpose, and non-infringement. The creators do not warrant that the Tool will be error-free, uninterrupted, or free from defects.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE CREATORS, MAINTAINERS, AND CONTRIBUTORS SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL, SPECIAL, OR PUNITIVE DAMAGES arising out of or related to the use of this Tool, including but not limited to: investment losses, trading losses, lost profits, data inaccuracies, or business interruption, even if advised of the possibility of such damages.
This Tool is licensed under the Apache License, Version 2.0. Users must comply with all license terms. Outputs generated by this Tool:
- Should not be represented as official credit ratings or third-party research
- Must include appropriate disclaimers when shared with third parties
- Must attribute automated/machine-generated analysis appropriately
- Should not be used to circumvent regulatory requirements for independent credit analysis
BY USING THIS TOOL, YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTOOD, AND AGREE TO BE BOUND BY THIS DISCLAIMER. If you do not agree to these terms, do not use this Tool.
For questions regarding proper use of this Tool, consult with your legal, compliance, or risk management teams before proceeding.
Last Updated: January 2025 | Version: 1.0.15
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Original Project: geniusstrategies
Built with Claude Code | Documentation | Examples