Versor 1.0.0 is officially released.
Over the past 5 weeks, the Versor framework has evolved from a theoretical proof of concept into a robust geometric computing engine. With this v1.0 milestone, the core architecture, spanning $Cl(p,q,r)$ kernels, signature-aware exponentials, and the Geometric Blade Network (GBN), is now complete and mathematically grounded.
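The notes do not include the kernel code itself, but as a sketch of what "signature-aware" means here: the exponential of a bivector must branch on the sign of its square, which the algebra's signature determines. The function name `exp_bivector` and the scalar-coefficient representation are illustrative, not Versor's API.

```python
import math

def exp_bivector(b, sq):
    """Exponential of B = b * e12 in a plane where e12**2 == sq.

    sq = -1 gives a rotor (circular rotation), sq = +1 a hyperbolic
    rotor (boost), and sq = 0 a translator (degenerate/null plane).
    Returns the (scalar, e12) components of exp(B).
    """
    if sq == -1:                      # Euclidean plane: e12^2 = -1
        return math.cos(b), math.sin(b)
    if sq == +1:                      # Minkowski plane: e12^2 = +1
        return math.cosh(b), math.sinh(b)
    if sq == 0:                       # degenerate plane: e12^2 = 0
        return 1.0, b                 # series truncates exactly
    raise ValueError("sq must be -1, 0, or +1")
```

The three branches are why a single $Cl(p,q,r)$ kernel cannot hard-code trigonometric rotors: the degenerate generators of PGA-style algebras ($r > 0$) produce translators, not rotations.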
We replace standard matrix multiplications with pure Geometric Algebra (Rotor) operations to preserve the topological structure of data, achieving SOTA-level efficiency and performance.
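As a minimal illustration of the idea (assuming nothing about Versor's internals; every name below is hypothetical), here is a rotor sandwich product $R v \tilde{R}$ replacing a rotation matrix in $Cl(2,0)$, with multivectors stored as `(scalar, e1, e2, e12)` tuples:

```python
import math

def gp(a, b):
    """Geometric product in Cl(2,0); multivectors are (s, e1, e2, e12)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (
        a0*b0 + a1*b1 + a2*b2 - a12*b12,
        a0*b1 + a1*b0 - a2*b12 + a12*b2,
        a0*b2 + a2*b0 + a1*b12 - a12*b1,
        a0*b12 + a12*b0 + a1*b2 - a2*b1,
    )

def reverse(a):
    """Reversion ~A: flips the sign of the grade-2 part in Cl(2,0)."""
    s, e1, e2, e12 = a
    return (s, e1, e2, -e12)

def rotate(v, theta):
    """Rotate the 2D vector v by theta via the sandwich product R v ~R."""
    R = (math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2))
    out = gp(gp(R, (0.0, v[0], v[1], 0.0)), reverse(R))
    return out[1], out[2]
```

Unlike a matrix, the rotor composes by the same geometric product it applies with, which is what makes structure-preserving chaining natural.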
What's Built
v1.0 Final Benchmarks
The cautious "alpha" performance has been shattered. Final metrics achieved on a single RTX Pro 4500:
Transition to Stabilization Phase
With the core foundation solidified, this project is entering a Stabilization Phase.
(Archive) Previous v1.0-alpha Notes
The previously unmanageable number of tasks has been consolidated into the four tasks below. Overall, the framework has been reorganized into general-purpose structures and task-specific layers. Each model structure still requires significant tuning and logic improvements at this stage, but development of the core structure is complete. A brief summary of performance and development history from the current testing phase follows, for reference.
SR Tasks
For symbolic regression validation, we currently support the 24 tasks proposed in the "SRBench 2.0" paper. We focused on a structure that derives accurate equations from small amounts of data. At this stage, some logic may exhibit numerical instability, and rotor_translate may operate inefficiently or contain errors. Extensive parameter tuning is required, and some sections rely on non-geometric heuristics and methodologies. Nevertheless, this can be read as a proposal for an IU-SR (Iterative Unbending for Symbolic Regression) architecture.
MD17 Task
This task is similar to the previous logic and guarantees basic operation, but requires improvement. Improvements and merges are planned for the v1.0 release, along with the introduction of a dynamic rotor system.
DEAP (EEG) Task
This model uses the newly added Neutralizer. While the model's overall performance has been verified, the simplistic decision logic in its details limits performance. The RMSE values are reported below.
The average RMSE is approximately 0.25, with some cases as low as 0.14. (Under LOSO evaluation, one training-and-evaluation run takes about 5 minutes on an RTX PRO 4050.)
LQA Task
We confirmed that the chain maintains 100% accuracy over the hops within 9 epochs, even when the number of hops increases to 13. (This is effectively a sanity check using geometric operations.)
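The PR does not show the LQA chain logic; as an illustration of the kind of geometric sanity check described, one can compose many plane rotors (modeled here as unit complex numbers, which are isomorphic to rotors of a Euclidean plane) and verify that the chain neither drifts off the unit circle nor diverges from a single rotation by the summed angle. The function name is hypothetical:

```python
import cmath

def chain_rotors(angles):
    """Compose a chain of plane rotors (unit complex numbers here) and
    renormalize at each hop so floating-point drift does not accumulate."""
    R = 1 + 0j
    for a in angles:
        R *= cmath.exp(1j * a)
        R /= abs(R)              # keep the rotor on the unit circle
    return R

angles = [0.01 * k for k in range(100)]          # a 100-hop chain
R = chain_rotors(angles)
expected = cmath.exp(1j * sum(angles))           # one rotation, summed angle
assert abs(R - expected) < 1e-9
assert abs(abs(R) - 1.0) < 1e-12
```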
The entailment and negation tasks yielded the following results.
Negation sentences are processed using geometric mirroring (grade involution), suppressing the gap between Acc_Original (66.0%) and Acc_Negated (65.3%) to within 0.7 percentage points. This indicates that the model geometrically preserves the logical operation "negation" regardless of the presence or absence of knowledge.
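Grade involution itself is standard: it negates the odd-grade parts of a multivector, $\hat{A} = \sum_k (-1)^k \langle A \rangle_k$, so applying it twice is the identity. A minimal sketch (not Versor's implementation) on a multivector stored as a grade-to-coefficients mapping:

```python
def grade_involution(mv):
    """Grade involution: negate odd-grade parts of a multivector.

    `mv` maps each grade k to its list of blade coefficients.
    """
    return {k: [-c for c in coeffs] if k % 2 else list(coeffs)
            for k, coeffs in mv.items()}

# A multivector with scalar, vector, and bivector parts:
mv = {0: [2.0], 1: [1.0, -3.0], 2: [0.5]}
assert grade_involution(mv) == {0: [2.0], 1: [-1.0, 3.0], 2: [0.5]}
```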
The HANS benchmark evaluation revealed an imbalance between Acc_Contradiction (10.9%) and Acc_Entailment (79.7%). This indicates that the current model is biased toward the grade-0 (scalar) similarity signal. We plan to improve this by introducing non-commutative inference logic that utilizes grade-2 (bivector) components.
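For context on that remark: grade-2 products are genuinely order-sensitive, unlike the scalar similarity signal, which is what makes them candidates for asymmetric relations such as contradiction. A small illustration (not Versor code) using a 2×2 real-matrix representation of $Cl(2,0)$:

```python
def matmul(A, B):
    """2x2 real matrix product, standing in for the geometric product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A faithful matrix representation of the Cl(2,0) basis vectors:
e1 = [[1, 0], [0, -1]]
e2 = [[0, 1], [1, 0]]

e12 = matmul(e1, e2)   # grade-2 element e1*e2
e21 = matmul(e2, e1)   # reversed order: e2*e1
assert e12 != e21      # the geometric product does not commute
```

Here `e12` and `e21` differ by a sign, so any inference rule built on them distinguishes argument order, while a grade-0 (scalar) similarity score cannot.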
This is an alpha PR toward version 1.0. Thank you for your interest. Please open a PR or issue with any suggestions.