Future of Lua-CSharp #272

@ashtonmeuser

Description

@akeit0 @nuskey8 figured I would tag y'all, as this is more of a discussion than an issue, but this repo does not expose Discussions.

First off, thank you for all the hard work on this incredible library! The API and performance are both fantastic! With that in mind, I hope the following comes across as constructive rather than purely critical.

I think there is some work to be done for this library to truly be production ready. Below are some (non-code) areas that I believe present the largest opportunity for improvement and will have the most impact. With these QoL improvements, I think this project can foster the community and stability it deserves.

  1. Development documentation. Currently, there is no documentation re: how to build the project from source or run the tests. This is critical for users to understand the library and contribute meaningful changes. This could be as simple as a new section in the README or could accompany a documentation site, e.g. one built with docfx.
  2. More thorough documentation. In addition to development docs, it would be useful to expand the docs in general. There are a lot of features (again, great work!) but many of these require poking around in the source to find. Again, something like docfx would go a long way.
  3. Test improvements. There are essentially two suites of tests: one is bespoke to this library, and the other (LuaTests) uses the official Lua tests. This is a great pattern, but at a glance, it can be alarming that several of the Lua tests are expected to fail due to unimplemented behaviour.
    1. The failing Lua tests are a good alert to behaviour that is not supported, but because these tests are all clumped into large Lua scripts, a failure early on, even if expected, prevents the rest of the file from running, rendering it less useful. For example, the errors.lua test currently fails on line 149. That leaves almost 300 lines of tests that will never run. A breaking change in Lua-CSharp may go unnoticed if it falls in this section because we're expecting this test to fail. I don't know if there is a silver-bullet solution here, but it may be wise to raise a warning to the user while still conforming to the API that Lua expects. This PR, for example, is doing just that.
  4. CI/CD.
    1. Automate test suite. This will build confidence in the stability, detect errors in PRs, trace breakages, etc.
    2. Automate releases. There hasn't been a Lua-CSharp release in exactly two months. Automating releases, i.e. publishing to NuGet, would surely reduce the friction and open the door to more frequent releases.
    3. Automate code review. I know generative AI is a controversial subject, but I think it's hard to argue against its effectiveness in code review. Tools like CodeRabbit are incredibly valuable in helping maintainers assess the risk of merging a PR. There are currently quite a few PRs sitting without any comment or information re: why they remain unmerged.
  5. Preview releases. In addition to automating releases, it might be prudent to provide release candidates i.e. preview releases. Users can adopt these and help detect breakages.
  6. Issue categorization/prioritization. The ratio of issues to stars is fairly high for Lua-CSharp. It would be beneficial to start labelling issues with severity (e.g. Checking Greater than or Equal to with a nil value hangs Unity. #260 strikes me as critical while Faster ILuaUserData indexing. #234 is a low-pri nice-to-have), difficulty, etc. This should make it a little clearer for users peeling off an issue to fix.
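To make item 1 concrete: the build/test section of the README could be as short as a few commands. The repo URL and project layout below are assumptions on my part and would need to be checked against the actual solution:

```shell
# Sketch of a "Building from source" README section.
# Repo URL and layout are assumptions; adjust to the actual solution.
git clone https://github.com/nuskey8/Lua-CSharp.git
cd Lua-CSharp

# Build the solution.
dotnet build

# Run both test suites (including the LuaTests Lua-script suite).
dotnet test
```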
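For items 4.1 and 4.2, a single GitHub Actions workflow could cover both automated tests and tag-triggered NuGet publishing. This is only a sketch; the .NET version, trigger conventions, and the `NUGET_API_KEY` secret name are all assumptions:

```yaml
# .github/workflows/ci.yml — untested sketch; versions and secret names
# are assumptions that would need to match the repo's setup.
name: CI
on:
  push:
    tags: ['v*']
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test

  publish:
    # Publish to NuGet only on version tags, and only after tests pass.
    needs: test
    if: startsWith(github.ref, 'refs/tags/v')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet pack -c Release -o out
      - run: dotnet nuget push out/*.nupkg --api-key ${{ secrets.NUGET_API_KEY }} --source https://api.nuget.org/v3/index.json
```

This would also make item 5 cheap: pre-release tags (e.g. `v1.2.0-preview.1`) could flow through the same pipeline as preview packages.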
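For item 3.1, one possible direction is to split each monolithic Lua test script into chunks and run each chunk independently, collecting failures instead of aborting at the first one. Here's a language-agnostic sketch in Python (the `-- testing` marker and the executor are stand-ins, not anything Lua-CSharp actually exposes):

```python
# Sketch: run a monolithic Lua test script as independent chunks so that
# an expected early failure does not mask later regressions.
# The "-- testing" chunk marker and fake_execute are assumptions for
# illustration, not Lua-CSharp API.

def split_chunks(script: str, marker: str = "-- testing") -> list[str]:
    """Split a Lua test script into chunks at each marker comment."""
    chunks, current = [], []
    for line in script.splitlines():
        if line.startswith(marker) and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

def run_all(chunks, execute):
    """Execute every chunk, collecting (index, error) pairs instead of stopping."""
    failures = []
    for i, chunk in enumerate(chunks):
        try:
            execute(chunk)
        except Exception as e:
            failures.append((i, str(e)))
    return failures

# Stand-in for a real Lua executor: the middle chunk raises,
# but the chunks after it still run.
def fake_execute(chunk):
    if "error" in chunk:
        raise RuntimeError("unimplemented behaviour")

script = "-- testing a\nok()\n-- testing b\nerror('boom')\n-- testing c\nok()"
failures = run_all(split_chunks(script), fake_execute)
print(failures)  # only the failing chunk is reported; later chunks still ran
```

The point is that a regression in the "after the known failure" region now surfaces as a new entry in `failures` rather than being silently swallowed by an expected earlier failure.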

Happy to hear your thoughts, and I hope this didn't come off as presumptuous. I'd love to start tackling some of these issues and opening PRs, but figured I'd gather opinions first. Cheers!
