
Conversation

@ankrgyl (Collaborator) commented Aug 8, 2023

No description provided.

@ankrgyl (Collaborator, Author) left a comment
Cool, the change looks pretty clean to me.

Is the idea that we'll "subscribe" to test emitter events and then send braintrust log statements as they occur?
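A minimal sketch of what that subscription might look like, assuming a Node-style emitter; the `testFinish` event name, the `TestResult` shape, and the `logToBraintrust` stand-in are all assumptions, not the actual braintrust API:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical result shape for a single test case.
type TestResult = { name: string; output: unknown; durationMs: number };

const logged: TestResult[] = [];

// Stand-in for a real braintrust log call; in the actual integration this
// would forward the result to the braintrust SDK instead.
function logToBraintrust(result: TestResult) {
  logged.push(result);
}

// Subscribe to the test emitter and forward each result as it occurs.
const testEmitter = new EventEmitter();
testEmitter.on("testFinish", (result: TestResult) => logToBraintrust(result));

// Simulate a test finishing.
testEmitter.emit("testFinish", { name: "case-1", output: "ok", durationMs: 12 });
```

The point is just that logging happens incrementally per event, rather than in one batch after the whole run.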

By the way, we're in the middle of adding more granular profiling to BrainTrust via a concept called "spans".

Not strictly necessary to include in the MVP, but if it's possible to emit an event each time a node executes for each test case, we can incrementally log span progress to BrainTrust, and then you'd be able to see performance metrics across the whole execution tree.
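As a rough illustration of per-node span tracking, assuming the graph executor exposes start/finish events (the `nodeStart`/`nodeFinish` event names and the `Span` shape here are hypothetical, not the braintrust span API):

```typescript
import { EventEmitter } from "node:events";

// Hypothetical span record for one node execution.
type Span = { nodeId: string; start: number; end?: number };

const spans = new Map<string, Span>();
const graphEvents = new EventEmitter();

// Open a span when a node starts executing.
graphEvents.on("nodeStart", (nodeId: string) => {
  spans.set(nodeId, { nodeId, start: Date.now() });
});

// Close the span when the node finishes; this is where progress
// would be incrementally reported to BrainTrust.
graphEvents.on("nodeFinish", (nodeId: string) => {
  const span = spans.get(nodeId);
  if (span) span.end = Date.now();
});

// Simulate one node executing for a test case.
graphEvents.emit("nodeStart", "extract");
graphEvents.emit("nodeFinish", "extract");
```

With parent-node bookkeeping added, the closed spans would form the execution tree whose timings the comment above describes.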

inputs,
);
},
// braintrustApiKey: settings.braintrustApiKey,
@ankrgyl (Collaborator, Author)

Is this the WIP section?

@@ -0,0 +1,35 @@
// Extracted from braintrust library
@ankrgyl (Collaborator, Author)

Do these need to be extracted? Can they just be re-exported?

@abrenneke (Contributor)

> Not strictly necessary to include in the MVP, but if it's possible to emit an event each time a node executes for each test case, we can incrementally log span progress to BrainTrust, and then you'd be able to see performance metrics across the whole execution tree.

Ah yeah I can just pipe in the events from the graph execution!
