FLIP 330: Scheduled Callbacks #331
In general I like this proposal and support it.
Pros:
Cons:
@austinkline had a very similar TransactionScheduler (https://github.com/austinkline/transaction-scheduler/blob/main/contracts/TransactionScheduler.cdc) without the execution guarantee part. But I think both have good use cases, depending on the economics.
I am struggling with migraines at the moment, so I cannot give as thorough feedback as I want here, but a couple of points.

Question - can this be used to implement a pattern for canned transactions? By canned transaction I mean a transaction which cannot have a known reference block upfront. For example, a transaction where an organization has a multi-sig account and is using it to transfer funds. Not all signatures can be obtained at the same time, so all users sign a scheduled transaction which will transfer funds in the future. Currently, this is not possible if all signers cannot sign within ~5 minutes before the reference block becomes invalid.
@vishalchangrani I don't think it works in that case (I mean, no different than the current design); @tarakby knows much better than me, but basically in that case we need to implement every security measure on chain again (sequence, expiry, etc.). As an example of my points above, this is perfect: something we innovated (multi-sig) biting us, and a foot gun for devs (for example when they implement multi-sig with callbacks).
thank you @bluesign |
Apologies for the late replies, I was absent.
Yes, this is a bit of a new paradigm, but at the same time something that is already done off-chain by many. A lot of contracts rely on being invoked periodically, but that is done in a centralized manner by some authority sending transactions. From the UX perspective this already exists (think of any contract that does something with your data without you sending a transaction). The biggest difference is the on-chain implementation, which will bring the many benefits that decentralization brings.
I think the economics of this are to be explored with usage; it's hard to predict. But as long as the value of delayed execution is bigger than the fees required, it should work fine. The fees are also adjustable, so we can fine-tune them as we go.
If by events you mean callbacks, then yes, they can contain arbitrary data. What is encoded in that data is up to the developer.
They will look similar to the system chunk transaction, but events emitted as part of callback execution will be contained within that transaction. So each callback execution will have a separate transaction. There will also be one more service transaction used to process all the scheduled callbacks; that one will emit events for each processed callback.
We could add reschedule; for now only cancel and schedule are possible. I don't think the overhead is large, it's only 2x (one additional transaction). But adding reschedule as an option could be nice; we just need to think about fee recalculation, and we would probably have to disable changing priorities, etc.
Interesting. It's for sure possible to implement this logic inside Cadence. Let's say you have a contract that takes in an intent for an action and has a scheduled time 1 hour out, but checks that enough signatures have been received up to that point. Then anyone within that 1-hour window can submit a signed intent. However, scheduled callbacks don't natively allow canned transactions; that dynamic doesn't change. But they allow implementations of such patterns in Cadence.
Co-authored-by: Jan Bernatik <jan.bernatik@flowfoundation.org>
turbolent left a comment
Nice proposal! Having a feature like this would be a great addition to the protocol and very useful for developers 👍
With events I mean the events that are emitted from the contract. Some of them only emit an id now, and that means in order to know what is going on you have to know how the callback was created (the event is not independent). Adding some context to each callback that is emitted in events will fix this. What fields should be in the context can be predefined, but having one of them be the app that scheduled the callback would make these events so much more usable.

You mean for these events to have more context besides just an ID? That could be useful, but how would that context be set? By the developer who calls schedule?

Yes, it would be set when you create the callbacks. Or maybe you could also infer them, for instance what address scheduled this callback.
Can scheduled transactions be used as a way to pre-buy capacity on the network? So for example, let's say I want to mint 100K NFTs in the future. Can I submit a scheduled transaction to do it at a specific time in the future and pay for it?

@vishalchangrani Yes, that is possible. When you schedule a transaction, you have to deploy your own contract that contains the logic that you want executed when your scheduled transaction runs. In that contract, you can store the tokens that will be required to pay for the NFTs when the transaction executes, or you can have a capability that withdraws them from your account's vault. You'd have to make sure that there is some way for your callback to account for any price changes of the NFTs in the intervening time frame, though.

Thanks @joshuahannan. However, I read in the FLIP that scheduled callbacks require a premium to be paid, so I think pre-bought capacity would be more expensive than just submitting the transaction, at least in the short term until the cost for callbacks is revisited in the future.
Co-authored-by: Jordan Ribbink <17958158+jribbink@users.noreply.github.com>
@bjartek These are great suggestions!
Do either of those options sound better to you? I worry a little bit about incorrect information in those, but I guess that is probably better than nothing at all. I'd love to chat about it with you sometime soon if you're up for it.
I took a look at the Cadence implementation and I think replacing the search with binary search would increase performance significantly. See https://github.com/versus-flow/versus-contracts/blob/4fda8e3203b25fa75b6ba04193699722b86243c9/contracts/DutchAuction.cdc#L335 for an old take of mine on this. The SortedTimestamp implementation could be a totally standalone thing as well, as it would be useful outside of this contract. And does it need to be timestamp specific?
@bjartek Which search are you talking about? Also, agreed that the implementation could be standalone, but I don't think we really have the time to put all of that together right now, unfortunately.
Getting a list of ids would work. As long as there are other ways of getting more information based on ID that should be enough.
Nice!
Adding getName and getDescription to the interface would definitely help. If we do that, then also adding the name to the events would make them even more human readable. We could assert on the length there to make it not be a security concern. Should there be a method in the interface that describes the callback based on the data? Pseudo code:
access(all) struct CallbackData {
    access(all) let time: UFix64
    access(all) let name: String
}
access(all) resource Handler: FlowCallbackScheduler.CallbackHandler {
    access(FlowCallbackScheduler.Execute)
    fun executeCallback(id: UInt64, data: AnyStruct?) {
        //
    }
    // could also be a field?
    access(all) fun getName(): String {
        return "FulfillGr8Auction"
    }
    // could also be a field?
    access(all) fun getDescription(): String {
        return "A callback to fulfill a gr8.auction."
    }
    access(all) fun describe(id: UInt64, data: AnyStruct?): String {
        let callbackData = data as! CallbackData // add error handling
        return "Fulfilling this great auction at \(callbackData.time) with name \(callbackData.name)"
    }
}
PS: And yes, I do own gr8.auction
When you insert something into the sorted timestamp list you search for the correct index to add it at: I believe bisect would be better for large lists. GPT says O(log n) vs O(n).
Flowscan could probably index these, show them, and have a status for things that are failing. Especially if we find a way to make it easy to know that a failing tx is a scheduled transaction. Can we use the proposer here, or how do we solve this?
The proposer/authorizer of any scheduled callback will be the service account, so that could probably be the way to do it.
- **Medium Priority**
The callback is expected to execute in the first eligible block as well, but could be deferred if the network is under load.
Wanted to flag a couple of design/economics-level observations re: the difference between high and medium priority as described in the FLIP and as implemented. These might be acceptable to keep as-is but could perhaps be explicitly clarified in the FLIP.
- If you successfully schedule a medium priority callback at timeslot T, it is just as "guaranteed" as a high priority callback and won't be deferred. Thus, given the cost difference (10x vs 5x), an economically rational user would use the following pattern when scheduling a high priority callback:
// Would this fit in as a medium prio callback?
let mediumAttempt = FlowCallbackScheduler.estimate(
data,
timestamp,
Priority.Medium,
executionEffort)
if mediumAttempt.timestamp == timestamp {
// A medium slot is available (no risk of race
// condition due to serializability)
res = FlowCallbackScheduler.schedule(
callback,
timestamp,
Priority.Medium,
<- fees)
} else {
// Resort to expensive hipri callback
res = FlowCallbackScheduler.schedule(
callback,
timestamp,
Priority.High,
<- fees)
}
This code achieves the same level of guarantee of execution time as directly scheduling it as a high priority callback. It would frequently (during non-congested times) save ~50% of scheduling costs for the person scheduling (minus the minor computation cost of a single estimate call).
- The 10k execution effort pool shared between high and medium priority is first-come-first-serve. Based on the FLIP, readers might be left with the impression that high priority callbacks could "push out" medium priority callbacks from this shared pool, but this is not the case.
Yeah, I totally agree. I mentioned before that for user experience (developer experience) I think there needs to be a higher-level contract to manage these callbacks.
Normally I have 3 options as a developer:
- I want my callback at some timestamp t (around t) for an estimated cost c, and I decide whether to go with it or not.
- I want my callback to execute as cheaply as possible but at a fixed timestamp t (around t).
Also we have:
- I want to run as cheaply as possible and don't care when it runs. (Technically this shouldn't be an option imo.)
@bluesign I'm not really sure what you're asking for. The contract currently satisfies those three bullet points. Why do you think the third one shouldn't be an option? And what does this "higher level contract" that you're asking for look like, and how is it different from the one we have now?
@joshuahannan yeah, we have this currently; that's why I said we normally need a higher-level contract to handle it. It can be something like:
I want to run this callback after tStart and before tEnd; schedule this as cheaply as possible, for example.
For me the low priority ones are too open ended for mature usage (considering the computation reserved per block is very small). I don't see any use case where someone can schedule something and wait for it for an unknown time. Even something like minting in advance feels very hard for this use case. Imagine you don't know when minting will finish.
Something like auto-splitting big tasks into smaller callbacks which can fit in blocks would also be nice (with fail handling).
Good points. It seems to me that low priority callbacks could be useful as they are. For example, someone can schedule a purchase of something for any time after they receive their salary or staking rewards payment. I agree that the other two will likely be used more, but that is reflected in their respective allocations of execution effort.
@devbugging What do you think?
- That's a valid optimization that the user can do, but I don't think it makes high-priority scheduling redundant, since the reserved limit for those is higher and can even be increased with time. That being said, we should probably even encourage this kind of approach in the docs.
- We will make sure the docs don't give that message.
@bluesign
I think it totally makes sense for higher-level contracts to exist for scheduling, with different optimisations/DX improvements. One is tracking callback status for the case of failed statuses (with the downside of blocking execution). Another is to define ranges like you said and then optimise within those, while charging X and having an incentive to spend less than the user is charged.
For each callback ID retrieved in Step 1, the block computer generates a dedicated execution transaction and enqueues it in the FVM’s execution queue. It also sets the computation effort as the limit for this transaction.
These transactions are designed to operate in full isolation and do not write to the shared state. This enables parallel execution, constrained only by the behavior of the callback code itself. If two callbacks write to overlapping storage paths or depend on external contract state, concurrency limits may still apply, but that is out of our hands.
@devbugging Can you elaborate more on this? I still don't understand how it is possible for callbacks to execute in parallel unless we have some way to detect which state they will modify. How does parallel callback execution handle two callbacks trying to write to the same state?
It is also important for users to be able to use a scheduled callback to schedule another callback. This seems to break the requirement for executeCallback() to not modify any state of the callbacks contract, but I don't see how we can enforce that.
In your response, can you please provide specific details or links to the code and tests that you have written to handle these cases so we can have a reference for what you're talking about?
Thank you!
+1 to this. It's critical that a scheduled callback can create a new scheduled callback.
It is speculative (optimistic concurrency) execution.
Basically it executes everything in parallel and generates a list of reads and writes.
So transactions become: tx(readSet(...), writeSet(...))
Then it goes over them in order: tx1, tx2, ...
If a transaction reads nothing from the union of previous transactions' write sets, it is safe to just apply its write set.
If it reads something previously written, the transaction is re-executed.
Hopefully most will not write to shared state (or will not read from shared state, which is valid too).
So it can technically be arranged that we avoid reading from shared state but freely write to it.
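The conflict check described above can be sketched in a few lines of Go (a simplified illustration of the idea, not the flow-go implementation; all names are hypothetical):

```go
package main

import "fmt"

// Transaction records which keys it read and wrote during speculative execution.
type Transaction struct {
	ID       int
	ReadSet  map[string]bool
	WriteSet map[string]bool
}

// commitOrder walks transactions in commit order and returns the IDs that must
// be re-executed because they read a key written by an earlier transaction.
func commitOrder(txs []Transaction) (reexecute []int) {
	committedWrites := map[string]bool{}
	for _, tx := range txs {
		conflict := false
		for key := range tx.ReadSet {
			if committedWrites[key] {
				conflict = true
				break
			}
		}
		if conflict {
			reexecute = append(reexecute, tx.ID)
		}
		// Writes are merged either way; a re-executed tx would refresh them.
		for key := range tx.WriteSet {
			committedWrites[key] = true
		}
	}
	return reexecute
}

func main() {
	txs := []Transaction{
		{ID: 1, ReadSet: map[string]bool{"a": true}, WriteSet: map[string]bool{"x": true}},
		{ID: 2, ReadSet: map[string]bool{"b": true}, WriteSet: map[string]bool{"y": true}}, // independent
		{ID: 3, ReadSet: map[string]bool{"x": true}, WriteSet: map[string]bool{"z": true}}, // reads tx1's write
	}
	fmt.Println(commitOrder(txs)) // [3]
}
```

Only tx3 conflicts here: it reads "x", which tx1 wrote, so it alone is re-executed while tx1 and tx2 commit in parallel.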
Okay, I can see that being fine from the perspective of regular callbacks. But I can see it being pretty popular to schedule callbacks that also schedule other callbacks, which will modify the shared state of the main callbacks contract, so that could potentially be an issue if it is common.
It is not a big issue if we move processed callbacks to another dictionary, for example (I am not in sync with the latest contract changes; maybe it is already like this). The only thing needed is to not read from where you write.
@joshuahannan what @bluesign first responded with is basically how it works at the execution level. But this section only talks about the execute-callback system transaction. What the actual callback handler implementation in the contract does is out of our control. If that implementation accesses state shared with some other callback, the concurrent execution will block and fall back to serial. That is fine. The main point is that we should make sure that if the handler implementation done by users is not accessing shared state, the scheduler does execute concurrently. Otherwise we would always have serial execution even when the user implementation is not accessing shared state, which is bad.
Any callback that reschedules in the execute phase will block other execute-callback transactions (but only during that execution). That is fine though, because it will likely only block one transaction. Optimizing this is very complex and not worth it at this point imho.
Writes to maps are conflicting because each write also reads.
@bluesign any more feedback on this? Does what we said sound good to you?
### Execution Isolation
Each callback is executed in isolation as an independent transaction. This design ensures that the failure of one callback, due to logic errors, execution effort exhaustion, or contract reverts, does not impact the execution of others. From a developer perspective, this also provides better observability since each callback receives its own transaction ID and emits events from the transaction domain, allowing for better traceability.
@devbugging I have a question about this:
Each callback is executed in isolation as an independent transaction.
previously, you mentioned
Callback execution is triggered in each new block by a system transaction
Currently, there is only a single system transaction. Are you saying that there will now be an additional (1 + Number of Callbacks) transactions in the system chunk? Or will there be some other mechanism of triggering them via the system tx that behaves like a separate transaction?
I'm wondering because if we add non-standard tx into the system chunk, that will have implications on the execution sync and indexing processes.
I assumed that scheduling transactions would be run as normal transactions and not part of system chunk. If that is not the case then we also need to change things in our indexer at findlabs.
Are you saying that there will now be an additional (1 + Number of Callbacks) transactions in the system chunk?
@peterargue
Exactly. There will be 1 "process" transaction added to the system collection as the first transaction, and then 0-N "execute" transactions based on the process transaction result. What is the implication? Why does it matter if it's part of the system collection? Would it be different if it was a new collection just for this? I'm not sure that's a good idea either. When we discussed this (and I'm sorry you didn't read this sooner) I remember we said that in the future there will be more system transactions anyway, since the system chunk tx already does too many things, so relying on there only being a single tx in the system collection is not scalable.
@devbugging It's not a problem for there to be more transactions. It only matters because today we do not store or index the system collection or its transaction since they are static and can be regenerated as needed. If there will be dynamic tx, we will need to begin indexing the tx so they are queryable via the API.
It's not a huge lift, just not something we had been planning for.
So that I'm clear, the system collection will now have:
- The existing system tx which handles epochs, randomness, etc.
- A new scheduled callbacks orchestrator tx
- 0-N additional transactions, one for each callback
Is that correct?
Will the orchestrator tx be dynamic as well, or will it have the same contents every block?
Ohh I see. Yeah, those will have to be indexed; the only other way is to put them in another collection, but that also opens new issues, so I wouldn't do that unless necessary.
You are almost right (the order is reversed):
- Process transaction - a static transaction that only takes in the service account as authorizer: https://github.com/onflow/flow-core-contracts/blob/feature/callback-scheduling/transactions/callbackScheduler/admin/process_callback.cdc
- 0-N (based on the previous process tx result) execute transactions with the ID as argument: https://github.com/onflow/flow-core-contracts/blob/feature/callback-scheduling/transactions/callbackScheduler/admin/execute_callback.cdc
- Existing system transaction will come last
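That ordering can be sketched roughly as follows (a Go illustration with hypothetical transaction names; the real transactions are the linked Cadence files):

```go
package main

import "fmt"

// buildSystemCollection illustrates the ordering described above: one static
// "process" transaction first, then 0-N "execute" transactions (one per
// callback ID selected by the process step), then the existing system
// transaction last.
func buildSystemCollection(callbackIDs []uint64) []string {
	txs := []string{"process_callbacks"}
	for _, id := range callbackIDs {
		txs = append(txs, fmt.Sprintf("execute_callback(%d)", id))
	}
	return append(txs, "system_transaction")
}

func main() {
	fmt.Println(buildSystemCollection([]uint64{7, 9}))
	// [process_callbacks execute_callback(7) execute_callback(9) system_transaction]
}
```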
Does this mean there is custom verification logic needed for the system chunk transaction?
Yes, verification nodes have a special case for the system chunk.
Hey everyone! We're nearing completion of the first version of the callbacks contract and flow-go integration. We believe all of the feedback here has been addressed, but we want to check one more time. We're still looking for more feedback on the smart contract too, so if you have time, the PR is here in the flow-core-contracts repo. We'll leave this open until the end of next week, and if there are no more comments, we'll mark it as completed and merge it. Tagging everyone who left feedback to make sure y'all see this: @bluesign @bjartek @peterargue @janezpodhostnik @briandoyle81 @oebeling @turbolent @SupunS
Great work from everyone involved 👏 all my comments and concerns are addressed. Can't wait to build with this 😍

Hey all - opened a ticket relating to scheduled callbacks and data availability. One of the gaps I see as an app developer is that it's currently impossible to locate a scheduled callback's transaction & result given its callback ID. This feels like an integration challenge, as developers will likely want to surface these results to their users, but cannot do so today without a third-party indexer listening for the callback. Would love to hear thoughts here and whether we think that this is something possible to add to the access API. I've defined what I see as my "bare minimum" for parity with regular Flow transactions in the ticket as an SDK developer, but it would also be great to hear anything else that could be useful, or gaps I may not have thought of here.

Great feedback. We've looked into ways to enable on-chain mapping to tx IDs, but that has proved to be unfeasible for the project timeline. Your suggested approach of adding new APIs to the AN is the way to go, and we have already briefly discussed it. The issue you've created is great, and you should be part of the discussion defining these new APIs. The current state is that the AN only supports returning callback txs without any new APIs, but we should define and implement new APIs as soon as possible. Adding @peterargue and @vishalchangrani to look at the flow-go issue and include you in discussions. Thank you!
I think this can also be solved at the SDK level; the txId is pretty easy to derive from the callbackId.
Naming proposal: “Scheduled Transactions” (rename from “Scheduled Callbacks”)
TL;DR
Why this matters
Path forward
Happy to open a PR with the wording change and a lightweight deprecation note.

We can rename it. No aliasing or deprecations; that would just make it messy.
Issue: #330
Abstract
In traditional blockchain architectures, application state transitions occur only in response to externally submitted transactions. This design limits the autonomy of on-chain applications, preventing them from operating independently.
Flow introduces scheduled callbacks, a novel mechanism that enables smart contracts to autonomously trigger actions at predefined times.
Scheduled callbacks enable a contract to “wake up” and execute logic based on on-chain state, allowing for recurring tasks, deferred operations, or reactive behaviors. This unlocks a wide range of powerful use cases, such as autonomous arbitrage bots, recurring subscription services, automated transaction batching, self-destructing wallets, and other advanced decentralized logic patterns.