Conversation

@devbugging
Contributor

Issue: #330

Abstract
In traditional blockchain architectures, application state transitions occur only in response to externally submitted transactions. This design limits the autonomy of on-chain applications, preventing them from operating independently.

Flow introduces scheduled callbacks, a novel mechanism that enables smart contracts to autonomously trigger actions at predefined times.

Scheduled callbacks enable a contract to “wake up” and execute logic based on on-chain state, allowing for recurring tasks, deferred operations, or reactive behaviors. This unlocks a wide range of powerful use cases, such as autonomous arbitrage bots, recurring subscription services, automated transaction batching, self-destructing wallets, and other advanced decentralized logic patterns.

@bluesign
Collaborator

bluesign commented Jun 9, 2025

In general I like this proposal and support it.

Pros:

  • Very easy to implement; it does not touch the protocol, etc. From what I understand (please correct me if I am wrong), scheduling is mostly managed by the smart contract; the FVM is basically just a while(hasBudget){ runWork(getWork()) } loop.
  • One more marketable feature.
  • This can be a good source of new fees.

Cons:

  • Can be a bit of a footgun for developers (it is more like async for blockchain, or a bit like event-based programming for us oldies).
  • This has never been done before on other chains (which makes me a little bit scared); in my experience, the things we innovate first tend to bite us in the long run.
  • I feel the economics of this will fail somehow.

@austinkline had a very similar TransactionScheduler ( https://github.com/austinkline/transaction-scheduler/blob/main/contracts/TransactionScheduler.cdc ) without the execution guarantee part. But I think both have good use cases, depending on the economics.

@j1010001 j1010001 mentioned this pull request Jun 5, 2025
@bjartek
Contributor

bjartek commented Jun 10, 2025

I am struggling with migraines atm, so I cannot give as thorough feedback as I want here, but a couple of points:

  • Can events hold some human-readable data? Some fields to group who scheduled this, for instance. IMHO events should be independent and both human and machine readable. Since some of these currently only have an id, they are not independent: you have to know the previous action to get the context. That makes them less usable.
  • What will these transactions look like on chain? Would it be possible to add some arguments to them from the outside so that it is possible to make them look good in the block explorer? For instance, let's say you want to associate callbacks with a dapp/solution. If you could provide some fields that are sent in as arguments to the scheduler tx, that would be very good.
  • Is it possible to reschedule a transaction? Let's say I implement a name service (.find): I want to run a tx 1 year after registration to change the status of a name. But what if the user renews the name? Then I want to run the same task 1 year after the renewal. Having to cancel the old task and then create a new one is tedious and would just flood the system if you ask me.

@vishalchangrani
Contributor

Question: can this be used to implement a pattern for canned transactions? By a canned transaction I mean a transaction that cannot have a known reference block upfront. For example, a transaction where an organization has a multi-sig account and is using it to transfer funds. Not all signatures can be obtained at the same time, so all users sign a scheduled transaction that will transfer the funds in the future. Currently, this is not possible if all signers cannot sign within the ~5 minutes before the reference block becomes invalid.

@bluesign
Collaborator

@vishalchangrani I don't think it works in that case (I mean, it's no different than the current design); @tarakby knows much better than me, but basically in that case we would need to implement every security measure on chain again (sequence numbers, expiry, etc.).

As an example of my points above, this is perfect: something we innovated (multi-sign) biting us, and a footgun for devs (for example, when they implement multi-sign with callbacks).

@vishalchangrani
Contributor

@vishalchangrani I don't think it works in that case (I mean, it's no different than the current design); @tarakby knows much better than me, but basically in that case we would need to implement every security measure on chain again (sequence numbers, expiry, etc.).

As an example of my points above, this is perfect: something we innovated (multi-sign) biting us, and a footgun for devs (for example, when they implement multi-sign with callbacks).

thank you @bluesign

@devbugging
Contributor Author

Apologies for the late replies; I was absent.

@bluesign

Can be a bit of a footgun for developers (it is more like async for blockchain, or a bit like event-based programming for us oldies)

Yes, this is a bit of a new paradigm, but at the same time something that is already done off-chain by many. A lot of contracts rely on being invoked periodically, but that is done in a centralized manner by some authority sending transactions. From the UX perspective this already exists (think of any contract that does something with your data without you sending a transaction). The biggest difference is the on-chain implementation, which will bring the many benefits of decentralization.

I feel the economics of this will fail somehow

I think the economics of this are to be explored with usage; it's hard to predict. But as long as the value of delayed execution is greater than the fees required, it should work fine. The fees are also adjustable, so we can fine-tune them as we go.

@bjartek

Can events hold some human-readable data? Some fields to group who scheduled this, for instance. IMHO events should be independent and both human and machine readable. Since some of these currently only have an id, they are not independent: you have to know the previous action to get the context. That makes them less usable.

If by events you mean callbacks, then yes, they can contain arbitrary data. What is encoded in that data is up to the developer.

What will these transactions look like on chain? Would it be possible to add some arguments to them from the outside so that it is possible to make them look good in the block explorer? For instance, let's say you want to associate callbacks with a dapp/solution. If you could provide some fields that are sent in as arguments to the scheduler tx, that would be very good.

They will look similar to the system chunk transaction, but events emitted as part of callback execution will be contained within that transaction, so each callback execution will have a separate transaction. There will also be a separate service transaction used to process all the scheduled callbacks; that one will emit events for each processed callback.

Is it possible to reschedule a transaction? Let's say I implement a name service (.find): I want to run a tx 1 year after registration to change the status of a name. But what if the user renews the name? Then I want to run the same task 1 year after the renewal. Having to cancel the old task and then create a new one is tedious and would just flood the system if you ask me.

We could add reschedule; for now only cancel and schedule are possible. I don't think the overhead is large: it's only 2x (one additional transaction). But adding reschedule as an option could be nice; we just need to think about fee recalculation, and we would probably have to disable changing priorities, etc.
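In the meantime, the cancel-then-schedule workaround might look roughly like the sketch below. This is hypothetical: the `cancel(id:)` signature and the fee refund it returns are assumptions on my part, while `schedule` follows the positional argument shape used elsewhere in this thread.

```cadence
// Hypothetical sketch only: reschedule does not exist yet, so we
// cancel the old callback and schedule a new one. The `cancel(id:)`
// signature and its refund are assumptions, not the actual API.
let refund <- FlowCallbackScheduler.cancel(id: oldCallbackID)
let receipt = FlowCallbackScheduler.schedule(
    callback,        // the same handler capability as before
    newTimestamp,    // e.g. 1 year after the renewal
    Priority.Medium,
    <- refund        // reuse the refunded fees for the new slot
)
```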

@vishalchangrani

Question: can this be used to implement a pattern for canned transactions? By a canned transaction I mean a transaction that cannot have a known reference block upfront. For example, a transaction where an organization has a multi-sig account and is using it to transfer funds. Not all signatures can be obtained at the same time, so all users sign a scheduled transaction that will transfer the funds in the future. Currently, this is not possible if all signers cannot sign within the ~5 minutes before the reference block becomes invalid.

Interesting. It's certainly possible to implement this logic inside Cadence. Let's say you have a contract that takes in an intent for an action and schedules a callback 1 hour later that checks whether enough signatures have been received by that point. Then anyone within that 1-hour window can submit a signed intent. However, scheduled callbacks don't enable canned transactions natively; that dynamic doesn't change. But they do allow such patterns to be implemented in Cadence.

Co-authored-by: Jan Bernatik <jan.bernatik@flowfoundation.org>
Member

@turbolent turbolent left a comment


Nice proposal! Having a feature like this would be a great addition to the protocol and very useful for developers 👍

@bjartek
Contributor

bjartek commented Jun 26, 2025

With events I mean the events that are emitted from the contract. Some of them only emit an id now, which means that in order to know what is going on you have to know how the callback was created (the event is not independent).

Adding some context to each callback that is emitted in events will fix this. The fields in that context can be predefined, but having one of them be the app that scheduled the callback would make these events much more usable.

@devbugging
Contributor Author

devbugging commented Jun 27, 2025

With events I mean the events that are emitted from the contract. Some of them only emit an id now, which means that in order to know what is going on you have to know how the callback was created (the event is not independent).

Adding some context to each callback that is emitted in events will fix this. The fields in that context can be predefined, but having one of them be the app that scheduled the callback would make these events much more usable.

You mean for these events:

access(all) event CallbackScheduled(id: UInt64, timestamp: UFix64, priority: UInt8, computationEffort: UInt64)
access(all) event CallbackProcessed(id: UInt64, computationEffort: UInt64)
access(all) event CallbackExecuted(id: UInt64)
access(all) event CallbackCanceled(id: UInt64)

to have more context besides just an ID?

That could be useful, but how would that context be set? By the developer who calls schedule?

@bjartek
Contributor

bjartek commented Jul 3, 2025

Yes, it would be set when you create the callbacks. Or maybe you could also infer them.

What address scheduled this callback, for instance.

@vishalchangrani
Contributor

Can scheduled transactions be used as a way to pre-buy capacity on the network? For example, let's say I want to mint 100K NFTs in the future. Can I submit a scheduled transaction to do it at a specific time in the future and pay for it?

@joshuahannan
Member

@vishalchangrani Yes, that is possible. When you schedule a transaction, you have to deploy your own contract that contains the logic you want executed when your scheduled transaction runs. In that contract, you can store the tokens that will be required to pay for the NFTs when the transaction executes, or you can have a capability that withdraws them from your account's vault. You'd have to make sure that there is some way for your callback to account for any price changes of the NFTs in the intervening time frame, though.

@vishalchangrani
Contributor

@vishalchangrani Yes, that is possible. When you schedule a transaction, you have to deploy your own contract that contains the logic you want executed when your scheduled transaction runs. In that contract, you can store the tokens that will be required to pay for the NFTs when the transaction executes, or you can have a capability that withdraws them from your account's vault. You'd have to make sure that there is some way for your callback to account for any price changes of the NFTs in the intervening time frame, though.

thanks @joshuahannan. However, I read in the FLIP that scheduled callbacks require a premium to be paid, so I think pre-bought capacity would be more expensive than just submitting the transaction, at least in the short term until the cost for callbacks is revisited in the future.

Co-authored-by: Jordan Ribbink <17958158+jribbink@users.noreply.github.com>
@joshuahannan
Member

joshuahannan commented Aug 20, 2025

@bjartek These are great suggestions!

  • I'll add a getter function to query callbacks for a specified time period. This could get pretty costly depending on how big the time period is, but it could definitely be useful. Would a list of IDs be okay, or do you also want identifying information about the callbacks, like priority, execution effort, handler type, description, etc.?
  • Good point! I think adding a type identifier to the events would be great.
  • Can you elaborate a bit more on the human-friendly metadata part? We currently have the data: AnyStruct argument, but I understand that probably doesn't do enough. I see two options:
    • Change data to {CallbackData}, where CallbackData is a struct interface that includes an AnyStruct field plus name and description fields.
    • Update the CallbackHandler interface to have getName(id: UInt64): String and getDescription(id: UInt64): String functions, so the contract that implements the handler has to store this information for each of the callbacks it handles.

Do either of those options sound better to you? I worry a little about incorrect information in those, but I guess that is probably better than nothing at all. I'd love to chat about it with you sometime soon if you're up for it.

@bjartek
Contributor

bjartek commented Aug 20, 2025

I took a look at the Cadence implementation, and I think replacing the search with a binary search would increase performance significantly. See https://github.com/versus-flow/versus-contracts/blob/4fda8e3203b25fa75b6ba04193699722b86243c9/contracts/DutchAuction.cdc#L335 for an old take of mine on this.

The SortedTimestamp implementation could be a totally standalone thing as well, as it would be useful outside of this contract. And does it need to be timestamp-specific?

@joshuahannan
Member

@bjartek Which search are you talking about? Also, agreed that the implementation could be standalone, but I don't think we really have the time to put all of that together right now, unfortunately.

@bjartek
Contributor

bjartek commented Aug 20, 2025

@bjartek These are great suggestions!

  • I'll add a getter function to query callbacks for a specified time period. This could get pretty costly depending on how big the time period is, but it could definitely be useful. Would a list of IDs be okay, or do you also want identifying information about the callbacks, like priority, execution effort, handler type, description, etc.?

Getting a list of ids would work. As long as there are other ways of getting more information based on an ID, that should be enough.

  • Good point! I think adding a type identifier to the events would be great.

Nice!

  • Can you elaborate a bit more on the human-friendly metadata part? We currently have the data: AnyStruct argument, but I understand that probably doesn't do enough. I see two options:

    • Change data to {CallbackData}, where CallbackData is a struct interface that includes an AnyStruct field plus name and description fields.
    • Update the CallbackHandler interface to have getName(id: UInt64): String and getDescription(id: UInt64): String functions, so the contract that implements the handler has to store this information for each of the callbacks it handles.

Do either of those options sound better to you? I worry a little about incorrect information in those, but I guess that is probably better than nothing at all. I'd love to chat about it with you sometime soon if you're up for it.

Adding getName and getDescription to the interface would definitely help. If we do that, then adding the name to the events would make them even more human readable. We could assert on the length there so it is not a security concern.

Should there be a method in the interface that describes the callback based on the data? Pseudo code:

    access(all) struct CallbackData {

        access(all) let time: UFix64
        access(all) let name: String
    }

    access(all) resource Handler: FlowCallbackScheduler.CallbackHandler {

        access(FlowCallbackScheduler.Execute)
        fun executeCallback(id: UInt64, data: AnyStruct?) {
            // ...
        }

        // could also be a field?
        access(all) fun getName(): String {
            return "FulfillGr8Auction"
        }

        // could also be a field?
        access(all) fun getDescription(): String {
            return "A callback to fulfill a gr8.auction."
        }

        access(all) fun describe(id: UInt64, data: AnyStruct?): String {
            let callbackData = data as! CallbackData // add error handling
            return "Fulfilling this great auction at \(callbackData.time) with name \(callbackData.name)"
        }
    }

PS: And yes I do own gr8.auction

@bjartek
Contributor

bjartek commented Aug 20, 2025

@bjartek Which search are you talking about? Also, agreed that the implementation could be standalone, but I don't think we really have the time to put all of that together right now, unfortunately.

When you insert something into the sorted timestamp list, you search for the correct index to add it at:
https://github.com/onflow/flow-core-contracts/blob/5a65b326f1cb82a42f5f31cb039df177a453024b/contracts/FlowCallbackScheduler.cdc#L355

I believe bisection would be better for large lists; gpt says O(log n) vs O(n).
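A minimal sketch of such a bisection in Cadence (illustrative only; the function and parameter names are not from the contract). It returns the first index whose value is >= the target, which is exactly the insertion point for the new timestamp:

```cadence
// Binary search for the insertion index in a sorted [UFix64] array:
// O(log n) instead of the O(n) linear scan.
access(all) fun insertionIndex(sorted: [UFix64], target: UFix64): Int {
    var low = 0
    var high = sorted.length
    while low < high {
        let mid = (low + high) / 2
        if sorted[mid] < target {
            low = mid + 1
        } else {
            high = mid
        }
    }
    return low // first index with sorted[index] >= target
}
```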

@bjartek
Contributor

bjartek commented Aug 20, 2025

Flowscan could probably index these, show them, and have a status for ones that are failing. Especially if we find a way to make it easy to know that a failing tx is a scheduled transaction.

Can we use the proposer here, or how else do we solve this?

@joshuahannan
Member

The proposer/authorizer of any scheduled callback will be the service account, so that could probably be the way to do it.


- **Medium Priority**

The callback is expected to execute in the first eligible block as well, but could be deferred if the network is under load.

@oebeling oebeling Aug 21, 2025


Wanted to flag a couple of design/economics-level observations on the difference between high and medium priority as described in the FLIP and as implemented. These might be acceptable to keep as-is but could perhaps be explicitly clarified in the FLIP.

  1. If you successfully schedule a medium-priority callback at timeslot T, it is just as "guaranteed" as a high-priority callback and won't be deferred. Thus, given the cost difference (10x vs 5x), an economically rational user would use the following pattern when scheduling a high-priority callback:
// Would this fit in as a medium prio callback?
let mediumAttempt = FlowCallbackScheduler.estimate(
  data,
  timestamp,
  Priority.Medium,
  executionEffort)
if mediumAttempt.timestamp == timestamp {
  // A medium slot is available (no risk of race
  // condition due to serializability)
  res = FlowCallbackScheduler.schedule(
    callback,
    timestamp,
    Priority.Medium,
    <- fees)
} else {
  // Resort to expensive hipri callback
  res = FlowCallbackScheduler.schedule(
    callback,
    timestamp,
    Priority.High,
    <- fees)
}

This code achieves the same level of guarantee of execution time as directly scheduling a high-priority callback. During non-congested times it would frequently save ~50% of scheduling costs for the person scheduling (minus the minor computation cost of a single estimate call).

  2. The 10kee execution effort pool shared between high and medium priority is first-come-first-served. Based on the FLIP, readers might be left with the impression that high-priority callbacks could "push out" medium-priority callbacks from this shared pool, but this is not the case.

Collaborator


Yeah, I totally agree. As I mentioned before, for user experience (developer experience) I think there needs to be a higher-level contract to manage these callbacks.

Normally I have 3 options as a developer:

  • I want my callback at some timestamp t (around t) for an estimated cost c, and I decide whether to go with it or not.

  • I want my callback to execute as cheaply as possible, but at a fixed timestamp t (around t).

Also we have:

  • I want to run as cheaply as possible and don't care when it runs. (Technically this shouldn't be an option, imo.)

Member


@bluesign I'm not really sure what you're asking for. The contract currently satisfies those three bullet points. Why do you think the third one shouldn't be an option? And what does this "higher-level contract" that you're asking for look like, and how is it different from the one we have now?

Collaborator

@bluesign bluesign Aug 21, 2025


@joshuahannan yeah, we have this currently; that's why I said we normally need a higher-level contract. It could handle something like:

"I want to run this callback after tStart and before tEnd; schedule it as cheaply as possible", for example.

For me the low-priority ones are too open-ended for mature usage (considering the computation reserved per block is very small). I don't see a use case where someone can schedule something and wait for it for an unknown amount of time. Even something like minting in advance feels very hard for this use case; imagine you don't know when minting will finish.

Something like auto-splitting big tasks into smaller callbacks that can fit into blocks could also be nice (with fail handling).

Member


Good points. It seems to me that low-priority callbacks could be useful as they are: someone can schedule a purchase for any time after they receive their salary or staking rewards payment. I agree that the other two will likely be used more, but that is reflected in their respective allocations of execution effort.

@devbugging What do you think?

Contributor Author


@oebeling

  1. That's a valid optimization that a user can make, but I don't think it makes high-priority scheduling redundant, since the reserved limit for high priority is higher and can even be increased over time. That being said, we should probably encourage this kind of approach in the docs.
  2. We will make sure the docs do not give that message.

@bluesign
I think it totally makes sense for higher-level contracts to exist for scheduling, with different optimizations/DX improvements. One is tracking callback status to catch failed statuses (with the blocking-execution downside). Another is to define ranges like you said and then optimize within those, charging X and having an incentive to spend less than the user is charged.


For each callback ID retrieved in Step 1, the block computer generates a dedicated execution transaction and enqueues it in the FVM’s execution queue. It also sets the computation effort as the limit for this transaction.

These transactions are designed to operate in full isolation and do not write to shared state. This enables parallel execution, constrained only by the behavior of the callback code itself: if two callbacks write to overlapping storage paths or depend on external contract state, concurrency limits may still apply, but that is out of our hands.
Member


@devbugging Can you elaborate more on this? I still don't understand how it is possible for callbacks to execute in parallel unless we have some way to detect which state they will modify. How does the parallel callback execution handle when two callbacks try to write to the same state?

It is also important for users to be able to use a scheduled callback to schedule another callback. This seems to break the requirement for executeCallback() to not modify any state of the callbacks contract, but I don't see how we can enforce that.

In your response, can you please provide specific details or links to the code and tests that you have written to handle these cases so we can have a reference for what you're talking about?

Thank you!


+1 to this. It's critical that a scheduled callback can create a new scheduled callback.

Collaborator

@bluesign bluesign Aug 22, 2025


It is speculative (optimistic concurrency) execution.

Basically, everything is executed in parallel, and each transaction generates a set of reads and writes.

So transactions become: tx(readSet(...), writeSet(...))

Then they are processed in order: tx1, tx2, ...

If a transaction reads nothing from the union of the previous transactions' write sets, it is safe to just apply its write set.

If it reads something previously written, the transaction is re-executed.

Hopefully most will not write to shared state (or will not read from shared state, which is valid too).

So it can technically be arranged that we avoid reading from shared state but freely write to it.
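The conflict rule above can be sketched as follows. This is purely illustrative: the real bookkeeping lives in the FVM's execution layer, and all names here are made up.

```cadence
// A transaction must be re-executed if its read set intersects the
// union of the write sets of the transactions ordered before it.
// `priorWrites` models that union; `readSet` is this tx's reads.
access(all) fun mustReExecute(readSet: [String], priorWrites: {String: Bool}): Bool {
    for key in readSet {
        if priorWrites[key] ?? false {
            // It read a value an earlier tx wrote, so its speculative
            // result may be stale: re-execute it in order.
            return true
        }
    }
    // No overlap: its write set can be applied as-is.
    return false
}
```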

Member


Okay, I can see that being fine from the perspective of regular callbacks. But I can see it being pretty popular to schedule callbacks that also schedule other callbacks, which will modify the shared state of the main callbacks contract, so that could potentially be an issue if it is common.

Collaborator

@bluesign bluesign Aug 22, 2025


It is not a big issue. If we move processed callbacks to another dictionary, for example (I am not in sync with the latest contract changes; maybe it is already like this), the only thing needed is to not read from where you write.

Contributor Author


@joshuahannan What @bluesign first responded is basically how it works at the execution level. But this section only talks about the execute-callback system transaction; the actual callback handler implementation in the contract is out of our control. If that implementation accesses state shared with some other callback, concurrent execution will block and fall back to serial. That is fine. The main point is that if a user's handler implementation does not access shared state, the scheduler does execute concurrently. Otherwise we would always have serial execution even when user implementations do not access shared state, which is bad.

Contributor Author

@devbugging devbugging Aug 28, 2025


Any callback that reschedules itself in the execute phase will block other execute-callback transactions (but only during that execution). That is fine though, because it will likely only block one transaction. Optimizing this is very complex and not worth it at this point, imho.
Writes to maps are conflicting because each write also reads.

Member


@bluesign any more feedback on this? Does what we said sound good to you?


### Execution Isolation

Each callback is executed in isolation as an independent transaction. This design ensures that the failure of one callback, due to logic errors, execution effort exhaustion, or contract reverts, does not impact the execution of others. From a developer perspective, this also provides better observability since each callback receives its own transaction ID and emits events from the transaction domain, allowing for better traceability.
Contributor

@peterargue peterargue Aug 22, 2025


@devbugging I have a question about this:

Each callback is executed in isolation as an independent transaction.

previously, you mentioned

Callback execution is triggered in each new block by a system transaction

Currently, there is only a single system transaction. Are you saying that there will now be an additional (1 + number of callbacks) transactions in the system chunk? Or will there be some other mechanism of triggering them via the system tx that behaves like a separate transaction?

I'm wondering because if we add non-standard txs into the system chunk, that will have implications for the execution sync and indexing processes.

Contributor


I assumed that scheduled transactions would run as normal transactions and not as part of the system chunk. If that is not the case, then we also need to change things in our indexer at findlabs.

Contributor Author

@devbugging devbugging Aug 26, 2025


Are you saying that there will now be an additional (1 + Number of Callbacks) transactions in the system chunk?

@peterargue
Exactly. There will be 1 "process" transaction added to the system collection as the first transaction, and then 0-N "execute" transactions based on the process transaction's result. What is the implication? Why does it matter if it's part of the system collection? It could instead be a new collection just for this, but I'm not sure that's a good idea either. When we discussed this (and I'm sorry you didn't read this sooner), I remember we said that in the future there will be more system transactions anyway, since the system chunk tx already does too many things, so relying on there being only a single tx in the system collection is not scalable anyway.

Contributor


Exactly. There will be 1 "process" transaction added to the system collection as the first transaction and then 0-N "execute" transactions based on the process transaction result. What is the implication? Why it matters if it's part of the system collection? Would it be different if it was a new collection just for this, but then I'm not sure that's a good idea either. When we discussed this (and I'm sorry you didn't read this sooner) I remember we said that in future there will be more system transactions anyway since now the system chunk tx does too many things anyway, so it seems to me like relying on there only being a single tx in system collection is not scalable anyway.

@devbugging It's not a problem for there to be more transactions. It only matters because today we do not store or index the system collection or its transaction, since they are static and can be regenerated as needed. If there are dynamic txs, we will need to begin indexing them so they are queryable via the API.

It's not a huge lift, just not something we had been planning for.

So that I'm clear, the system collection will now have:

  1. The existing system tx, which handles epochs, randomness, etc.
  2. A new scheduled callbacks orchestrator tx
  3. 0-N additional transactions, one for each callback

Is that correct?
Will the orchestrator tx be dynamic as well, or will it have the same contents every block?

Contributor Author

Ohh I see. Yeah, that will have to be indexed; the only other way is to put them in another collection, but that also opens new issues, so I wouldn't do that unless necessary.

You are almost right (the order is reversed):

  1. Process transaction - a static transaction that only takes the service account as authorizer: https://github.com/onflow/flow-core-contracts/blob/feature/callback-scheduling/transactions/callbackScheduler/admin/process_callback.cdc
  2. 0-N execute transactions (based on the previous process tx's result), each with a callback ID as argument: https://github.com/onflow/flow-core-contracts/blob/feature/callback-scheduling/transactions/callbackScheduler/admin/execute_callback.cdc
  3. The existing system transaction will come last
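The per-block ordering described above can be sketched as a small Go helper. This is an illustrative sketch only; the names and types are hypothetical and not the actual flow-go implementation.

```go
package main

import "fmt"

// Transaction is a simplified stand-in for a system-collection entry.
type Transaction struct {
	Kind       string // "process", "execute", or "system"
	CallbackID uint64 // only meaningful for "execute"
}

// buildSystemCollection places the static process transaction first,
// then one execute transaction per callback ID produced by the process
// step, and the existing system transaction last.
func buildSystemCollection(readyCallbackIDs []uint64) []Transaction {
	txs := []Transaction{{Kind: "process"}}
	for _, id := range readyCallbackIDs {
		txs = append(txs, Transaction{Kind: "execute", CallbackID: id})
	}
	txs = append(txs, Transaction{Kind: "system"})
	return txs
}

func main() {
	for _, tx := range buildSystemCollection([]uint64{7, 9}) {
		fmt.Println(tx.Kind, tx.CallbackID)
	}
}
```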

Collaborator

Does this mean custom verification logic is needed for the system chunk transactions?

Contributor Author

Yes, verification nodes have a special case for the system chunk.

@joshuahannan
Member

Hey everyone! We're nearing completion of the first version of the callbacks contract and flow-go integration. We believe all of the feedback here has been addressed, but we want to check one more time.

We're still looking for more feedback on the smart contract too, so if you have time, the PR is here in the flow-core-contracts repo.

We'll leave this open until the end of next week and if there are no more comments, we'll mark it as completed and merge it.

Tagging everyone who left feedback to make sure y'all see this.

@bluesign @bjartek @peterargue @janezpodhostnik @briandoyle81 @oebeling @turbolent @SupunS

@bluesign
Collaborator

Great work from everyone involved 👏 all my comments and concerns are addressed.

Can't wait to build with this 😍

@jribbink
Contributor

jribbink commented Sep 4, 2025

Hey all - opened a ticket relating to scheduled callbacks and data availability. One of the gaps I see as an app developer is that it's currently impossible to locate a scheduled callback's transaction & result given its callback ID. This feels like an integration challenge as developers will likely want to surface these results to their users, but cannot do so today without a third party indexer listening to the callback Executed/PendingExecution events.

Would love to hear thoughts here and whether we think that this is something possible to add to the access API.

I've defined what I see as my "bare minimum" for parity with regular Flow transactions in the ticket as an SDK developer, but it would also be great to hear anything else that could be useful, or gaps I may not have thought of here.

onflow/flow-go#7830

cc @devbugging @joshuahannan
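The interim workaround mentioned above, a third-party indexer listening to the scheduler's Executed events and mapping callback IDs to transaction IDs, could look roughly like the sketch below. The event shape here is a hypothetical simplification of what the callback scheduler contract emits.

```go
package main

import "fmt"

// ExecutedEvent is a simplified stand-in for the scheduler's Executed
// event, assumed to carry the callback ID and the executing tx's ID.
type ExecutedEvent struct {
	CallbackID uint64
	TxID       string
}

// indexExecuted folds a stream of Executed events into a
// callback-ID -> transaction-ID lookup table, so an app can surface a
// scheduled callback's transaction and result to its users.
func indexExecuted(events []ExecutedEvent) map[uint64]string {
	idx := make(map[uint64]string, len(events))
	for _, e := range events {
		idx[e.CallbackID] = e.TxID
	}
	return idx
}

func main() {
	idx := indexExecuted([]ExecutedEvent{
		{CallbackID: 42, TxID: "ab12"},
	})
	fmt.Println(idx[42])
}
```

In practice the events would be streamed from an Access Node rather than passed in as a slice, but the mapping logic is the same.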

@devbugging
Contributor Author

Hey all - opened a ticket relating to scheduled callbacks and data availability. One of the gaps I see as an app developer is that it's currently impossible to locate a scheduled callback's transaction & result given its callback ID. This feels like an integration challenge as developers will likely want to surface these results to their users, but cannot do so today without a third party indexer listening to the callback Executed/PendingExecution events.

Would love to hear thoughts here and whether we think that this is something possible to add to the access API.

I've defined what I see as my "bare minimum" for parity with regular Flow transactions in the ticket as an SDK developer, but it would also be great to hear anything else that could be useful, or gaps I may not have thought of here.

onflow/flow-go#7830

cc @devbugging @joshuahannan

Great feedback. We've looked into ways to enable on-chain mapping to tx IDs, but that proved infeasible for the project timeline.

Your suggested approach of adding new APIs to the AN is the way to go, and we have already briefly discussed it. The issue you've created is great, and you should be part of the discussion defining these new APIs. The current state is that the AN only supports returning callback txs without any new APIs, but we should define and implement the new APIs as soon as possible. Adding @peterargue and @vishalchangrani to look at the flow-go issue and include you in the discussions.

Thank you!

@bluesign
Collaborator

bluesign commented Sep 5, 2025

I think this can also be solved at the SDK level; the txId is pretty easy to derive from the callbackId.
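As a rough illustration of why that derivation is possible: the execute transaction is static apart from its single callback-ID argument, so its payload, and therefore its hash, is deterministic for a given block. Note that real Flow transaction IDs are SHA3-256 hashes over the RLP-encoded transaction envelope; the SHA-256 and simplified payload below are assumptions for the sketch only.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// executeScript stands in for the static execute_callback.cdc script.
const executeScript = "import FlowCallbackScheduler ... (static script body)"

// deriveTxID hashes the canonical payload of the execute transaction:
// the static script, the callback ID argument, and the reference block.
// Because every input is known, the resulting ID is reproducible
// client-side without querying an indexer.
func deriveTxID(callbackID uint64, referenceBlockID []byte) [32]byte {
	var arg [8]byte
	binary.BigEndian.PutUint64(arg[:], callbackID)
	payload := append([]byte(executeScript), arg[:]...)
	payload = append(payload, referenceBlockID...)
	return sha256.Sum256(payload)
}

func main() {
	id := deriveTxID(42, []byte("block-id"))
	fmt.Printf("%x\n", id[:8])
}
```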

@j1010001 j1010001 merged commit 8607de1 into onflow:main Sep 5, 2025
@SeanRobb
Contributor

Naming proposal: “Scheduled Transactions” (rename from “Scheduled Callbacks”)

TL;DR
Let’s rename the primitive to Scheduled Transactions. “Callback” implies control flow back into code you own. This feature schedules and executes any authorized transaction—including ones touching contracts you don’t own—so “transaction” is the accurate, least-surprising name.

Why this matters
• Scope & accuracy: What runs is a first-class transaction with its own TxID/events and fee model, not a function thunk.
• Power signaling: “Callback” reads as “call my contract later.” The real unlock is cross-contract, cross-module automation—any transaction you’re authorized to execute.
• Economic model alignment: Priorities, slot budgets, and multipliers are already reasoned about in transaction terms; naming should match the mental model.
• DX clarity: Avoids devs under-scoping the feature or thinking they must deploy bespoke handlers just to benefit.

Path forward
• Adopt Scheduled Transactions as the canonical name in the FLIP, docs, and examples.
• If timing is tight for testnet, could we ship with current symbols but alias/dual-label in docs and tooling, keeping legacy names temporarily for compatibility? We can deprecate legacy names before the mainnet cycle.

Happy to open a PR with the wording change and a lightweight deprecation note.

@devbugging
Contributor Author

Naming proposal: “Scheduled Transactions” (rename from “Scheduled Callbacks”)

TL;DR Let’s rename the primitive to Scheduled Transactions. “Callback” implies control flow back into code you own. This feature schedules and executes any authorized transaction—including ones touching contracts you don’t own—so “transaction” is the accurate, least-surprising name.

Why this matters
• Scope & accuracy: What runs is a first-class transaction with its own TxID/events and fee model, not a function thunk.
• Power signaling: “Callback” reads as “call my contract later.” The real unlock is cross-contract, cross-module automation—any transaction you’re authorized to execute.
• Economic model alignment: Priorities, slot budgets, and multipliers are already reasoned about in transaction terms; naming should match the mental model.
• DX clarity: Avoids devs under-scoping the feature or thinking they must deploy bespoke handlers just to benefit.

Path forward
• Adopt Scheduled Transactions as the canonical name in the FLIP, docs, and examples.
• If timing is tight for testnet, could we ship with current symbols but alias/dual-label in docs and tooling, keeping legacy names temporarily for compatibility? We can deprecate legacy names before the mainnet cycle.

Happy to open a PR with the wording change and a lightweight deprecation note.

We can rename it. No aliasing or deprecations. That will just make it messy.
