During a migration I ran into an issue: the file share deployed for the Azure Container App was full and new chunks could not be written, which caused the migration to fail. I understand that buffering 100 GB of migration data into a file share is probably not ideal, but especially in migration scenarios where we migrate from e.g. an M40 in MongoDB Atlas to an M30 in Azure, inserts will be a bottleneck. To avoid having to fine-tune the number of dump, restore, and insert workers according to resource consumption, it would be a nice option to store more of the data in Azure Files during the long-running migration.
I would like to have a `fileShareSizeGB` parameter in the `deploy-to-aca.ps1` script that accepts values between 100 and 102400.
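A minimal sketch of what this could look like, assuming `deploy-to-aca.ps1` uses a standard `param()` block (the parameter name, default value, and comments are illustrative, not part of the current script):

```powershell
param (
    # Hypothetical parameter: provisioned size of the Azure file share in GB.
    # 102400 GB (100 TiB) is the maximum size of an Azure file share;
    # 100 GB is the suggested lower bound from this request.
    [ValidateRange(100, 102400)]
    [int] $fileShareSizeGB = 100
)
```

PowerShell's built-in `[ValidateRange()]` attribute would reject out-of-range values at invocation time, so no extra validation code is needed in the script body.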