In the GPT guide (https://github.com/NVIDIA/FasterTransformer/blob/main/docs/gpt_guide.md#workflow),
Fig. 2 shows "fuseQKV masked attention", which looks very similar to Flash Attention. However, the text no longer mentions fuseQKV masked attention or Flash Attention anywhere, so I'm wondering whether it is the same technology as Flash Attention.
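For context, here is a rough NumPy sketch of my mental model (single head, illustrative names only, not FasterTransformer's actual API): I read "fuseQKV" as fusing the three Q/K/V projections into one GEMM, whereas Flash Attention, as I understand it, is about computing the masked softmax(QKᵀ)V step in tiles so the full attention matrix is never materialized. The naive version below does materialize it, which is exactly what Flash Attention avoids.

```python
# Rough sketch of my understanding, not FasterTransformer code.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 64
x = rng.standard_normal((seq_len, d_model))

# "fuseQKV" as I read it: one concatenated weight matrix, one GEMM,
# then split the result into Q, K and V.
w_qkv = rng.standard_normal((d_model, 3 * d_model))
q, k, v = np.split(x @ w_qkv, 3, axis=-1)

# Naive masked attention: the full seq_len x seq_len score matrix is built.
# Flash Attention (as I understand it) computes the same result in tiles,
# never storing this matrix in global memory.
mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # causal mask
scores = (q @ k.T) / np.sqrt(d_model)
scores = np.where(mask, scores, -np.inf)
probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
out = probs @ v
print(out.shape)  # (8, 64)
```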
Am I understanding it correctly?