We release the source code for the paper "Autonomous Workflow for Multimodal Fine-Grained Training Assistants Towards Mixed Reality", built on the LangChain framework. The workflow is developed in three key phases (a sketch of the model swap follows the list):

- Phase 1: Develop the workflow using general LLMs (such as `gpt-3.5-turbo-16k-0613`). See the code in `LEGO_data_simulation`.
- Phase 2: Create the XRTA dataset by asking trainees to interact with the assistant through the workflow. See the code in `LEGO_manual_crawler` and `LEGO_data_simulation`.
- Phase 3: Fine-tune LLMs (such as Llama 2) and swap them into the workflow in place of the general LLMs.

We release our dataset for open science.
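Below is a minimal sketch of what the Phase 1 to Phase 3 model swap can look like in LangChain. The chain structure, prompt text, and the `path/to/finetuned-llama2` checkpoint are illustrative assumptions, not the repository's actual code, which lives in `LEGO_data_simulation`:

```python
# Illustrative sketch only; the real chains in LEGO_data_simulation may differ.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_huggingface import HuggingFacePipeline

# A toy prompt standing in for the assistant's actual instructions.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fine-grained LEGO assembly training assistant."),
    ("human", "{instruction}"),
])

# Phase 1: drive the workflow with a general-purpose LLM.
general_llm = ChatOpenAI(model="gpt-3.5-turbo-16k-0613", temperature=0)
assistant = prompt | general_llm
reply = assistant.invoke({"instruction": "Which brick do I attach next?"})
print(reply.content)

# Phase 3: swap in a fine-tuned Llama 2 without changing the chain.
# "path/to/finetuned-llama2" is a hypothetical local checkpoint.
finetuned_llm = HuggingFacePipeline.from_model_id(
    model_id="path/to/finetuned-llama2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 128},
)
assistant = prompt | finetuned_llm
```

Because the chain is composed declaratively, replacing the backbone model is a one-line change; the surrounding workflow (prompting, parsing, and so on) stays untouched.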

If you use this code or dataset, please cite:

Pei, J., Viola, I., Huang, H., Wang, J., Ahsan, M., Jiang, Y., … César Garcia, P. S. (2024). Autonomous workflow for multimodal fine-grained training assistants towards mixed reality.