Towards Comprehensive Multimodal Perception:

Introducing the Touch-Language-Vision Dataset

Beijing Jiaotong University · Tsinghua University

Abstract

Tactile sensing provides crucial support and enhancement for the perception and interaction capabilities of both humans and robots. Nevertheless, multimodal research on touch has focused primarily on the visual and tactile modalities, with limited exploration of language. Beyond single-word labels, sentence-level descriptions carry richer semantics. Motivated by this, we construct a touch-language-vision dataset named TLV (Touch-Language-Vision) through human-machine cascade collaboration, featuring sentence-level descriptions for multimodal alignment. The new dataset is used to fine-tune our proposed lightweight training framework, TLV-Align, which achieves effective semantic alignment while updating only a minimal fraction (1%) of parameters.
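
The TLV-Align framework itself is not detailed on this page, so the following is only a rough, hypothetical sketch of the general idea of lightweight alignment fine-tuning: large pretrained touch, vision, and text encoders stay frozen, and only small projection heads plus a temperature are trained with a CLIP-style contrastive loss, keeping the trainable parameter fraction small. All module names, dimensions, and the choice of loss here are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of parameter-efficient touch-language-vision alignment.
# Stand-in encoders are frozen; only small projection heads (and a temperature)
# are trained, so the trainable fraction of parameters stays small.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenEncoder(nn.Module):
    """Placeholder for a large pretrained encoder (touch, vision, or text)."""
    def __init__(self, in_dim=512, hidden=1024, out_dim=768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, out_dim))
        for p in self.parameters():
            p.requires_grad = False  # frozen: never updated during alignment

    def forward(self, x):
        return self.net(x)

class TriModalAligner(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        self.touch_enc = FrozenEncoder()
        self.vision_enc = FrozenEncoder()
        self.text_enc = FrozenEncoder()
        # Only these lightweight projection heads and the temperature are trainable.
        self.touch_proj = nn.Linear(768, embed_dim)
        self.vision_proj = nn.Linear(768, embed_dim)
        self.text_proj = nn.Linear(768, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07)

    def forward(self, touch, vision, text):
        t = F.normalize(self.touch_proj(self.touch_enc(touch)), dim=-1)
        v = F.normalize(self.vision_proj(self.vision_enc(vision)), dim=-1)
        s = F.normalize(self.text_proj(self.text_enc(text)), dim=-1)
        return t, v, s

def contrastive_loss(a, b, logit_scale):
    """Symmetric InfoNCE loss between two batches of unit-norm embeddings."""
    logits = logit_scale.exp() * a @ b.t()
    labels = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

model = TriModalAligner()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
# With real, much larger pretrained encoders, this fraction becomes very small.
print(f"trainable fraction: {100 * trainable / total:.1f}%")

# One toy alignment step on random features standing in for real batches.
touch, vision, text = (torch.randn(8, 512) for _ in range(3))
t, v, s = model(touch, vision, text)
loss = contrastive_loss(t, s, model.logit_scale) + contrastive_loss(v, s, model.logit_scale)
loss.backward()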

Citation

If you use this work or find it helpful, please consider citing our paper:


    @article{cheng2024towards,
      title={Towards Comprehensive Multimodal Perception: Introducing the Touch-Language-Vision Dataset},
      author={Cheng, Ning and Li, You and Gao, Jing and Fang, Bin and Xu, Jinan and Han, Wenjuan},
      journal={arXiv preprint arXiv:2403.09813},
      year={2024}
    }

Credit: The design of this project page is based on the Nerfies project page.