Competition and System Integration
List of Section Editors
[Section Chief Editor]
Tetsunari Inamura, Professor, Brain Science Institute, Tamagawa University, Japan
[Section Editor]
- Hiroyuki Okada (Tamagawa University, Japan)
- Jeffrey Too Chuan Tan (Nankai University, China)
- Hakaru Tamukou (Kyushu Institute of Technology, Japan)
- Mihoko Niitsuma (Chuo University, Japan)
- Amy Eguchi (University of California, San Diego, USA)
- Ubbo Visser (University of Miami, USA)
- Hidehisa Akiyama (Okayama University of Science, Japan)
- Nobuhiro Itoh (Aichi Institute of Technology, Japan)
- Wataru Uemura (Ryukoku University, Japan)
- Kazuyoshi Wada (Tokyo Metropolitan University, Japan)
- Noriaki Ando (National Institute of Advanced Industrial Science and Technology (AIST), Japan)
- Yoshinobu Hagiwara (Ritsumeikan University, Japan)
- Yoshiaki Mizuchi (Tamagawa University, Japan)
Scope
Competitions and challenges have become a regular feature of recent AI-related international conferences such as NeurIPS and CVPR. They serve as a driving force for evaluating AI agent performance against globally shared standards and for fostering productive discussion. In the robotics community, competitions such as RoboCup have flourished for more than a quarter of a century, cultivating a culture that has given rise to events like the World Robot Summit. However, a framework in which organizing and participating in robot competitions is recognized as an academic achievement is not yet fully established, raising concerns about a potential divergence between competition participation and the direction of academic publication.
While the AI competition community has even begun to hold "Embodied AI" competitions, the full embodiment found in robots operating in the real world has not yet become a significant subject of discussion and evaluation.
Against this background, this section focuses on the academic value generated by competitions in robotics. We aim to amplify the synergy between competition operations and the creation of academic achievements through proactive global dissemination.
Our perspective is not limited to robot competitions. We also focus on system integration, social implementation in the real world, evaluation through case studies, and user experience evaluation involving human subjects - aspects that have traditionally been difficult to evaluate in academic papers. Our goal is to create shared value in the robotics engineering community by examining how such work is implemented and evaluated.

Additionally, our scope encompasses the design of software/hardware platforms for assessment under unified conditions, the construction of open datasets, and the standardization of evaluation criteria. We aim to address these crucial elements to ensure holistic and fair assessment across diverse studies.

While such topics might have been dismissed as lacking novelty under traditional peer-review standards, we aim to create academic value through a positive spiral of identifying societal challenges, developing and designing technologies to solve them, implementing methods, and integrating the supporting theories and concepts. We therefore include case reports of actual application systems (both successful and unsuccessful) within our scope. However, mere reports of "it was built, and it worked" will not be considered for review. Instead, we emphasize the perspective of how the work contributes to the robotics research community.
Keywords
Robot Competition
Benchmarking
Design of Evaluation Criteria
System Integration
Social Implementation
Evaluation of User Experience
Case studies in robot implementation
User Studies
Proof of Concept
Design of Software/Hardware Platform
Standardization
Dataset Construction
Additional information (review criteria)
The peer-review process will be identical to the conventional review process of Advanced Robotics. However, we anticipate situations in social implementation where no baseline for comparison exists. Moreover, we expect a need to recognize the academic value of the know-how obtained through real-world operation, reported as case studies. Given these considerations, our review process will not be bound by single-valued comparisons, such as comparisons against a mere baseline. Instead, we will adopt a flexible approach that assesses academic value from a broad perspective. To reflect this policy, the review process for this section will be handled by the section editors listed above.
We welcome contributions that tackle these topics and look forward to stimulating vigorous discussions and facilitating meaningful progress in our field.
In particular, please note that aspects such as the following will be considered in the evaluation. It is not necessary to cover all items; focusing on a single aspect is acceptable. Work focusing on areas other than these examples is also welcome; however, the paper must describe its contribution in a way that allows an objective assessment of its value to the robotics research community.
In the context of competition:
- Rather than merely reporting the results achieved with a particular method, papers should provide a quantitative or qualitative comparison with the results of other methods (e.g., those of other teams, alternative algorithms, or methods and results from past competitions).
- Proposals for designing new competition rules based on societal needs and academic seeds, as well as proposals for new benchmarking standards.
- Proposals for technological development and theories to ensure fair competition and benchmark evaluation.
In the context of system integration:
- Mere reports of 'a system was built, and it worked' will not be considered for review.
- Descriptions of the hurdles to societal implementation encountered in actual system integration and operation, and methods for overcoming them.
- Proposals for methods to evaluate usefulness in real environments and discussions based on thorough quantitative evaluations.
- Descriptions of lessons learned from demonstration experiments and from operating systems in real environments.