Call for Papers
The AIWare Datasets and Benchmarks track invites high quality publications on highly valuable datasets and benchmarks crucial for the development and continuous improvement of AIware. Such datasets and benchmarks are essential for development and evaluation of AIware and their evolution. This track encourages high quality datasets and benchmarks for development and assessment of AIware in the following areas:
- Data papers that include:
  - New datasets, or carefully and thoughtfully designed (collections of) datasets based on previously available data, tailored for AIware.
  - Data generators and reinforcement learning environments.
  - Data-centric AI methods and tools, e.g., to measure and improve data quality or utility, or studies in data-centric AI that bring important new insights.
  - Advanced practices in data collection and curation, which are of general interest even if the data itself cannot be shared.
  - Frameworks for responsible dataset development, audits of existing datasets, and identification of significant problems with existing datasets and their use.
  - Tools and best practices to enhance dataset creation, documentation, metadata standards, ethical data handling (e.g., licensing, privacy), and accessibility.
- Benchmarking papers that are expected to include:
  - Benchmarks based on new or existing metrics, as well as benchmarking tools.
  - Systematic analyses of existing systems on novel datasets that yield important new insights.
  - Meaningful benchmarks that drive progress in the performance, robustness, fairness, reliability, and usability of AIware tools.
Topics of interest
Topics of interest fall under those of the AIware conference, with an emphasis on the scope for dataset and benchmark papers described above.
Submissions
The AIware 2025 Benchmark and Dataset Track welcomes submissions from both academia and industry. At least one author of each accepted submission will be required to attend the conference and present the paper. Submissions are limited to 4 pages, including references. At the time of submission, papers should disclose their (anonymized and curated) data/benchmarks to support reproducibility and replicability.
All submissions must be written in English and submitted in PDF format. The page limit is strict, and it will not be possible to purchase additional pages at any point in the process (including after acceptance).
Submission guidelines follow those of the main track of the AIware conference. The submission link will be available shortly.
Review and evaluation process
A double-anonymous review process will be employed for submissions to the Benchmark and Dataset Track. The submission must not reveal the identity of the authors in any way.