Call for Papers
The AIware Datasets and Benchmarks track invites high-quality publications on valuable datasets and benchmarks crucial to the development, evaluation, and continuous improvement of AIware. This track encourages high-quality datasets and benchmarks for the development and assessment of AIware in the following areas:
- Data papers, which may include:
- New datasets, or carefully and thoughtfully designed (collections of) datasets based on previously available data tailored for AIware.
- Data generators and reinforcement learning environments.
- Data-centric AI methods and tools, e.g., to measure and improve data quality or utility, or studies in data-centric AI that bring important new insights.
- Advanced practices in data collection and curation are of general interest even if the data itself cannot be shared.
- Frameworks for responsible dataset development, audits of existing datasets, and identifying significant problems with existing datasets and their use.
- Tools and best practices to enhance dataset creation, documentation, metadata standards, ethical data handling (e.g., licensing, privacy), and accessibility.
- Benchmarking papers, which are expected to include:
- Benchmarks on new or existing metrics, as well as benchmarking tools.
- Systematic analyses of existing systems on novel datasets that yield important new insights.
- Meaningful benchmarks that drive progress in the performance, robustness, fairness, reliability, and usability of AIware tools.
Topics of interest
Topics of interest fall under those of the AIware conference, with an emphasis on the scope for dataset and benchmark papers explained above.
Submissions
The AIware 2025 Benchmark and Dataset Track welcomes submissions from both academia and industry. At least one author of each accepted submission will be required to attend the conference and present the paper. Submissions are limited to 4 pages, including references. At the time of submission, papers should disclose their (anonymized and curated) data/benchmarks to improve reproducibility and replicability.
All submissions must be in English and PDF. The page limit is strict, and it will not be possible to purchase additional pages at any point in the process (including after acceptance).
Submission guidelines follow those of the main track of the AIware conference. Papers must be submitted electronically via the OpenReview platform through the following submission site: https://openreview.net/group?id=ACM.org/AIWare/2025/Data_and_Benchmark_Track
Authors are required to sign up for active OpenReview accounts before submission. (An institutional email is recommended for registration; otherwise, it might take a couple of days for OpenReview to manually activate the account.) More information about OpenReview is provided on the AIware conference main track page.
Review and evaluation process
A double-anonymous review process will be employed for submissions to the Benchmark and Dataset Track. The submission must not reveal the identity of the authors in any way.
Evaluation criteria:
For Data papers:
- Novelty: originality of the dataset or tool and clarity of its relation to related work
- Impact: value, usefulness, and reusability of the datasets or tool
- Relevance: the relevance of the proposed dataset or tool for the AIware audience
- Presentation: quality of the presentation
- Open Usage: accessibility of the datasets or tool, i.e., the data/tool can be found and obtained without a personal request, and any required code should be open source
For Benchmarking papers:
- Novelty: the originality of its underlying ideas and clarity of its relation to related work
- Impact: the outreach of the proposed tool, metric or dataset and the usefulness of the results
- Relevance: the relevance of the proposed benchmark for the AIware audience
- Presentation: the quality of the presentation
- Open Usage: accessibility of the datasets, metrics, or tools, i.e., the data/tool/metric can be found and obtained without a personal request, and any required code should be open source