International Workshop on Adaptable, Reliable, and Responsible Learning (ARRL)


Co-located with IEEE ICDM 2023.

December 1st, 2023, Shanghai, China


For years, machine learning has advanced artificial intelligence (AI) by enabling the development of systems that generate models from various databases without being explicitly programmed. The growing availability of data across fields has led to the proliferation of learning-enabled systems, which embed machine learning components at their core and have become increasingly powerful and integral to industry and everyday life. Data mining techniques allow such systems to examine vast quantities of data, identifying subtle features that often elude human capabilities. However, these techniques frequently rely on oversimplified learning objectives and on data that may be biased, incomplete, or even hazardous. The transition of learning-enabled systems into real-world decision-making contexts can thus pose risks, primarily due to their limited adaptability, reliability, and responsibility in dealing with unfamiliar or unknown circumstances.

The inaugural International Workshop on Adaptable, Reliable, and Responsible Learning (ARRL) aims to gather researchers and practitioners to present recent advancements in addressing the three key aspects of learning within the context of data-driven and data-centric systems: adaptability, reliability, and responsibility. The workshop will explore theoretical foundations, algorithm designs, and frameworks that ensure future learning-enabled systems are

1) *Adaptable*, by evolving with changes in the environment, societal dynamics, and task objectives or requirements, ensuring that the system remains relevant and effective in addressing diverse and dynamic challenges while maintaining high performance standards;

2) *Reliable*, by demonstrating robustness and stability in the presence of uncertainty, variability, and unknown unknowns, ensuring system safety and performance consistency across diverse conditions and high-stakes operating environments; and

3) *Responsible*, by promoting sustainability, fairness, explainability, and trustworthiness in learning processes and outcomes, addressing ethical and privacy concerns, and championing the use of technology for positive societal impact, including solutions for affordable clean energy and climate action.

This workshop cordially invites submissions that showcase cutting-edge advances in research and development of adaptable, reliable, and responsible (ARR) learning algorithms and designs, as well as late-breaking research that introduces published work or software addressing ARR challenges and providing significant value to the community.

IMPORTANT DATES


Paper Submission: September 15th, 2023
Author Notification: September 25th, 2023
Camera-Ready: October 15th, 2023
Registration: October 15th, 2023
Conference Dates: December 1st-4th, 2023
Workshop Date: December 1st, 2023

TOPICS


Theory, methodology, and resource papers are welcome in any of the following areas, including but not limited to:


Adaptable Learning
Reliable Learning
Responsible Learning

SUBMISSION AND PUBLICATION


    Paper submission link: International Workshop on Adaptable, Reliable, and Responsible Learning (ARRL).

    Paper submissions should be limited to a maximum of 8 pages and must follow the IEEE ICDM format. More detailed information is available in the IEEE ICDM 2023 Submission Guidelines.

    All accepted papers will be included in the ICDM'23 Workshop Proceedings (ICDMW 2023), published by the IEEE Computer Society Press. Therefore, papers must not have been accepted for publication elsewhere or be under review for another workshop, conference, or journal.

    All accepted papers, including workshop papers, must have at least one “FULL” registration. A full registration is either a “member” or “non-member” registration; student registrations are not considered full registrations. All authors are required to register by October 15th, 2023.

    For registration queries please contact: registration@computer.org

PROGRAM


Time (Beijing Time) | Title | Presenter/Author
8:00-8:10 | Opening Remarks | Hill Zhu and Yi He
8:10-8:30 | Pyramid Feature Iterative Fusion: A Cross-Scale Fusion Algorithm for Enhanced Analysis of H&E Images in HER2-Positive Breast Cancer | Xiaomin Xiong, Yuqi Zhang, Lihua Gu, Yi Li, Bo Lin, Dajiang Lei, Guoyin Wang, and Bo Xu
8:30-8:50 | Data Intrusion Tolerance Model based on Game Theory for Energy Internet | Zhanwang Zhu, Yiming Yuan, and Song Deng
8:50-9:10 | A Label Distribution for Few-shot In-domain Out-of-Scope Detection | Xinyi Cai, Pei-Wei Tsai, Jiao Tian, Kai Zhang, and Jinjun Chen
9:10-9:30 | Human-interpretable features derived from breast cancer pathology slides detect BRCA1/2 gene mutations | Yi Li, Xiaomin Xiong, Xiaohua Liu, Yihan Wu, Lin Chen, Bo Lin, and Bo Xu
9:30-9:50 | Improving HER2-Positive Breast Cancer Targeted Therapy Prediction Using multiMSnet: A Multi-Scale Pathological Image-Based Approach | Xiaohua Liu, Yi Li, Xiaomin Xiong, Yihan Wu, Mengke Xu, Lin Chen, Bo Lin, Bo Xu, and Guoxiang Liu
10:00-10:30 | Coffee Break and Refreshments
10:30-11:30 | Keynote Talk: Decoding AI: Mastering the Craft of Explainability | My T. Thai
12:00-13:00 | Lunch Break
13:00-13:20 | Learning and Adapting Diverse Representations for Cross-Domain Few-shot Learning | Ge Liu, Zhongqiang Zhang, Fuhan Cai, Duo Liu, and Xiangzhong Fang
13:20-13:40 | An Approach for Data Publishing with Sensitive Attribute Synthesis | Zhihui Wang, Yun Zhu, and Xinyuan Mi
13:40-14:00 | A Risk-Averse Framework for Non-Stationary Stochastic Multi-Armed Bandits | Reda Alami, Mohammed Mahfoud, and Mastane Achab
14:00 | Closing Remarks | Hill Zhu and Yi He

Keynote Talk


    Decoding AI: Mastering the Craft of Explainability

    Speaker: Dr. My T. Thai, Professor, University of Florida

    Abstract: With the impressive feats of deep learning models across many application domains, researchers and the public alike have grown alarmed that these models lack interpretability and transparency. They have been used as black boxes, with little explanation for why they make the predictions they do. In this talk, we will discuss the art of explainable AI, which has emerged as one of the promising tools for safe and responsible AI. We will cover linear and non-linear explanation methods, their evaluation, and the risk of their being exploited.

WORKSHOP CHAIRS


Program Chairs


Contact Information


Xingquan Zhu, Ph.D.
Professor
Dept. of Electrical Engineering and Computer Science
Florida Atlantic University
777 Glades Road, EE-503B
Tel: 561-297-3452
E-mail: xzhu3@fau.edu

Yi He, Ph.D.
Assistant Professor
Department of Computer Science
Old Dominion University
3108 ECS Building, Norfolk, VA 23529
Tel: 757-683-7821
E-mail: yihe@cs.odu.edu