For years, machine learning has advanced artificial intelligence (AI) by enabling systems that build models from data without explicit programming. The growing availability of data across many fields has fueled the proliferation of learning-enabled systems, which embed machine learning components at their core and have become increasingly powerful and integral to industry and everyday life. Data mining techniques allow such systems to examine vast quantities of data, identifying subtle patterns that often elude human analysts. However, these techniques frequently rely on oversimplified learning objectives and on data that may be biased, incomplete, or even hazardous. Deploying learning-enabled systems in real-world decision-making contexts can therefore pose risks, primarily because of their limited adaptability, reliability, and responsibility when facing unfamiliar or unknown circumstances.
The inaugural International Workshop on Adaptable, Reliable, and Responsible Learning (ARRL) aims to gather researchers and practitioners to present recent advancements in addressing three key aspects of learning in data-driven and data-centric systems: adaptability, reliability, and responsibility. The workshop will explore theoretical foundations, algorithm designs, and frameworks that ensure future learning-enabled systems are:
1) *Adaptable*, by exhibiting evolvability with changes in the environment, societal dynamics, and task objectives or requirements, ensuring that the system remains relevant and effective in addressing diverse and dynamic challenges while maintaining high-performance standards;
2) *Reliable*, by demonstrating robustness and stability in the presence of uncertainty, variability, and unknown unknowns, ensuring system safety and performance consistency across diverse conditions and high-stakes operating environments; and
3) *Responsible*, by promoting sustainability, fairness, explainability, and trustworthiness in learning processes and outcomes, addressing ethical and privacy concerns, and championing the use of technology for positive societal impact, including solutions for affordable clean energy and climate action.
This workshop cordially invites submissions that showcase cutting-edge advances in the research and development of adaptable, reliable, and responsible (ARR) learning algorithms and designs, as well as late-breaking research that introduces published work or software addressing ARR challenges and providing significant value to the community.
|Paper submission deadline|September 15th, 2023
|Notification of acceptance|September 25th, 2023
|Camera-ready deadline|October 15th, 2023
|Author registration deadline|October 15th, 2023
|ICDM 2023 conference|December 1st--4th, 2023
|ARRL workshop|December 1st, 2023
Theory, methodology, and resource papers are welcome in the following areas, including but not limited to:
Paper submission link: International Workshop on Adaptable, Reliable, and Responsible Learning (ARRL).
Paper submissions are limited to a maximum of 8 pages and must follow the IEEE ICDM format. More detailed information is available in the IEEE ICDM 2023 Submission Guidelines.
All accepted papers will be included in the ICDM'23 Workshop Proceedings (ICDMW 2023), published by the IEEE Computer Society Press. Papers must therefore not have been accepted for publication elsewhere or be under review at another workshop, conference, or journal.
All accepted papers, including workshop papers, must have at least one “FULL” registration. A full registration is either a “member” or a “non-member” registration; student registrations are not considered full registrations. All authors are required to register by October 15th, 2023.
For registration queries please contact: email@example.com
|Time (Beijing Time)
|Hill Zhu and Yi He
|Pyramid Feature Iterative Fusion: A Cross-Scale Fusion Algorithm for Enhanced Analysis of H&E Images in HER2-Positive Breast Cancer
|Xiaomin Xiong, Yuqi Zhang, Lihua Gu, Yi Li, Bo Lin, Dajiang Lei, Guoyin Wang, and Bo Xu
|Data Intrusion Tolerance Model based on Game Theory for Energy Internet
|Zhanwang Zhu, Yiming Yuan, and Song Deng
|A Label Distribution for Few-shot In-domain Out-of-Scope Detection
|Xinyi Cai, Pei-Wei Tsai, Jiao Tian, Kai Zhang, Jinjun Chen, Qian, Bin Ou, and Wayne Goodridge
|Human-interpretable features derived from breast cancer pathology slides detect BRCA1/2 gene mutations
|Yi Li, Xiaomin Xiong, Xiaohua Liu, Yihan Wu, Lin Chen, Bo Lin, and Bo Xu
|Improving HER2-Positive Breast Cancer Targeted Therapy Prediction Using multiMSnet: A Multi-Scale Pathological Image-Based Approach
|Xiaohua Liu, Yi Li, Xiaomin Xiong, Yihan Wu, Mengke Xu, Lin Chen, Bo Lin, Bo Xu, and Guoxiang Liu
|Coffee Break and Refreshment
|Keynote Talk: Decoding AI: Mastering the Craft of Explainability
|Learning and Adapting Diverse Representations for Cross-Domain Few-shot Learning
|Ge Liu, Zhongqiang Zhang, Fuhan Cai, Duo Liu, and Xiangzhong Fang
|An Approach for Data Publishing with Sensitive Attribute Synthesis
|Zhihui Wang, Yun Zhu, and Xinyuan Mi
|A Risk-Averse Framework for Non-Stationary Stochastic Multi-Armed Bandits
|Reda Alami, Mohammed Mahfoud, and Mastane Achab
|Hill Zhu and Yi He
Decoding AI: Mastering the Craft of Explainability
Speaker: Dr. My T. Thai, Professor, University of Florida
Abstract: With the impressive feats achieved by deep learning models in many application domains, researchers and the public have grown alarmed that these models lack interpretability and transparency. They have been used as black boxes, with little explanation of why they make particular predictions. In this talk, we will discuss the art of explainable AI, which has emerged as one of the promising tools for safe and responsible AI. We will cover linear and non-linear explanation methods, their evaluation, and the risk of explanations being exploited.