While AI has attained remarkable achievements across multiple application domains, there remain significant concerns about the sustainability of AI. The quest for improved accuracy on large-scale problems is driving the use of increasingly deep neural networks, which in turn increases energy consumption and climate-changing carbon emissions. For example, researchers estimated that training a particular state-of-the-art deep learning model resulted in 626,000 pounds of carbon dioxide emissions. The current trend towards massive dataset collection can also be extremely expensive (e.g. running high-fidelity numerical modelling on supercomputer clusters, or collecting millions of hours of autonomous driving data). Hence, the ability to reduce such emissions even as AI uptake and deployment become more prevalent has huge consequences for our planet.
In addition to the environmental sustainability of AI, another major concern for the sustainable development of AI is its societal impact. There is increasing concern about AI-related ethical issues, such as fairness, privacy, and safety. For instance, bias and privacy issues may restrict the wide applicability of AI. Without a full understanding of its decision-making process, AI may not be suited for sensitive domains, such as healthcare and autonomous driving.
Another major area in which AI can have significant societal impact is in its direct application to the sustainability-related problems of our age. For example, there are increasing possibilities for applying AI and data mining techniques to climate modelling, urban planning and layout (e.g. mitigating the urban heat island effect or deploying renewables such as solar), or green technologies (e.g. better battery materials or wind/ocean turbine design). This is also an important path to ensuring the use of AI for a net benefit to sustainability.
Thus, there is a rising and urgent need for Sustainable AI in both AI research and industry. Specifically, for the sustainability of AI, we aim to reduce carbon emissions and enormous computing power consumption, as well as to address AI-related ethical issues, by developing advanced AI technology. This workshop aims to promote state-of-the-art approaches in sustainable AI research, and also to propagate data/resource-efficient methods and possible applications in the sustainability domain. For example, submissions at the intersection of data mining and society, such as the use of AI and data mining for ensuring affordable and clean energy or for motivating climate-change-related action, will be particularly welcome. The organizers invite researchers to participate and submit their research papers to the Sustainable AI Workshop.
| September 17, 2022 | Submission deadline |
| October 8, 2022 | Acceptance notification |
Due to requests, the workshop paper submission deadline is extended to September 17, 2022. The notification date is extended to October 8, 2022.
Topics of interest include but are not limited to:
Deep Model Compression
Data analytics for technologies that enable a sustainable future (e.g. climate modelling, battery or renewables development and utilization, urban sustainability)
Submissions are limited to a total of 8 pages, including all content and references, must be in PDF format, and must be formatted according to the IEEE 2-column format. Following the conference submission policy, reviews are triple-blind, and author names and affiliations should NOT be listed. The template can be downloaded here.
Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. The accepted papers will be published in the dedicated ICDMW proceedings published by the IEEE Computer Society Press. Submission Link
Selected papers will be invited to submit an extended version to World Scientific Annual Review of Artificial Intelligence.
Authors must complete a reproducibility checklist at the time of paper submission (the questions in PDF format) [checklist link].
Authors are strongly recommended to start thinking about these questions while writing the paper and to fill in the questionnaire before submission. These responses will become part of each paper submission and will be shared with the area chairs and/or reviewers to help them in the evaluation process. Reviewers will be asked to assess the degree to which the results reported in a paper are reproducible, and this assessment will be weighed when making final decisions about each paper. These responses will also help facilitate the “Open Source Project Forum” initiative of the conference.
TITLE: Explainable AI for Climate Science: Detection, Prediction and Discovery
SPEAKER: Prof. Elizabeth A. Barnes (Colorado State University)
ABSTRACT: Earth’s climate is chaotic and noisy. Finding usable signals amidst all of the noise can be challenging: be it predicting if it will rain, knowing which direction a hurricane will go, understanding the implications of melting Arctic ice, or detecting the impacts of human-induced climate warming. Here, I will demonstrate how explainable artificial intelligence (XAI) techniques can sift through vast amounts of climate data and push the bounds of scientific discovery. But machine learning models are only as capable as the scientists designing them. I will further discuss how climate science requires the crafting of domain specific XAI methods, both to gauge the trustworthiness of the XAI’s predictions, but also to uncover predictable signals we did not know were there. Explainable AI can open doors to scientific understanding — supporting scientists as we ask entirely new questions about the coupled human-Earth system.
Time Zone: Eastern Time
Keynote Presentation: Explainable AI for Climate Science: Detection, Prediction and Discovery
Prof. Elizabeth A. Barnes (Colorado State University)
Oral 1: FastFlow: AI for Fast Urban Wind Velocity Prediction
Shi Jer Low, Venugopalan Raghavan, Harish Gopalan, Jian Cheng Wong, Justin Yeoh, and Chin Chun Ooi
Oral 2: Empirical analysis of fairness-aware data segmentation
Seiji Okura and Takao Mohri
Oral 3: Domain Adaptation through Cluster Integration and Correlation
Vishnu Manasa Devagiri, Veselka Boeva, and Shahrooz Abghari
Oral 4: Data-Driven Usage Profiling and Anomaly Detection in Support of Sustainable Machining Processes
Fabian Fingerhut, Chaitra Harsha, Amirmohammad Eghbalian, Tom Jacobs, Mahdi Tabassian, Robbert Verbeke, and Elena Tsiporkova
Oral 5: Interpreting Categorical Data Classifiers using Explanation-based Locality
Peyman Rasouli, Ingrid Chieh Yu, and Ernesto Jiménez-Ruiz
Oral 6: Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems
Furkan Gursoy and Ioannis Kakadiaris
Nanyang Technological University/A*STAR, Singapore
New Jersey Institute of Technology, USA
Sichuan University, P.R. China
Singapore Management University, Singapore
Nanyang Technological University, Singapore
University of New South Wales, Australia
University of Edinburgh, UK
University of Electronic Science and Technology of China, P.R. China
Heidelberg University, Germany
University of Washington, USA