Deadline Re-Extension (Oct 29): JSAIT Issue on Information-Theoretic Methods for Trustworthy and Reliable Machine Learning
The deadline has been extended to October 29 for the JSAIT issue on Information-Theoretic Methods for Trustworthy and Reliable Machine Learning.
Oct 20, 2023

Over the past decade, machine learning (ML), the process of enabling computing systems to take in data and produce decisions, has powered tremendously exciting technologies. Such technologies can assist humans in a variety of decisions by processing complex data to identify patterns, detect anomalies, and make inferences. At the same time, these automated decision-making systems raise questions about the security and privacy of the user data that drives ML, the fairness of the resulting decisions, and the reliability of automated systems entrusted with complex decisions that can affect humans in significant ways. In short, how can ML models be deployed in a responsible and trustworthy manner that ensures fair and reliable decision-making? Doing so requires that the entire ML pipeline assure security, reliability, robustness, fairness, and privacy. Information theory can shed light on each of these challenges by providing a rigorous framework not only to quantify these desiderata but also to evaluate them and provide assurances. From its beginnings, information theory has been devoted to a theoretical understanding of the limits of engineered systems; as such, it is a vital tool for guiding advances in machine learning.

We invite previously unpublished papers that contribute to the fundamentals, as well as the applications, of information- and learning-theoretic methods for secure, robust, reliable, fair, private, and trustworthy machine learning. Applications of such techniques to practical systems are also relevant.

Read the call for papers.