Topic: Credibility of Machine Learning and Decision-Making Models: Designing Self-Awareness Mechanisms
Speaker Bio: Witold Pedrycz (IEEE Life Fellow) is a Professor in the Department of Electrical and Computer Engineering at the University of Alberta, Edmonton, Canada. He is a Fellow of the Royal Society of Canada, a Foreign Member of the Polish Academy of Sciences, and Chief Scientist of the Computational Intelligence Research Center in Canada. He has served as President of IFSA and President of the North American Fuzzy Information Processing Society (NAFIPS). Professor Pedrycz has received numerous honors, including the Norbert Wiener Award of the IEEE Systems, Man, and Cybernetics Society, the IEEE Canada Computer Engineering Medal, the Cajastur Prize for Soft Computing from the European Centre for Soft Computing, the Killam Prize, the Fuzzy Pioneer Award of the IEEE Computational Intelligence Society, and the 2019 Meritorious Service Award of the IEEE Systems, Man, and Cybernetics Society. His main research interests include computational intelligence, fuzzy modeling and granular computing, knowledge discovery and data mining, fuzzy control, pattern recognition, knowledge-based neural networks, relational computing, and software engineering. He currently serves as Editor-in-Chief of Information Sciences and of WIREs Data Mining and Knowledge Discovery (Wiley), Co-Editor-in-Chief of Int. J. of Granular Computing (Springer) and J. of Data Information and Management (Springer), and as an editorial board member of several leading international journals, including IEEE Trans. SMC and IEEE Trans. Fuzzy Systems.
In recent years, we have been witnessing numerous and far-reaching developments and applications of Machine Learning (ML). Efficient and systematic design of ML architectures is important. Equally important are comprehensive evaluation mechanisms aimed at assessing the quality of the obtained results. The credibility of ML models is a concern in any application, especially those exhibiting the high level of criticality commonly encountered in autonomous systems. In this regard, a number of burning questions arise: How do we quantify the quality of a result produced by an ML model? What is its credibility? How do we equip models with a self-awareness mechanism so that guidance toward additional supportive experimental evidence can be triggered?
Proceeding with conceptual and algorithmic pursuits, we advocate that these problems can be formalized in the setting of Granular Computing. We show that any numeric result can be augmented by associated information granules, and that the quality of the results can be expressed in terms of the characteristics of these information granules, such as coverage and specificity. Several directions are covered, including confidence/prediction intervals, granular embedding of ML models, and granular Gaussian Process models.
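To make the two granule characteristics concrete, the sketch below computes coverage and specificity for an interval information granule built around numeric predictions. This is an illustrative example, not code from the talk; the linear specificity measure and the normalization by the target range are common choices in the granular computing literature, assumed here for simplicity.

```python
import numpy as np

def coverage(y, a, b):
    """Coverage: fraction of observed targets y falling inside the granule [a, b]."""
    y = np.asarray(y)
    return float(np.mean((y >= a) & (y <= b)))

def specificity(a, b, y_range):
    """Linear specificity: 1 for a degenerate (point) granule,
    dropping to 0 when the granule spans the entire target range."""
    return max(0.0, 1.0 - (b - a) / y_range)

# Hypothetical data: targets observed around a model's numeric prediction
y = np.array([2.1, 2.4, 2.5, 2.7, 3.0, 3.6])
a, b = 2.2, 3.1                                   # candidate interval granule
cov = coverage(y, a, b)                           # evidence covered by the granule
spec = specificity(a, b, y.max() - y.min())       # how tight the granule remains
print(cov, spec)
```

Coverage and specificity pull in opposite directions: widening the interval raises coverage but lowers specificity, so the quality of a granular result is typically judged by a trade-off (e.g., their product) rather than by either characteristic alone.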
Several representative and direct applications in the realm of transfer learning, knowledge distillation, and federated learning are discussed.