Thursday, December 6, 2018, 10:25 a.m.–12:00 noon
Image forensics and anti-forensics have been studied extensively in the multimedia security community for over a decade. Many interesting results show that a variety of features inherent to images can be effectively destroyed while incurring only negligible distortion. Recently, in the machine learning (ML) community, work on adversarial ML has demonstrated that many state-of-the-art, carefully designed ML algorithms can be highly vulnerable and easily fooled by adversarial inputs. In this talk, we use the widely adopted SIFT feature as an example to link image forensics/anti-forensics with adversarial ML. We will show that results and insights developed in the multimedia security community can still be quite valuable to the adversarial ML community.
Jiantao Zhou received the Ph.D. degree from the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, in 2009. He has held various research positions with the University of Illinois at Urbana–Champaign, the Hong Kong University of Science and Technology, and McMaster University. He is currently an Associate Professor with the Department of Computer and Information Science, Faculty of Science and Technology, University of Macau. He holds four granted U.S. patents and two granted Chinese patents. His research interests include multimedia security and forensics, multimedia signal processing, and adversarial ML. He has co-authored papers that received the Best Paper Award at the IEEE Pacific-Rim Conference on Multimedia in 2007 and the Best Student Paper Award at the IEEE International Conference on Multimedia and Expo in 2016. He serves as an Associate Editor of the IEEE Transactions on Image Processing.