Trustworthy Machine Learning

Trustworthy machine learning for real-world systems.

I am a Staff Research Scientist at the Okinawa Institute of Science and Technology (OIST). My research focuses on trustworthy machine learning, anomaly detection, out-of-distribution detection, continual learning, and AI robustness. Before joining OIST, I led AI research initiatives at the Institute for Research in Fundamental Sciences (IPM) in Tehran, Iran, and held Senior Researcher positions in Finland and France, including at the Center for Machine Vision and Signal Analysis (CMVS). My work bridges the foundations of machine learning with practical robustness, toward reliable, secure, and effective AI systems.

Research Interests

  • Trustworthy machine learning
  • Anomaly detection & out-of-distribution detection
  • Continual and lifelong learning
  • Robustness, safety, and reliability in AI

Open Positions

I welcome applications from highly motivated researchers interested in trustworthy machine learning, anomaly detection, AI safety, and robust computer vision.

Openings include technician, postdoctoral, internship, and visiting researcher positions. Applicants with publications in top-tier conferences or journals are especially encouraged to apply.

To apply, please send your CV, research interests, and selected publications to mohammad.sabokrou@oist.jp.

Selected Research Projects

  • Investigating the Trustworthiness of Deep Pre-trained and Self-Supervised Models
    2024–2027 · 3.6 million yen · Grant-in-Aid for Early-Career Scientists
  • Breaking Boundaries: Robust, Domain-General Anomaly Detection with Vision-Language Models
    2026–2030 · 14 million yen · JSPS Grant-in-Aid for Scientific Research (B)
  • AI Safety and Security for Classical AI Models
    Institute for Research in Fundamental Sciences (IPM)

Selected News

  • Area Chair, NeurIPS 2026.
  • Awarded JSPS KAKENHI (Scientific Research B), 2026–2030.
  • Area Chair, BMVC 2026.
  • Two papers accepted at NeurIPS 2025.
  • Area Chair, ICLR 2026.
  • Paper accepted at ICLR 2025 on adversarially robust anomaly detection.
  • Paper accepted in TMLR on stealthy backdoor attacks via confidence-driven sampling.
  • Paper accepted at NeurIPS 2024 on scanning trojaned models using out-of-distribution samples.