Iman Mirzadeh

Senior Machine Learning Research Engineer

Apple

About Me

I’m currently a Senior ML Research Engineer at Apple. Prior to joining Apple, I received my PhD from Washington State University, where I worked on the continual learning problem at the Embedded Machine Intelligence Lab (EMIL) under the supervision of Dr. Hassan Ghasemzadeh.

My current research focuses on the real-world challenges of working with Large Language Models (LLMs), particularly inference efficiency and reasoning capabilities.

Interests
  • Large Language Models
  • Formal Reasoning
  • Continual Learning
  • Inference Optimization
Education
  • Ph.D. in Computer Science (Artificial Intelligence), 2018–2022

    Washington State University

  • M.Sc. in Computer Science (Artificial Intelligence), 2018–2020

    Washington State University

  • B.Sc. in Computer Engineering (Information Technology), 2013–2018

    University of Tehran

Experience

Apple
Senior Machine Learning Research Engineer
May 2023 – Present

DeepMind
Research Scientist Intern
Aug 2021 – Dec 2021 (Remote)

Washington State University
Graduate Research Assistant
Aug 2018 – Aug 2022

Sokhan AI
Machine Learning Engineer
Aug 2017 – Aug 2018

Selected Publications

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Conference on Neural Information Processing Systems (NeurIPS), 2025

GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
International Conference on Learning Representations (ICLR), 2025

OpenELM: An Efficient Language Model Family with Open Training and Inference Framework

LLM in a flash: Efficient Large Language Model Inference with Limited Memory
Annual Meeting of the Association for Computational Linguistics (ACL), 2024

ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models
International Conference on Learning Representations (ICLR), 2024 [Oral]

Wide Neural Networks Forget Less Catastrophically
International Conference on Machine Learning (ICML), 2022

Linear Mode Connectivity in Multitask and Continual Learning
International Conference on Learning Representations (ICLR), 2021