Alan Fern
Professor and Associate Head of Research
School of Electrical Engineering and Computer Science
Oregon State University
Office Location: 2071 Kelley Engineering Center
(541) 737-9202 (office) (I never check phone messages; please send an email instead)
(541) 737-1300 (fax)
E-mail: alan.fern@oregonstate.edu
Postal Address: Kelley Engineering Center, Corvallis, OR 97330-5501, U.S.A.
Quick Links: Teaching | Publications (Google Scholar) | Students
Education
B.S. Electrical Engineering, University of Maine, 1997
M.S. Computer Engineering, Purdue University, 2000
Ph.D. Computer Engineering, Purdue University, 2004 (advised by Robert Givan)
Teaching
o Video Lectures
o Lab Exercises: Distributed AI using Ray (a minimal illustrative sketch follows below)
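For readers unfamiliar with Ray, here is a minimal sketch of the style of distributed computation the lab exercises build on. The @ray.remote / ray.get pattern is standard Ray API; the evaluate_policy workload is a hypothetical stand-in and is not drawn from the actual course materials.

    # Minimal Ray sketch: fan out independent tasks, gather results.
    # evaluate_policy is a hypothetical workload, not from the lab exercises.
    import random
    import ray

    ray.init()  # start a local Ray runtime

    @ray.remote
    def evaluate_policy(seed: int) -> float:
        """Stand-in for an expensive AI workload (e.g., a policy rollout)."""
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(1000)) / 1000

    # Launch the evaluations in parallel across Ray workers.
    futures = [evaluate_policy.remote(seed) for seed in range(8)]
    returns = ray.get(futures)  # block until all tasks finish
    print(f"mean return estimate: {sum(returns) / len(returns):.3f}")

    ray.shutdown()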
Research
My primary research interests are in the field of artificial intelligence, where I focus on the sub-areas of machine learning and automated planning. I am particularly interested in the intersection of these areas. Some example projects include:
AgAID AI Institute – AI for Agricultural Applications: I am very excited to be part of the AgAID AI Institute, funded by USDA-NIFA and led by Washington State University. The project kicks off in October 2021, and I am leading a team of 13 OSU researchers focused on AI, robotics, and human-computer/robot interaction.
Learning and Planning for Bipedal Robot Locomotion: I co-direct the Dynamic Robot Lab (DRL) with Jonathan Hurst. We are studying techniques for training the bipedal robot Cassie to exhibit agile bipedal locomotion, including learning and planning for both low-level behaviors and high-level planned behaviors. See our YouTube channel for examples of Cassie in the real world.
Machine Common Sense: This DARPA-sponsored, OSU-led project is in collaboration with behavioral psychologist Karen Adolph at NYU and roboticist Tucker Hermans at the University of Utah. We are studying and developing learning and reasoning techniques to enable AI systems to exhibit common-sense reasoning and planning capabilities on par with those of an 18-month-old infant. A key aspect of our approach is to study how to effectively combine the representation-learning capabilities of deep neural networks with the powerful reasoning capabilities of state-of-the-art AI planning and reasoning engines.
Explainable Artificial Intelligence: It is becoming increasingly common for autonomous and semi-autonomous systems, such as UAVs, robots, and virtual agents, to be developed via a combination of traditional programming and machine learning. Currently, acceptance testing of these systems is problematic due to the black-box nature of machine-learned components, which does not allow testers to understand the rationale behind the learned decisions. Our research will develop the new paradigm of explanation-informed acceptance testing (xACT), which will allow testers not only to observe and evaluate the behavior of machine-learned systems, but also to evaluate explanations of the decisions leading to that behavior. As a result, the xACT paradigm allows testers to determine whether machine-learned systems are making decisions "for the right reasons", which provides stronger justification for trusting the system in (semi-)autonomous operation. The public will benefit from this technology via the availability of more understandable and, in turn, trustworthy (semi-)autonomous systems for complex applications in defense, industry, and everyday life.
Anomaly Detection and Explanation: We study how best to detect and explain anomalies, with a particular focus on security applications and interaction with end-user analysts.
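As a rough illustration of the detection-plus-explanation setting, the toy sketch below pairs scikit-learn's IsolationForest with a naive z-score-based per-feature explanation. Both are generic stand-ins chosen for illustration and do not represent our actual methodology.

    # Toy sketch: detect anomalies, then offer a naive per-feature explanation.
    # IsolationForest and the z-score heuristic are generic stand-ins, not the
    # methods developed in our research.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))   # mostly "normal" feature vectors
    X[:5] += 6.0                    # a few injected anomalies

    detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
    labels = detector.predict(X)    # -1 marks points judged anomalous

    mu, sigma = X.mean(axis=0), X.std(axis=0)
    for i in np.where(labels == -1)[0]:
        z = np.abs((X[i] - mu) / sigma)  # naive explanation: most extreme feature
        print(f"anomaly at index {i}: most extreme feature = {int(z.argmax())}")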
Current Graduate Students
- Devin Crowley, MS Robotics
- Jeremy Dao, PhD AI
- Helei Duan (co-advised w/ Jonathan Hurst), PhD Robotics
- Anurag Koul, PhD CS
- Kin-Ho Lam, MS AI (co-advised w/ Minsuk Kahng)
- Zhengxian Lin, MS CS & AI
- Ashish Malik, MS Robotics
- Erich Merrill, PhD CS
- Aseem Saxena, MS Robotics
- Jonah Siekmann, MS Robotics (co-advised w/ Jonathan Hurst)
- Aayam Shrestha, PhD CS
- Zeyad Shureih, MS CS
- Diego David Charrez Ticona, MS CS
Former Students
- Murugeswari Issakkimuthu, PhD 2021, Thesis: Learning and Improving Policies for Probabilistic Planning
- Mohamad Danesh, MS 2021, Project: Re-Understanding Finite State Representations of Recurrent Policy Networks
- Chengxi Yang, MS 2021, Project: A Comparison of Representations for Learning to Predict Molecule Mechanical Behavior
- Risheek Garrepalli (co-advised w/ Tom Dietterich), MS 2020, Project: Oracle Analysis of Representations for Deep Open Category Detection
- Shan Xue, PhD 2020, Thesis: Scheduling and Online Planning in Stochastic Diffusion Networks
- Zoe Juozapaitis, MS 2019, Project: Explainable Reinforcement Learning via Reward Decomposition
- Md Amran Siddiqui, PhD 2019, Thesis: Anomaly Detection: Theory, Explanation, and User Feedback
- Amrita Sadarangani, MS 2019, Project: Saliency of Attributes for Object Oriented Domains
- Patrick Clary (co-advised w/ Jonathan Hurst), MS 2019, Thesis: Sim-to-Real Transfer for the Bipedal Robot Cassie
- Nima Dolatnia, PhD 2018 (co-advised w/ Sarah Emerson), Thesis: Bayesian Optimization with Resource and Production Constraints
- Trevor Fiez, MS 2017 (co-advised w/ Sinisa Todorovic), Thesis: An Analysis of Training Methodologies for Deep Visual Tracking
- Jesse Hostetler, PhD 2017 (co-advised w/ Tom Dietterich), Thesis: Monte Carlo Tree Search with Fixed and Adaptive Abstractions
- Sheng Chen, PhD 2017, Thesis: Object Tracking-by-Segmentation in Videos
- Eric Marshall, MS 2015, Project: An Empirical Evaluation of Policy Rollout for Clue
- Jervis Pinto, PhD 2015, Thesis: Incorporating and Learning Behavior Constraints for Sequential Decision Making
- Vikedo Terhuja, MS 2015, Thesis: Automatic Detection of Possessions and Shots from Raw Basketball Video
- Qingkai Lu, MS 2015, Thesis: Offensive Direction Inference in Real-World Football Video
- Kshitij Judah, PhD 2014, Thesis: New Learning Modes for Sequential Decision Making
- Janardhan Rao (Jana) Doppa, PhD 2014 (co-advised w/ Prasad Tadepalli), Thesis: Integrating Learning and Search for Structured Prediction
- Kranti Kumar, MS 2013 (co-advised w/ Prasad Tadepalli), Thesis: Coactive Learning for Multi-Robot Search and Coverage
- Shikhar Mall, MS 2013, Project: Reinforcement Learning for P2P Backup Applications
- Joe Selman, MS 2012, Project: REPEL: An Inference Engine for Probabilistic Event Logic
- Aaron Wilson, PhD 2012 (co-advised w/ Prasad Tadepalli), Thesis: Bayesian Methods for Knowledge Transfer and Policy Search in Reinforcement Learning
- Rob Hess, PhD 2012, Thesis: Toward Computer Vision for Understanding American Football in Video
- Brian King, MS 2012, Thesis: Adversarial Planning by Strategy Switching in a Real-Time Strategy Game
- Yuehua Xu, PhD 2010, Thesis: Learning Ranking Functions for Efficient Search
- Paul Lewis, MS 2010, Thesis: Ensemble Monte-Carlo Planning: An Empirical Study
- Ronny Bjarnason, PhD 2010 (co-advised w/ Prasad Tadepalli), Thesis: Monte-Carlo Planning for Probabilistic Domains
- Guohua Hao, MS 2009, Thesis: Revisiting Output Coding for Sequential Supervised Learning
- Radha-Krishna Balla, MS 2009, Thesis: UCT for Tactical Assaults in Real-Time Strategy Games
- Sean McDougal, MS 2008, Project: Automatic Panorama Stitching
- Benjamin Brewster, MS 2007, Project: Finding and Using Chokepoints in Stratagus
- Christopher Ventura, MS 2007, Thesis: A SAT-Based Planning Framework for Optimizing Resource Production
- Sungwook Yoon, PhD 2006 (co-advised w/ Robert Givan), Thesis: Learning Control Knowledge for AI Planning Domains
- Daman Oberoi, MS 2006, Project: Simulation-Based Optimization of Football Defenses
- Hema Jyothi, MS 2006 (co-advised w/ Thinh Nguyen), Project: Reinforcement Learning for Network Routing