Prasad Tadepalli
Professor
Computer Science Department
Oregon State University
Corvallis, OR 97331
Voice: (541) 737-5552
Fax: (541) 737-3014
e-mail: tadepall at cs dot orst dot edu
Office: 3069, Kelley Engineering Center
Postal Address: 1048, Kelley Engineering Center, Corvallis, OR 97331-3202, U.S.A.
Office Hours:
Tuesday: 1:00-2:00, Thursday: 2:00-3:00
Education
PhD, Computer Science, Rutgers University, U.S., 1990;
MTech, Computer Science, Indian Institute of Technology, Madras, India, 1981;
BTech, Electrical Engineering, Regional Engineering College, Warangal, India, 1979.
Teaching
Special Courses and Tutorials
- Monte-Carlo Methods in AI, with Tom Dietterich, Alan Fern, and Weng-Keen Wong, Corvallis, March 18-22, 2013.
- Reinforcement Learning: From Foundations to Advanced Topics, with Sridhar Mahadevan and Vivek Borkar, IJCAI 2007.
- Decision-Theoretic Planning and Learning in Relational Domains , with Alan Fern and Kristian Kersting, AAAI, 2008.
Research
- Research Interests: Natural Language Understanding, Reinforcement Learning, Relational Learning, Causal Inference
- Publications (DBLP page)
- Research Projects:
Explainable AI
Machine Common Sense
- Applications of research: Computer Games, Fire and Emergency Response, Logistics, Biological networks
Conferences and Journals
International Planning Competition - Learning Track
Machine Learning Journal Special issue on Structured Prediction
International Conference on Machine Learning (ICML) 2007
Inductive Logic Programming (ILP) 2007
Useful Links
Current students
- Alexander Turner
Topic: AI Safety
- Yilin Yang
Topic: Explaining Neural Machine Translation
- Vivswan Shitole
Topic: Structured Attention Graphs for Understanding Image Classifications
- Prachi Rahurkar
Topic: Natural Language Question Answering
- Parijat Bhatt
Topic: Learning to Search
- Rajesh Mangannavar
Topic: Reinforcement Learning
Past Advisees
- Walker Orr, PhD: Towards Narrative Understanding with Deep Networks and Hidden Markov Models
- Mandana Hamidi-Haines, PhD: Learning from Examples and Interactions
- Chao Ma, PhD: New Directions in Search-based Structured Prediction: Multi-task Learning and Integration of Deep Models
- Aswin Nadamuni Raghavan, PhD: Domain-Independent Planning for Markov Decision Processes with Factored State and Action Spaces
- Qin Rui, MS: Information Extraction from Weather Reports
- Purbasha Chatterjee, MS: Answer Selection with Attentive Clustering
- Meghamala Sinha, MS: Pooling vs. Voting: An Empirical Study of Learning Causal Structures
- Durga "Harish" Dayapule, Project: Extending the Scope of Hindsight Optimization for Emergency Planning
- Janardhan Rao Doppa, PhD: A Search-based Framework for Structured Prediction
- Kranti Kumar Potanapalli, MS: Learning for Search and Coverage
- Neville Mehta, PhD: Learning Hierarchies for Reinforcement Learning
- Aaron Wilson, PhD: Bayesian Optimization for Reinforcement Learning
- Scott Proper, PhD: Multi-agent Reinforcement Learning
- Ronny Bjarnason, PhD: Multi-level Rollout Reinforcement Learning
- Sriraam Natarajan, PhD: Statistical Relational Learning
- Charles Parker, PhD: Structured Gradient Boosting
- Kiran Polavarapu, MS: Event and Sentiment Extraction in the Financial Domain
- Thierry Donneau-Golencer, MS: Planning by Sparse Sampling in Partially Observable Domains
- Kim Mach, MS: Experimental Evaluation of Auto-exploratory Model-free Average-Reward Reinforcement Learning
- Nimish Dharawat, MS: Learning Tree Patterns for Information Extraction
- Sriraam Natarajan, MS: Multi-criterion Average-Reward Reinforcement Learning
- Sandeep Seri, MS: Hierarchical Average-reward Reinforcement Learning.
- Hong Tang, MS: Average-reward Reinforcement Learning for Product Delivery by Multiple Vehicles.
- Tom Amoth, PhD: Exact Learning of Tree Patterns.
- Ray Liere, PhD: Active Learning with Committees with Applications to Text Categorization.
- Chandra Reddy, PhD : Learning Hierarchical Decomposition Rules for Planning: an Inductive Logic Programming Approach.
- DoKyeong Ok, PhD: A Study of Model-based Average Reward Reinforcement Learning.
- Michael Chisholm, MS: Learning Classification Rules by Randomized Iterative Local Search.
- Peter Drake, MS: Constructive Induction for Improved Learning of Boolean Functions
- Yenong Qi, MS: Local Search Methods for Job Shop Scheduling
- Silvana Roncagliolo, MS: Empirical Speedup Learning of Decomposition Rules for Planning
- Ramana Isukapalli, MS: Learning Macro-operators for Planning Using Simulators
Prasad Tadepalli, tadepall@cs.orst.edu