Computer Science Department
Oregon State University
Corvallis, OR 97331
Voice: (541) 737-5552
e-mail: tadepall at cs dot orst dot edu
Office: 3069, Kelley Engineering Center
Postal Address: 1048, Kelley Engineering Center, Corvallis, OR 97331-3202, U.S.A.
Office Hours: Tuesday: 1:00-2:00, Thursday: 2:00-3:00
PhD, Computer Science, Rutgers University, U.S., 1990;
MTech, Computer Science, Indian Institute of Technology, Madras, India, 1981;
BTech, Electrical Engineering, Regional Engineering College, Warangal, India, 1979.
Special Courses and Tutorials
- Monte-Carlo Methods in AI, with Tom Dietterich, Alan Fern, and Weng-Keen Wong, Corvallis, March 18-22, 2013.
- Monte-Carlo Methods in AI, with Tom Dietterich, Alan Fern, Kagan Tumer, and Weng-Keen Wong, Corvallis, 2012.
- Reinforcement Learning: From Foundations to Advanced Topics, with Sridhar Mahadevan and Vivek Borkar, IJCAI 2007.
- Decision-Theoretic Planning and Learning in Relational Domains, with Alan Fern and Kristian Kersting, AAAI 2008.
- Research Interests: Transfer Learning, Natural Language Understanding, Reinforcement Learning, Decision-Theoretic Planning, Structured Prediction
- Research Projects:
Active Transfer Learning
Deep Reading and Learning
Learning and Planning in Service Domains
Search-based Structured Prediction
- Applications of research: Real-Time Strategy Games, Air-space Management, Fire and Emergency Response, Product Delivery, Proactive Assistance, Card Games, Information Extraction, Natural Language Understanding
Conferences and Journals
International Planning Competition - Learning Track
Machine Learning Journal Special issue on Structured Prediction
International Conference on Machine Learning (ICML) 2007
Inductive Logic Programming (ILP) 2007
- Walker Orr, Topic: Event Detection and Inference from Texts
- Mandana Hamidi, Topic: Imitation Learning of Hierarchical Policies
- Chao Ma, Topic: Coreference Resolution and Entity Linking
- Beatrice Moissinac, Topic: Algorithmic Teaching and Tutoring
- Aswin Nadamuni Raghavan, PhD: Domain-Independent Planning for Markov Decision Processes with Factored State and Action Spaces
- Qin Rui, MS: Information Extraction from Weather Reports
- Janardhan Rao Doppa, PhD: A Search-based Framework for Structured Prediction
- Kranti Kumar Potanapalli, MS: Learning for Search and Coverage
- Neville Mehta, PhD: Learning Hierarchies for Reinforcement Learning
- Aaron Wilson, PhD: Bayesian Optimization for Reinforcement Learning
- Scott Proper, PhD: Multi-agent Reinforcement Learning
- Ronny Bjarnason, PhD: Multi-level Rollout Reinforcement Learning
- Sriraam Natarajan, PhD: Statistical Relational Learning
- Charles Parker, PhD: Structured Gradient Boosting
- Kiran Polavarapu, MS: Event and Sentiment Extraction in the Financial Domain
- Thierry Donneaugolencer, MS: Planning by Sparse Sampling in Partially Observable Domains
- Kim Mach, MS: Experimental Evaluation of Auto-exploratory Model-free Average-Reward Reinforcement Learning
- Nimish Dharawat, MS: Learning Tree Patterns for Information Extraction
- Sriraam Natarajan, MS: Multi-criterion Average-Reward Reinforcement Learning
- Sandeep Seri, MS: Hierarchical Average-reward Reinforcement Learning.
- Hong Tang, MS: Average-reward Reinforcement Learning for Product Delivery by Multiple Vehicles.
- Tom Amoth, PhD: Exact Learning of Tree Patterns.
- Ray Liere, PhD: Active Learning with Committees with Applications to Text Categorization.
- Chandra Reddy, PhD: Learning Hierarchical Decomposition Rules for Planning: An Inductive Logic Programming Approach.
- DoKyeong Ok, PhD: A Study of Model-based Average Reward Reinforcement Learning.
- Michael Chisholm, MS: Learning Classification Rules by Randomized Iterative Local Search.
- Peter Drake, MS: Constructive Induction for Improved Learning of Boolean Functions
- Yenong Qi, MS: Local Search Methods for Job Shop Scheduling
- Silvana Roncagliolo, MS: Empirical Speedup Learning of Decomposition Rules for Planning
- Ramana Isukapalli, MS: Learning Macro-operators for Planning Using Simulators
Prasad Tadepalli, firstname.lastname@example.org