Xiao Fu's Homepage
Xiao Fu, Ph.D.
Associate Professor
School of Electrical Engineering and Computer Science
Oregon State University
3003, Kelley Engineering Center
Corvallis, OR, 97331, United States
(xiao.fu(AT)oregonstate.edu)
Links: Google Scholar, OSU AI Program, Signal Processing Group
News and Updates
Jul. 2024: Check out our new submission on arXiv. This work reviews the development of noisy-label crowdsourcing from a signal processing perspective, connecting ideas, formulations, and algorithms in this domain to techniques widely used in SP, e.g., NMF and tensor decomposition.
Sep 2023: I am humbled and honored to have received the Engelbrecht Early Career Award from the College of Engineering, Oregon State University. Thank you COE for the recognition. Many thanks to my students, collaborators, mentors and my nominator.
Sep 2023: Shahana defended her dissertation in July 2023! In Dec. 2023, Shahana will join the AI Initiative of the University of Central Florida as a Tenure-Track Assistant Professor (joint appointment with the Department of ECE and the Department of CS). Congratulations, Shahana! We are proud of you and look forward to seeing more of your accomplishments.
June 2023: I visited KU Leuven, Belgium, hosted by Prof. Aritra Konar and Prof. Lieven De Lathauwer.
- The work studies a graph neural network-based acceleration method for provably solving the joint beamforming and antenna selection problem under a uni-cast setting. An interesting takeaway is that, under reasonable conditions, this mixed-integer and nonconvex program can be solved optimally and efficiently with the help of neural imitation learning. We showed that the graph neural imitation learning approach can reduce the branch and bound method's computational complexity from an exponential order to a linear one, with high probability.
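For the curious, here is a toy Python sketch of the pruning idea only, not the formulation or method in the paper: a plain branch-and-bound search over antenna subsets in which a "policy" function, standing in for the trained graph neural network classifier, decides whether a node should be pruned. The cardinality-constrained objective, the sizes, and the hand-crafted policy are all illustrative assumptions.

```python
# Toy sketch (not the paper's method): branch-and-bound over antenna subsets,
# where a "policy" -- standing in for the trained GNN classifier -- decides
# whether to prune a node. Problem: pick K of N antennas maximizing the sum of
# channel gains (a deliberately simple stand-in objective).
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 4
gains = rng.rayleigh(size=N)          # hypothetical per-antenna channel gains

def upper_bound(fixed_on, undecided):
    """Valid bound: fill the remaining slots with the best undecided gains."""
    remaining = K - len(fixed_on)
    best_rest = sorted((gains[i] for i in undecided), reverse=True)[:remaining]
    return gains[list(fixed_on)].sum() + sum(best_rest)

def policy_prune(fixed_on, undecided, incumbent_val):
    """Stand-in for the learned classifier: prune if the bound is not promising."""
    return upper_bound(fixed_on, undecided) < incumbent_val + 1e-12

best_val, best_set = -np.inf, None
stack = [(frozenset(), 0)]            # (antennas fixed to ON, next index to branch on)
while stack:
    fixed_on, idx = stack.pop()
    if len(fixed_on) == K:            # leaf: a complete selection
        val = gains[list(fixed_on)].sum()
        if val > best_val:
            best_val, best_set = val, fixed_on
        continue
    if idx == N or len(fixed_on) + (N - idx) < K:
        continue                      # infeasible: not enough antennas left
    undecided = range(idx, N)
    if policy_prune(fixed_on, undecided, best_val):
        continue                      # the policy says this subtree is hopeless
    stack.append((fixed_on, idx + 1))          # branch: antenna idx OFF
    stack.append((fixed_on | {idx}, idx + 1))  # branch: antenna idx ON

print("selected antennas:", sorted(best_set), "objective:", best_val)
```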
- 2022 IEEE Signal Processing Society Best Paper Award: H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, ‘‘Learning to Optimize: Training Deep Neural Networks for Interference Management,’’ IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5438-5453, Oct. 15, 2018.
- 2022 IEEE Signal Processing Society Donald G. Fink Overview Paper Award: N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, ‘‘Tensor Decomposition for Signal Processing and Machine Learning,’’ IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551-3582, July 1, 2017.
- The paper shows that post-nonlinear mixture (PNL) learning does not need stringent conditions (e.g., statistical independence of the latent components, as often used in nonlinear ICA) for model identifiability. The lesson learned in the paper is that if the latent model is a low-rank matrix, then the PNL model is identifiable.
May 2022: Our group has received the National Science Foundation Faculty Early Career Development Program Award (NSF CAREER Award). This award will support us in developing exciting nonlinear factor analysis tools for machine learning and signal processing tasks like unsupervised representation learning, self-supervised learning, hyperspectral imaging, and brain signal processing; see the public information from NSF: Click Here
- Q. Lyu and X. Fu, ‘‘Identifiability-Guaranteed Simplex-Structured Post-Nonlinear Mixture Learning via Autoencoder,’’ IEEE Transactions on Signal Processing, accepted, June 2021. In this paper we ask a fundamental question: when and how can we identify the matrix factorization model under unknown post-nonlinear distortions? That is, given y=g(As) with an unknown element-wise nonlinear distortion g(.), how can we learn A and s in an unsupervised manner? Check out the paper for our most recent take on this problem.
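To make the model concrete, here is a tiny illustrative sketch (not the algorithm in the paper) that generates data following y=g(As) with simplex-structured s and an element-wise distortion g; the choice g = tanh and all sizes are assumptions for illustration. It also shows that once the element-wise distortion is undone (the paper learns it with an autoencoder), the task reduces to a simplex-structured matrix factorization.

```python
# Minimal data-model sketch of the post-nonlinear (PNL) mixture y = g(A s):
# s lives on the probability simplex, A is an unknown mixing matrix, and g is an
# unknown element-wise distortion (tanh here, chosen only for illustration).
import numpy as np

rng = np.random.default_rng(1)
M, R, T = 8, 3, 1000                      # sensors, latent components, samples
A = rng.uniform(0.1, 1.0, size=(M, R))    # hypothetical mixing matrix
S = rng.dirichlet(np.ones(R), size=T).T   # columns of S lie on the simplex
Y = np.tanh(A @ S)                        # observed PNL mixture, g = tanh

# If the element-wise distortion could be undone (the paper learns it with an
# autoencoder), the problem reduces to identifying A and S from A @ S -- a
# simplex-structured matrix factorization with known identifiability guarantees.
Z = np.arctanh(np.clip(Y, -0.999999, 0.999999))   # oracle inverse, for illustration only
print(np.allclose(Z, A @ S, atol=1e-3))           # True: nonlinearity removed
```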
- H. Sun, W. Pu, X. Fu, T.-H. Chang, and M. Hong, ‘‘Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective,’’ submitted to IEEE Transactions on Signal Processing, April 2021.
- Trivia: My first journal publication 8 years ago was in Signal Processing. See: K. K. Lee, W.-K. Ma, X. Fu, T.-H. Chan, and C.-Y. Chi, ‘‘A Khatri-Rao subspace approach to blind identification of mixtures of quasi-stationary sources,’’ Signal Processing, vol. 93, no. 12, pp. 3515-3527, Dec. 2013 (special issue in memory of Alex B. Gershman). (Matlab Code)
At least two Ph.D. positions available! Click here for more information. We welcome applicants who are interested in:
deep unsupervised learning,
social network analysis,
statistical machine learning,
hyperspectral imaging,
tensor/nonnegative matrix factorization,
deep learning for wireless communication.
Applicants should send (i) a CV, (ii) sample research papers, and (iii) a research statement to Dr. Xiao Fu (xiao.fu@oregonstate.edu). Materials received before Jan. 1, 2021 will be considered with high priority.
June 2020: Check out this submission, ‘‘Hyperspectral super-resolution via interpretable block-term tensor modeling’’. Here we offer an alternative to our previous work on tensor-based hyperspectral super-resolution (Kanatsoulis, Fu, Sidiropoulos, and Ma, 2018). The new model has two advantages: 1) the recoverability of the super-resolution image is guaranteed (as in other tensor models); 2) the latent factors of this model have physical interpretations (which other tensor models lack). The second property allows us to design structural constraints for performance enhancement.
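For readers unfamiliar with the model, here is a small synthetic sketch of the block-term (LL1) structure, where each material contributes a low-rank spatial abundance map times a spectral signature; all names and sizes below are illustrative assumptions, not settings from the paper.

```python
# Hedged sketch of an LL1 block-term model for a hyperspectral cube:
# Y = sum_r (A_r B_r^T) outer c_r, where A_r B_r^T is a low-rank spatial
# abundance map and c_r a spectral signature. Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
I, J, K = 30, 30, 50       # spatial height/width and number of spectral bands
R, L = 4, 3                # number of materials and rank of each abundance map

Y = np.zeros((I, J, K))
for r in range(R):
    A_r = rng.random((I, L))
    B_r = rng.random((J, L))
    c_r = rng.random(K)                 # spectral signature of material r
    E_r = A_r @ B_r.T                   # rank-L spatial abundance map
    Y += E_r[:, :, None] * c_r[None, None, :]   # outer product along the spectral mode

print(Y.shape)   # (30, 30, 50): a synthetic hyperspectral cube following the model
```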
June 2020: Our overview paper on structured tensor and matrix decomposition has been accepted by IEEE Signal Processing Magazine, special issue on ‘‘Non-Convex Optimization for Signal Processing and Machine Learning’’. We discussed a series of developments in optimization tools for tensor/matrix decomposition with structural requirements on the latent factors. We introduced inexact BCD, Gauss–Newton (the foundation of Tensorlab), and stochastic optimization (with ideas from training deep nets) for tensor and matrix decomposition.
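As a flavor of the inexact BCD family discussed in the article, here is a minimal sketch (my illustration, not code from the paper) that runs one projected-gradient step per block for nonnegative matrix factorization; the sizes and step rule are assumptions for the example.

```python
# A minimal sketch of inexact block coordinate descent (one projected-gradient
# step per block) for nonnegative matrix factorization X ~ W H. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 60, 80, 5
X = rng.random((m, r)) @ rng.random((r, n))   # synthetic nonnegative low-rank data
W, H = rng.random((m, r)), rng.random((r, n))

for it in range(200):
    # Block 1: one projected gradient step on W (step = 1 / Lipschitz constant)
    grad_W = (W @ H - X) @ H.T
    W = np.maximum(0.0, W - grad_W / np.linalg.norm(H @ H.T, 2))
    # Block 2: one projected gradient step on H
    grad_H = W.T @ (W @ H - X)
    H = np.maximum(0.0, H - grad_H / np.linalg.norm(W.T @ W, 2))

print("relative fit error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```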
- B. Yang, X. Fu, K. Huang, and N. D. Sidiropoulos, ‘‘Learning Nonlinear Mixture: Identifiability and Algorithm,’’ has been accepted by IEEE Transactions on Signal Processing.
Mar. 2020: Undergraduate Research Assistantship Available: I am looking for undergraduate research assistants in EECS at Oregon State University who are interested in statistical machine learning. Please send me your CV and transcripts if you are interested in working with me starting Summer or Fall 2020 (or Winter 2021). The research experience program will typically last 10 weeks (one term).
Mar. 2020: Ph.D. Position Available (Research Assistantship): I am always looking for Ph.D. students who are interested in signal processing and machine learning, especially matrix/tensor factorization models, deep unsupervised learning, and optimization algorithm design. Please send me your CV and transcripts (and papers, if you have published your work) if you are interested in working with me starting Fall 2020. Please also include some details on why you are interested in my group.
- S. Ibrahim, X. Fu, and X. Li, ‘‘On recoverability of randomly compressed tensors with low CP rank,’’ submitted to IEEE Signal Processing Letters, Jan. 2020.
- X. Fu, N. Vervliet, L. De Lathauwer, K. Huang, and N. Gillis, ‘‘Nonconvex optimization tools for large-scale tensor and matrix decomposition with structured factors,’’ submitted to IEEE Signal Processing Magazine, Jan. 2020.
Dec. 2019: Our paper ‘‘Link Prediction Under Imperfect Detection: Collaborative Filtering for Ecological Networks’’ has been accepted by IEEE Transactions on Knowledge and Data Engineering! This paper is co-authored by Xiao, Eugene Seo, Justin Clarke, and Rebecca Hutchinson, all from EECS at Oregon State! Justin was an undergraduate student with us at the time of submission, and he is now at UMass for his graduate degree. Congratulations, team!
- K. Tang, N. Kan, J. Zou, C. Li, X. Fu, M. Hong, and H. Xiong, ‘‘Multi-user Adaptive Video Delivery over Wireless Networks: A Physical Layer Resource-Aware Deep Reinforcement Learning Approach,’’ submitted to IEEE Transactions on Circuits and Systems for Video Technology.
Sep 2019: First good news in September! Shahana's first NeurIPS paper, ‘‘Crowdsourcing via Pairwise Co-occurrences: Identifiability and Algorithms’’ (S. Ibrahim, X. Fu, N. Kargas, and K. Huang), has been accepted! This year NeurIPS received a record-breaking 6,743 submissions, and only 1,428 were accepted (about 21%).
June 2019: We have submitted a paper (with Ryan, Ken, and Qiang) titled ‘‘Hyperspectral Super-Resolution via Global-Local Low-Rank Matrix Estimation’’ to IEEE Transactions on Geoscience and Remote Sensing.
June 2019: We have submitted a paper titled ‘‘Link Prediction Under Imperfect Detection: Collaborative Filtering for Ecological Networks’’ to IEEE Transactions on Knowledge and Data Engineering. In this work, we propose a statistical generative model for ecological network link prediction. The challenge for this type of network is that all the observed entries suffer from systematic underestimation, which is very different from online recommender systems. This is collaborative research with Eugene Seo, Justin Clarke, and Rebecca Hutchinson, all from EECS at Oregon State.
April 2019: Our paper (with Kejun) ‘‘Detecting Overlapping and Correlated Communities: Identifiability and Algorithm’’ has been accepted to ICML 2019! This work proposes a new community detection method that has correctness guarantees for identifying the popular mixed membership stochastic blockmodel (MMSB). Many existing methods rely on the existence of ‘‘pure nodes’’ (i.e., nodes in a network that belong to only one community) to identify the MMSB. This assumption can be restrictive. Our method leverages convex geometry-based matrix factorization to establish identifiability under much milder conditions.
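For context, here is a quick sketch of how MMSB data look; the block matrix, sizes, and Dirichlet memberships below are illustrative assumptions, and the snippet only generates a network (it does not implement our detection method).

```python
# A quick sketch of the mixed membership stochastic blockmodel (MMSB): each node
# has a membership vector on the simplex, and edges are drawn from Bernoulli(M B M^T).
import numpy as np

rng = np.random.default_rng(6)
n, k = 200, 3
M = rng.dirichlet(np.ones(k), size=n)        # node memberships (rows on the simplex)
B = 0.05 + 0.9 * np.eye(k)                   # community-community connection probabilities
P = M @ B @ M.T                              # edge probabilities
Adj = (rng.random((n, n)) < P).astype(int)
Adj = np.triu(Adj, 1); Adj = Adj + Adj.T     # symmetric adjacency, no self-loops
print(Adj.shape, Adj.sum() // 2, "edges")
```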
- H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, ‘‘Learning to Optimize: Training Deep Neural Networks for Interference Management,’’ IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5438-5453, October 2018.
Jan. 2019: Check out the new paper ‘‘Learning Nonlinear Mixtures: Identifiability and Algorithm’’. In this work we push forward parameter identifiability of linear mixture models (LMMs) to nonlinear ones. The LMM finds many applications in blind source separation-related problems, e.g., hyperspectral unmixing and topic mining. In practice, however, the mixing process is hardly linear. This work studies a fundamental question: if nonlinearity is imposed upon an LMM, can we still identify the underlying parameters of interest? The interesting observation of our work is that, under some conditions, the nonlinearity can be effectively removed, and the problem boils down to an LMM identification problem, for which we have plenty of tools at hand.
Sep 2018: Our first IEEE TKDE paper has been accepted! The paper, ‘‘Efficient and Distributed Generalized Canonical Correlations Analysis for Big Multiview Data,’’ comes from a collaboration with CMU (Prof. Christos Faloutsos and Prof. Tom Mitchell). The team members are now spread across the U.S. and the world (OSU, UFL, CMU, UVA, UCR, IIS). Congratulations to all! The full paper will be uploaded soon.
June 2018: I gave a talk at the College of Mathematical Science, University of Electronic Science and Technology of China (UESTC), Chengdu, China. The title was ‘‘Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach’’. See the slides here. The pre-print of the paper is here. I will also give this talk at Chongqing University on Jul. 13.
Feb 2018: Five papers have been accepted to ICASSP 2018, Calgary, Canada, April 2018 — congratulations to all!
- T. Qiu, X. Fu, N. D. Sidiropoulos, and D. Palomar, ‘‘MISO Channel Estimation and Tracking from Received Signal Strength Feedback,’’ accepted to IEEE Transactions on Signal Processing.
- K. Huang, X. Fu, and N. D. Sidiropoulos, ‘‘On Convergence of Epanechnikov Mean Shift,’’ accepted to AAAI 2018 (acceptance rate = 25%).
Mar. 2017: I gave a tutorial at ICASSP 2017 together with Prof. Nikos Sidiropoulos, Prof. Vagelis Papalexakis (University of California, Riverside), and Prof. L. De Lathauwer (KU Leuven). The title was ‘‘Tensor Decomposition for Signal Processing and Machine Learning,’’ which is based on our IEEE Transactions on Signal Processing overview paper. Check out the slides and the camera-ready paper.
Sep. 2016: Our paper ‘‘Efficient and Distributed Algorithms for Large-Scale Generalized Canonical Correlations Analysis’’ has been accepted by the IEEE International Conference on Data Mining (ICDM 2016)! This year, ICDM will be held in the week right after NIPS, also in Barcelona. The acceptance rate of ICDM this year is 19.6%.
June 2016: We have uploaded an overview paper on tensor decomposition to arXiv.org; see ‘‘Tensor decomposition for signal processing and machine learning’’. In this paper, fundamental aspects of tensor decomposition are addressed, including identifiability issues (with insightful simple proofs!), algorithms, and applications from classical signal processing and recent machine learning topics. The goal of this paper is to provide researchers with a starting point for tensor-related research.
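As a small taste of the algorithmic side covered in the overview, here is a bare-bones sketch of CPD computed via alternating least squares on a synthetic low-rank tensor; this is illustrative only, and the sizes, rank, and iteration count are assumptions.

```python
# A minimal sketch of CPD via alternating least squares (ALS) on a synthetic
# rank-R tensor, illustrating the kind of algorithm surveyed in the paper.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: rows indexed by (row of U, row of V)."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

rng = np.random.default_rng(5)
I, J, K, R = 15, 16, 17, 3
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A, B, C)            # ground-truth rank-R tensor

Ah, Bh, Ch = (rng.standard_normal((d, R)) for d in (I, J, K))   # random initialization
for _ in range(50):
    X1 = X.reshape(I, J * K)                       # mode-1 unfolding
    Ah = X1 @ khatri_rao(Bh, Ch) @ np.linalg.pinv((Bh.T @ Bh) * (Ch.T @ Ch))
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)    # mode-2 unfolding
    Bh = X2 @ khatri_rao(Ah, Ch) @ np.linalg.pinv((Ah.T @ Ah) * (Ch.T @ Ch))
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)    # mode-3 unfolding
    Ch = X3 @ khatri_rao(Ah, Bh) @ np.linalg.pinv((Ah.T @ Ah) * (Bh.T @ Bh))

Xhat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```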
June 2016: Our EUSIPCO 2016 papers have been accepted. Here is the one that deals with the famous unit-modulus quadratic program: ‘‘Fast unit-modulus least squares with applications in transmit beamforming’’. We proposed an algorithm that uses three lines of code (see Algorithm 1 in the paper) to approximate this famous problem; it works surprisingly well and saves memory and runtime substantially compared to some popular approaches, e.g., semidefinite relaxation.
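To give a rough feel for how light such schemes can be, here is a hedged sketch of a projected-gradient-style update for unit-modulus least squares; this is my generic illustration of the idea, not Algorithm 1 from the paper, and the problem sizes, step rule, and iteration count are arbitrary assumptions.

```python
# Hedged sketch: projected-gradient-style updates for unit-modulus least squares,
# min ||A x - b||^2 s.t. |x_i| = 1 (a generic illustration, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(4)
m, n = 64, 16
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # ground-truth unit-modulus vector
b = A @ x_true

x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))          # random unit-modulus initialization
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # conservative step size
for _ in range(500):
    z = x - step * A.conj().T @ (A @ x - b)            # gradient step on the least-squares cost
    x = z / np.maximum(np.abs(z), 1e-12)               # project back to unit modulus

print("residual:", np.linalg.norm(A @ x - b))
```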
April, 2016: We have submitted a manuscript titled ‘‘Learning from hidden traits: Joint factor analysis and latent clustering’’ to IEEE Transactions on Signal Processing. Motivated by the fact that many data (e.g., documents and handwritten digits) exhibit better cluster structure in some latent domain relative to the original data domain, we propose a formulation that seeks such cluster-aware dimensionality reduction.
Feb. 2016: Check out the paper ‘‘Robust volume minimization-based structured matrix factorization via alternating optimization’’. We look into an important matrix factorization model in hyperspectral imaging and topic mining, where the data are modeled as points in a convex hull. We find the loading factors by solving a simplex-volume minimization problem, and we pay special attention to a practical issue in this structured matrix factorization model, namely, sensitivity to outliers. This paper will be presented at ICASSP 2016, Mar. 20-25, 2016, Shanghai, China.
Dec. 2015: I gave two talks, one at the Digital Technology Center, University of Minnesota, Minneapolis, MN, United States, and one at the School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, China. The title of the talks was ‘‘A Structured Matrix Factorization Model for Signal Processing and Machine Learning’’.
Jul. 2015: Check out this paper, ‘‘A factor analysis framework for power spectra separation and multiple emitter localization,’’ which has been accepted by IEEE Transactions on Signal Processing. We consider a scenario in wireless communications where multiple emitters exist, and the receivers wish to know their locations and their individual power spectra. This problem finds applications in dynamic spectrum access systems, e.g., cognitive radio, and may also be used for intelligent beamforming, routing, and scheduling. Existing spectrum sensing approaches mostly consider estimating the aggregate spectrum of the received signal rather than the underlying spectral atoms, i.e., the individual spectra corresponding to different sources. We consider modeling, formulating, and solving this problem; robustification against sensor failures is also considered.
Jul. 2015: The paper ‘‘Joint Tensor Factorization and Outlying Slab Suppression With Applications’’ has been accepted by IEEE Transactions on Signal Processing; see the pre-print here (arXiv). In this work, we consider a realistic scenario where some slabs of a tensor are corrupted. Such a setup is commonly seen in speech separation, fluorescence data analysis, and social network data mining. A simple low-rank tensor factorization algorithm is proposed to deal with this problem, and interesting, interpretable results are observed.