Some Research Topics of Current Interest
Integration of Deep Learning and First-Order Logic (Knowledge)
Towards developing an end-to-end learning framework that integrates high-level logic representations with deep learning architectures; a minimal code sketch of the general idea is given below. Our representative works (with applications to NLP tasks) so far:
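As a flavor of this direction, here is a minimal sketch, assuming a PyTorch setup, of one common recipe (not the specific framework from the papers below): relax a logic rule A -> B into a differentiable penalty and add it to the task loss. The two-head setup and the rule weight are illustrative assumptions.

```python
# Minimal sketch: inject the rule A -> B into training as a soft constraint.
import torch
import torch.nn.functional as F

def implication_penalty(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Lukasiewicz relaxation of A -> B: the rule is satisfied to degree
    min(1, 1 - p_a + p_b), so the violation is max(0, p_a - p_b)."""
    return torch.clamp(p_a - p_b, min=0.0).mean()

# Hypothetical two-head model output: p_a = "token is an opinion word",
# p_b = "a neighboring token is an aspect term". Random stand-ins here.
logits_a = torch.randn(32, requires_grad=True)
logits_b = torch.randn(32, requires_grad=True)
labels_a = torch.randint(0, 2, (32,)).float()

task_loss = F.binary_cross_entropy_with_logits(logits_a, labels_a)
rule_loss = implication_penalty(torch.sigmoid(logits_a), torch.sigmoid(logits_b))
(task_loss + 0.1 * rule_loss).backward()  # 0.1: hand-picked rule weight
```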
Our previous attempts at encoding knowledge, in the form of syntactic information, for NLP tasks (fine-grained sentiment analysis); a simplified sketch of this idea follows the list below:
- Syntactically-Meaningful and Transferable Recursive Neural Networks for Aspect and Opinion Extraction. [Bibtex]
Wenya Wang and Sinno Jialin Pan.
In Computational Linguistics (CL). Volume 45, Issue 4, Pages 705-736, 2019.
- Memory Networks for Fine-grained Opinion Mining. [Bibtex]
Wenya Wang, Sinno Jialin Pan and Daniel Dahlmeier.
In Artificial Intelligence (AIJ). Volume 265, Pages 1-17, 2018.
- Transferable Interactive Memory Network for Domain Adaptation in Fine-grained Opinion Extraction. [Bibtex]
Wenya Wang and Sinno Jialin Pan.
In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-19).
- Recursive Neural Structural Correspondence Network for Cross-domain Aspect and Opinion Co-Extraction. [Bibtex]
Wenya Wang and Sinno Jialin Pan.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL-18).
[Code]
- Transition-based Adversarial Network for Cross-lingual Aspect Extraction. [Bibtex]
Wenya Wang and Sinno Jialin Pan.
In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI-18).
- Coupled Multi-layer Attentions for Co-extraction of Aspect and Opinion Terms. [Bibtex]
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier and Xiaokui Xiao.
In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI-17).
[Code & Dataset]
- Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis. [Bibtex]
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier and Xiaokui Xiao.
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16).
[Code & Dataset]
Deep Model Compression
Towards learning compressed deep models that preserve satisfactory prediction performance on edge or resource-limited devices; a minimal sketch of one standard compression technique is given below. Our representative works so far:
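One widely used baseline, named here purely for illustration and not necessarily the method in our papers, is global magnitude pruning: zero out the smallest-magnitude weights and deploy the resulting sparse model. A minimal sketch, assuming PyTorch:

```python
# Minimal sketch: post-training global magnitude pruning.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # prune weight matrices only, keep biases
                p.mul_((p.abs() > threshold).float())

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
magnitude_prune(model, sparsity=0.9)
kept = sum((p != 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"kept {kept}/{total} weights")  # roughly 10% survive
```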
Distributed (or Federated) Multi-task Learning
Towards developing communication-efficient distributed learning frameworks for multi-task learning (i.e., jointly and efficiently learning multiple tasks in a geo-distributed computing environment); a minimal sketch of one communication-saving pattern is given below. Our representative works so far:
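As one illustrative, hypothetical pattern (not the framework from our papers): workers share only a common feature extractor and keep per-task heads local, so each communication round transmits a fraction of the parameters. A FedAvg-style round might look like:

```python
# Minimal sketch: average only the shared encoder across workers.
import torch
import torch.nn as nn

def average_shared(shared_states: list) -> dict:
    """FedAvg-style parameter averaging over workers' shared layers."""
    return {key: torch.stack([s[key] for s in shared_states]).mean(dim=0)
            for key in shared_states[0]}

# Each worker: shared encoder (communicated) + task-specific head (local).
workers = [nn.ModuleDict({
    "encoder": nn.Linear(32, 16),      # shared across tasks
    "head": nn.Linear(16, n_classes),  # task-specific, never transmitted
}) for n_classes in (2, 5, 3)]

# ... local training on each worker would happen here ...

# One communication round: average the encoders, then broadcast back.
new_shared = average_shared([w["encoder"].state_dict() for w in workers])
for w in workers:
    w["encoder"].load_state_dict(new_shared)
```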
Other related works on distributed learning:
Learning from Time Series Data
Towards developing a general learning framework for time series data that effectively captures the statistical and temporal patterns underlying the data; a minimal sketch of this idea is given below. Our representative works (with an application to sensor-based activity recognition) so far:
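To illustrate with a hypothetical architecture (not the one from our papers): per-window summary statistics can capture the statistical view while a recurrent encoder captures the temporal view, with the two concatenated for classification.

```python
# Minimal sketch: statistical + temporal views of a sensor window.
import torch
import torch.nn as nn

class StatTemporalNet(nn.Module):
    def __init__(self, n_channels: int, hidden: int, n_classes: int):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        # 4 statistics per channel: mean, std, min, max
        self.classifier = nn.Linear(hidden + 4 * n_channels, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels), e.g. a window of accelerometer readings
        stats = torch.cat([x.mean(1), x.std(1),
                           x.amin(1), x.amax(1)], dim=-1)  # statistical view
        _, h = self.gru(x)                                 # temporal view
        return self.classifier(torch.cat([h[-1], stats], dim=-1))

# Toy usage for sensor-based activity recognition: 3-axis accelerometer,
# 100-step windows, 6 activity classes.
model = StatTemporalNet(n_channels=3, hidden=32, n_classes=6)
logits = model(torch.randn(8, 100, 3))
print(logits.shape)  # torch.Size([8, 6])
```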