Linyi Yang

Postdoctoral Associate

Westlake University

Biography

I’m a Postdoctoral Associate in the Westlake NLP group, working with Yue Zhang. Previously, I graduated with a PhD from the Insight Centre, University College Dublin, where I worked with Barry Smyth and Ruihai Dong.

I am broadly interested in data-centric AI and trustworthy AI, with a focus on improving the transparency and generalization of neural networks for natural language understanding. To this end, I am currently working on causality-guided methods for NLP and their applications in high-stakes domains. Feel free to contact me via email if you have any questions about our research.

Interests

  • Data-centric AI, Causality.

Awards

  • Outstanding Postdoctoral Representative (only one awardee), 2023.
  • Outstanding Postdoc Researcher, 2022.
  • Microsoft Early-Career Rising Star, MSRA, 2022.
  • Outstanding Self-financed Students Abroad (open to non-CSC PhDs; only one winner in Ireland), 2021.
  • Best Paper Candidate, CCIS, 2018.

Education

  • PhD in Artificial Intelligence, 2017-2021

    University College Dublin

  • MSc in Artificial Intelligence, 2017

    University College Dublin

  • BSc in Computer Science, 2016

    Harbin Engineering University

News

  • Area Chair / Senior Programme Committee (SPC): EMNLP-22; IJCAI-23.
  • PC Member / Reviewer: CIKM-20; COLING-20; ACL-21; SIGIR-21; CIKM-21; EMNLP-21; IEEE Access.
  • 2022-Dec: I received the Outstanding Postdoctoral Fellow award from Westlake University (5/600+).
  • 2022-Sep: One paper in collaboration with MSRA has been accepted to NeurIPS 2022. The first author was my intern at Westlake University. Big congrats! (core: A*, CCF: A)
  • 2022-Aug: Two papers (one first-author paper) have been accepted to COLING 2022. (core: A, CCF: B)
  • 2022-Mar: One co-first author long paper has been accepted to ACL 2022 main conference. (core: A*, CCF: A)
  • 2022-Jan: One first-author long paper has been accepted to AAAI 2022 (15% acceptance rate). (core: A*, CCF: A)
  • 2022-Jan: It is my great honor to serve as an Area Chair (AC) at EMNLP-22!
  • We have started tracking research progress on FinNLP. Feel free to add any relevant items via the Project Link.
  • 2021-May: One first-author long paper has been accepted to ACL 2021 (21% acceptance rate). (core: A*, CCF: A)
  • 2020-Oct: One first-author long paper has been accepted to COLING 2020 (Top 5% submissions). (core: A, CCF: B)
  • 2020-Sep: One co-first author resource paper has been accepted to CIKM 2020 (20% acceptance rate). (core: A, CCF: B)
  • 2019-Dec: One first-author long paper has been accepted to WWW 2020 (19% acceptance rate). (core: A*, CCF: A)
  • 2019-Aug: One first-author long paper has been accepted to FinNLP Workshop@IJCAI-19 (Oral).
  • 2018-Nov: Our paper won the best paper nomination at CCIS 2018 (Best Paper Candidate).
  • 2017-Dec: My first paper was published at AICS 2017.

Recent Publications

NumHTML: Numeric-Oriented Hierarchical Transformer Model for Multi-task Financial Forecasting

This paper describes a numeric-oriented hierarchical transformer model (NumHTML) for predicting stock returns and financial risk from multimodal aligned earnings-call data, taking advantage of the different categories of numbers (monetary, temporal, percentages, etc.) and their magnitudes.
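
To make the numeric-awareness idea concrete, here is a minimal, illustrative Python sketch (not the NumHTML implementation): it tags numbers in an earnings-call sentence with a coarse category and a log-scaled magnitude, the kind of auxiliary signal a numeric-aware encoder could consume. The regex and category labels are assumptions made for illustration only.

```python
# Illustrative sketch (not the NumHTML implementation): tag numbers in an
# earnings-call sentence with a coarse category and a log-scaled magnitude.
import math
import re

NUM_PATTERN = re.compile(r"[$€£]?\d[\d,]*(?:\.\d+)?%?")

def numeric_features(sentence: str):
    """Return (token, category, log_magnitude) triples for numeric tokens."""
    features = []
    for match in NUM_PATTERN.finditer(sentence):
        token = match.group()
        if token.endswith("%"):
            category = "percentage"
        elif token[0] in "$€£":
            category = "monetary"
        else:
            category = "plain"  # distinguishing temporal numbers would need context
        value = float(token.strip("$€£%").replace(",", ""))
        features.append((token, category, math.log10(abs(value) + 1.0)))
    return features

print(numeric_features("Revenue grew 12% to $4,200,000 this quarter."))
```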

Deep Neural Approach for Financial Analysis

We demonstrate the success of deep learning methods in modeling unstructured data for financial applications, including explainable deep learning models, multi-modal multi-task learning frameworks, and counterfactual generation systems for explanations and data augmentation.

Exploring the Efficacy of Automatically Generated Counterfactuals for Sentiment Analysis

While state-of-the-art NLP models have achieved excellent performance on a wide range of tasks in recent years, important questions are being raised about their robustness and their underlying sensitivity to systematic biases that may exist in their training and test data.

Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification

This paper proposes a novel methodology for producing plausible counterfactual explanations, whilst exploring the regularization benefits of adversarial training on language models in the domain of FinTech. Exhaustive quantitative experiments demonstrate that not only does this approach improve the model accuracy when compared to the current state-of-the-art and human performance, but it also generates counterfactual explanations which are significantly more plausible based on human trials.

MAEC: A Multimodal Aligned Earnings Conference Call Dataset for Financial Risk Prediction

We present the approach used in this work as providing a suitable framework for processing similar forms of data in the future. The resulting dataset is more than six times larger than those currently available to the research community and we discuss its potential in terms of current and future research challenges and opportunities.

HTML: Hierarchical Transformer-based Multi-task Learning for Volatility Prediction

This paper proposes a novel hierarchical transformer-based multi-task architecture designed to harness the text and audio data from quarterly earnings conference calls to predict future price volatility in the short and long term. It includes a comprehensive comparison to a variety of baselines, demonstrating very significant improvements in prediction accuracy, in the range of 17%-49% over the current state-of-the-art.
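
The multi-task idea can be sketched in a few lines of PyTorch. This is an illustrative toy rather than the paper's architecture: the dimensions, mean pooling, and the pair of regression heads for short- and long-term volatility are assumptions.

```python
# Minimal sketch: a shared Transformer encoder over per-sentence call
# representations feeding separate short- and long-term volatility heads.
import torch
import torch.nn as nn

class MultiTaskVolatility(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.short_head = nn.Linear(d_model, 1)   # e.g. 3-day volatility
        self.long_head = nn.Linear(d_model, 1)    # e.g. 30-day volatility

    def forward(self, sent_embs):                 # (batch, n_sentences, d_model)
        h = self.encoder(sent_embs).mean(dim=1)   # pool over the whole call
        return self.short_head(h), self.long_head(h)

model = MultiTaskVolatility()
calls = torch.randn(8, 20, 256)                   # 8 calls, 20 sentences each
short_pred, long_pred = model(calls)
print(short_pred.shape, long_pred.shape)          # torch.Size([8, 1]) twice
```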

Leveraging BERT to Improve the FEARS Index for Stock Forecasting

In this paper, we take into account the semantics of the FEARS search terms by leveraging the Bidirectional Encoder Representations from Transformers (BERT), and further apply a self-attention deep learning model to our refined FEARS seamlessly for stock return prediction.
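
A hedged sketch of one building block: embedding FEARS-style search terms with a pretrained BERT encoder so that semantically related queries share signal. The `bert-base-uncased` checkpoint, the example terms, and mean pooling are illustrative choices, not the paper's exact setup.

```python
# Sketch: encode search terms with BERT; the resulting term embeddings could
# then feed a downstream self-attention model for return prediction.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

search_terms = ["recession", "gold price", "unemployment benefits"]  # illustrative

with torch.no_grad():
    batch = tokenizer(search_terms, padding=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state        # (terms, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    term_embs = (hidden * mask).sum(1) / mask.sum(1)   # mean over real tokens

print(term_embs.shape)   # torch.Size([3, 768]); input to a downstream model
```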

Explainable Text-Driven Neural Network for Stock Prediction (Best Paper Nomination)

We use an output attention mechanism to allocate different weights to different days in terms of their contribution to stock price movement. Thorough empirical studies based upon historical prices of several individual stocks demonstrate the superiority of our proposed method in stock price prediction compared to state-of-the-art methods.
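
A minimal sketch of what output attention over trading days can look like (an assumption about the mechanism, not the paper's exact formulation): each day's hidden state receives a learned weight, and the weighted sum drives the prediction, so the weights themselves double as an explanation.

```python
# Sketch: GRU over daily features, attention weights per day, weighted pooling.
import torch
import torch.nn as nn

class DayAttentionPredictor(nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=5, hidden_size=hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)       # one score per trading day
        self.out = nn.Linear(hidden_dim, 1)        # next-day movement

    def forward(self, daily_features):             # (batch, days, 5), e.g. OHLCV
        states, _ = self.rnn(daily_features)
        weights = torch.softmax(self.attn(states), dim=1)    # (batch, days, 1)
        context = (weights * states).sum(dim=1)
        return self.out(context), weights.squeeze(-1)        # prediction, per-day weights

model = DayAttentionPredictor()
pred, day_weights = model(torch.randn(4, 30, 5))   # 4 stocks, 30 trading days
print(pred.shape, day_weights.shape)
```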

Multi-level Attention-Based Neural Networks for Distant Supervised Relation Extraction

We propose a dual-level attention mechanism for the relation extraction problem.
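
For illustration, a hedged sketch of a dual-level attention reader in the distant-supervision setting: word-level attention builds each sentence vector, and sentence-level attention weights the sentences in an entity-pair bag. The dimensions, scoring functions, and relation count are assumptions, not the paper's configuration.

```python
# Sketch: word-level attention inside each sentence, then sentence-level
# attention over the bag of sentences mentioning the same entity pair.
import torch
import torch.nn as nn

class DualLevelAttentionRE(nn.Module):
    def __init__(self, emb_dim=100, n_relations=53):
        super().__init__()
        self.word_attn = nn.Linear(emb_dim, 1)
        self.rel_query = nn.Parameter(torch.randn(emb_dim))
        self.classifier = nn.Linear(emb_dim, n_relations)

    def forward(self, bag):                         # (sentences, words, emb_dim)
        w = torch.softmax(self.word_attn(bag), dim=1)
        sent_vecs = (w * bag).sum(dim=1)            # word-level attention
        scores = torch.softmax(sent_vecs @ self.rel_query, dim=0)
        bag_vec = (scores.unsqueeze(-1) * sent_vecs).sum(dim=0)  # sentence-level
        return self.classifier(bag_vec)             # relation logits

logits = DualLevelAttentionRE()(torch.randn(4, 25, 100))  # bag of 4 sentences
print(logits.shape)    # torch.Size([53])
```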
