PAKDD Workshop on
Graph Learning with Foundation Models
(PAKDD-GLFM 2025)

June 10-13, 2025

Sydney, Australia
About PAKDD-GLFM 2025

In PAKDD-GLFM 2025 (Workshop on Graph Learning with Foundation Models @ PAKDD 2025), we aim to bring together researchers and practitioners from academia and industry to discuss and advance the state-of-the-art in graph machine learning with foundation models.

Graph Foundation Models (GFMs) represent a cutting-edge approach in graph machine learning that integrates the power of large-scale foundation models with graph structures. Specifically, GFMs are designed to effectively capture complex relationships and dependencies present in graph-structured data, such as social networks, biological networks, and knowledge graphs. By learning from diverse and extensive graph data, GFMs show emergent capabilities that significantly enhance performance across various, even unseen, downstream tasks, such as node/graph classification and link prediction.

As graphs remain a powerful tool for modeling real-world interactions, the development of GFMs is becoming increasingly important, with applications spanning diverse fields: detecting anomalies in social networks, drug discovery through graph-based molecular classification, enhancing recommendation systems for personalized content, and strengthening cybersecurity by identifying vulnerabilities in network structures. These emergent capabilities allow GFMs to adapt to new applications, generalizing across diverse domains and uncovering insights beyond the reach of traditional models.

Call For Papers

The scope of this workshop includes (but is not limited to):

  • Theoretical foundation of GFMs: Understanding their ability to generalize and remain resilient.
  • Building blocks of GFMs: Exploring the structural components that underpin graph foundation models.
  • Empirical analysis: Evaluation across tasks and datasets, identifying limitations.
  • Large-scale pre-training: Techniques to reduce computational costs while maintaining performance.
  • Fine-tuning and adaptation: Techniques for improving performance on target tasks.
  • Multi-modality in graph tasks: Integration of multi-modal data.
  • LLM + Graph learning: Combining large language models with graph learning to unlock new capabilities and applications.
  • New applications: Leveraging emergent capabilities for real-world scenarios.

Important Dates

  • Workshop Paper Deadline: March 9, 2025 (Extended!)
  • Acceptance Notification: March 15, 2025
  • Camera-ready Submission: March 29, 2025
  • All deadlines are 23:59 Pacific Standard Time (PST).

Submission Instructions

Paper submission must be in English. All papers will be double-blind reviewed by the Program Committee based on technical quality, relevance to the GLFM workshop, originality, significance, and presentation quality. All paper submissions will be handled electronically. The author list and order cannot be changed after the paper is submitted. Papers that do not comply with the Submission Policy will be rejected without review.

Each submitted paper must include an abstract of up to 200 words and must be no longer than 12 pages (including references, appendices, etc.). Authors must follow the Springer LNCS/LNAI manuscript submission guidelines and formatting template (https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines). All papers must be submitted electronically through the GLFM CMT paper submission system (https://cmt3.research.microsoft.com/GLFM2025) in PDF format only.

GLFM 2025 will not accept any paper that, at the time of submission, is under review for, has already been published in, or has already been accepted for publication in a journal or another venue with formally published proceedings. Authors are also required not to submit their papers to other venues with formal publication during the GLFM 2025 review period. Preprints on arXiv do not violate this rule, provided the submitted paper does not cite them.

Proceedings and Publication: All accepted papers will be published in the official workshop proceedings volume via Springer LNCS/LNAI. Authors of accepted papers will have their contributions included in this volume. By submitting a paper, authors agree to have their paper published in the proceedings if it is accepted.

Acknowledgment: The Microsoft CMT (https://cmt3.research.microsoft.com/) service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

Program Schedule

Date: June 10, 2025
Location: Boardroom (Third floor), Sydney Masonic Centre

Time / Session Details
13:30 - 13:40 Opening Remarks
13:40 - 14:15 Keynote Talk 1
  Speaker: Evangelos (Vagelis) Papalexakis (University of California Riverside)
  Title: It's all about the latent structure: Tensor and graph methods for actionable insights
14:15 - 15:00 Invited Talks
  • 14:15-14:30 Probabilistic Circuits for Graph-Based PU-Learning
    Sagad Hamid, Tanya Braun, and Jaemin Yoo
  • 14:30-14:45 Closed-Form Node Classifier on Hypergraphs
    Chaewoon Bae, Jaehyun Lee, and Jaemin Yoo
  • 14:45-15:00 Oldie but Goodie: Re-illuminating Label Propagation on Graphs with Partially Observed Features
    Sukwon Yun, Xin Liu, Yunhak Oh, Junseok Lee, Sungwon Kim, Tianlong Chen, Tsuyoshi Murata, and Chanyoung Park
15:00 - 15:25 Tea Break
15:25 - 16:00 Keynote Talk 2
  Speaker: Junbin Gao (University of Sydney)
  Title: TBD
16:00 - 17:00 Regular Workshop Papers
  • 16:00-16:15 Graph Generative Models Evaluation with Masked Autoencoder
    Chengen Wang and Murat Kantarcioglu
  • 16:15-16:30 Investigating the Limits of Graph Foundation Model in Real-World Travel Recommendation Systems
    Nayoung Lee, Gunmin Lee, and Donghun Lee
  • 16:30-16:45 Adaptive Context-Aware GCN for Aspect-Based Sentiment Analysis
    Jianhua Chi and Xianguo Zhang
  • 16:45-17:00 FedCIPP: Full-Lifecycle IP Protection for Federated Learning
    Junjie He, Xianyi Chen, Jiangfeng Qian, Xuebo Wang, and Hui Mi

Keynote Talk 1 Details

Speaker: Evangelos (Vagelis) Papalexakis
Affiliation: University of California Riverside
Time: June 10, 2025 - 13:40 to 14:15
Title: It's all about the latent structure: Tensor and graph methods for actionable insights

Abstract

Tensors and graphs have been essential tools in expressing complex relations in data. Of particular interest are tensor and graph mining methods that allow us to uncover the latent structure of the data, and as a result produce interpretable representations of the data that can be used for a number of downstream tasks and for generating actionable insights.

In this talk, we will first explore how tensor methods can supercharge graph and data mining, by showing exciting examples including community detection. Subsequently, we are going to present novel self-supervised graph representation learning methods which rely on uncovering the latent structure of the data to achieve high performance and speedups over the existing state of the art. Finally, if time allows, we are going to briefly discuss fascinating connections between latent structure recovery and the robustness of deep learning models.

Speaker Bio

Evangelos (Vagelis) Papalexakis is an Associate Professor and the Ross Family Chair of the CSE Dept. at University of California Riverside. He received his PhD degree at the School of Computer Science at Carnegie Mellon University (CMU). Prior to CMU, he obtained his Diploma and MSc in Electronic & Computer Engineering at the Technical University of Crete, in Greece. Broadly, his research interests span the fields of Data Science, Machine Learning, Artificial Intelligence, and Signal Processing.

His research involves designing interpretable models and scalable algorithms for extracting knowledge from large multi-aspect datasets, with specific emphasis on tensor factorization models, and applying those algorithms to a variety of real-world problems, including AI for Science, explainable AI, gravitational wave detection, cybersecurity, transportation and railway safety, and precision agriculture.

He is heavily involved in the data science research community with extensive experience in conference organization, including organizing a workshop at ACM SIGKDD 2019 on "Tensor Methods for Emerging Data Science Problems", being the Deep Learning Day Co-Chair for ACM SIGKDD 2019, the Doctoral Forum Co-Chair for SIAM SDM 2021, the Demos Co-Chair for ACM WSDM 2022, the Program Co-Chair for SIAM SDM 2022, and the General Co-Chair for SIAM SDM 2024 and SIAM SDM 2025.

His work has appeared in top-tier conferences and journals and has attracted a number of distinctions, including the 2017 SIGKDD Dissertation Award (runner-up), several paper awards, the National Science Foundation CAREER award, the 2021 IEEE DSAA Next Generation Data Scientist Award, the 2022 IEEE Signal Processing Society Donald G. Fink Overview Paper Award, the IEEE ICDM 2022 Tao Li Award, and the 2025 PAKDD Early Career Research Award, the latter two of which recognize excellence among early-career researchers in data mining.

Organizers

Fanchen Bu
PhD Student, KAIST
Email: boqvezen97@kaist.ac.kr

Minyoung Choe
PhD Student, KAIST
Email: minyoung.choe@kaist.ac.kr

Jaemin Yoo
Assistant Professor, KAIST
Email: jaemin@kaist.ac.kr

Chanyoung Park
Assistant Professor, KAIST
Email: cy.park@kaist.ac.kr

Namyong Park
Postdoctoral Researcher, Meta AI
Email: namyongp@meta.com

Bryan Hooi
Assistant Professor, National University of Singapore
Email: bhooi@comp.nus.edu.sg

Neil Shah
Research Scientist, Snap Research
Email: nshah@snap.com

Shirui Pan
Professor, Griffith University
Email: s.pan@griffith.edu.au

Kijung Shin
Associate Professor, KAIST
Email: kijungs@kaist.ac.kr

PC Members

We sincerely appreciate the valuable contributions of our reviewers who helped ensure the quality of submissions.

  • Minyoung Choe (KAIST)
  • Yeonjun In (KAIST)
  • Jiaxin Ju (Griffith University)
  • Shinhwan Kang (KAIST)
  • Kibum Kim (KAIST)
  • Sein Kim (KAIST)
  • Sunwoo Kim (KAIST)
  • Huan Yee Koh (Monash University)
  • Soo Yong Lee (KAIST)
  • Geon Lee (KAIST)
  • Jongha Lee (KAIST)
  • Namkyeong Lee (KAIST)
  • Langzhang Liang (KAIST)
  • Linhao Luo (Monash University)
  • Sangwoo Seo (KAIST)
  • Yuan Sui (National University of Singapore)
  • Zhangchi Qiu (Griffith University)
  • Kanghoon Yoon (KAIST)
  • Zicheng Zhao (Nanjing University of Science and Technology)

Acknowledgments

This workshop is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00406985).