
PAKDD Workshop on
Graph Learning with Foundation Models
(PAKDD-GLFM)

2025

June 10-13, 2025

Sydney, Australia

About PAKDD-GLFM 2025

In PAKDD-GLFM 2025 (Workshop on Graph Learning with Foundation Models @ PAKDD 2025), we aim to bring together researchers and practitioners from academia and industry to discuss and advance the state-of-the-art in graph machine learning with foundation models.

Graph Foundation Models (GFMs) represent a cutting-edge approach in graph machine learning that integrates the power of large-scale foundation models with graph structures. Specifically, GFMs are designed to effectively capture the complex relationships and dependencies present in graph-structured data, such as social networks, biological networks, and knowledge graphs. By learning from diverse and extensive graph data, GFMs exhibit emergent capabilities that significantly enhance performance across various downstream tasks, including previously unseen ones, such as node/graph classification and link prediction.

As graphs continue to be a powerful tool for modeling real-world interactions, the development of GFMs is becoming increasingly important. Real-world applications span diverse fields, including detecting anomalies in social networks, drug discovery through graph-based molecular classification, enhancing recommendation systems for personalized content, and strengthening cybersecurity by identifying vulnerabilities in network structures. The emergent capabilities of GFMs allow them to adapt to new applications, generalizing across diverse domains and uncovering insights beyond the reach of traditional models.

Call For Papers

The scope of this workshop includes (but is not limited to):

  • Theoretical foundations of GFMs: Understanding their generalization ability and robustness.
  • Building blocks of GFMs: Exploring the structural components that underpin graph foundation models.
  • Empirical analysis: Evaluation across tasks and datasets, identifying limitations.
  • Large-scale pre-training: Techniques to reduce computational costs while maintaining performance.
  • Fine-tuning and adaptation: Techniques for improving performance on target tasks.
  • Multi-modality in graph tasks: Integration of multi-modal data.
  • LLM + Graph learning: Combining large language models with graph learning to unlock new capabilities and applications.
  • New applications: Leveraging emergent capabilities for real-world scenarios.

Important Dates

  • Workshop Paper Deadline: February 22, 2025
  • Acceptance Notification: March 15, 2025
  • Camera-ready Submission: March 29, 2025
  • All deadlines are 23:59 Pacific Standard Time (PST).

Submission Instructions

Paper submission must be in English. All papers will be double-blind reviewed by the Program Committee based on technical quality, relevance to the GLFM workshop, originality, significance, and presentation quality. All paper submissions will be handled electronically. The author list and order cannot be changed after the paper is submitted. Papers that do not comply with the Submission Policy will be rejected without review.

Each submitted paper must include an abstract of up to 200 words and must be no longer than 12 pages (including references, appendices, etc.). Authors must follow the Springer LNCS/LNAI manuscript submission guidelines and formatting template (https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines) for their submissions. All papers must be submitted electronically through the GLFM CMT paper submission system (https://cmt3.research.microsoft.com/GLFM2025) in PDF format only.

GLFM 2025 will not accept any paper that, at the time of submission, is under review for, has already been published in, or has already been accepted for publication in a journal or another venue with formally published proceedings. Authors are also required not to submit their papers to other venues with formal publication during the GLFM 2025 review period. Papers on arXiv do not violate this rule as long as the submitted paper does not cite them.

Acknowledgment: The Microsoft CMT (https://cmt3.research.microsoft.com/) service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

Program Schedule

Organizers

Fanchen Bu
PhD Student, KAIST
Email: boqvezen97@kaist.ac.kr

Minyoung Choe
PhD Student, KAIST
Email: minyoung.choe@kaist.ac.kr

Jaemin Yoo
Assistant Professor, KAIST
Email: jaemin@kaist.ac.kr

Chanyoung Park
Assistant Professor, KAIST
Email: cy.park@kaist.ac.kr

Namyong Park
Postdoctoral Researcher, Meta AI
Email: namyongp@meta.com

Bryan Hooi
Assistant Professor, National University of Singapore
Email: bhooi@comp.nus.edu.sg

Neil Shah
Research Scientist, Snap Research
Email: nshah@snap.com

Shirui Pan
Professor, Griffith University
Email: s.pan@griffith.edu.au

Kijung Shin
Associate Professor, KAIST
Email: kijungs@kaist.ac.kr