“If the foundation is solid, everything else will follow.” – Unknown
“The loftier the building, the deeper must the foundation be laid.” – Thomas à Kempis
Foundation models have transformed artificial intelligence in recent years. A foundation model is a model trained on huge amounts of data (often via unsupervised learning) that can be adapted to many different tasks. Models such as BERT and GPT sparked a revolution in which a single model could be adapted to all tasks in a domain, simplifying access to AI and reducing the amount of data needed for any single task. We have foundation models for text and several other modalities, but for modalities such as graphs and tabular data, we do not. In this article we discuss why we don't yet have a foundation model for graphs and how we might build one. Specifically, we will answer the following questions:
- Why do we want a foundation model for graphs? Why don't we have one yet?