Neural Graph Databases


What’s New in Graph ML?

A new milestone in graph data management

We introduce the concept of Neural Graph Databases as the next step in the evolution of graph databases. Tailored for large incomplete graphs and on-the-fly inference of missing edges using graph representation learning, neural reasoning maintains high expressiveness and supports complex logical queries akin to standard graph query languages.

Image by Authors, assisted by Stable Diffusion.
  1. Neural Graph Databases: What and Why?
  2. The blueprint of NGDBs
  3. Neural Graph Storage
  4. Neural Query Engine
  5. Neural Graph Reasoning for Query Engines
  6. Open Challenges for NGDBs
  7. Learn More

🍨Vanilla graph databases are just about everywhere thanks to the ever-growing graphs in production, flexible graph data models, and expressive query languages. Classical, symbolic graph DBs are fast and efficient under one essential assumption:

Completeness. Query engines assume that graphs in classical graph DBs are complete.

Under the completeness assumption, we can construct indexes, store the graphs in a variety of read/write-optimized formats, and expect the DB to return complete answers.

But this assumption rarely holds in practice (we’d say it fails way too often). If we take a look at some prominent knowledge graphs (KGs): in Freebase, 93.8% of people have no place of birth and 78.5% have no nationality, and about 68% of people do not have any occupation, while in Wikidata, about 50% of artists have no date of birth, and only 0.4% of known buildings have information about height. And that’s for the largest KG openly curated by hundreds of enthusiasts. Surely, 100M nodes and 1B statements are not the largest graph ever in industry, so you can imagine the degree of incompleteness there.

Clearly, to account for incompleteness, in addition to asking “what is there?” we also have to ask “what is missing?” (or “what might be there?”). Let’s take a look at an example:

(a) — input query; (b) — incomplete graph with predicted edges (dashed lines); (c) — a SPARQL query returning one answer (UofT) via graph traversal; (d) — neural execution that recovers missing edges and returns two new answers (UdeM, NYU). Image by Authors.

Here, given an incomplete graph (the edges (Turing Award, win, Bengio) and (Deep Learning, field, LeCun) are missing) and a query “At what universities do the Turing Award winners in the field of Deep Learning work?” (expressed in a logical form or in a language like SPARQL), a symbolic graph DB would return only one answer reachable by graph traversal: UofT. We refer to such answers as easy answers, or existing answers. Accounting for missing edges, we’d recover two more answers, UdeM and NYU (hard answers, or inferred answers).
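To make the easy/hard distinction concrete, here is a toy traversal in Python; the edge list is hypothetical (it mirrors the figure above), and the two “predicted” edges simply stand in for a link predictor’s output:

```python
# Toy illustration of easy vs. hard answers on the example graph above.
observed = {
    ("TuringAward", "win", "Hinton"),
    ("TuringAward", "win", "LeCun"),
    ("DeepLearning", "field", "Hinton"),
    ("DeepLearning", "field", "Bengio"),
    ("Hinton", "university", "UofT"),
    ("Bengio", "university", "UdeM"),
    ("LeCun", "university", "NYU"),
}
# Edges missing from the graph that a link predictor could recover:
predicted = {("TuringAward", "win", "Bengio"), ("DeepLearning", "field", "LeCun")}

def answer(edges):
    """Winners of the Turing Award AND in the field of Deep Learning -> university."""
    winners = {t for (h, r, t) in edges if h == "TuringAward" and r == "win"}
    in_dl = {t for (h, r, t) in edges if h == "DeepLearning" and r == "field"}
    return {t for (h, r, t) in edges if h in winners & in_dl and r == "university"}

print(answer(observed))              # easy answers only: {'UofT'}
print(answer(observed | predicted))  # plus the hard answers UdeM and NYU
```

Symbolic traversal stops at the easy answer; adding the predicted edges recovers the hard ones.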

How to infer missing edges?

  • In classical DBs, we don’t have much choice. RDF-based databases have some formal semantics and can be backed by hefty OWL ontologies, but, depending on graph size and the complexity of inference, it might take an infinite amount of time to complete the inference in SPARQL entailment regimes. Labeled Property Graph (LPG) databases do not have built-in means for inferring missing edges at all.
  • Thanks to the advances in Graph Machine Learning, we can often perform link prediction in a latent (embedding) space in linear time! We can then extend this mechanism to executing complex, database-like queries right in the embedding space.
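As a minimal sketch of what linear-time link prediction looks like, here is a TransE-style scorer over random embeddings (the numbers are meaningless; only the mechanics matter): answering (h, r, ?) is one vectorized pass over all entity vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 1000, 10, 64

E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def predict_tails(h, r, k=5):
    """Score the query (h, r, ?) against every entity in one vectorized pass:
    TransE assumes t ~ h + r, so rank entities by -||E[h] + R[r] - E[t]||."""
    scores = -np.linalg.norm(E[h] + R[r] - E, axis=-1)  # shape: (n_entities,)
    return np.argsort(-scores)[:k]                      # top-k candidate tails

top = predict_tails(h=42, r=3)
```

The cost is linear in the number of entities, regardless of how many edges the symbolic graph is missing.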

Neural Graph Databases combine the benefits of traditional graph DBs with modern graph machine learning.

That is, DB principles like (1) graphs as a first-class citizen, (2) efficient storage, and (3) a uniform querying interface are now backed by Graph ML techniques such as (1) geometric representations, (2) robustness to noisy inputs, and (3) large-scale pretraining and fine-tuning, in order to bridge the incompleteness gap and enable neural graph reasoning and inference.

Overall, the design principles for NGDBs are:

  • Incompleteness — the underlying data might have missing information on the node-, link-, and graph-levels, which we would like to infer and leverage in query answering;
  • Updatability — similar to traditional databases that allow updates and quick querying, representation learning algorithms for constructing graph latents should be inductive and generalize to unseen data (new entities and relations at inference time) in a zero-shot (or few-shot) manner to prevent costly re-training (for instance, of shallow node embeddings);
  • Expressiveness — the ability of latent representations to encode logical and semantic relations in the data, such as FOL (or its fragments), and leverage them in query answering. Practically, the set of supported logical operators for neural reasoning should be close or comparable to standard graph database languages like SPARQL or Cypher;
  • Multimodality beyond knowledge graphs — any graph-structured data that can be stored as a node or record in classical databases (consisting, for instance, of images, texts, molecular graphs, or timestamped sequences) and can be imbued with a vector representation is a valid source for the Neural Graph Storage and Neural Query Engine.

The key mechanisms to address the NGDB principles are:

  • Storage — while traditional graph DBs hash the adjacency matrix (or edge list) into many indexes, the incompleteness assumption implies that both the given edges and the graph latents (vector representations) become the sources of truth in the Neural Graph Storage;
  • Query answering — basic operations such as edge traversal cannot be performed solely symbolically due to the incompleteness assumption. Instead, the Neural Query Engine operates on both the adjacency and the graph latents to incorporate possibly missing data into query answering.

In fact, by answering queries in the latent space (and not sacrificing traversal performance) we can ditch symbolic database indexes altogether.

The main difference between symbolic graph DBs and neural graph DBs: traditional DBs answer the question “What is there?” by edge traversal, while neural graph DBs also answer “What is missing?”. Image by Authors.

Before diving into NGDBs, let’s take a look at neural databases in general — turns out they’ve been around for a while, and you might have noticed that. Many machine learning systems already operate in this paradigm: data is encoded into model parameters, and querying is akin to a forward pass that outputs a new representation or a prediction for a downstream task.

What is the current state of neural databases? What are the differences between its kinds, and what is special about NGDBs?

Differences between Vector DBs, natural language DBs, and neural graph DBs. Image by Authors
  1. Vector databases belong to the family of storage-oriented systems, commonly built around approximate nearest neighbor (ANN) libraries like Faiss or ScaNN (or custom solutions), to answer distance-based queries using Maximum Inner-Product Search (MIPS), L1, L2, or other distances. Being encoder-independent (that is, any encoder yielding vector representations, like a ResNet or BERT, can be a source), vector databases are fast but lack complex query answering capabilities.
  2. With the recent rise of large-scale pretrained models — or, foundation models — we have witnessed their huge success in natural language processing and computer vision tasks. We argue that such foundation models are also a prominent example of neural databases. There, the storage module might be represented directly by model parameters or outsourced to an external index, as often used in retrieval-augmented models, since encoding all world knowledge even into billions of model parameters is hard. The query module performs in-context learning either via filling in the blanks in encoder models (BERT or T5 style) or via prompts in decoder-only models (GPT style) that can span multiple modalities, e.g., learnable tokens for vision applications, or even calling external tools.
  3. Natural language databases (NLDBs), introduced by Thorne et al, model atomic elements as textual facts encoded to vectors via a pre-trained language model (LM). Queries to an NLDB are sent as natural language utterances that get encoded to vectors, and query processing employs the retriever-reader approach.
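The distance-based queries behind vector DBs (item 1) reduce to a single matrix-vector product plus a top-k; here is a brute-force MIPS sketch over random vectors (ANN libraries like Faiss approximate exactly this computation, only faster):

```python
import numpy as np

rng = np.random.default_rng(1)
index = rng.normal(size=(10_000, 128))  # stored vectors (any encoder's output)
query = rng.normal(size=(128,))

def mips(index, query, k=10):
    """Exact Maximum Inner-Product Search: one matmul + top-k selection."""
    scores = index @ query                    # inner product with every stored vector
    top = np.argpartition(-scores, k)[:k]     # unordered top-k candidates
    return top[np.argsort(-scores[top])]      # top-k ids, best first

neighbors = mips(index, query)
```

Swapping the inner product for L1/L2 distances gives the other retrieval modes mentioned above.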

Neural Graph Database is not a novel term, though — many graph ML approaches have tried to combine graph embeddings with database indexes; RDF2Vec and LPG2Vec are perhaps some of the most prominent examples of how embeddings can be plugged into graph DBs and run on top of symbolic indexes.

In contrast, we posit that NGDBs can answer queries right in the latent space. As we show below, there exist ML algorithms that can simulate exact edge traversal-like behavior in the embedding space to retrieve easy answers, as well as perform neural reasoning to infer hard answers.

A conceptual scheme of Neural Graph Databases. An input query is processed by the Neural Query Engine, where the Planner derives a computation graph of the query and the Executor executes the query in the latent space. The Neural Graph Storage employs the Graph Store and Feature Store to obtain latent representations in the Embedding Store. The Executor communicates with the Embedding Store to retrieve and return results. Image by Authors.

On a higher level, an NGDB comprises two main components: the Neural Graph Storage and the Neural Query Engine. The query answering pipeline starts with a query sent by some application or downstream task already in a structured format (obtained, for example, via semantic parsing if the initial query is in natural language).

The query first arrives at the Neural Query Engine and, specifically, at the Query Planner module. The task of the Query Planner is to derive an efficient computation graph of atomic operations (projections and logical operations) with respect to the query complexity, prediction tasks, and the underlying data storage, such as possible graph partitioning.

The derived plan is then sent to the Query Executor, which encodes the query in a latent space, executes the atomic operations over the underlying graph and its latent representations, and aggregates the results of atomic operations into a final answer set. The execution is done via the Retrieval module that communicates with the Neural Graph Storage.

The storage layer consists of

1️⃣ Graph Store for keeping the multi-relational adjacency matrix in a space- and time-efficient manner (e.g., in sparse formats like COO and CSR);

2️⃣ Feature Store for keeping node- and edge-level multimodal features associated with the underlying graph;

3️⃣ Embedding Store that leverages an Encoder module to produce graph representations in a latent space based on the underlying adjacency and the associated features.
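The COO and CSR formats mentioned for the Graph Store can be sketched in a few lines; this is plain Python for clarity (a real Graph Store would use scipy.sparse or a native engine):

```python
# COO -> CSR conversion for a tiny directed graph.
edges = [(0, 2), (0, 1), (1, 2), (3, 0)]  # COO: a plain edge list
n = 4                                      # number of nodes

def coo_to_csr(edges, n):
    """Build CSR: indptr[v]..indptr[v+1] slices col_idx into v's neighbors."""
    col_idx = [t for (_, t) in sorted(edges)]  # targets, grouped by source
    indptr = [0] * (n + 1)
    for (h, _) in edges:                       # out-degree counts...
        indptr[h + 1] += 1
    for v in range(n):                         # ...turned into prefix sums
        indptr[v + 1] += indptr[v]
    return indptr, col_idx

indptr, col_idx = coo_to_csr(edges, n)

def neighbors(v):
    return col_idx[indptr[v]:indptr[v + 1]]    # O(1) slice per node
```

CSR trades the flexibility of an edge list for constant-time neighborhood lookups, which is why it is the read-optimized format of choice.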

The Retrieval module queries the encoded graph representations to construct a distribution of potential answers to atomic operations.

In traditional graph DBs (right), queries are optimized into a plan (often, a tree of join operators) and executed against the storage of DB indexes. In Neural Graph DBs (left), we encode the query (or its steps) in a latent space and execute it against the latent space of the underlying graph. Image by Authors.

In traditional graph DBs, the storage design often depends on the graph modeling paradigm.

The two most popular paradigms are Resource Description Framework (RDF) graphs and Labeled Property Graphs (LPG). We posit, however, that the new RDF-star (and the accompanying SPARQL-star) is going to unify the two, merging the logical expressiveness of RDF graphs with the attributed nature of LPGs. Many existing KGs already follow the RDF-star(-like) paradigm, such as hyper-relational KGs and the Wikidata Statement Model.

If we were to bet on the backbone graph modeling paradigm for the next years, we’d opt for RDF-star.
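For a flavor of RDF-star, an edge can itself be the subject of a statement by quoting the embedded triple; the prefix and predicate IRIs below are made up for illustration:

```turtle
@prefix : <http://example.org/> .

# A qualified, Wikidata-style statement: the award edge itself
# carries a "point in time" qualifier.
<< :Bengio :wonAward :TuringAward >> :pointInTime "2018" .
```

This is exactly the kind of statement-about-a-statement that plain RDF triples cannot express without reification, and that LPGs express via edge properties.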

In the Neural Graph Storage, both the input graph and its vector representations are sources of truth. For answering queries in the latent space, we need:

  • Query Encoder
  • Graph Encoder
  • Retrieval mechanism to match query representation against the graph representation

The graph encoding (embedding) process can be viewed as a compression step that nevertheless preserves the semantic and structural similarity of entities/relations: the distance between entities/relations in the embedding space should shrink as their semantic/structural similarity grows. There are many options for the architecture of the encoder — and we recommend sticking to inductive ones to adhere to the NGDB design principles. In our recent NeurIPS 2022 work, we presented two such inductive models.

Query encoding is usually matched with the graph encoding such that both of them live in the same space. Once we have the latent representations, the Retrieval module kicks in to extract relevant answers.

The retrieval process can be seen as a nearest neighbor search of the input vector in the embedding space and has 3 direct advantages:

  1. Confidence scores for every retrieved item — thanks to a predefined distance function in the embedding space
  2. Different definitions of the latent space and the distance function — catering to different graphs, e.g., tree-like graphs are easier to work with in hyperbolic spaces
  3. Efficiency and scalability — retrieval scales to extremely large graphs with billions of nodes and edges
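Advantage 1, confidence from distance, can be sketched by softmax-normalizing negated distances between a query embedding and candidate answer embeddings (random toy vectors, so the numbers are meaningless):

```python
import numpy as np

rng = np.random.default_rng(2)
query = rng.normal(size=(32,))
answers = rng.normal(size=(500, 32))   # candidate entity embeddings

# L2 distances in the embedding space ...
dists = np.linalg.norm(answers - query, axis=-1)
# ... turned into a confidence distribution: closer => more confident.
conf = np.exp(-dists) / np.exp(-dists).sum()

ranked = np.argsort(-conf)             # best answers first
```

Symbolic traversal gives a binary in/out answer set; the distance-based view gives a graded, calibratable one for free.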
Query planning in NGDBs (left) and traditional graph DBs (right). NGDB planning (assuming incomplete graphs) can be performed autoregressively, step by step (1), or generated entirely in a single step (2). Traditional DB planning is cost-based and resorts to metadata extracted from the graphs (assuming they are complete), such as the number of intermediate answers, to construct a tree of join operators. Image by Authors.

In traditional DBs, a typical query engine performs three major operations: (1) query parsing to verify syntax correctness (often enriched with a deeper semantic evaluation of query terms); (2) query planning and optimization to derive an efficient query plan (often, a tree of relational operators) that minimizes computational costs; (3) query execution that scans the storage and processes intermediate results according to the query plan.

It is relatively straightforward to extend those operations to NGDBs.

1️⃣ Query Parsing can be achieved via semantic parsing to a structured query format. We intentionally leave the discussion of a query language for NGDBs for future work and heated public discussions 😉

2️⃣ The Query Planner derives an efficient query plan of atomic operations (projections and logical operators), maximizing completeness (all answers over existing edges must be returned) and inference (of missing edges predicted on the fly), taking into account the query complexity and the underlying graph.

3️⃣ Once the query plan is finalized, the Query Executor encodes the query (or its parts) into a latent space, communicates with the Graph Storage and its Retrieval module, and aggregates intermediate results into the final answer set. There exist two common mechanisms for query execution:

  • Atomic, resembling traditional DBs, when a query plan is executed sequentially by encoding atomic patterns, retrieving their answers, and executing logical operators as intermediate steps;
  • Global, when the complete query graph is encoded and executed in a latent space in a single step.
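A minimal neuro-symbolic sketch of the atomic strategy, loosely in the spirit of Continuous Query Decomposition: each projection yields a score over all entities, and a logical AND becomes a t-norm (here, element-wise min) over those scores. All embeddings below are random toys, so only the mechanics matter:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dim = 100, 16
E = rng.normal(size=(n, dim))                       # entity embeddings
R = {"win": rng.normal(size=(dim,)),                # relation embeddings
     "field": rng.normal(size=(dim,))}

def project(head_vec, rel):
    """One atomic projection: score every entity as a possible tail,
    turning a TransE-style distance into a score in (0, 1]."""
    d = np.linalg.norm(head_vec + R[rel] - E, axis=-1)
    return np.exp(-d)

# "Winners of X" AND "in field Y": two projections, fuzzy AND via min.
winners = project(E[0], "win")
in_field = project(E[1], "field")
both = np.minimum(winners, in_field)   # Gödel t-norm as the intersection

answers = np.argsort(-both)[:10]       # top-10 candidates for the conjunction
```

The global strategy would instead encode the whole query graph into one vector and retrieve against it in a single shot.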

The main challenge for neural query execution is matching query expressiveness to that of symbolic languages like SPARQL or Cypher — so far, neural methods can execute queries close to first-order logic expressiveness, but we are only about halfway to the full expressiveness of symbolic languages.

The literature on neural methods for complex logical query answering (aka query embedding) has been growing since 2018 and the seminal NeurIPS work of Hamilton et al on Graph Query Embeddings (GQE). GQE was able to answer conjunctive queries with intersections and predict missing links on the fly.

GQE can be regarded as the first take on Neural Query Engines for NGDBs.

GQE started a whole subfield of Graph Machine Learning, followed by prominent examples like Query2Box (ICLR 2020) and Continuous Query Decomposition (ICLR 2021). We undertook a major effort to categorize all those (about 50) works along 3 main directions:

⚛️ Graphs — what is the underlying structure against which we answer queries;
🛠️ Modeling — how we answer queries and which inductive biases are employed;
🗣️ Queries — what we answer, what the query structures are, and what the expected answers are.

The taxonomy of neural approaches for complex logical query answering. See the paper for more details. Image by Authors.

⚛️ Talking about graphs, we further break them down by modality (classic triple-only graphs, hyper-relational graphs, hypergraphs, and more), reasoning domain (discrete entities only, or also continuous outputs), and semantics (how neural encoders capture higher-order relationships like OWL ontologies).

🛠️ In modeling, we follow the Encoder-Processor-Decoder paradigm, classifying the inductive biases of existing models, e.g., transductive or inductive encoders with neural or neuro-symbolic processors.

🗣️ In queries, we aim at mapping the set of queries answerable by neural methods to that of symbolic graph query languages. We talk about operators (going beyond standard And/Or/Not), query patterns (from chain-like queries to DAGs and cyclic patterns), and expressiveness (think of your favorite relational algebra).

Analyzing the taxonomy, we find that there is no silver bullet at the moment, e.g., most processors can only work in a discrete mode with tree-based queries. But it also means there is a lot of room for future work — possibly your contribution!

To be more precise, here are the main NGDB challenges for the coming years.

Along the graphs branch:

  • Modality: Supporting more graph modalities: from classic triple-only graphs to hyper-relational graphs, hypergraphs, and multimodal sources combining graphs, texts, images, and more.
  • Reasoning domain: Supporting logical reasoning and neural query answering over temporal and continuous (textual and numerical) data — literals constitute a major portion of graphs, as do relevant queries over literals.
  • Semantics: Supporting complex axioms and formal semantics that encode higher-order relationships between (latent) classes of entities and their hierarchies, e.g., enabling neural reasoning over description logics and OWL fragments.

In the modeling branch:

  • Encoders: Inductive encoders supporting unseen relations at inference time — this is key for (1) the updatability of neural databases without the need for retraining and (2) enabling the pretrain-finetune strategy that generalizes query answering to custom graphs with custom relational schemas.
  • Processors: Expressive processor networks able to effectively and efficiently execute complex query operators akin to SPARQL and Cypher operators. Improving the sample efficiency of neural processors is crucial for the training time vs quality tradeoff — reducing training time while maintaining high predictive quality.
  • Decoders: So far, all neural query answering decoders operate exclusively on discrete nodes. Extending the range of answers to continuous outputs is crucial for answering real-world queries.
  • Scalability: As the main computational bottleneck of processor networks is the dimensionality of the embedding space (for purely neural models) and/or the number of nodes (for neuro-symbolic ones), new efficient algorithms for neural logical operators and retrieval methods are the key to scaling NGDBs to billions of nodes and trillions of edges.

In the queries branch:

  • Operators: Neuralizing more complex query operators, matching the expressiveness of declarative graph query languages, e.g., supporting Kleene plus and star, property paths, and filters.
  • Patterns: Answering more complex patterns beyond tree-like queries, including DAGs and cyclic graphs.
  • Projections: Allowing projections of more than a final leaf node entity, that is, returning intermediate variables, relations, and multiple variables organized in tuples (bindings).
  • Expressiveness: Answering queries outside simple EPFO and EFO fragments and aiming for the expressiveness of database languages.

Finally, in datasets and evaluation:

  • The need for larger and more diverse datasets covering more graph modalities, more expressive query semantics, more query operators, and more query patterns.
  • As the current evaluation protocol appears to be limited (focusing only on inferring hard answers), there is a need for a more comprehensive protocol covering various facets of the query answering workflow.

Pertaining to the Neural Graph Storage and NGDBs in general, we identify the following challenges:

  • The need for a retrieval mechanism that scales neural reasoning to graphs with billions of nodes. Retrieval is tightly connected to the Query Processor and its modeling priors; existing scalable ANN libraries can only work with basic L1, L2, and cosine distances, which limits the space of possible processors in the neural query engine.
  • Currently, all complex query datasets provide a hardcoded query execution plan, which may not be optimal. There is a need for a neural query planner that would transform an input query into an optimal execution sequence, taking into account prediction tasks, query complexity, the type of the neural processor, and the configuration of the Storage layer.

Thanks to encoder inductiveness and updatability without retraining, there is also a need to alleviate the generalization issues that arise when running inference on much larger graphs than the training ones.

NGDB is still an emerging concept with many open challenges for future research. If you want to learn more about NGDB, feel free to check out

  • 📜 our paper (arxiv),
  • 🌐 our website,
  • 🔧 the GitHub repo with the latest list of relevant papers, datasets, and categorization; feel free to open issues and PRs.

We will also be organizing workshops, so stay tuned for updates!

