the cognistx blog

Open Domain Question Answering Series: Part 3

January 10, 2022
By
Kaushik Shakkari

Part 3: Introduction to Knowledge Graphs for Question Answering

Original post | LinkedIn profile

Have you ever wondered how Google shows the “About” panel when you search for a person or a place?

Google uses a Knowledge Graph to do it!

Google’s About Panel shows facts and broader topics about Tony Stark when someone searches for him. The information is extracted from the nodes and relationships adjacent to the Tony Stark node in the knowledge graph.

In general, there are two types of question answering: over unstructured data and over structured data. In the previous two articles (Introduction to Machine Reading Comprehension and Machine Reading Comprehension at Scale), I discussed how to build a Neural Question Answering Model using Language Models and Transfer Learning on unstructured data like documents. In this article, I will introduce Knowledge Graph Question Answering (KGQA) on structured data.

Knowledge Graphs were popularized by Google in 2012. A Knowledge Graph represents text as a graph, using entities as nodes and relationships between entities as edges.

Let us consider the following sentence:

Tony Stark was co-created by writer and editor Stan Lee, developed by scripter Larry Lieber, and designed by artists Don Heck and Jack Kirby.

The picture above shows the Knowledge Graph for this sentence.

Many companies, like Airbnb, eBay, Microsoft, and Comcast, build and maintain domain-specific knowledge graph platforms using their customer and product data to make data-driven business decisions.

Integration of all product data within a knowledge graph leads to process cost savings of up to 65% and enables product managers to make well-founded decisions. — Deloitte

The three main NLP tasks required for constructing knowledge graphs are Named Entity Extraction, Relationship Extraction, and Coreference Resolution.

Named Entity Extraction:

Named Entity Extraction is the task of predicting named entities, or proper names such as people, organizations, and locations, in a given text.

Named Entity Extraction from AllenNLP
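
To make this concrete, here is a minimal sketch using spaCy rather than the AllenNLP demo pictured above; the model name and entity labels are spaCy’s, and the exact predictions will vary with the model version.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp(
    "Tony Stark was co-created by writer and editor Stan Lee, developed by "
    "scripter Larry Lieber, and designed by artists Don Heck and Jack Kirby."
)

# Each detected entity carries its surface text and a predicted type
# (PERSON, ORG, GPE, ...).
for ent in doc.ents:
    print(ent.text, ent.label_)
# Expected (model-dependent): Tony Stark PERSON, Stan Lee PERSON, ...
```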

Relationship Extraction:

Relationship Extraction is the task of extracting relationships between provided entities, or between a subject and an object. Dependency Parsing is one way to find relationships and entities using sentence structure and grammar. For example, the dependency parser below detected “Stark” as a proper noun and the subject (entity), “weapon” and “tests” as nouns forming the object (entity), and “is conducting” as the verb (relation).

Dependency Parsing from AllenNLP
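
The same idea can be sketched in code: a toy subject-verb-object triple extractor over spaCy’s dependency parse (not the AllenNLP parser shown above). Real extractors handle many more dependency patterns than this.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Stark is conducting weapon tests.")

# For each verb, pair its nominal subject with its direct object to form a
# (subject, relation, object) triple candidate.
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.children if c.dep_ == "nsubj"]
        objects = [c for c in token.children if c.dep_ == "dobj"]
        for subj in subjects:
            for obj in objects:
                print((subj.text, token.lemma_, obj.text))
# -> ('Stark', 'conduct', 'tests')   # labels vary across model versions
```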

Coreference Resolution:

Coreference Resolution is the task of predicting whether different words in the text refer to the same entity. In the picture below, the model found Tony Stark, him, he, his, and Stark to be the same entity (tagged with the number 0 and colored blue).

Coreference Resolution from AllenNLP
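
In code, the AllenNLP coreference predictor can be called as below. This is a sketch that assumes the public SpanBERT model archive from the AllenNLP demo is still hosted at the URL shown; the example text is my own.

```python
# pip install allennlp allennlp-models
from allennlp.predictors.predictor import Predictor

# Model archive as published for the AllenNLP demo; the URL may have changed.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "coref-spanbert-large-2021.03.10.tar.gz"
)

text = "Tony Stark built his first suit in a cave, and he never stopped improving it."
result = predictor.predict(document=text)

# 'clusters' is a list of entity clusters; each cluster is a list of
# [start, end] token spans that refer to the same entity
# (e.g. Tony Stark / his / he in one cluster).
print(result["clusters"])
```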

Apart from Coreference Resolution, we can also map Alternative Labels (CEO and Chief Executive Officer refer to the same entity), map Synonyms (House, Home, and Residence refer to the same entity), and apply the Transitive Property (Sundar Pichai is the CEO, and a CEO is an employee → Sundar Pichai is an employee). Note: the Transitive Property does not always hold (a lion eats sheep and sheep eat grass, yet a lion doesn’t eat grass). A toy label-mapping pass is sketched below.
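
As an illustration, the canonicalization pass below collapses alternative labels and synonyms onto one canonical name before triples enter the graph. The alias table is hand-written for this example; a real system would draw on a thesaurus or an ontology.

```python
# Hand-written alias table for illustration only.
ALIASES = {
    "chief executive officer": "ceo",
    "home": "house",
    "residence": "house",
}

def canonicalize(label: str) -> str:
    """Map a surface label to its canonical entity name."""
    key = label.strip().lower()
    return ALIASES.get(key, key)

print(canonicalize("Chief Executive Officer"))  # -> ceo
print(canonicalize("Residence"))                # -> house
```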

Using the methods listed above, we can obtain entity and relationship triplets (Subject, Relation, Object). We can then build a directed graph (the Knowledge Graph) from these triplets with open-source libraries like NetworkX; a minimal construction is sketched below. At scale, it is a good idea to store knowledge graphs in a robust graph database like GraphDB.
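
A minimal construction with NetworkX might look like the following; the triples are the ones from the Tony Stark sentence above, and the relation is stored as an edge attribute so it can be queried later.

```python
# pip install networkx
import networkx as nx

# (subject, relation, object) triples produced by the steps above.
triples = [
    ("Tony Stark", "co-created by", "Stan Lee"),
    ("Tony Stark", "developed by", "Larry Lieber"),
    ("Tony Stark", "designed by", "Don Heck"),
    ("Tony Stark", "designed by", "Jack Kirby"),
]

G = nx.DiGraph()
for subj, rel, obj in triples:
    G.add_edge(subj, obj, relation=rel)  # relation kept as an edge attribute

# Walking a node's outgoing edges recovers the facts an "About"-style
# panel would display.
for _, obj, data in G.edges("Tony Stark", data=True):
    print(f"Tony Stark --{data['relation']}--> {obj}")
```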

Question Answering over Knowledge Graphs:

Knowledge Graphs can help answer factual questions effectively. When a question like “Where was Nelson Mandela born?” is asked, we expect a place as the answer. Similarly, when a question like “When did India get its independence?” is asked, we expect a date as the answer.

When a user asks a query, it is parsed to extract the relevant entities and relationships, which are then used to retrieve the answer from the knowledge graph. KGQA can achieve this by converting natural language questions into structured queries in languages like SPARQL and Cypher. For example, when a question like “Where was Nelson Mandela born in South Africa?” is asked, “Nelson Mandela” and “South Africa” are extracted as entities, and “was born” and “is located” are extracted as relationships, to arrive at the answer: “Mvezo.”
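
As a concrete sketch of the SPARQL route, the query below asks Wikidata for Nelson Mandela’s place of birth: wd:Q8023 is Wikidata’s identifier for Nelson Mandela, wdt:P19 is its “place of birth” property, and SPARQLWrapper is one common Python client for SPARQL endpoints.

```python
# pip install sparqlwrapper
from SPARQLWrapper import SPARQLWrapper, JSON

# "Where was Nelson Mandela born?" compiled into SPARQL over Wikidata.
query = """
SELECT ?placeLabel WHERE {
  wd:Q8023 wdt:P19 ?place .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["placeLabel"]["value"])  # -> Mvezo
```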

Entity and relationship parsing on user queries is often performed with simple template-based detection methods, as in the sketch below. However, recent research works like Querying Knowledge Graphs in Natural Language (Jan 2021) proposed a system to translate natural language questions into SPARQL queries using a Tree-LSTM-based neural network. Other works, like A User Interface for Exploring and Querying Knowledge Graphs, introduced language and visual systems that let non-technical users avoid writing complex queries over Knowledge Graphs.
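
A template-based parser can be as simple as the regular-expression sketch below; the question shapes and relation names are made up for this example.

```python
import re

# Toy question templates: each maps a question shape to a graph relation.
TEMPLATES = [
    (re.compile(r"where was (.+) born\??$", re.I), "place of birth"),
    (re.compile(r"when did (.+) get its independence\??$", re.I), "independence date"),
]

def parse_question(question: str):
    """Return (entity, relation) if the question matches a known template."""
    for pattern, relation in TEMPLATES:
        match = pattern.match(question.strip())
        if match:
            return match.group(1), relation
    return None

print(parse_question("Where was Nelson Mandela born?"))
# -> ('Nelson Mandela', 'place of birth')
```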

The biggest advantages of Knowledge Graph Question Answering over Neural Question Answering are its ability to capture logical reasoning between different entities and its easy interpretability for root cause analysis. However, conventional KGQA fails on sparse or small knowledge graphs, as they are too incomplete to answer user questions correctly. Moreover, Knowledge Graphs cannot capture the broader knowledge that Neural or Language Model-based Question Answering captures from large amounts of unstructured text. Hence, there has been active research on combining the complementary strengths of Language Models and Knowledge Graphs for Question Answering.

Conclusion:

Kudos, you completed the Introduction to Knowledge Graphs for Question Answering. In the first two articles, I discussed factoid and extractive question answering modeling, where models point to the exact text in a document as the answer to a given question. But there can be scenarios where different documents must be consulted to generate the answer to a given query. Future articles will introduce Generative Question Answering Models for such complex queries.

Stay tuned for more articles in the Open Domain Question Answering Series!
