Information Extraction

Functions to extract various elements of interest from documents already parsed by spaCy, such as n-grams, named entities, subject-verb-object triples, and acronyms.

textacy.extract.words(doc, *, filter_stops=True, filter_punct=True, filter_nums=False, include_pos=None, exclude_pos=None, min_freq=1)[source]

Extract an ordered sequence of words from a document processed by spaCy, optionally filtering words by part-of-speech tag and frequency.

Parameters
  • doc (spacy.tokens.Doc or spacy.tokens.Span) –

  • filter_stops (bool) – If True, remove stop words from word list.

  • filter_punct (bool) – If True, remove punctuation from word list.

  • filter_nums (bool) – If True, remove number-like words (e.g. 10, “ten”) from word list.

  • include_pos (str or Set[str]) – Remove words whose part-of-speech tag IS NOT included in this param.

  • exclude_pos (str or Set[str]) – Remove words whose part-of-speech tag IS included in this param.

  • min_freq (int) – Remove words that occur in doc fewer than min_freq times.

Yields

spacy.tokens.Token – Next token from doc passing specified filters in order of appearance in the document.

Raises

TypeError – if include_pos or exclude_pos is not a str, a set of str, or a falsy value

Note

Filtering by part-of-speech tag uses the universal POS tag set; for details, check spaCy’s docs: https://spacy.io/api/annotation#pos-tagging
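
For example, a minimal usage sketch (the “en_core_web_sm” model name and sample sentence are illustrative assumptions, not requirements):

>>> import spacy
>>> import textacy.extract
>>> nlp = spacy.load("en_core_web_sm")  # assumed: any English pipeline with a POS tagger
>>> doc = nlp("The quick brown fox jumps over the two lazy dogs near the old barn.")
>>> # nouns and verbs only, with stop words, punctuation, and number-like tokens removed
>>> toks = [tok.text for tok in textacy.extract.words(
...     doc, filter_stops=True, filter_punct=True, filter_nums=True,
...     include_pos={"NOUN", "VERB"})]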

textacy.extract.ngrams(doc, n, *, filter_stops=True, filter_punct=True, filter_nums=False, include_pos=None, exclude_pos=None, min_freq=1)[source]

Extract an ordered sequence of n-grams (n consecutive words) from a spacy-parsed doc, optionally filtering n-grams by the types and parts-of-speech of the constituent words.

Parameters
  • doc (spacy.tokens.Doc or spacy.tokens.Span) –

  • n (int) – number of tokens per n-gram; 2 => bigrams, 3 => trigrams, etc.

  • filter_stops (bool) – if True, remove ngrams that start or end with a stop word

  • filter_punct (bool) – if True, remove ngrams that contain any punctuation-only tokens

  • filter_nums (bool) – if True, remove ngrams that contain any numbers or number-like tokens (e.g. 10, ‘ten’)

  • include_pos (str or Set[str]) – remove ngrams if any of their constituent tokens’ part-of-speech tags ARE NOT included in this param

  • exclude_pos (str or Set[str]) – remove ngrams if any of their constituent tokens’ part-of-speech tags ARE included in this param

  • min_freq (int) – remove ngrams that occur in doc fewer than min_freq times

Yields

spacy.tokens.Span – the next ngram from doc passing all specified filters, in order of appearance in the document

Raises
  • ValueError – if n < 1

  • TypeError – if include_pos or exclude_pos is not a str, a set of str, or a falsy value

Note

Filtering by part-of-speech tag uses the universal POS tag set; for details, check spaCy’s docs: https://spacy.io/api/annotation#pos-tagging
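
For example, a brief sketch reusing a doc parsed as in the words() example above:

>>> import textacy.extract
>>> # bigrams with no stop words at either edge and no punctuation tokens
>>> bigrams = list(textacy.extract.ngrams(doc, 2, filter_stops=True, filter_punct=True))
>>> # trigrams made up entirely of nouns and adjectives, occurring at least twice
>>> trigrams = list(textacy.extract.ngrams(doc, 3, include_pos={"NOUN", "ADJ"}, min_freq=2))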

textacy.extract.entities(doc, *, include_types=None, exclude_types=None, drop_determiners=True, min_freq=1)[source]

Extract an ordered sequence of named entities (PERSON, ORG, LOC, etc.) from a Doc, optionally filtering by entity types and frequencies.

Parameters
  • doc (spacy.tokens.Doc) –

  • include_types (str or Set[str]) – remove entities whose type IS NOT in this param; if “NUMERIC”, all numeric entity types (“DATE”, “MONEY”, “ORDINAL”, etc.) are included

  • exclude_types (str or Set[str]) – remove entities whose type IS in this param; if “NUMERIC”, all numeric entity types (“DATE”, “MONEY”, “ORDINAL”, etc.) are excluded

  • drop_determiners (bool) –

    Remove leading determiners (e.g. “the”) from entities (e.g. “the United States” => “United States”).

    Note

    Entities from which a leading determiner has been removed are, effectively, new entities, and not saved to the Doc from which they came. This is irritating but unavoidable, since this function is not meant to have side-effects on document state. If you’re only using the text of the returned spans, this is no big deal, but watch out if you’re counting on determiner-less entities associated with the doc downstream.

  • min_freq (int) – remove entities that occur in doc fewer than min_freq times

Yields

spacy.tokens.Span – the next entity from doc passing all specified filters in order of appearance in the document

Raises

TypeError – if include_types or exclude_types is not a str, a set of str, or a falsy value
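
For example, a minimal sketch (the model name and sample sentence are illustrative; any pipeline with an NER component will do):

>>> import spacy
>>> import textacy.extract
>>> nlp = spacy.load("en_core_web_sm")  # assumed: a pipeline with a named entity recognizer
>>> doc = nlp("President Obama met Chancellor Merkel in Berlin on Tuesday, June 18.")
>>> # people and organizations only, with any leading determiners dropped
>>> people_orgs = list(textacy.extract.entities(doc, include_types={"PERSON", "ORG"}))
>>> # everything except numeric entity types (DATE, MONEY, ORDINAL, ...)
>>> non_numeric = list(textacy.extract.entities(doc, exclude_types="NUMERIC"))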

textacy.extract.noun_chunks(doc, *, drop_determiners=True, min_freq=1)[source]

Extract an ordered sequence of noun chunks from a spacy-parsed doc, optionally filtering by frequency and dropping leading determiners.

Parameters
  • doc (spacy.tokens.Doc) –

  • drop_determiners (bool) – remove leading determiners (e.g. “the”) from phrases (e.g. “the quick brown fox” => “quick brown fox”)

  • min_freq (int) – remove chunks that occur in doc fewer than min_freq times

Yields

spacy.tokens.Span – the next noun chunk from doc in order of appearance in the document
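
For example, reusing a doc parsed as in the earlier examples:

>>> import textacy.extract
>>> # noun chunks without leading determiners, keeping only those that occur at least twice
>>> chunks = list(textacy.extract.noun_chunks(doc, drop_determiners=True, min_freq=2))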

textacy.extract.pos_regex_matches(doc, pattern)[source]

Extract sequences of consecutive tokens from a spacy-parsed doc whose part-of-speech tags match the specified regex pattern.

Parameters
  • doc (spacy.tokens.Doc or spacy.tokens.Span) –

  • pattern (str) –

    Pattern of consecutive POS tags whose corresponding words are to be extracted, inspired by the regex patterns used in NLTK’s nltk.chunk.regexp. Tags are uppercase, from the universal tag set; delimited by < and >, which are basically converted to parentheses with spaces as needed to correctly extract matching word sequences; white space in the input doesn’t matter.

    Examples (see constants.POS_REGEX_PATTERNS):

    • noun phrase: r'<DET>? (<NOUN>+ <ADP|CONJ>)* <NOUN>+'

    • compound nouns: r'<NOUN>+'

    • verb phrase: r'<VERB>?<ADV>*<VERB>+'

    • prepositional phrase: r'<PREP> <DET>? (<NOUN>+<ADP>)* <NOUN>+'

Yields

spacy.tokens.Span – the next span of consecutive tokens from doc whose parts-of-speech match pattern, in order of appearance

Warning

DEPRECATED! For similar but more powerful and performant functionality, use textacy.extract.matches() instead.

textacy.extract.matches(doc, patterns, *, on_match=None)[source]

Extract Spans from a Doc matching one or more patterns of per-token attr:value pairs, with optional quantity qualifiers.

Parameters
  • doc (spacy.tokens.Doc) –

  • patterns (str or List[str] or List[dict] or List[List[dict]]) –

    One or multiple patterns to match against doc using a spacy.matcher.Matcher.

    If List[dict] or List[List[dict]], each pattern is specified as attr: value pairs per token, with optional quantity qualifiers:

    • [{"POS": "NOUN"}] matches singular or plural nouns, like “friend” or “enemies”

    • [{"POS": "PREP"}, {"POS": "DET", "OP": "?"}, {"POS": "ADJ", "OP": "?"}, {"POS": "NOUN", "OP": "+"}] matches prepositional phrases, like “in the future” or “from the distant past”

    • [{"IS_DIGIT": True}, {"TAG": "NNS"}] matches numbered plural nouns, like “60 seconds” or “2 beers”

    • [{"POS": "PROPN", "OP": "+"}, {}] matches proper nouns and whatever word follows them, like “Burton DeWilde yaaasss”

    If str or List[str], each pattern is specified as one or more per-token patterns separated by whitespace where attribute, value, and optional quantity qualifiers are delimited by colons. Note that boolean and integer values have special syntax — “bool(val)” and “int(val)”, respectively — and that wildcard tokens still need a colon between the (empty) attribute and value strings.

    • "POS:NOUN" matches singular or plural nouns

    • "POS:PREP POS:DET:? POS:ADJ:? POS:NOUN:+" matches prepositional phrases

    • "IS_DIGIT:bool(True) TAG:NNS" matches numbered plural nouns

    • "POS:PROPN:+ :" matches proper nouns and whatever word follows them

    Also note that these pattern strings don’t support spaCy v2.1’s “extended” pattern syntax; if you need such complex patterns, it’s probably better to use a List[dict] or List[List[dict]], anyway.

  • on_match (callable) – Callback function to act on matches. Takes the arguments matcher, doc, i and matches.

Yields

spacy.tokens.Span – Next matching Span in doc, in order of appearance
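
For example, a sketch of both pattern forms, reusing a doc from the earlier examples (ADP is the universal POS tag for adpositions/prepositions):

>>> import textacy.extract
>>> # string shorthand: prepositional phrases such as "over the lazy dogs"
>>> preps = list(textacy.extract.matches(doc, "POS:ADP POS:DET:? POS:ADJ:? POS:NOUN:+"))
>>> # equivalent per-token dict pattern, as used by spacy.matcher.Matcher
>>> pattern = [{"POS": "ADP"}, {"POS": "DET", "OP": "?"}, {"POS": "ADJ", "OP": "?"}, {"POS": "NOUN", "OP": "+"}]
>>> preps_again = list(textacy.extract.matches(doc, pattern))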

textacy.extract.subject_verb_object_triples(doc)[source]

Extract an ordered sequence of subject-verb-object (SVO) triples from a spacy-parsed doc. Note that this only works for SVO languages.

Parameters

doc (spacy.tokens.Doc or spacy.tokens.Span) –

Yields

Tuple[spacy.tokens.Span] – The next 3-tuple of spans from doc representing a (subject, verb, object) triple, in order of appearance.
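
For example, a minimal sketch with an illustrative sentence (any English pipeline with a dependency parser should work):

>>> import spacy
>>> import textacy.extract
>>> nlp = spacy.load("en_core_web_sm")  # assumed: a pipeline with a dependency parser
>>> doc = nlp("The committee approved the budget, and the president signed the bill.")
>>> # each triple is a (subject, verb, object) tuple of spans, e.g. (committee, approved, budget)
>>> triples = list(textacy.extract.subject_verb_object_triples(doc))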

textacy.extract.acronyms_and_definitions(doc, known_acro_defs=None)[source]

Extract a collection of acronyms and their most likely definitions, if available, from a spacy-parsed doc. If multiple definitions are found for a given acronym, only the most frequently occurring definition is returned.

Parameters
  • doc (spacy.tokens.Doc or spacy.tokens.Span) –

  • known_acro_defs (dict) – if certain acronym/definition pairs are known, pass them in as {acronym (str): definition (str)}; algorithm will not attempt to find new definitions

Returns

unique acronyms (keys) with matched definitions (values)

Return type

dict
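
For example, a brief sketch with an illustrative sentence, reusing an nlp pipeline loaded as in the earlier examples:

>>> import textacy.extract
>>> doc = nlp("The World Health Organization (WHO) issued new guidance, and WHO officials briefed the press.")
>>> acro_defs = textacy.extract.acronyms_and_definitions(doc)
>>> # supply known pairs up front so the algorithm doesn't search for their definitions
>>> acro_defs = textacy.extract.acronyms_and_definitions(doc, known_acro_defs={"WHO": "World Health Organization"})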

References

Taghva, Kazem, and Jeff Gilbreth. “Recognizing acronyms and their definitions.” International Journal on Document Analysis and Recognition 1.4 (1999): 191-198.

textacy.extract.semistructured_statements(doc, entity, *, cue='be', ignore_entity_case=True, min_n_words=1, max_n_words=20)[source]

Extract “semi-structured statements” from a spacy-parsed doc, each as an (entity, cue, fragment) triple. This is similar to subject-verb-object triples.

Parameters
  • doc (spacy.tokens.Doc) –

  • entity (str) – a noun or noun phrase of some sort (e.g. “President Obama”, “global warming”, “Python”)

  • cue (str) – verb lemma with which entity is associated (e.g. “talk about”, “have”, “write”)

  • ignore_entity_case (bool) – if True, entity matching is case-independent

  • min_n_words (int) – min number of tokens allowed in a matching fragment

  • max_n_words (int) – max number of tokens allowed in a matching fragment

Yields

(spacy.tokens.Span or spacy.tokens.Token, spacy.tokens.Span or spacy.tokens.Token, spacy.tokens.Span) – where each element is a matching (entity, cue, fragment) triple
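
For example, a minimal sketch with illustrative text, reusing an nlp pipeline loaded as in the earlier examples:

>>> import textacy.extract
>>> doc = nlp("Global warming is a serious threat. Global warming has accelerated in recent decades.")
>>> # statements are (entity, cue, fragment) triples anchored on the entity and the cue verb's lemma
>>> statements = list(textacy.extract.semistructured_statements(
...     doc, "global warming", cue="be", ignore_entity_case=True, max_n_words=20))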

Notes

Inspired by N. Diakopoulos, A. Zhang, A. Salway. Visual Analytics of Media Frames in Online News and Blogs. IEEE InfoVis Workshop on Text Visualization. October, 2013.

This work was itself inspired by Salway, A.; Kelly, L.; Skadiņa, I.; and Jones, G. 2010. Portable Extraction of Partially Structured Facts from the Web. In Proc. ICETAL 2010, LNAI 6233, 345-356. Heidelberg, Springer.

textacy.extract.direct_quotations(doc)[source]

Baseline, not-great attempt at direct quotation extraction (no indirect or mixed quotations) using rules and patterns. English only.

Parameters

doc (spacy.tokens.Doc) –

Yields

(spacy.tokens.Span, spacy.tokens.Token, spacy.tokens.Span) – next quotation in doc represented as a (speaker, reporting verb, quotation) 3-tuple
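
For example, a minimal sketch with an illustrative sentence, reusing an nlp pipeline loaded as in the earlier examples:

>>> import textacy.extract
>>> doc = nlp('"This is a historic moment," the senator said.')
>>> # each item is a (speaker, reporting verb, quotation) triple
>>> quotes = list(textacy.extract.direct_quotations(doc))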

Notes

Loosely inspired by Krestel, Bergler, Witte. “Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles”.

TODO: A better approach would use ML, but that requires a training dataset.

Keyterm Extraction

TextRank

textacy.ke.textrank.textrank(doc, *, normalize='lemma', include_pos=('NOUN', 'PROPN', 'ADJ'), window_size=2, edge_weighting='binary', position_bias=False, topn=10)[source]

Extract key terms from a document using the TextRank algorithm, or a variation thereof. For example:

  • TextRank: window_size=2, edge_weighting="binary", position_bias=False

  • SingleRank: window_size=10, edge_weighting="count", position_bias=False

  • PositionRank: window_size=10, edge_weighting="count", position_bias=True

Parameters
  • doc (spacy.tokens.Doc) – spaCy Doc from which to extract keyterms.

  • normalize (str or callable) – If “lemma”, lemmatize terms; if “lower”, lowercase terms; if None, use the form of terms as they appeared in doc; if a callable, must accept a Token and return a str, e.g. textacy.spacier.utils.get_normalized_text().

  • include_pos (str or Set[str]) – One or more POS tags with which to filter for good candidate keyterms. If None, include tokens of all POS tags (which also allows keyterm extraction from docs without POS-tagging.)

  • window_size (int) – Size of sliding window in which term co-occurrences are determined.

  • edge_weighting ({"count", "binary"}) – If “count”, the nodes for all co-occurring terms are connected by edges with weight equal to the number of times they co-occurred within a sliding window; if “binary”, all such edges have weight = 1.

  • position_bias (bool) – If True, bias the PageRank algorithm for weighting nodes in the word graph, such that words appearing earlier and more frequently in doc tend to get larger weights.

  • topn (int or float) – Number of top-ranked terms to return as key terms. If an integer, represents the absolute number; if a float, value must be in the interval (0.0, 1.0], which is converted to an int by int(round(len(set(candidates)) * topn)).

Returns

Sorted list of top topn key terms and their corresponding TextRank ranking scores.

Return type

List[Tuple[str, float]]
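
For example, a sketch of the three parameterizations listed above, assuming doc is a spaCy Doc parsed as in the earlier examples; the function is imported from the full module path shown in the signature:

>>> from textacy.ke.textrank import textrank
>>> tr_terms = textrank(doc, normalize="lemma", topn=10)                                  # TextRank
>>> sr_terms = textrank(doc, window_size=10, edge_weighting="count")                      # SingleRank
>>> pr_terms = textrank(doc, window_size=10, edge_weighting="count", position_bias=True)  # PositionRank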

References

  • Mihalcea, R., & Tarau, P. (2004, July). TextRank: Bringing order into texts. Association for Computational Linguistics.

  • Wan, Xiaojun and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence, pages 855–860.

  • Florescu, C. and Caragea, C. (2017). PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents. In Proceedings of ACL 2017, pages 1105-1115.

YAKE

textacy.ke.yake.yake(doc, *, normalize='lemma', ngrams=(1, 2, 3), include_pos=('NOUN', 'PROPN', 'ADJ'), window_size=2, topn=10)[source]

Extract key terms from a document using the YAKE algorithm.

Parameters
  • doc (spacy.tokens.Doc) – spaCy Doc from which to extract keyterms. Must be sentence-segmented; optionally POS-tagged.

  • normalize (str) –

    If “lemma”, lemmatize terms; if “lower”, lowercase terms; if None, use the form of terms as they appeared in doc.

    Note

    Unlike the other keyterm extraction functions, this one doesn’t accept a callable for normalize.

  • ngrams (int or Set[int]) – n values for which n-grams are considered as keyterm candidates. For example, (1, 2, 3) includes all unigrams, bigrams, and trigrams, while 2 includes bigrams only.

  • include_pos (str or Set[str]) – One or more POS tags with which to filter for good candidate keyterms. If None, include tokens of all POS tags (which also allows keyterm extraction from docs without POS-tagging.)

  • window_size (int) – Number of words to the right and left of a given word to use as context when computing the “relatedness to context” component of its score. Note that the resulting sliding window’s full width is 1 + (2 * window_size).

  • topn (int or float) – Number of top-ranked terms to return as key terms. If an integer, represents the absolute number; if a float, value must be in the interval (0.0, 1.0], which is converted to an int by int(round(len(candidates) * topn))

Returns

Sorted list of top topn key terms and their corresponding scores.

Return type

List[Tuple[str, float]]
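
For example, a brief sketch, again assuming doc is a parsed spaCy Doc:

>>> from textacy.ke.yake import yake
>>> # unigrams through trigrams, keeping nouns, proper nouns, and adjectives
>>> terms = yake(doc, normalize="lemma", ngrams=(1, 2, 3), window_size=2, topn=10)
>>> # ask for the top 5% of candidates instead of an absolute number
>>> terms = yake(doc, topn=0.05)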

References

Campos, Mangaravite, Pasquali, Jorge, Nunes, and Jatowt. (2018). A Text Feature Based Automatic Keyword Extraction Method for Single Documents. Advances in Information Retrieval. ECIR 2018. Lecture Notes in Computer Science, vol 10772, pp. 684-691.

sCAKE

textacy.ke.scake.scake(doc, *, normalize='lemma', include_pos=('NOUN', 'PROPN', 'ADJ'), topn=10)[source]

Extract key terms from a document using the sCAKE algorithm.

Parameters
  • doc (spacy.tokens.Doc) – spaCy Doc from which to extract keyterms. Must be sentence-segmented; optionally POS-tagged.

  • normalize (str or callable) – If “lemma”, lemmatize terms; if “lower”, lowercase terms; if None, use the form of terms as they appeared in doc; if a callable, must accept a Token and return a str, e.g. textacy.spacier.utils.get_normalized_text().

  • include_pos (str or Set[str]) – One or more POS tags with which to filter for good candidate keyterms. If None, include tokens of all POS tags (which also allows keyterm extraction from docs without POS-tagging.)

  • topn (int or float) – Number of top-ranked terms to return as key terms. If an integer, represents the absolute number; if a float, value must be in the interval (0.0, 1.0], which is converted to an int by int(round(len(candidates) * topn))

Returns

Sorted list of top topn key terms and their corresponding scores.

Return type

List[Tuple[str, float]]

References

Duari, Swagata & Bhatnagar, Vasudha. (2018). sCAKE: Semantic Connectivity Aware Keyword Extraction. Information Sciences. 477. https://arxiv.org/abs/1811.10831v1

SGRank

textacy.ke.sgrank.sgrank(doc, *, normalize='lemma', ngrams=(1, 2, 3, 4, 5, 6), include_pos=('NOUN', 'PROPN', 'ADJ'), window_size=1500, topn=10, idf=None)[source]

Extract key terms from a document using the SGRank algorithm.

Parameters
  • doc (spacy.tokens.Doc) – spaCy Doc from which to extract keyterms.

  • normalize (str or callable) – If “lemma”, lemmatize terms; if “lower”, lowercase terms; if None, use the form of terms as they appeared in doc; if a callable, must accept a Span and return a str, e.g. textacy.spacier.utils.get_normalized_text()

  • ngrams (int or Set[int]) – n values for which n-grams are included; (1, 2, 3, 4, 5, 6) (the default) includes all n-grams from 1 to 6, while 2 includes bigrams only.

  • include_pos (str or Set[str]) – One or more POS tags with which to filter for good candidate keyterms. If None, include tokens of all POS tags (which also allows keyterm extraction from docs without POS-tagging.)

  • window_size (int) – Size of sliding window in which term co-occurrences are determined to occur. Note: Larger values may dramatically increase runtime, owing to the larger number of co-occurrence combinations that must be counted.

  • topn (int or float) – Number of top-ranked terms to return as keyterms. If int, represents the absolute number; if float, must be in the interval (0.0, 1.0], and is converted to an integer by int(round(len(candidates) * topn))

  • idf (dict) – Mapping of normalize(term) to inverse document frequency for re-weighting of unigrams (n-grams with n > 1 have df assumed = 1). Results are typically better with idf information.

Returns

Sorted list of top topn key terms and their corresponding SGRank scores

Return type

List[Tuple[str, float]]

Raises

ValueError – if topn is a float but not in (0.0, 1.0] or window_size < 2
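
For example, a brief sketch assuming doc is a parsed spaCy Doc; the idf mapping shown is purely hypothetical:

>>> from textacy.ke.sgrank import sgrank
>>> terms = sgrank(doc, ngrams=(1, 2, 3), topn=10)
>>> # optionally re-weight unigrams with idf values; the mapping shown here is made up for illustration
>>> terms = sgrank(doc, idf={"budget": 1.8, "committee": 2.3}, topn=10)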

References

Danesh, Sumner, and Martin. “SGRank: Combining Statistical and Graphical Methods to Improve the State of the Art in Unsupervised Keyphrase Extraction.” Lexical and Computational Semantics (*SEM 2015) (2015): 117.

Keyterm Extraction Utils

textacy.ke.utils.normalize_terms(terms, normalize)[source]

Transform a sequence of terms from spaCy Tokens or Spans into strings, normalized by normalize.

Parameters
  • terms (Sequence[spacy.tokens.Token or spacy.tokens.Span]) –

  • normalize (str or Callable) – If “lemma”, lemmatize terms; if “lower”, lowercase terms; if falsy, use the form of terms as they appear in terms; if a callable, must accept a Token or Span and return a str, e.g. textacy.spacier.utils.get_normalized_text().

Yields

str

textacy.ke.utils.aggregate_term_variants(terms, *, acro_defs=None, fuzzy_dedupe=True)[source]

Take a set of unique terms and aggregate terms that are symbolic, lexical, and ordering variants of each other, as well as acronyms and fuzzy string matches.

Parameters
  • terms (Set[str]) – set of unique terms with potential duplicates

  • acro_defs (dict) – if not None, terms that are acronyms will be aggregated with their definitions and terms that are definitions will be aggregated with their acronyms

  • fuzzy_dedupe (bool) – if True, fuzzy string matching will be used to aggregate similar terms of a sufficient length

Returns

each item is a set of aggregated terms

Return type

List[Set[str]]

Notes

Partly inspired by aggregation of variants discussed in Park, Youngja, Roy J. Byrd, and Branimir K. Boguraev. “Automatic glossary extraction: beyond terminology identification.” Proceedings of the 19th international conference on Computational linguistics-Volume 1. Association for Computational Linguistics, 2002.

textacy.ke.utils.get_longest_subsequence_candidates(doc, match_func)[source]

Get candidate keyterms from doc, where candidates are the longest consecutive subsequences of tokens for which match_func(token) is True for every token.

Parameters
  • doc (spacy.tokens.Doc) –

  • match_func (callable) – Function applied sequentially to each Token in doc that returns True for matching (“good”) tokens, False otherwise.

Yields

Tuple[spacy.tokens.Token] – Next longest consecutive subsequence candidate, as a tuple of constituent tokens.
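
For example, a sketch that treats any run of non-stop-word, non-punctuation tokens as a candidate, assuming doc is a parsed spaCy Doc:

>>> from textacy.ke import utils
>>> # candidates are maximal runs of tokens that are neither stop words, punctuation, nor whitespace
>>> match_func = lambda tok: not (tok.is_stop or tok.is_punct or tok.is_space)
>>> candidates = list(utils.get_longest_subsequence_candidates(doc, match_func))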

textacy.ke.utils.get_ngram_candidates(doc, ns, *, include_pos=('NOUN', 'PROPN', 'ADJ'))[source]

Get candidate keyterms from doc, where candidates are n-length sequences of tokens (for all n in ns) that don’t start/end with a stop word or contain punctuation tokens, and whose constituent tokens are filtered by POS tag.

Parameters
  • doc (spacy.tokens.Doc) –

  • ns (int or Tuple[int]) – One or more n values for which to generate n-grams. For example, 2 gets bigrams; (2, 3) gets bigrams and trigrams.

  • include_pos (str or Set[str]) – One or more POS tags with which to filter ngrams. If None, include tokens of all POS tags.

Yields

Tuple[spacy.tokens.Token] – Next ngram candidate, as a tuple of constituent Tokens.

textacy.ke.utils.get_pattern_matching_candidates(doc, patterns)[source]

Get candidate keyterms from doc, where candidates are sequences of tokens that match any pattern in patterns.

Parameters
  • doc (spacy.tokens.Doc) –

  • patterns (str or List[str] or List[dict] or List[List[dict]]) – One or multiple patterns to match against doc using a spacy.matcher.Matcher.

Yields

Tuple[spacy.tokens.Token] – Next pattern-matching candidate, as a tuple of constituent Tokens.

textacy.ke.utils.get_filtered_topn_terms(term_scores, topn, *, match_threshold=None)[source]

Build up a list of the topn terms, filtering out any that are substrings of better-scoring terms and optionally filtering out any that are sufficiently similar to better-scoring terms.

Parameters
  • term_scores (List[Tuple[str, float]]) – List of (term, score) pairs, sorted in order from best score to worst. Note that this may be from high to low value or low to high, depending on the algorithm.

  • topn (int) – Maximum number of top-scoring terms to get.

  • match_threshold (float) – Minimal edit distance between a term and previously seen terms, used to filter out terms that are sufficiently similar to higher-scoring terms. Uses textacy.similarity.token_sort_ratio().

Returns

List[Tuple[str, float]]
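
For example, a sketch with made-up terms and scores:

>>> from textacy.ke import utils
>>> term_scores = [("machine learning", 0.42), ("machine", 0.37), ("deep learning", 0.31), ("learning", 0.28)]
>>> # "machine" and "learning" should be dropped as substrings of the better-scoring "machine learning"
>>> top_terms = utils.get_filtered_topn_terms(term_scores, 3)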

textacy.ke.utils.most_discriminating_terms(terms_lists, bool_array_grp1, *, max_n_terms=1000, top_n_terms=25)[source]

Given a collection of documents assigned to 1 of 2 exclusive groups, get the top_n_terms most discriminating terms for group1-and-not-group2 and group2-and-not-group1.

Parameters
  • terms_lists (Iterable[Iterable[str]]) – Sequence of documents, each as a sequence of (str) terms; used as input to doc_term_matrix()

  • bool_array_grp1 (Iterable[bool]) – Ordered sequence of True/False values, where True corresponds to documents falling into “group 1” and False corresponds to those in “group 2”.

  • max_n_terms (int) – Only consider terms whose document frequency is within the top max_n_terms out of all distinct terms; must be > 0.

  • top_n_terms (int or float) – If int (must be > 0), the total number of most discriminating terms to return for each group; if float (must be in the interval (0, 1)), the fraction of max_n_terms to return for each group.

Returns
  • List[str] – top top_n_terms most discriminating terms for grp1-not-grp2

  • List[str] – top top_n_terms most discriminating terms for grp2-not-grp1
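
For example, a schematic sketch with toy term lists and group labels (made up for illustration); the two returned lists correspond to the two groups described above:

>>> from textacy.ke import utils
>>> terms_lists = [["cat", "meow", "purr"], ["cat", "whiskers", "purr"], ["dog", "bark", "fetch"], ["dog", "bark", "leash"]]
>>> bool_array_grp1 = [True, True, False, False]  # True marks documents in group 1
>>> grp1_terms, grp2_terms = utils.most_discriminating_terms(terms_lists, bool_array_grp1, top_n_terms=2)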

References

King, Gary, Patrick Lam, and Margaret Roberts. “Computer-Assisted Keyword and Document Set Discovery from Unstructured Text.” (2014). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.458.1445&rep=rep1&type=pdf