
I have a dictionary file of medical phrases and a corpus of raw texts. I'm trying to use the dictionary file to select the relevant phrases from the texts. Phrases, in this case, are 1- to 5-word n-grams. In the end, I would like the selected phrases in a data frame with two columns: doc_id, phrase.

I've been trying to use the quanteda package to do this but haven't been successful. Below is some code to reproduce my latest attempt. I'd appreciate any advice you have; I've tried a variety of methods but keep getting back only single-word matches.

version  R version 3.6.2 (2019-12-12)
os       Windows 10 x64              
system   x86_64, mingw32             
ui       RStudio 
Packages:
dbplyr   1.4.2 
quanteda 1.5.2

library(quanteda)
library(dplyr)
raw <- data.frame("doc_id" = c("1", "2", "3"), 
                  "text" = c("diffuse intrinsic pontine glioma are highly aggressive and difficult to treat brain tumors found at the base of the brain.", 
                             "magnetic resonance imaging (mri) is a medical imaging technique used in radiology to form pictures of the anatomy and the physiological processes of the body.", 
                             "radiation therapy or radiotherapy, often abbreviated rt, rtx, or xrt, is a therapy using ionizing radiation, generally as part of cancer treatment to control or kill malignant cells and normally delivered by a linear accelerator."))

term = c("diffuse intrinsic pontine glioma", "brain tumors", "brain", "pontine glioma", "mri", "medical imaging", "radiology", "anatomy", "physiological processes", "radiation therapy", "radiotherapy", "cancer treatment", "malignant cells")
medTerms = list(term = term)
dict <- dictionary(medTerms)

corp <- raw %>% group_by(doc_id) %>% summarise(text = paste(text, collapse=" "))
corp <- corpus(corp, text_field = "text")

dfm <- dfm(corp,
           tolower = TRUE, stem = FALSE, remove_punct = TRUE,
           remove = stopwords("english"))
dfm <- dfm_select(dfm, pattern = phrase(dict))

What I'd eventually like to get back is something like the following:

doc_id        term
1       diffuse intrinsic pontine glioma
1       pontine glioma
1       brain tumors
1       brain
2       mri
2       medical imaging
2       radiology
2       anatomy
2       physiological processes
3       radiation therapy
3       radiotherapy
3       cancer treatment
3       malignant cells

2 Answers


If you want to match multi-word patterns from a dictionary, you can do so by constructing your dfm from ngrams.

library(quanteda)
library(dplyr)
library(tidyr)

raw$text <- as.character(raw$text) # you forgot to use stringsAsFactors = FALSE while constructing the data.frame, so I convert your factor to character before continuing
corp <- corpus(raw, text_field = "text")

dfm <- tokens(corp) %>% 
  tokens_ngrams(1:5) %>% # the new way of creating ngram dfms; 1:5 builds everything from unigrams to 5-grams
  dfm(tolower = TRUE, 
      stem = FALSE,
      remove_punct = TRUE) %>% # I wouldn't remove stopwords for this matching task
  dfm_select(pattern = dict)

Now we just have to convert the dfm to a data.frame and bring it into a long format:

convert(dfm, "data.frame") %>% 
  pivot_longer(-document, names_to = "term") %>% 
  filter(value > 0)
#> # A tibble: 13 x 3
#>    document term                             value
#>    <chr>    <chr>                            <dbl>
#>  1 1        brain                                2
#>  2 1        pontine_glioma                       1
#>  3 1        brain_tumors                         1
#>  4 1        diffuse_intrinsic_pontine_glioma     1
#>  5 2        mri                                  1
#>  6 2        radiology                            1
#>  7 2        anatomy                              1
#>  8 2        medical_imaging                      1
#>  9 2        physiological_processes              1
#> 10 3        radiotherapy                         1
#> 11 3        radiation_therapy                    1
#> 12 3        cancer_treatment                     1
#> 13 3        malignant_cells                      1

You could remove the value column but it might be of interest later on.
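For example, to get exactly the two-column doc_id/term layout asked for in the question, you could take the result one step further. This is just a sketch: the name matches is a placeholder, and the gsub() call undoes the "_" that tokens_ngrams() inserts between the words of a phrase.

matches <- convert(dfm, "data.frame") %>%
  pivot_longer(-document, names_to = "term") %>%
  filter(value > 0)

matches %>%
  transmute(doc_id = document,                         # rename document -> doc_id
            term = gsub("_", " ", term, fixed = TRUE)) # put the spaces back into multi-word terms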

JBGruber

You could form all ngrams from 1 to 5 words in length and then select from those, but for large texts this would be very inefficient. Here's a more direct way. I've reproduced the entire problem here with a few modifications (such as stringsAsFactors = FALSE and skipping some unnecessary steps).

Granted, this does not double count the terms as in your expected example, but I submit that you probably did not want that. Why count "brain" if it occurred within "brain tumors"? You would be better off counting "brain tumors" when it occurs as that phrase, and "brain" only when it occurs without "tumors". The code below does that. (If you really do want the nested matches as well, see the kwic() sketch after the output at the end of this answer.)

library(quanteda)
## Package version: 2.0.1

raw <- data.frame(
  "doc_id" = c("1", "2", "3"),
  "text" = c(
    "diffuse intrinsic pontine glioma are highly aggressive and difficult to treat brain tumors found at the base of the brain.",
    "magnetic resonance imaging (mri) is a medical imaging technique used in radiology to form pictures of the anatomy and the physiological processes of the body.",
    "radiation therapy or radiotherapy, often abbreviated rt, rtx, or xrt, is a therapy using ionizing radiation, generally as part of cancer treatment to control or kill malignant cells and normally delivered by a linear accelerator."
  ),
  stringsAsFactors = FALSE
)

dict <- dictionary(list(
  term = c(
    "diffuse intrinsic pontine glioma",
    "brain tumors", "brain", "pontine glioma", "mri", "medical imaging",
    "radiology", "anatomy", "physiological processes", "radiation therapy",
    "radiotherapy", "cancer treatment", "malignant cells"
  )
))

Here's the key to the answer: use the dictionary first to select the tokens, then to compound each phrase into a single token, and then to reshape the result so that there is one dictionary match per new "document". The last step creates the data.frame you want.

toks <- corpus(raw) %>%
  tokens() %>%
  tokens_select(dict) %>% # select just dictionary values
  tokens_compound(dict, concatenator = " ") %>% # turn phrase into single "tokens"
  tokens_segment(pattern = "*") # make one token per "document"

# make into data.frame
data.frame(
  doc_id = docid(toks), term = as.character(toks),
  stringsAsFactors = FALSE
)
##    doc_id                             term
## 1       1 diffuse intrinsic pontine glioma
## 2       1                     brain tumors
## 3       1                            brain
## 4       2                              mri
## 5       2                  medical imaging
## 6       2                        radiology
## 7       2                          anatomy
## 8       2          physiological processes
## 9       3                radiation therapy
## 10      3                     radiotherapy
## 11      3                 cancer treatment
## 12      3                  malignant cells
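If you really do want the nested matches as well, as in your expected output (where brain and pontine glioma appear in addition to the longer phrases that contain them), one option might be kwic(), which as far as I can tell matches each pattern independently. A rough sketch, worth double-checking on your own quanteda version:

# kwic() reports every pattern hit, so nested phrases such as "brain"
# inside "brain tumors" come back as separate matches
hits <- kwic(tokens(corpus(raw)), pattern = phrase(dict))

data.frame(
  doc_id = hits$docname,
  term = hits$keyword,
  stringsAsFactors = FALSE
)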
Ken Benoit