I have a large txt file and I'm trying to pull out every instance of a specific word, as well as the 15 words on either side. I'm running into a problem when there are two instances of that word within 15 words of each other, which I'm trying to get as one large snippet of text.

I'm trying to get chunks of text to analyze about a specific topic. So far, I have working code for all instances except the scenario mentioned above.

def occurs(word1, word2, filename):
    import os

    infile = open(filename, 'r')    #opens file, reads, splits into lines
    lines = infile.read().splitlines()
    infile.close()
    wordlist = [word1, word2]       #this list allows for multiple words
    wordsString = ' '.join(lines)   #rejoins the lines with spaces so the file can be split into individual words
    words = wordsString.split()

    f = open(filename, 'w')
    f.write("start")
    f.write(os.linesep)

    for word in wordlist:
        #find() does substring matching, so "word" also matches "words", "word," etc.
        matches = [i for i, w in enumerate(words) if w.lower().find(word) != -1]

        for m in matches:
            #max(0, ...) keeps the slice from wrapping around when a match
            #falls within the first 15 words of the file
            l = " ".join(words[max(0, m-15):m+16])
            f.write(f"...{l}...")       #writes the data to the external file
            f.write(os.linesep)
    f.close()

So far, when two instances of the word are too close together, the program simply skips one of them. Instead, I want to get a single longer chunk of text that extends 15 words before the earliest match and 15 words after the latest one.

max

2 Answers

This snippet will collect a given number of words around the chosen keyword. If several keywords fall close together, it joins them into one snippet:

s = '''xxx I have a large txt file and I'm xxx trying to pull out every instance of a specific word, as well as the 15 words on either side. I'm running into a problem when there are two instances of that word within 15 words of each other, which I'm trying to get as one large snippet of text.
I'm trying to xxx get chunks of text to analyze about a specific topic. So far, I have working code for all instances except the scenario mentioned above. xxx'''

words = s.split()

from itertools import groupby, chain

word = 'xxx'

def get_snippets(words, word, l):
    snippets, current_snippet, cnt = [], [], 0
    # groupby yields alternating runs: v is True for runs of ordinary
    # words, False for runs of the keyword itself
    for v, g in groupby(words, lambda w: w != word):
        w = [*g]
        if v:
            if len(w) < l:
                # short run: the whole thing fits between two keywords
                current_snippet += [w]
            else:
                # long run: close the current snippet with the words
                # nearest the previous keyword, then start the next
                # snippet with the words nearest the following one
                current_snippet += [w[:l] if cnt % 2 else w[-l:]]
                snippets.append([*chain.from_iterable(current_snippet)])
                current_snippet = [w[-l:] if cnt % 2 else w[:l]]
                cnt = 0
            cnt += 1
        else:
            # a keyword run: attach it to the snippet in progress
            if current_snippet:
                current_snippet[-1].extend(w)
            else:
                current_snippet += [w]

    # flush the final snippet if it still contains a keyword
    if current_snippet[-1][-1] == word or len(current_snippet) > 1:
        snippets.append([*chain.from_iterable(current_snippet)])

    return snippets

for snippet in get_snippets(words, word, 15):
    print(' '.join(snippet))

Prints:

xxx I have a large txt file and I'm xxx trying to pull out every instance of a specific word, as well as the 15
other, which I'm trying to get as one large snippet of text. I'm trying to xxx get chunks of text to analyze about a specific topic. So far, I have working
topic. So far, I have working code for all instances except the scenario mentioned above. xxx

With the same data and a different length:

for snippet in get_snippets(words, word, 2):
    print(' '.join(snippet))

Prints:

xxx and I'm
I have xxx trying to
trying to xxx get chunks
mentioned above. xxx
Andrej Kesely

As always, a variety of solutions are available here. A fun one would be a recursive wordFind, where it searches the next 15 words and, if it finds the target word, calls itself from the new position.
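That recursive idea could be sketched roughly like this (my own illustration, untested against the asker's file; `collect` is a hypothetical helper name):

def collect(words, word, m, n=15):
    """Return the end index of the window opened at match m, recursing
    whenever another match appears within the next n words."""
    end = min(m + n + 1, len(words))
    for i in range(m + 1, end):
        if words[i].lower() == word:
            return collect(words, word, i, n)   # restart the window at the new match
    return end

words = "a b xxx c d xxx e f g".split()
m = 2                                           # index of the first "xxx"
end = collect(words, "xxx", m, n=3)
print(" ".join(words[max(0, m - 3):end]))       # both matches merged into one chunk

Because the recursion bottoms out at the last match in the chain, the returned index covers every overlapping window in one pass.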

A simpler, though perhaps not efficient, solution would be to add words one at a time:

for m in matches:
    l = " ".join(words[max(0, m - 15):m + 1])   #the match plus up to 15 words before it
    i = 1
    end = 16
    while i < end and m + i < len(words):
        l += " " + words[m + i]
        if words[m + i].lower() == word:
            end = i + 16        #another match found: extend the window past it
        i += 1
    f.write(f"...{l}...")       #writes the data to the external file
    f.write(os.linesep)

Or if you're wanting the subsequent uses to be removed...

extend = False
for m in matches:
    if not extend:
        #start a fresh snippet: the match plus up to 15 words before it
        l = "..." + " ".join(words[max(0, m - 15):m + 1])
    extend = False
    i = 1
    while i < 16 and m + i < len(words):
        if words[m + i].lower() == word:
            extend = True       #the next match will continue this snippet
            break               #and the repeated keyword itself is dropped
        l += " " + words[m + i]
        i += 1
    if not extend:
        f.write(l + "...")
        f.write(os.linesep)

Note, I have not tested this, so it may require a bit of debugging. But the gist is clear: add words piecemeal and extend the addition process whenever a target word is encountered. With a small addition to the second conditional, this also lets you extend the snippet on target words other than the current one.
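Another way to sketch that generalization (again my own illustration, not the code above): collect the match positions for all target words first, then merge any windows that overlap before writing them out:

def merged_snippets(words, targets, n=15):
    #indices of every occurrence of any target word
    matches = [i for i, w in enumerate(words) if w.lower() in targets]
    spans = []
    for m in matches:
        start, end = max(0, m - n), min(len(words), m + n + 1)
        if spans and start <= spans[-1][1]:   #overlaps the previous window
            spans[-1][1] = end                #so extend it instead of starting anew
        else:
            spans.append([start, end])
    return [" ".join(words[s:e]) for s, e in spans]

words = "a b xxx c d yyy e f g".split()
for s in merged_snippets(words, {"xxx", "yyy"}, n=2):
    print(f"...{s}...")

Since matches arrive in order, each window only ever needs to be compared with the last one kept, so the merge is a single linear pass.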