Here's the completed code:
```python
# Import necessary modules
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
# Tokenize the article into sentences: sentences
sentences = sent_tokenize(article)
# Tokenize each sentence into words: token_sentences
token_sentences = [word_tokenize(sent) for sent in sentences]
# Tag each tokenized sentence into parts of speech: pos_sentences
pos_sentences = [nltk.pos_tag(sent) for sent in token_sentences]
# Create the named entity chunks: chunked_sentences
chunked_sentences = nltk.ne_chunk_sents(pos_sentences, binary=True)
# Loop over each chunked sentence and print subtrees with an 'NE' label
for sent in chunked_sentences:
    for chunk in sent:
        if hasattr(chunk, "label") and chunk.label() == "NE":
            print(chunk)
```
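Note that this snippet assumes `article` is already defined as a string of raw text, and that the required NLTK data packages are installed. If they aren't, a one-time download step like the following should work (these are the classic package names; newer NLTK releases may use slightly different ones):

```python
import nltk

# One-time downloads of the resources this pipeline relies on
nltk.download("punkt")                       # sentence/word tokenizer models
nltk.download("averaged_perceptron_tagger")  # default POS tagger
nltk.download("maxent_ne_chunker")           # named entity chunker
nltk.download("words")                       # word list used by the chunker
```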
The code first imports `nltk` along with `sent_tokenize` and `word_tokenize` from `nltk.tokenize`. It then splits `article` into sentences with `sent_tokenize()` and assigns the result to `sentences`. Each sentence is tokenized into words using a list comprehension over `word_tokenize()`, and each tokenized sentence is tagged with parts of speech via `nltk.pos_tag()`. `nltk.ne_chunk_sents()` with `binary=True` then chunks the tagged sentences so every named entity is labeled simply `NE` rather than by type (`PERSON`, `ORGANIZATION`, etc.). Finally, the nested loop walks each chunked sentence and prints every subtree carrying the `NE` label.
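As a quick sanity check, you could define `article` as below (a hypothetical text, not from the original exercise) and then run the snippet above; the exact chunks printed depend on the installed model versions, but the output would look roughly like the comments:

```python
article = (
    "Uber has hired a new CEO. Dara Khosrowshahi previously ran Expedia "
    "and will move to San Francisco to lead the company."
)

# With binary=True, each detected entity prints as a Tree labeled 'NE', e.g.:
# (NE Uber/NNP)
# (NE Dara/NNP Khosrowshahi/NNP)
# (NE Expedia/NNP)
# (NE San/NNP Francisco/NNP)
```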