Here's the completed code:
```python
# Import Counter
from collections import Counter
from nltk.tokenize import word_tokenize
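# NOTE: `article` is assumed to be a pre-loaded string holding the text
# to analyze, and word_tokenize needs the NLTK 'punkt' tokenizer data
# (run nltk.download('punkt') once if it is not already installed)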
# Tokenize the article: tokens
tokens = word_tokenize(article)
# Convert the tokens into lowercase: lower_tokens
lower_tokens = [t.lower() for t in tokens]
# Create a Counter with the lowercase tokens: bow_simple
bow_simple = Counter(lower_tokens)
# Print the 10 most common tokens
print(bow_simple.most_common(10))
```
The code imports `Counter` from `collections` and `word_tokenize` from `nltk.tokenize`. It tokenizes the article with `word_tokenize()`, lowercases every token in a list comprehension, and builds a bag-of-words counter called `bow_simple` by passing `lower_tokens` to `Counter()`. Finally, `most_common(10)` returns the ten highest-count tokens as `(token, count)` pairs.
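As a quick sanity check, the same pattern can be run end to end on a short placeholder string (the `sample` text below is illustrative and simply stands in for the exercise's pre-loaded `article`):

```python
from collections import Counter
from nltk.tokenize import word_tokenize

# Illustrative stand-in for the pre-loaded `article`
sample = "The cat saw the dog. The dog saw the cat."

tokens = word_tokenize(sample)
lower_tokens = [t.lower() for t in tokens]
bow = Counter(lower_tokens)

print(bow.most_common(3))
# [('the', 4), ('cat', 2), ('saw', 2)]
```

Note that punctuation marks like `.` are counted as tokens too; a real pipeline would typically filter them out (for example with `t.isalpha()`) before counting.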