Here's the completed code:
```
# Import the necessary modules
from nltk.tokenize import word_tokenize, regexp_tokenize
# Define the German text
german_text = "Wann gehen wir Pizza essen? 🍕 Und fährst du mit Über? 🚕"
# Tokenize and print all words in german_text
all_words = word_tokenize(german_text)
print(all_words)
# Tokenize and print only capital words
capital_words = r"[A-ZÜ]\w+"
print(regexp_tokenize(german_text, capital_words))
# Tokenize and print only emoji
emoji = "[\U0001F300-\U0001F5FF\U0001F600-\U0001F64F\U0001F680-\U0001F6FF\u2600-\u26FF\u2700-\u27BF]"
print(regexp_tokenize(german_text, emoji))
```
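Note that `word_tokenize()` depends on NLTK's pretrained Punkt tokenizer data, which is installed separately from the package itself. If the call raises a `LookupError`, a one-time download fixes it (a minimal setup sketch; the exact resource name can vary by NLTK version):
```
import nltk

# One-time download of the Punkt models that word_tokenize() relies on.
# Newer NLTK releases may ask for "punkt_tab" instead of "punkt".
nltk.download("punkt")
```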
The code uses `word_tokenize()` to tokenize all the words in `german_text` and prints the result. It then defines a pattern called `capital_words` that matches only capitalized words (including the German Ü) and passes it to `regexp_tokenize()` to extract just those words from `german_text`. Finally, it defines a pattern called `emoji` from the Unicode ranges for emoji given in the assignment text and uses `regexp_tokenize()` again to pull out only the emoji.
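If you want to sanity-check the two regex patterns without NLTK, `regexp_tokenize(text, pattern)` with its default `gaps=False` behaves like `re.findall(pattern, text)` for patterns without capturing groups, so the same expressions can be tested with the standard library alone (a minimal sketch; the variable names `text` and `emoji_pattern` are just illustrative):
```
import re

text = "Wann gehen wir Pizza essen? 🍕 Und fährst du mit Über? 🚕"

# Capital-word pattern: an uppercase letter (including Ü) followed by word characters.
print(re.findall(r"[A-ZÜ]\w+", text))
# expected: ['Wann', 'Pizza', 'Und', 'Über']

# The same emoji code-point ranges as above, as a single character class.
emoji_pattern = "[\U0001F300-\U0001F5FF\U0001F600-\U0001F64F\U0001F680-\U0001F6FF\u2600-\u26FF\u2700-\u27BF]"
print(re.findall(emoji_pattern, text))
# expected: ['🍕', '🚕']
```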