No. The features you have selected may not explain all of the variability in the data. Imagine you are classifying whether it is a good day to play tennis. Your features are temp, wind, and precip. Those may be good descriptors, but you didn't train on whether there is a parade in town! On parade days the tennis court is blocked off, so even though your features explain most of the known data, there are outliers they cannot account for.
In general, data contains randomness that no feature set will capture 100%.
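Here is a minimal sketch of that tennis scenario; the data and the hidden 'parade' column are made up purely for illustration:

import pandas as pd
from sklearn.naive_bayes import GaussianNB

# Hypothetical data: 'parade' actually decides one of the labels,
# but the classifier never gets to see it.
days = pd.DataFrame({
    'temp':   [1, 1, 0, 1, 1, 0],
    'wind':   [0, 1, 0, 0, 1, 1],
    'precip': [0, 0, 1, 0, 0, 1],
    'parade': [0, 0, 0, 1, 0, 0],
})
play = [1, 1, 0, 0, 1, 0]  # the parade day (row 3) is unplayable despite nice weather
X = days[['temp', 'wind', 'precip']]  # train on the observed features only

model = GaussianNB().fit(X, play)
print(model.score(X, play))  # below 1.0: rows 0 and 3 look identical but disagree

Rows 0 and 3 have the same observed features with different labels, so no classifier trained on those three columns can get both right.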
Updated per comments below
The question is whether training and testing on the same dataset will yield 100% accuracy, which I think we both agree it will not (the question wasn't about the assumptions of NB). Here is a sample dataset demonstrating the scenario above:
import pandas as pd
from sklearn.naive_bayes import GaussianNB

# The first and last rows ([1, 1, 0]) are identical but carry
# different labels, so no classifier can get every training row right.
df = pd.DataFrame([[1, 1, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1], [1, 1, 0]],
                  columns=['hot', 'windy', 'rainy'])
targets = [1, 1, 0, 0, 0]

gnb = GaussianNB()
preds = gnb.fit(df, targets).predict(df)
print(preds)
# [1 1 0 0 1]
Notice that the first and last cases are identical, yet the classifier mispredicted the last one. This is because the data at hand do not always perfectly determine the classification. NB makes many other assumptions that could also explain cases where it fails (which you excellently pointed out below), but my goal was just to come up with a quick demonstration that would hopefully be easy to understand and would answer the question.
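If it helps, the miss shows up directly in the training accuracy (continuing from the snippet above):

# 4 of the 5 training rows match the predictions above, so the
# accuracy on the very data the model was fit to should come out to 0.8.
print(gnb.score(df, targets))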