These smaller linguistic units are usually easier to deal with computationally and semantically.

Sentence Tokenization

from nltk.tokenize import sent_tokenize