tokenize(input). Tokenizes a tensor of UTF-8 strings on whitespace. The strings are split on ICU-defined whitespace characters (e.g. space, tab, newline).
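A minimal sketch of the splitting behavior, assuming a plain-Python stand-in: the `tokenize` helper below is hypothetical and uses Python's `str.split()`, which splits on Unicode whitespace — a close approximation of the ICU whitespace set, though not identical to the real implementation.

```python
def tokenize(strings):
    """Tokenize each UTF-8 string on whitespace, returning a ragged
    list of token lists (one list per input string).

    NOTE: a sketch only -- Python's str.split() with no arguments
    splits on Unicode whitespace, which approximates (but is not
    guaranteed to match) the ICU whitespace definition.
    """
    return [s.split() for s in strings]

tokens = tokenize(["everything not saved will be lost.", "a\tb\nc"])
print(tokens)
# → [['everything', 'not', 'saved', 'will', 'be', 'lost.'], ['a', 'b', 'c']]
```

Note that consecutive whitespace characters are collapsed, so no empty tokens are produced.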