split_text_on_tokens

langchain_text_splitters.base.split_text_on_tokens(*, text: str, tokenizer: Tokenizer) → list[str]

Split incoming text and return chunks using the tokenizer.

Parameters:
- text (str)
- tokenizer (Tokenizer)

Return type: list[str]
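To illustrate the sliding-window behavior this function implements, here is a minimal self-contained sketch. The `Tokenizer` dataclass below mirrors the fields the splitter reads (`chunk_overlap`, `tokens_per_chunk`, and `encode`/`decode` callables); the toy word-level tokenizer is an illustrative stand-in for a real one such as a tiktoken or HuggingFace encoding, and is not part of the library.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass(frozen=True)
class Tokenizer:
    """Stand-in for langchain_text_splitters.Tokenizer: window size, overlap, codec."""
    chunk_overlap: int
    tokens_per_chunk: int
    decode: Callable[[List[int]], str]
    encode: Callable[[str], List[int]]


def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:
    """Sketch of the splitter: encode once, emit overlapping windows of
    token ids, and decode each window back into a text chunk."""
    splits: List[str] = []
    input_ids = tokenizer.encode(text)
    start_idx = 0
    cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
    chunk_ids = input_ids[start_idx:cur_idx]
    while start_idx < len(input_ids):
        splits.append(tokenizer.decode(chunk_ids))
        if cur_idx == len(input_ids):
            break
        # Advance by window size minus overlap so consecutive chunks share tokens.
        start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
        cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
        chunk_ids = input_ids[start_idx:cur_idx]
    return splits


# Toy word-level "tokenizer": each word maps to one token id (illustrative only).
vocab: List[str] = "one two three four five six seven".split()
tok = Tokenizer(
    chunk_overlap=1,
    tokens_per_chunk=3,
    encode=lambda s: [vocab.index(w) for w in s.split()],
    decode=lambda ids: " ".join(vocab[i] for i in ids),
)

chunks = split_text_on_tokens(text="one two three four five six seven", tokenizer=tok)
print(chunks)  # ['one two three', 'three four five', 'five six seven']
```

With `tokens_per_chunk=3` and `chunk_overlap=1`, each chunk starts two tokens after the previous one, so adjacent chunks share one token of context.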