Mirror of https://github.com/ml-explore/mlx-examples.git
Synced 2025-08-30 10:56:38 +08:00

Commit e6d35301bd ("smol modification")
Parent: 7e4413b1dd
@@ -277,38 +277,18 @@ def stream_generate(
     **kwargs,
 ) -> Union[str, Generator[str, None, None]]:
     """
-    A generator producing token ids based on the given prompt from the model.
+    A generator producing text based on the given prompt from the model.
 
     Args:
         prompt (mx.array): The input prompt.
         model (nn.Module): The model to use for generation.
-        temp (float): The temperature for sampling, if 0 the argmax is used.
-          Default: ``0``.
-        repetition_penalty (float, optional): The penalty factor for repeating
-          tokens.
-        repetition_context_size (int, optional): The number of tokens to
-          consider for repetition penalty. Default: ``20``.
-        top_p (float, optional): Nulceus sampling, higher means model considers
-          more less likely words.
-        min_p (float, optional): The minimum value (scaled by the top token's
-          probability) that a token probability must have to be considered.
-        min_tokens_to_keep (int, optional): Minimum number of tokens that cannot
-          be filtered by min_p sampling.
-        prefill_step_size (int): Step size for processing the prompt.
-        max_kv_size (int, optional): Maximum size of the key-value cache. Old
-          entries (except the first 4 tokens) will be overwritten.
-        prompt_cache (List[Any], optional): A pre-computed prompt cache. Note, if
-          provided, the cache will be updated in place.
-        logit_bias (dictionary, optional): Additive logit bias.
-        logits_processor (List[Callable[[mx.array, mx.array], mx.array]], optional):
-          A list of functions that take tokens and logits and return the processed
-          logits. Default: ``None``.
-        max_tokens_per_sec (float, optional): If set, limits generation speed to approximately
-          this many tokens per second by adding delays between tokens. Useful for thermal/power
-          management. Default: None (no limit).
+        max_tokens (int): The maximum number of tokens. Default: ``100``.
+        max_tokens_per_sec (float, optional): If set, limits generation speed
+          to approximately max_tokens_per_sec. May go slightly over this limit.
+        kwargs: The remaining options get passed to :func:`generate_step`.
+          See :func:`generate_step` for more details.
 
     Yields:
-        Generator[Tuple[mx.array, mx.array], None, None]: A generator producing
-          one token and a vector of log probabilities.
+        Generator[Tuple[mx.array, mx.array]]: A generator producing text.
     """
 
     if not isinstance(tokenizer, TokenizerWrapper):
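The `max_tokens_per_sec` option documented in the diff limits generation speed by adding delays between tokens, and may briefly go slightly over the limit. A minimal sketch of that idea in plain Python, using a hypothetical `rate_limited` helper (an illustration of the throttling technique, not the commit's actual implementation):

```python
import time

def rate_limited(token_iter, max_tokens_per_sec=None):
    """Yield tokens from token_iter, sleeping between tokens so the overall
    rate stays near max_tokens_per_sec (None disables throttling)."""
    if max_tokens_per_sec is None:
        yield from token_iter
        return
    min_interval = 1.0 / max_tokens_per_sec
    start = time.perf_counter()
    for n, tok in enumerate(token_iter):
        # Yield first, then sleep until the next token's scheduled time;
        # a fast burst can briefly exceed the target rate, matching the
        # "may go slightly over this limit" wording in the docstring.
        yield tok
        delay = start + (n + 1) * min_interval - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
```

Scheduling against the stream's start time (rather than sleeping a fixed interval after every token) lets the limiter absorb slow tokens without drifting below the target rate.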