Commit Graph

117 Commits

Author SHA1 Message Date
Goekdeniz-Guelmez
2f95b361a8 removed the custom Mamba2Cache and updated the existing MambaCache, but it still handles only one input token and outputs gibberish 2024-11-10 16:57:03 +01:00
Gökdeniz Gülmez
49d3f188f8
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2024-11-10 16:36:02 +01:00
Goekdeniz-Guelmez
3a499f9735 fixed inference slowness but it can't handle multi-token inputs and is generating gibberish 2024-11-10 16:35:07 +01:00
Goekdeniz-Guelmez
800b60239c save checkpoint 2024-11-10 14:36:26 +01:00
Goekdeniz-Guelmez
906f972d36 save push 2024-11-06 16:35:46 +01:00
Angelos Katharopoulos
ed9e81dd58
Fix rotating kv cache size (#1093) 2024-11-05 10:24:24 -08:00
ilyasch2
3b526f0aa1
Add support for falcon-mamba (#1074)
* Add support for falcon-mamba

* nits

* nit

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-11-04 12:23:30 -08:00
Alex Barron
85ffd2c96a
Quantized KV Cache (#1075)
* add QuantizedKVCache

* simplify

* add tests

* single sdpa function

* fix sed

* in place

* fix tests

* support different k and v head dims
2024-10-31 16:59:52 -07:00
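
A hedged usage sketch for the quantized KV cache added in #1075 above. The `kv_bits`, `kv_group_size`, and `quantized_kv_start` parameter names are assumptions about how the generation entry point exposes the feature; check your mlx_lm version for the exact argument names.

```python
# Hedged sketch: kv_bits / kv_group_size / quantized_kv_start are assumed
# to be accepted by generate() and forwarded to the generation step.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Explain KV caching in one paragraph.",
    max_tokens=256,
    kv_bits=8,             # store cached keys/values in 8 bits
    kv_group_size=64,      # quantization group size
    quantized_kv_start=0,  # quantize the cache from the first token onward
)
print(text)
```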
Goekdeniz-Guelmez
58b448dc0b updates 2024-10-30 21:23:13 +01:00
Gökdeniz Gülmez
3b70708201
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2024-10-25 08:57:37 +02:00
Goekdeniz-Guelmez
7c8849e795 update 2024-10-24 16:16:42 +02:00
Awni Hannun
9000e280ae
fix mamba models conversion (#1065) 2024-10-22 15:44:08 -07:00
Goekdeniz-Guelmez
a677638c4b inference works but is hella slow 2024-10-22 23:06:06 +02:00
Goekdeniz-Guelmez
9ab581d678 notes 2024-10-22 22:10:53 +02:00
Goekdeniz-Guelmez
e43a2ab229 not working; probably incorrect cache handling 2024-10-22 22:04:25 +02:00
Goekdeniz-Guelmez
55485b98e8 update 2024-10-22 21:23:47 +02:00
Goekdeniz-Guelmez
758597eaa8 adding multi-token input and correct cache handling in the ssm step 2024-10-22 20:44:23 +02:00
Awni Hannun
66e7bcb886
override dtype with quant (#1062) 2024-10-22 09:56:45 -07:00
Goekdeniz-Guelmez
b9c57cd429 generation works! trying training now 2024-10-22 18:25:59 +02:00
Goekdeniz-Guelmez
c1634ce81b still generating gibberish 2024-10-20 18:41:28 +02:00
Goekdeniz-Guelmez
ab4cf1d1cf generation works but outputs gibberish 2024-10-20 18:04:34 +02:00
Goekdeniz-Guelmez
4ab5139c05 quick save 2024-10-20 16:11:39 +02:00
Goekdeniz-Guelmez
cd036ccfb5 fix: generation works too (almost) 2024-10-16 21:13:36 +02:00
Goekdeniz-Guelmez
181d6abedc Merge branch 'adding-support-for-mamba2' of https://github.com/Goekdeniz-Guelmez/mlx-examples into adding-support-for-mamba2 2024-10-16 21:09:42 +02:00
Goekdeniz-Guelmez
8073cb486c adding debug statements (somehow generation only goes through the first MambaMixer block pass) 2024-10-16 21:09:30 +02:00
Gökdeniz Gülmez
855fcc4327
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2024-10-16 18:57:55 +02:00
Awni Hannun
8dca1a2f60
Tokenizer updates + tests (#1024)
* tokenizer updates + tests

* nit

* add can_trim_prompt_cache

* nits
2024-10-14 10:48:46 -07:00
Gökdeniz Gülmez
3f1c1dde6a
Merge branch 'ml-explore:main' into adding-support-for-mamba2 2024-10-14 16:32:00 +02:00
Shunta Saito
7612c646f3
Fix PLaMo model to support Grouped Query Attention (#1037) 2024-10-12 15:26:50 -07:00
Goekdeniz-Guelmez
00ba27fe6c adding debug statements 2024-10-11 21:36:41 +02:00
Goekdeniz-Guelmez
6f88dd59d7 quick clean up and fix 2024-10-11 21:08:13 +02:00
Goekdeniz-Guelmez
4e1236cbf6 fixing loading the model 2024-10-11 20:53:29 +02:00
Awni Hannun
fca087be49
More cache improvements (#1015)
* fix rotating kv cache for chat use case

* reorg + fixes to caching; unify prompt caching across types and use cases, e.g. caching during a chat

* nit in chat

* fix tests

* fix tests

* fix tests

* docs

* chat command

* comments + docs

* Define meta_state on all Cache implementations

* fixes + trim_prompt_cache api

* fix default model

---------

Co-authored-by: Angelos Katharopoulos <a_katharopoulos@apple.com>
2024-10-07 20:45:51 -07:00
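
A hedged sketch of the unified prompt-cache flow this PR describes: reuse one cache across chat turns and trim it via the `trim_prompt_cache` API (with `can_trim_prompt_cache` from #1024 above). The helper locations and exact signatures are assumptions, not confirmed by this log.

```python
# Hedged sketch; helper names and signatures assumed from the PR summaries.
from mlx_lm import load, generate
from mlx_lm.models.cache import (
    make_prompt_cache,
    save_prompt_cache,
    can_trim_prompt_cache,
    trim_prompt_cache,
)

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Reuse one cache object across chat turns instead of re-processing the
# whole conversation on every request.
cache = make_prompt_cache(model)
for turn in ["Hello!", "Now summarize our chat."]:
    print(generate(model, tokenizer, prompt=turn, prompt_cache=cache))

# Persist the cache, or drop tokens from its end where the cache type allows.
save_prompt_cache("chat_cache.safetensors", cache)
if can_trim_prompt_cache(cache):
    trim_prompt_cache(cache, num_tokens=16)
```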
Goekdeniz-Guelmez
264ba43707 updated trainer/lora.py and added DepthWiseConv1d because mlx 0.18.0 doesn't accept a groups parameter 2024-10-02 19:19:32 +02:00
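
A minimal sketch of the workaround named in this commit: a depthwise, causal 1-D convolution written without a `groups` argument, for MLX versions whose `Conv1d` lacks one. The class name follows the commit; the body is illustrative, not the repository's exact code.

```python
import mlx.core as mx
import mlx.nn as nn

class DepthWiseConv1d(nn.Module):
    """Causal depthwise conv1d that avoids the `groups` parameter."""

    def __init__(self, channels: int, kernel_size: int, bias: bool = True):
        super().__init__()
        self.kernel_size = kernel_size
        # One independent (kernel_size,) filter per channel.
        self.weight = mx.random.normal((channels, kernel_size))
        self.bias = mx.zeros((channels,)) if bias else None

    def __call__(self, x: mx.array) -> mx.array:
        # x: (batch, length, channels)
        L = x.shape[1]
        K = self.kernel_size
        # Left-pad so position t only sees inputs <= t (causal).
        x = mx.pad(x, [(0, 0), (K - 1, 0), (0, 0)])
        # Depthwise = each channel convolved with its own filter: sum K
        # shifted slices, each weighted by that channel's k-th tap.
        y = sum(x[:, k : k + L, :] * self.weight[:, k] for k in range(K))
        if self.bias is not None:
            y = y + self.bias
        return y
```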
Gökdeniz Gülmez
49b9fc1a4c
Create mamba2.py 2024-10-02 12:48:15 +02:00
Gökdeniz Gülmez
76710f61af
Adding support for mamba (#940)
* initial commit

* initial commit

* Adding first lines

* adding x, and dt projection layers

* adding the clamping mechanism

* First successful inference

* last commit for today - added a custom generate function and it works as expected; will try training and then loading a model from the hub

* clean up

* save up

* almost

* update

* update

* fixed cache handling

* fixed loading

* added a separate generate_step method in the model and also in the utils to automatically use the model's generate_step method

* quick update

* still not working

* save

* still not working

* initial commit

* utils.py logits = logits[:, -1, :] TypeError: tuple indices must be integers or slices, not tuple

* update

* update

* Fixing the batching depthwise convolution and multi-token input

* fixing generate and logits outputs

* Done!

* Fixing the cache handling, generating works now trying training

* update ACKNOWLEDGEMENTS

* removing the model_type if-handling in the _step loop in generate_step, adding MambaCache in base.py for easier training and generation (see the sketch after this entry), and removing mamba from tuner/utils.

* quick clean up

* update trainer/utils for correct initialisation of the LoRA layers, but not working yet.

* clean up

* Further update to trainer/utils for correct layer selection. Successful training.

* removing extra mamba-infer.py file

* clean up, reformatting will come later

* reformat and big clean up, final commit

* some speedups and cleanups

* fix test

* nits

* nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-09-28 07:02:53 -07:00
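
The "adding MambaCache" step above reflects that a Mamba layer caches only two fixed-size states (the conv window and the SSM hidden state), so memory does not grow with sequence length the way an attention KV cache does. A minimal list-backed container, assuming the shape mlx_lm's cache takes:

```python
class MambaCache:
    """Per-layer recurrent state: [conv_state, ssm_state], both fixed-size."""

    def __init__(self):
        self.cache = [None, None]

    def __getitem__(self, idx):
        return self.cache[idx]

    def __setitem__(self, idx, value):
        self.cache[idx] = value

    @property
    def state(self):
        return self.cache
```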
L
fc93c55723
feat(mlx_lm): Nemotron (#949)
* feat: Nemotron

https://huggingface.co/nvidia/Minitron-4B-Base

This is basically Llama with partial RoPE and LayerNorm instead of
RMSNorm. Also they add 1 to the LayerNorm weight for some reason (see
the sketch after this entry).

* fixup! feat: Nemotron

* nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-08-29 21:08:57 -07:00
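
A hedged sketch of the "+1 to the LayerNorm weight" detail noted in this entry: the checkpoint stores the scale offset by -1, so the module adds it back at call time. `LayerNorm1P` is an illustrative name, not necessarily the merged implementation.

```python
import mlx.core as mx
import mlx.nn as nn

class LayerNorm1P(nn.LayerNorm):
    """LayerNorm whose stored scale is offset by -1 (Nemotron convention)."""

    def __call__(self, x: mx.array) -> mx.array:
        # Standard layer norm, but scale by (1 + weight) instead of weight.
        return mx.fast.layer_norm(x, 1.0 + self.weight, self.bias, self.eps)
```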
Angelos Katharopoulos
1003a8b2dd
Add the ability to load the KV cache from a file (#956) 2024-08-28 22:11:45 -07:00
Prince Canuma
b5e18ef1e3
Add Phi-3.5-MoE (#946)
* add phimoe

* add phimoe to tuner

* add switch_mlp

* fix SuScaled args

* nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-08-24 06:52:33 -07:00
Awni Hannun
6731254e76
Use fast rope (#945)
* use fast rope

* fix llama

* use fast rope for llama3.1

* requires unreleased mlx

* fix su

* fix deepseek v2

* only one of base or freqs

* nit

* fix

* hard code freqs
2024-08-23 13:18:51 -07:00
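
A hedged sketch of the fused call this PR switches to; per the "only one of base or freqs" note, `mx.fast.rope` takes either a `base` or explicit `freqs`, and the exact keyword set depends on the mlx version.

```python
import mlx.core as mx

x = mx.random.normal((1, 8, 16, 64))  # (batch, heads, seq_len, head_dim)
y = mx.fast.rope(
    x,
    64,                 # number of feature dimensions to rotate
    traditional=False,  # rotated-halves layout rather than interleaved pairs
    base=10000.0,       # mutually exclusive with explicit freqs=...
    scale=1.0,
    offset=0,           # position offset, e.g. the current KV-cache length
)
print(y.shape)
```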
L
0164d2058b
feat: DeepSeek MoE v1 (#942)
* feat: deepseek v1

DeepSeek is still releasing models on the DeepSeek V1 architecture.

```sh
mlx_lm.convert --hf-path deepseek-ai/DeepSeek-Prover-V1.5-RL --mlx-path DeepSeek-Prover-V1.5-RL-8bit --q-bits 8 -q
mlx_lm.generate --model DeepSeek-Prover-V1.5-RL-8bit --ignore-chat-template --max-tokens 512 --prompt 'import Mathlib
import Aesop

set_option maxHeartbeats 0

open BigOperators Real Nat Topology Rat

/-- The second and fourth terms of a geometric sequence are $2$ and $6$. Which of the following is a possible first term?
Show that it is $\frac{2\sqrt{3}}{3}$.-/
theorem amc12b_2003_p6 (a r : ℝ) (u : ℕ → ℝ) (h₀ : ∀ k, u k = a * r ^ k) (h₁ : u 1 = 2)
  (h₂ : u 3 = 6) : u 0 = 2 / Real.sqrt 3 ∨ u 0 = -(2 / Real.sqrt 3) := by'
```

* nits

* nits

* nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-08-17 07:18:09 -07:00
Awni Hannun
7be292c0c9
Handle longer prompt/generation (#931)
* rebase

* nits

* nit

* fix rotating cache with step prefill

* update version
2024-08-16 15:28:39 -07:00
otriscon
46da74fea2
Unify attention mask in LLMs (#911)
* Unify attention mask creation in LLMs.

Currently, each model implementation in `mlx-examples/llms/models` has ad-hoc
code to create a mask for the attention mechanism. This usually takes the form:

```
    mask = None
    if h.shape[1] > 1:
        mask = nn.MultiHeadAttention.create_additive_causal_mask(h.shape[1])
        mask = mask.astype(h.dtype)
```

This correctly creates a mask only if the input consists of more than one token.
But this code assumes the multi-token input is at the beginning of inference.
If, for example, we are evaluating multiple tokens because of speculative
decoding or prompt cache reuse, this mask will not have the correct shape
and will cause an exception in the attention computation.

Some of the models correctly implement the mask creation with code like this:

```
    mask = None
    if h.shape[1] > 1:
        mask = create_additive_causal_mask(
            h.shape[1], cache[0].offset if cache is not None else 0
        )
        mask = mask.astype(h.dtype)
```

This commit unifies the attention mask creation for all models with a new
function `create_attention_mask`, reducing code duplication and helping all
models support inference performance enhancements like those mentioned above
(a sketch of such a helper follows this entry).

* Allow batches in LLM key-value cache

The current implementation of the LLM key-value cache assumes that
the input batch is of size 1. Input batching (evaluating multiple
alternative inputs at the same time) can be a valuable tool for
speculative sampling and other techniques.

This change removes the hard-coded batch size from the code that
resizes the key-value cache.

* Simplify causal mask creation

Use the same codepath regardless of whether there's an offset or
not. Addresses [this comment](https://github.com/ml-explore/mlx-examples/pull/911#discussion_r1691459717).

* Use old-style type annotation to avoid linter error
2024-07-25 16:45:22 -07:00
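
A hedged sketch of what the unified helper can look like, following the second pattern quoted above; illustrative, not necessarily the merged `create_attention_mask`.

```python
import mlx.core as mx

def create_additive_causal_mask(N: int, offset: int = 0) -> mx.array:
    # Additive causal mask, shifted right by the tokens already in the cache.
    rinds = mx.arange(offset + N)
    linds = mx.arange(offset, offset + N) if offset else rinds
    return (linds[:, None] < rinds[None]) * -1e9

def create_attention_mask(h: mx.array, cache=None):
    T = h.shape[1]
    if T <= 1:
        return None  # single-token decode needs no mask
    offset = cache[0].offset if cache is not None else 0
    return create_additive_causal_mask(T, offset).astype(h.dtype)
```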
Alex Cheema
cd8efc7fbc
Add support for Llama-3.1 (#907)
* add dynamicNTK scaling rope

* remove unused var

* fix rope base

* llama3.1 fixes

* TODO for rope eval

* vectorise llama3 base freq calculation (a scalar sketch follows this entry)

* removed the arbitrary 2.0 rope_scale default case

* fix slow llama3.1 generation by evaluating stateless part of DynamicNTKScalingRoPE in init

* nits + format

* use mx.pi

* fix tests and add test for 3.1

---------

Co-authored-by: Prince Canuma <prince.gdt@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2024-07-23 13:21:32 -07:00
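
The "vectorise llama3 base freq calculation" entry refers to Llama 3.1's wavelength-dependent RoPE scaling. A scalar-loop sketch of that rule, with defaults taken from the Llama 3.1 config; the PR's version is the vectorised equivalent.

```python
import math

def llama3_scaled_freqs(
    head_dim: int,
    base: float = 500000.0,
    factor: float = 8.0,
    low_freq_factor: float = 1.0,
    high_freq_factor: float = 4.0,
    old_context_len: int = 8192,
) -> list:
    # Per-dimension inverse frequencies, then scale by wavelength band.
    freqs = [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]
    low_wavelen = old_context_len / low_freq_factor
    high_wavelen = old_context_len / high_freq_factor
    out = []
    for f in freqs:
        wavelen = 2 * math.pi / f
        if wavelen < high_wavelen:      # high-frequency dims: unchanged
            out.append(f)
        elif wavelen > low_wavelen:     # low-frequency dims: slow down by factor
            out.append(f / factor)
        else:                           # mid band: smooth interpolation
            smooth = (old_context_len / wavelen - low_freq_factor) / (
                high_freq_factor - low_freq_factor
            )
            out.append((1 - smooth) * f / factor + smooth * f)
    return out
```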
Prince Canuma
3f337e0f0a
Add Mistral NeMo (fix) (#895)
* fix head_dim

* Update llms/mlx_lm/models/llama.py

* fix kv error

* formatting

* Delete test.py

---------

Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2024-07-22 06:09:24 -07:00
Prince Canuma
3d365b612a
Add support for InternLM-2.5 (#871)
* fix internlm-2

* formatting

* add dynamic ntk rope

* formatting

* move dynamic scaling rope to internlm2.py

* add default max_position_embeddings
2024-07-17 16:38:22 -07:00
Anchen
561dcf5643
Add support for deepseek coder v2 lite (#882)
* feat: add support for deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct

* fix softmax + some cleanup

* more nits

* fix rope

* fix original_max_position_embeddings in rope

* fix original_max_position_embeddings in rope config

* add group greedy

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-07-17 07:23:28 -07:00
JosefAlbers
bfc1f2763b
longrope (#886) 2024-07-12 07:19:11 -07:00
nicolov
fbe3247772
Add GPT-neox model (#863) 2024-07-11 06:13:17 -07:00
Awni Hannun
68e88d42fb
Fix server for openai package (#877)
* fix

* fixes for 9b
2024-07-08 12:34:31 -07:00