
Maixtchup: Make Your Own Mixture of Experts with Mergekit



The rise of the MoEs

Image by the author — Generated with DALL-E

Since the release of Mixtral-8x7B by Mistral AI, there has been renewed interest in mixture of experts (MoE) models. This architecture exploits expert sub-networks, among which only a few are selected and activated by a router network during inference.
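To make the idea concrete, here is a minimal, self-contained PyTorch sketch of an MoE layer with a top-k router. It is only an illustration under my own simplifications (the class name, dimensions, and loop-based dispatch are not Mixtral's actual implementation): the router scores all experts for each token, keeps the top-k, and only those experts are run.

import torch
import torch.nn as nn

# Simplified MoE layer: a router picks the top-k experts per token.
# Names and dimensions are illustrative, not Mixtral's real code.
class SimpleMoE(nn.Module):
    def __init__(self, hidden_size=512, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router: one score per expert for each token
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        # Expert sub-networks: small feed-forward blocks
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.SiLU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, hidden_size)
        scores = self.router(x)            # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)  # normalize the kept scores
        out = torch.zeros_like(x)
        for token in range(x.size(0)):
            for w, idx in zip(weights[token], indices[token]):
                # Only the selected experts are run for this token
                out[token] += w * self.experts[int(idx)](x[token])
        return out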

MoEs are so simple and flexible that it is easy to make a custom MoE. On the Hugging Face Hub, we can now find several trending LLMs that are custom MoEs, such as mlabonne/phixtral-4x2_8.

However, most of them are not traditional MoEs made from scratch; they simply use a combination of already fine-tuned LLMs as experts. Their creation was made easy with mergekit (LGPL-3.0 license). For instance, the Phixtral LLMs were made with mergekit by combining several Phi-2 models.

In this article, we will see how Phixtral was created. We will then apply the same process to create our own mixture of experts, Maixtchup, using several Mistral 7B models.

To quickly understand the high-level architecture of a model, I like to print it. For instance, for mlabonne/phixtral-4x2_8 (MIT license):

from transformers import AutoModelForCausalLM

# Load the model in 4-bit precision to inspect its architecture
model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/phixtral-4x2_8",
    torch_dtype="auto",
    load_in_4bit=True,
    trust_remote_code=True
)
print(model)

It prints:

PhiForCausalLM(
  (transformer): PhiModel(
    (embd): Embedding(
      (wte): Embedding(51200, 2560)
      (drop): Dropout(p=0.0, inplace=False)
    )
    (h): ModuleList(
      (0-31): 32 x ParallelBlock(
        (ln): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
        (resid_dropout): Dropout(p=0.1, inplace=False)
        (mixer): MHA(
          (rotary_emb): RotaryEmbedding()
          (Wqkv): Linear4bit(in_features=2560, out_features=7680, bias=True)
          (out_proj): Linear4bit(in_features=2560, out_features=2560, bias=True)
          (inner_attn): SelfAttention(
            (drop): Dropout(p=0.0, inplace=False)
          )
          (inner_cross_attn): CrossAttention(
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (moe): MoE(
          (mlp): ModuleList(…
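From this print, we can see that each of the 32 layers contains an MoE module holding the expert MLPs. To check how many experts the model defines and how many are activated per token, we can also look at its configuration. The sketch below assumes Mixtral-style field names (num_local_experts and num_experts_per_tok); this is an assumption, and custom modeling code may use different names.

# Print MoE-related settings, if present in the configuration.
# The field names are assumptions (Mixtral-style); they may differ
# for custom architectures loaded with trust_remote_code.
for field in ("num_local_experts", "num_experts_per_tok"):
    print(field, "=", getattr(model.config, field, "not found"))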
