LlaMA 3 Series Blog

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (1)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (2)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (3)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (4)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (5)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (6)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (7)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (8)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (9)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (10)

Building Secure GenAI/LLMs, Core Techniques Revealed: Adversarial Attacks on Large Models (1)

Building Secure GenAI/LLMs, Core Techniques Revealed: Adversarial Attacks on Large Models (2)

Building Secure GenAI/LLMs, Core Techniques Revealed: Adversarial Attacks on Large Models (3)

Building Secure GenAI/LLMs, Core Techniques Revealed: Adversarial Attacks on Large Models (4)

Building Secure GenAI/LLMs, Core Techniques Revealed: Adversarial Attacks on Large Models (5)

Hello, GPT-4o!

Large Model Tokenizers: Tokenizer Visualization (GPT-4o)

Large Model Tokenizers: The Byte Pair Encoding (BPE) Algorithm Explained with Examples

Large Model Tokenizers: Byte Pair Encoding (BPE) Source Code Analysis

Large Models: The Self-Attention Mechanism (1)

Large Models: The Self-Attention Mechanism (2)

Large Models: The Self-Attention Mechanism (3)

Deploying Large Models Locally on Windows with LlaMA 3 + LangGraph (11)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Llama (1)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Llama (2)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Llama (3)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Llama (4)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Llama (5)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (1)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (2)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (3)

Large Models: A Deep Dive into Transformer Positional Embedding

Large Models: A Deep Dive into Transformer Layer Normalization (1)

Large Models: A Deep Dive into Transformer Layer Normalization (2)

Large Models: A Deep Dive into Transformer Layer Normalization (3)

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (1) A Starting Point for Beginners

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (2) A Walkthrough of Matrix Operations

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (3) Initializing an Embedding Layer

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (4) Precomputing the RoPE Frequencies

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (5) Precomputing the Causal Mask

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (6) The First Normalization: Root Mean Square Normalization (RMSNorm)

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (7) Initializing Multi-Query Attention

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (8) Rotary Position Embedding

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (9) Computing Self-Attention

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (10) Residual Connections and the SwiGLU FFN

Large Models: Writing Meta's Llama 3 in PyTorch Step by Step (11) The Output Probability Distribution and Loss Computation

Large Models: Writing Working Llama 3 Code in PyTorch (1) Loading a Simplified Tokenizer and Setting Parameters

Large Models: Writing Working Llama 3 Code in PyTorch (2) RoPE and the Attention Mechanism

Large Models: Writing Working Llama 3 Code in PyTorch (3) FeedForward and Residual Layers

Large Models: Writing Working Llama 3 Code in PyTorch (4) Building the Llama3 Model Class Itself

Large Models: Writing Working Llama 3 Code in PyTorch (5) Training and Testing Your Own minLlama3

Large Models: Writing Working Llama 3 Code in PyTorch (6) Loading the Trained miniLlama3 Model

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (4)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (5)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (6)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (7)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Protecting Large Model Conversations with Llama Guard (8)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: CyberSecEval 2, a Benchmark for Quantifying LLM Safety and Capability (1)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: CyberSecEval 2, a Benchmark for Quantifying LLM Safety and Capability (2)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: CyberSecEval 2, a Benchmark for Quantifying LLM Safety and Capability (3)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: CyberSecEval 2, a Benchmark for Quantifying LLM Safety and Capability (4)

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Shield (1) An Introduction to Code Shield

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Shield (2) Preventing LLMs from Generating Insecure Code

The Llama 3 Model Family for Building Safe and Trustworthy Enterprise AI Applications: Code Shield (3) Code Shield Code Examples

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (1) An Introduction to LLaMA-Factory

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (2) LLaMA-Factory Training Methods and Datasets

Large Models: Ollama, Unleashing the Power of Large Language Models on Your Local Machine

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (3) Fine-Tuning via the Web UI

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (4) Fine-Tuning from the Command Line

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (5) Running Inference with the Trained Model

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (6) Merging LoRA Weights into the Trained Llama 3 Model

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (7) Practical Tips for Fine-Tuning LLMs with LoRA

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (8) Practical Tips for Fine-Tuning LLMs with LoRA

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (9) FAQ on Fine-Tuning with LoRA

The Llama Model Family: Fine-Tuning the Pretrained Llama 3 Language Model with Supervised Fine-Tuning (SFT) (10) FAQ on Fine-Tuning with LoRA

The Llama Model Family: Training a Reward Model, Techniques and Hands-On Code (1) Introduction

The Llama Model Family: Training a Reward Model, Techniques and Hands-On Code (2) Building a Comparison Dataset from User Feedback

The Llama Model Family: Training a Reward Model, Techniques and Hands-On Code (3) Training a Reward Model with TRL

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (1) An Introduction to RLHF

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (2) Comparing RLHF and RLAIF

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (3) How RLAIF Works

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (4) The Advantages of RLAIF

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (5) The Challenges of RLAIF

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (6) RLAIF Hands-On Code

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (7) RLAIF Hands-On Code

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (8) RLAIF Hands-On Code

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (9) RLAIF Hands-On Code

The Llama Model Family: RLAIF, Reinforcement Learning from AI Feedback (10) RLAIF Hands-On Code

#@title Training loop
import textwrap

import torch
import torch.nn.functional as F
from torch import optim
from tqdm.auto import tqdm

# NOTE: the objects used below but not defined in this cell (model, tokenizer,
# eval_model, eval_tokenizer, train_prompts, eval_prompts, weights, signs,
# n_tokens, bs, save_every, kl_weight, tau, the `dice` estimator module, and the
# helpers make_get_scores, make_evaluator_prompts, soft_minimum, kl_div_est,
# inv_cumsum, gradient_norm, endless_range) come from earlier notebook cells.

device = torch.device("cuda:0")
output_path = "model"

train_inputs = tokenizer(train_prompts, return_tensors="pt", padding=True).to(device)
eval_inputs = tokenizer(eval_prompts, return_tensors="pt", padding=True).to(device)
input_n, input_len = train_inputs.input_ids.shape
get_scores = make_get_scores(eval_tokenizer, "<|end|>")
weights_ = torch.tensor(weights, device=device)[None]
signs_ = torch.tensor(signs, device=device)[None]

opt = optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.98), weight_decay=1e-2)
baseline = dice.EMABaseline(decay=0.98).to(device)
baseline_kl = dice.EMABaseline(decay=0.98).to(device)


for i in tqdm(endless_range()):
    # Demo generations
    if i % 50 == 0:
        outputs = model.generate(
            eval_inputs.input_ids,
            attention_mask=eval_inputs.attention_mask,
            do_sample=True,
            min_new_tokens=n_tokens,
            max_new_tokens=n_tokens,
            pad_token_id=tokenizer.eos_token_id,
            top_k=0,
        )
        texts = [tokenizer.decode(toks, skip_special_tokens=True) for toks in outputs]
        print("======")
        print("\n===\n".join(textwrap.fill(text, width=80) for text in texts))
        print("======")

    # Save model
    if i > 0 and i % save_every == 0:
        print("Saving model...")
        tokenizer.save_pretrained(output_path)
        model.save_pretrained(output_path, safe_serialization=True)

    # Sample from training prompts
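    # Draw a random batch of bs prompts (with replacement); min/max_new_tokens pin the
    # completion length to exactly n_tokens, and top_k=0 disables top-k filtering so
    # sampling uses the full distribution.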
    indices = torch.randint(0, input_n, [bs], device=device)
    tokens = model.generate(
        train_inputs.input_ids[indices],
        attention_mask=train_inputs.attention_mask[indices],
        do_sample=True,
        min_new_tokens=n_tokens,
        max_new_tokens=n_tokens,
        pad_token_id=tokenizer.eos_token_id,
        top_k=0,
    )

    # Get logits with grad for backprop
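    # Re-run the model on prompt + completion so the log-probs of the sampled tokens
    # carry gradients; the attention mask extends the prompt mask with ones over the
    # generated positions.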
    attention_mask = torch.cat(
        [train_inputs.attention_mask[indices], torch.ones_like(tokens[:, input_len:])], dim=1
    )
    outputs = model(tokens, attention_mask=attention_mask)

    # Create stochastic nodes
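    # logits[:, input_len - 1 : -1] score exactly the generated tokens tokens[:, input_len:]
    # (the logit at position t predicts token t + 1); logp_sum is the sequence log-prob
    # and logp_cumsum its per-token running sum.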
    logp = dice.logp_categorical(outputs.logits[:, input_len - 1 : -1], tokens[:, input_len:])
    logp_sum = torch.sum(logp, dim=1)
    logp_cumsum = torch.cumsum(logp, dim=1)

    # Get original model logits and compute KL penalties
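    # disable_adapter() assumes the policy is a PEFT/LoRA-wrapped model: with the adapter
    # switched off, the frozen base weights act as the reference policy. kl_div_est and
    # inv_cumsum (from earlier cells) presumably turn the gap between the two cumulative
    # log-probs into a per-token KL estimate.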
    with torch.no_grad(), model.disable_adapter():
        outputs_orig = model(tokens, attention_mask=attention_mask)
    logp_orig = dice.logp_categorical(outputs_orig.logits[:, input_len - 1 : -1], tokens[:, input_len:])
    logp_orig_cumsum = torch.cumsum(logp_orig, dim=1)
    kls = inv_cumsum(kl_div_est(logp_cumsum.detach(), logp_orig_cumsum.detach()))

    # Compute rewards using evaluator model
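    # make_evaluator_prompts (defined earlier) presumably produces one list of prompts per
    # scoring criterion, so inputs_all holds one tokenized batch per criterion.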
    texts = [tokenizer.decode(t, skip_special_tokens=True) for t in tokens]
    prompts_all = make_evaluator_prompts(texts)
    inputs_all = [
        eval_tokenizer(prompts, return_tensors="pt", padding=True).to(device)
        for prompts in prompts_all
    ]
    with torch.no_grad():
        outputs_all = [
            eval_model(inputs.input_ids, attention_mask=inputs.attention_mask)
            for inputs in inputs_all
        ]
    scores = torch.stack([get_scores(outputs.logits) for outputs in outputs_all], dim=1)
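    # signs_ presumably flips criteria where a lower raw score is better, and soft_minimum
    # (defined earlier) collapses the per-criterion scores into a single reward, weighted
    # by weights_ and smoothed by the temperature tau.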
    scores = soft_minimum(scores * signs_, weights_, tau=tau, dim=1)

    # Create cost nodes and baselines, then backprop
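    # dice.cost_node attaches a score-function (REINFORCE-style) gradient estimator to
    # each loss through the given log-prob nodes, and the EMA baselines add a
    # variance-reduction term that is zero in the forward pass (see the sketch after the
    # step-by-step notes below).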
    losses_main = -F.logsigmoid(scores)
    losses_main = dice.cost_node(losses_main, [logp_sum])
    losses_main += baseline(losses_main, [logp_sum])
    losses_kl = kls * kl_weight
    losses_kl = dice.cost_node(losses_kl, [logp_cumsum])
    losses_kl += baseline_kl(losses_kl, [logp_cumsum])
    loss_main = losses_main.mean()
    loss_kl = losses_kl.mean()
    loss = loss_main + loss_kl
    loss.backward()

    # Print metrics
    grad_norm = gradient_norm(model.parameters())
    print(f"step: {i}, loss: {loss.item():g}, main: {loss_main.item():g}, kl: {loss_kl.item():g}, grad norm: {grad_norm.item():g}")

    # Take an optimizer step
    opt.step()
    opt.zero_grad()

This code is a training loop for fine-tuning a pretrained language model.

  1. Initialize the device and output path:
    • device is set to the CUDA device so the model runs on the GPU.
    • output_path specifies where the model will be saved.
  2. Prepare the training and evaluation inputs:
    • train_inputs and eval_inputs are the tokenized training and evaluation prompts, converted to PyTorch tensors and moved to the GPU.
    • input_n and input_len are the number of training prompts and their (padded) token length.
  3. Define the scoring function:
    • get_scores extracts scores from the evaluator model's outputs.
  4. Set up the weights and signs:
    • weights_ and signs_ are PyTorch tensors built from the previously defined weights and signs, with a batch dimension added.
  5. Define the optimizer:
    • opt is an AdamW optimizer used to update the model parameters.
  6. Define the baselines:
    • baseline and baseline_kl are exponential-moving-average baselines used in estimating the reward and KL losses.
  7. Training loop:
    • tqdm(endless_range()) creates an open-ended loop over training steps.
  8. Generate demo outputs:
    • Every 50 steps, sample completions from the model and print them.
  9. Save the model:
    • Every save_every steps, save the model and the tokenizer.
  10. Sample from the training prompts:
    • Randomly sample a batch of training prompts and generate new text with the model.
  11. Get gradients:
    • Recompute the log-probability tensor logp of the generated tokens with gradients enabled, for backpropagation.
  12. Compute the KL penalty:
    • Use the original (adapter-disabled) model's outputs to compute a KL-divergence penalty.
  13. Compute rewards with the evaluator model:
    • Turn the generated texts into evaluator prompts and score them with the evaluator model.
  14. Create cost nodes and baselines:
    • Use the dice library to build cost nodes and baselines for the loss estimate (see the sketch after this list).
  15. Backpropagate:
    • Combine the main loss and the KL loss and run backpropagation.
  16. Print training metrics:
    • Print the current step, total loss, main loss, KL loss, and gradient norm.
  17. Optimizer step:
    • Update the model parameters and zero the gradients.
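
The dice helpers used in the loop (dice.logp_categorical, dice.cost_node, dice.EMABaseline) are defined in earlier cells of the notebook and are not shown here. As a rough intuition for what they do, below is a minimal, illustrative sketch of a DiCE-style score-function estimator with an exponential-moving-average baseline; the names mirror the ones used above, but the bodies are assumptions for explanation only, not the library's actual code:

import torch
import torch.nn as nn

def magic_box(logp):
    # Evaluates to 1 in the forward pass, but its gradient is d(logp)/d(theta),
    # so multiplying a cost by it injects the REINFORCE/score-function term.
    return torch.exp(logp - logp.detach())

def cost_node(cost, logp_nodes):
    # Forward value equals `cost`; the backward pass additionally propagates
    # cost * d(logp), which is what makes the sampled tokens trainable.
    return magic_box(sum(logp_nodes)) * cost

class EMABaseline(nn.Module):
    # Tracks an exponential moving average of the cost and subtracts it in the
    # gradient only: the term returned here is zero in the forward pass.
    def __init__(self, decay=0.98):
        super().__init__()
        self.decay = decay
        self.register_buffer("avg", torch.tensor(0.0))
        self.register_buffer("weight", torch.tensor(0.0))

    def forward(self, cost, logp_nodes):
        with torch.no_grad():
            self.avg.mul_(self.decay).add_((1 - self.decay) * cost.mean())
            self.weight.mul_(self.decay).add_(1 - self.decay)
        baseline = self.avg / self.weight.clamp(min=1e-8)
        return (1 - magic_box(sum(logp_nodes))) * baseline

With this kind of machinery, losses_main and losses_kl keep their ordinary values in the forward pass, while loss.backward() receives (cost - baseline) * d(logp) style gradients, even though sampling the tokens themselves is not differentiable.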

The training run produces output like the following:

 3064/? [3:27:47<00:00,  4.02s/it]
======
My cat is so cute, but I can't understand why he doubts my promise even after
the Gana snacks were discover..."  Mr Afghan, the dog keeps his seat while the
princess holds the purple flannel blanket inside his dug-hole to be wrung out.
===
I was watching TV, and I saw that code, and probably greatly felt that—XCOM. I
watched offline and was thinking, _If I can get Chilling Spider, then I can get
more of this sort of stuff._  As I searched for my
===
She looked in the mirror and thought of my sadness and my hurt, and from
entrance to entry said, Do you see you yet? That this is what you understand me!
Dilma, don't! You are a monster women!"  We all thought about Dil
===
Alice said, "I really don't think you hold it in this problem. Absent the
designation experienced, the problem is just scaffolding. I don't see it
happening anywhere else. If it does it's going to work."  "Starting today
======
step: 0, loss: 1.09537, main: 1.09537, kl: 0, grad norm: 33.7657
step: 1, loss: 0.989328, main: 0.989175, kl: 0.000152687, grad norm: 5.13796
step: 2, loss: 1.14237, main: 1.14203, kl: 0.000339696, grad norm: 11.5114
step: 3, loss: 1.07679, main: 1.0753, kl: 0.00148863, grad norm: 7.3139
step: 4, loss: 1.16008, main: 1.15902, kl: 0.00105876, grad norm: 6.18034
step: 5, loss: 1.2365, main: 1.2357, kl: 0.000793722, grad norm: 5.44903
step: 6, loss: 1.20892, main: 1.20655, kl: 0.0023713, grad norm: 6.81239
step: 7, loss: 1.18501, main: 1.18358, kl: 0.00143559, grad norm: 5.68187
step: 8, loss: 1.22412, main: 1.22157, kl: 0.00254924, grad norm: 10.8333
step: 9, loss: 1.05095, main: 1.0497, kl: 0.00125362, grad norm: 9.02317
step: 10, loss: 1.16918, main: 1.16418, kl: 0.00500905, grad norm: 8.50745
step: 11, loss: 1.28675, main: 1.28354, kl: 0.00321682, grad norm: 9.86535
step: 12, loss: 1.20592, main: 1.20048, kl: 0.00544286, grad norm: 5.46688
step: 13, loss: 1.29279, main: 1.29048, kl: 0.00231097, grad norm: 6.64672
step: 14, loss: 1.16252, main: 1.15917, kl: 0.00334805, grad norm: 2.37802
step: 15, loss: 1.01592, main: 1.01077, kl: 0.00515233, grad norm: 6.71872
step: 16, loss: 1.06583, main: 1.05777, kl: 0.00806694, grad norm: 6.65016
step: 17, loss: 1.16201, main: 1.1587, kl: 0.00331256, grad norm: 7.76089
step: 18, loss: 1.1889, main: 1.18206, kl: 0.00684147, grad norm: 6.14543
step: 19, loss: 1.15531, main: 1.14765, kl: 0.00766195, grad norm: 6.59714
step: 20, loss: 1.13966, main: 1.13496, kl: 0.00470622, grad norm: 7.80024
step: 21, loss: 1.11734, main: 1.11207, kl: 0.0052678, grad norm: 13.7564
step: 22, loss: 1.16113, main: 1.1522, kl: 0.00892985, grad norm: 5.97659
step: 23, loss: 1.08774, main: 1.07839, kl: 0.0093445, grad norm: 8.52217
step: 24, loss: 1.1203, main: 1.103, kl: 0.0172984, grad norm: 6.8546
step: 25, loss: 1.00765, main: 1.00069, kl: 0.00696144, grad norm: 5.85086
step: 26, loss: 1.14193, main: 1.13224, kl: 0.00969473, grad norm: 3.5369
step: 27, loss: 1.17896, main: 1.16551, kl: 0.0134572, grad norm: 5.53493
step: 28, loss: 1.16196, main: 1.15263, kl: 0.00932945, grad norm: 4.40208
step: 29, loss: 1.19255, main: 1.17811, kl: 0.0144385, grad norm: 8.2584
step: 30, loss: 1.01061, main: 0.995108, kl: 0.0155061, grad norm: 6.15403
step: 31, loss: 1.12099, main: 1.11739, kl: 0.00359966, grad norm: 6.7864
step: 32, loss: 1.032, main: 1.01723, kl: 0.0147623, grad norm: 5.86993
step: 33, loss: 1.10456, main: 1.08967, kl: 0.0148955, grad norm: 4.59239
step: 34, loss: 1.0765, main: 1.0556, kl: 0.0209011, grad norm: 5.75237
step: 35, loss: 1.04043, main: 1.03152, kl: 0.00891602, grad norm: 9.86852
step: 36, loss: 1.01613, main: 0.997803, kl: 0.0183273, grad norm: 8.84381
step: 37, loss: 1.09472, main: 1.07391, kl: 0.0208064, grad norm: 5.28791
step: 38, loss: 1.05742, main: 1.02818, kl: 0.0292435, grad norm: 6.38247
step: 39, loss: 0.971624, main: 0.935927, kl: 0.0356967, grad norm: 16.1201
step: 40, loss: 1.02049, main: 0.996458, kl: 0.0240295, grad norm: 8.08793
step: 41, loss: 0.99865, main: 0.96401, kl: 0.0346397, grad norm: 5.84644
step: 42, loss: 1.00294, main: 0.978972, kl: 0.0239696, grad norm: 7.25946
step: 43, loss: 1.07054, main: 1.03067, kl: 0.0398679, grad norm: 5.56913
step: 44, loss: 1.02569, main: 1.00183, kl: 0.0238612, grad norm: 6.75097
step: 45, loss: 1.03198, main: 0.977529, kl: 0.0544558, grad norm: 6.20907
step: 46, loss: 1.06732, main: 1.01345, kl: 0.0538743, grad norm: 8.66528
step: 47, loss: 0.98681, main: 0.953243, kl: 0.0335667, grad norm: 8.41607
step: 48, loss: 1.12119, main: 1.0924, kl: 0.0287891, grad norm: 3.84737
step: 49, loss: 1.02901, main: 0.987204, kl: 0.0418054, grad norm: 6.96146
======
My cat is so cute, but I can't hold it!"' viral media advertising, “artifying
language. And its just laughter.”' 2017  99 percent of college students have
never heard of them. If you're part of something, despair takes flight, like
===
I was watching TV, and it was wonderful.  "There is nothing wrong with me,” I
said. It was the Tickle or Monkey spirit at work in my arms.  True, it meant
nothing, but it had its streetlight in
===
She looked in the mirror and then terrified dizzy.  "It was horrible," she
whispered. "There were tears."  Father was still prolonged with tears.  "I can't
even _do_ it; I can't even _take_
===
Alice said, "I wonder if anybody _has_ that accident." When a Cyclostex clerk
markers _N_ then in the far-off room _N_ nothing but the great law-ical spasm of
anguish and the speaking of evil continues,
======
step: 50, loss: 0.995989, main: 0.955458, kl: 0.0405309, grad norm: 4.88545
step: 51, loss: 1.02667, main: 0.98094, kl: 0.0457295, grad norm: 5.48704
step: 52, loss: 0.90523, main: 0.850929, kl: 0.054301, grad norm: 8.94319
step: 53, loss: 0.954733, main: 0.915002, kl: 0.0397314, grad norm: 12.1314
step: 54, loss: 0.918152, main: 0.857723, kl: 0.0604289, grad norm: 7.21813
step: 55, loss: 0.914639, main: 0.868282, kl: 0.0463579, grad norm: 10.1133
step: 56, loss: 0.991578, main: 0.887911, kl: 0.103667, grad norm: 7.08223
step: 57, loss: 0.981643, main: 0.897363, kl: 0.0842796, grad norm: 8.75289
step: 58, loss: 1.02943, main: 0.942074, kl: 0.0873589, grad norm: 6.47997
step: 59, loss: 0.905343, main: 0.810586, kl: 0.0947568, grad norm: 7.49019
step: 60, loss: 1.02127, main: 0.933978, kl: 0.0872883, grad norm: 5.80588
step: 61, loss: 0.90816, main: 0.844639, kl: 0.0635204, grad norm: 9.84717
step: 62, loss: 0.959579, main: 0.855713, kl: 0.103866, grad norm: 8.88767
step: 63, loss: 0.838014, main: 0.749906, kl: 0.0881081, grad norm: 11.0261
step: 64, loss: 0.970982, main: 0.855579, kl: 0.115403, grad norm: 9.55163
step: 65, loss: 0.823752, main: 0.73605, kl: 0.0877014, grad norm: 11.4433
step: 66, loss: 0.94352, main: 0.813341, kl: 0.130179, grad norm: 6.38993
step: 67, loss: 0.802316, main: 0.669679, kl: 0.132637, grad norm: 10.0871
step: 68, loss: 0.856489, main: 0.689395, kl: 0.167094, grad norm: 8.71589
step: 69, loss: 0.880193, main: 0.741602, kl: 0.138592, grad norm: 7.07741
step: 70, loss: 0.993446, main: 0.885056, kl: 0.108389, grad norm: 6.37543
step: 71, loss: 0.83927, main: 0.68374, kl: 0.15553, grad norm: 10.8554
step: 72, loss: 0.964692, main: 0.853788, kl: 0.110904, grad norm: 8.57183
step: 73, loss: 0.905789, main: 0.724298, kl: 0.181491, grad norm: 8.8344
step: 74, loss: 0.94193, main: 0.793865, kl: 0.148065, grad norm: 6.69183
step: 75, loss: 0.913961, main: 0.740833, kl: 0.173129, grad norm: 14.1682
step: 76, loss: 1.04387, main: 0.906264, kl: 0.137602, grad norm: 7.58866
step: 77, loss: 0.84479, main: 0.719495, kl: 0.125295, grad norm: 8.75838
step: 78, loss: 0.951548, main: 0.788374, kl: 0.163174, grad norm: 5.32541
step: 79, loss: 0.960868, main: 0.790446, kl: 0.170422, grad norm: 5.82637
step: 80, loss: 0.923429, main: 0.737241, kl: 0.186188, grad norm: 6.75355
step: 81, loss: 0.889805, main: 0.699233, kl: 0.190572, grad norm: 9.05862
step: 82, loss: 0.925957, main: 0.701567, kl: 0.22439, grad norm: 6.25695
step: 83, loss: 0.838469, main: 0.671214, kl: 0.167255, grad norm: 12.0787
step: 84, loss: 0.867875, main: 0.706832, kl: 0.161042, grad norm: 7.74519
step: 85, loss: 0.930648, main: 0.732864, kl: 0.197784, grad norm: 7.21559
step: 86, loss: 0.855695, main: 0.7132, kl: 0.142496, grad norm: 6.27139
step: 87, loss: 0.899832, main: 0.704005, kl: 0.195827, grad norm: 6.16636
step: 88, loss: 0.785125, main: 0.621245, kl: 0.163881, grad norm: 7.70211
step: 89, loss: 0.883645, main: 0.702598, kl: 0.181047, grad norm: 5.31033
step: 90, loss: 0.856633, main: 0.687036, kl: 0.169596, grad norm: 6.96087
step: 91, loss: 0.955353, main: 0.775916, kl: 0.179437, grad norm: 6.86076
step: 92, loss: 0.895061, main: 0.700417, kl: 0.194644, grad norm: 7.7109
step: 93, loss: 0.95671, main: 0.763989, kl: 0.192721, grad norm: 5.82608
step: 94, loss: 0.879597, main: 0.728369, kl: 0.151228, grad norm: 7.06297
step: 95, loss: 0.841724, main: 0.617853, kl: 0.223871, grad norm: 6.79271
step: 96, loss: 1.00097, main: 0.751754, kl: 0.249218, grad norm: 6.69057
step: 97, loss: 0.817147, main: 0.664125, kl: 0.153023, grad norm: 8.53818
step: 98, loss: 0.839035, main: 0.669883, kl: 0.169152, grad norm: 7.09562
step: 99, loss: 0.806687, main: 0.634538, kl: 0.17215, grad norm: 7.14802
======
My cat is so cute, but how can I love him?Q:  None of Chicago computers
generated any real moment print failures for files  I recently realized that my
but one crammed notebook had had a lot of failures. I read a letter about mine's
===
I was watching TV, and I didn't know where to look."  She sobbed. Pressed
against the covers and making no response, I saw tears confess, and I saw my
legs in the dirt, and my knees shaking, and my heart broke.
===
She looked in the mirror and stood shocked. (All three others laughed.) Their
hands trembled with tears. When my tears were finally wholly gone they seemed to
vanish from the room as quickly as they had appeared.  —YOU IS FOUNDING DOLL
===
Alice said, "It's an awful lot to bear to wake you. But I trust God."  Here
again, now I feel lighthearted and desperate.  "I will look for you there."  How
can I do that when I
======
step: 100, loss: 0.842742, main: 0.667012, kl: 0.175731, grad norm: 6.43161
step: 101, loss: 0.859232, main: 0.673841, kl: 0.185392, grad norm: 5.84292
step: 102, loss: 0.874944, main: 0.643297, kl: 0.231646, grad norm: 10.3753
step: 103, loss: 0.914145, main: 0.718412, kl: 0.195733, grad norm: 6.31243
step: 104, loss: 0.864868, main: 0.694877, kl: 0.169991, grad norm: 6.82923
step: 105, loss: 0.841151, main: 0.665805, kl: 0.175346, grad norm: 6.49664
step: 106, loss: 0.852408, main: 0.646904, kl: 0.205505, grad norm: 9.16459
step: 107, loss: 0.957572, main: 0.786415, kl: 0.171157, grad norm: 6.51213
step: 108, loss: 0.875801, main: 0.68627, kl: 0.189531, grad norm: 6.30218
step: 109, loss: 0.880267, main: 0.635929, kl: 0.244338, grad norm: 4.47378
step: 110, loss: 0.913981, main: 0.757888, kl: 0.156093, grad norm: 7.24391
step: 111, loss: 0.858025, main: 0.663332, kl: 0.194693, grad norm: 6.92581
step: 112, loss: 0.889359, main: 0.687428, kl: 0.201931, grad norm: 6.6008
step: 113, loss: 0.795535, main: 0.620496, kl: 0.175039, grad norm: 7.28509
step: 114, loss: 0.85198, main: 0.624436, kl: 0.227543, grad norm: 5.78555
step: 115, loss: 0.858858, main: 0.638067, kl: 0.22079, grad norm: 4.61712
step: 116, loss: 0.820194, main: 0.611112, kl: 0.209082, grad norm: 5.97266
step: 117, loss: 0.834733, main: 0.617748, kl: 0.216985, grad norm: 4.98208
step: 118, loss: 0.886601, main: 0.697867, kl: 0.188734, grad norm: 5.16644
step: 119, loss: 0.849166, main: 0.666811, kl: 0.182356, grad norm: 5.84533
step: 120, loss: 0.866088, main: 0.717297, kl: 0.148791, grad norm: 3.81471
step: 121, loss: 0.875833, main: 0.716503, kl: 0.15933, grad norm: 5.2943
step: 122, loss: 0.837331, main: 0.63177, kl: 0.205561, grad norm: 5.78062
step: 123, loss: 0.888454, main: 0.670964, kl: 0.21749, grad norm: 5.26817
step: 124, loss: 1.01582, main: 0.795068, kl: 0.220753, grad norm: 8.46538
step: 125, loss: 0.849321, main: 0.692543, kl: 0.156778, grad norm: 5.9088
step: 126, loss: 0.888196, main: 0.664942, kl: 0.223254, grad norm: 4.44033
step: 127, loss: 0.857381, main: 0.681953, kl: 0.175428, grad norm: 7.60375
step: 128, loss: 0.885904, main: 0.726803, kl: 0.159102, grad norm: 4.08979
step: 129, loss: 0.864191, main: 0.716222, kl: 0.147968, grad norm: 13.2432
step: 130, loss: 0.887589, main: 0.695958, kl: 0.191632, grad norm: 4.36409
step: 131, loss: 0.874683, main: 0.681542, kl: 0.193142, grad norm: 4.78781
step: 132, loss: 0.87948, main: 0.70199, kl: 0.17749, grad norm: 6.02337
step: 133, loss: 0.926382, main: 0.761566, kl: 0.164817, grad norm: 5.02296
step: 134, loss: 0.831671, main: 0.647546, kl: 0.184124, grad norm: 6.78377
step: 135, loss: 0.880318, main: 0.658368, kl: 0.221949, grad norm: 3.5403
step: 136, loss: 0.856272, main: 0.697492, kl: 0.15878, grad norm: 4.64045
step: 137, loss: 0.839305, main: 0.657324, kl: 0.181981, grad norm: 6.25153
step: 138, loss: 0.883103, main: 0.742547, kl: 0.140556, grad norm: 4.75991
step: 139, loss: 0.849533, main: 0.693019, kl: 0.156515, grad norm: 6.70398
step: 140, loss: 0.805895, main: 0.679949, kl: 0.125946, grad norm: 6.43514
step: 141, loss: 0.915945, main: 0.778211, kl: 0.137734, grad norm: 7.14418
step: 142, loss: 0.91347, main: 0.741087, kl: 0.172383, grad norm: 8.46861
step: 143, loss: 0.814072, main: 0.648351, kl: 0.165721, grad norm: 6.76621
step: 144, loss: 0.904142, main: 0.718866, kl: 0.185276, grad norm: 5.40293
step: 145, loss: 0.893243, main: 0.773666, kl: 0.119577, grad norm: 8.75649
step: 146, loss: 0.925403, main: 0.789596, kl: 0.135807, grad norm: 7.83676
step: 147, loss: 0.86179, main: 0.683639, kl: 0.178151, grad norm: 6.41426
step: 148, loss: 0.752129, main: 0.614672, kl: 0.137456, grad norm: 4.85811
step: 149, loss: 0.908744, main: 0.68863, kl: 0.220113, grad norm: 4.46916
======
My cat is so cute, but it almost kills me!! Welcome back dad.  I'm so, very
sorry my baby lost me my precious family. I'm so sad. I've had so many things. I
never expected anything this much of a world to be
===
I was watching TV, and within a minute, Kindy was out. She cried so much she
fainted. Then God sensed her sadness and took her down. My prayers had been
lifted, and then, as I watched, they filled the air again. When it
===
She looked in the mirror and again said, "Please don't go. Did she really think
we were doing this?"  Renea could not shake the tears that were slowly filling
her eyes. And suddenly, she understood why her heart had run out. That
===
Alice said, "I think it beggars belief that we are all going to die. I don't
want to be foolish enough to let you know this; I care about our children. I
want at least to help lend my love for them, and that
======
step: 150, loss: 0.921859, main: 0.754466, kl: 0.167393, grad norm: 4.86458
step: 151, loss: 0.833403, main: 0.635174, kl: 0.198228, grad norm: 7.60381
step: 152, loss: 0.869674, main: 0.693487, kl: 0.176188, grad norm: 6.20547
step: 153, loss: 0.796244, main: 0.638175, kl: 0.15807, grad norm: 5.35823
step: 154, loss: 0.947588, main: 0.778915, kl: 0.168674, grad norm: 7.5316
step: 155, loss: 0.870537, main: 0.68652, kl: 0.184017, grad norm: 3.39061
step: 156, loss: 0.841334, main: 0.670396, kl: 0.170937, grad norm: 4.08615
step: 157, loss: 0.918589, main: 0.737202, kl: 0.181388, grad norm: 5.91694
step: 158, loss: 0.885459, main: 0.71804, kl: 0.167419, grad norm: 7.11297
step: 159, loss: 0.842732, main: 0.624191, kl: 0.21854, grad norm: 6.67988
step: 160, loss: 0.818706, main: 0.626916, kl: 0.19179, grad norm: 6.32864
step: 161, loss: 0.956075, main: 0.796578, kl: 0.159497, grad norm: 6.58749
step: 162, loss: 0.856999, main: 0.669918, kl: 0.187082, grad norm: 5.15651
step: 163, loss: 0.927402, main: 0.735394, kl: 0.192008, grad norm: 7.20745
step: 164, loss: 0.840905, main: 0.620934, kl: 0.21997, grad norm: 4.65189
step: 165, loss: 0.872278, main: 0.708778, kl: 0.1635, grad norm: 4.74636
step: 166, loss: 0.910983, main: 0.74815, kl: 0.162834, grad norm: 6.4438
step: 167, loss: 0.818266, main: 0.654158, kl: 0.164109, grad norm: 5.40159
step: 168, loss: 0.838998, main: 0.618647, kl: 0.22035, grad norm: 5.56239
step: 169, loss: 0.850725, main: 0.637424, kl: 0.213301, grad norm: 5.04477
step: 170, loss: 0.80922, main: 0.575497, kl: 0.233723, grad norm: 6.21368
step: 171, loss: 0.897288, main: 0.681142, kl: 0.216146, grad norm: 4.80235
step: 172, loss: 0.876905, main: 0.67454, kl: 0.202365, grad norm: 4.29956
step: 173, loss: 0.759478, main: 0.568693, kl: 0.190785, grad norm: 6.04755
step: 174, loss: 0.805077, main: 0.604309, kl: 0.200768, grad norm: 5.43618
step: 175, loss: 0.839338, main: 0.666405, kl: 0.172933, grad norm: 8.11847
step: 176, loss: 0.757441, main: 0.590069, kl: 0.167372, grad norm: 5.05985
step: 177, loss: 0.830579, main: 0.662669, kl: 0.167909, grad norm: 5.03538
step: 178, loss: 0.725131, main: 0.581487, kl: 0.143644, grad norm: 7.17858
step: 179, loss: 0.899375, main: 0.729049, kl: 0.170327, grad norm: 5.54977
step: 180, loss: 0.90041, main: 0.705339, kl: 0.19507, grad norm: 5.78968
step: 181, loss: 0.866341, main: 0.71829, kl: 0.148052, grad norm: 6.72417
step: 182, loss: 0.896971, main: 0.719468, kl: 0.177502, grad norm: 4.62732
step: 183, loss: 0.812342, main: 0.653159, kl: 0.159183, grad norm: 7.11594
step: 184, loss: 0.855135, main: 0.698195, kl: 0.15694, grad norm: 6.57609
step: 185, loss: 0.759515, main: 0.552342, kl: 0.207172, grad norm: 5.95394
step: 186, loss: 0.873023, main: 0.680592, kl: 0.192431, grad norm: 6.69062
step: 187, loss: 0.854404, main: 0.669429, kl: 0.184975, grad norm: 4.01659
step: 188, loss: 0.96885, main: 0.81283, kl: 0.15602, grad norm: 12.6242
step: 189, loss: 0.793235, main: 0.572646, kl: 0.220589, grad norm: 5.79124
step: 190, loss: 0.871888, main: 0.667486, kl: 0.204402, grad norm: 3.56826
step: 191, loss: 0.867141, main: 0.659357, kl: 0.207784, grad norm: 4.42751
step: 192, loss: 0.835073, main: 0.71738, kl: 0.117693, grad norm: 6.32565
step: 193, loss: 0.951694, main: 0.749708, kl: 0.201986, grad norm: 8.77468
step: 194, loss: 0.85394, main: 0.649459, kl: 0.204481, grad norm: 6.24806
step: 195, loss: 0.848512, main: 0.655131, kl: 0.193381, grad norm: 3.21932
step: 196, loss: 0.84639, main: 0.688127, kl: 0.158264, grad norm: 4.92866
step: 197, loss: 0.783061, main: 0.578548, kl: 0.204514, grad norm: 5.59794
step: 198, loss: 0.941802, main: 0.731635, kl: 0.210167, grad norm: 5.50149
step: 199, loss: 0.915377, main: 0.703991, kl: 0.211386, grad norm: 7.91964
======
My cat is so cute, but it has just gotten worse because my Avagadda Machine (as
I promised my dog) is still crying and crying and s**t quite enough to cry.
Primarily because of the pain. I couldn't even open my mouth,
===
I was watching TV, and she was crying so much. She knew she was going to have
quite a difficult day but I guess I could just pretend and go back to sleep.
When I woke up I felt bad about what happened that night and I realized I
===
She looked in the mirror and caught the tears welling in her eyes, hacked by
tears, and laughed into tears that wanted so desperately the words that brought
hers to her eyes.  Pain tortured Gemma, her voice sobbing.  And it was
===
Alice said, "I spent so much time with the cancer. I truly feel that we are like
them. I call myself generally sick, despite my ability, to see horrific images.
I am grieving."  On December 24, Eurydice
======
step: 200, loss: 0.872342, main: 0.638974, kl: 0.233369, grad norm: 5.59373
step: 201, loss: 0.781626, main: 0.565879, kl: 0.215747, grad norm: 5.50802
step: 202, loss: 0.798072, main: 0.58379, kl: 0.214282, grad norm: 5.91008
step: 203, loss: 0.821375, main: 0.621192, kl: 0.200184, grad norm: 4.92871
step: 204, loss: 0.860946, main: 0.65295, kl: 0.207996, grad norm: 6.56879
step: 205, loss: 0.866509, main: 0.604399, kl: 0.26211, grad norm: 3.88389
step: 206, loss: 0.828864, main: 0.582275, kl: 0.246589, grad norm: 3.90159
step: 207, loss: 0.91019, main: 0.672929, kl: 0.237261, grad norm: 5.31939
step: 208, loss: 0.827327, main: 0.613666, kl: 0.213661, grad norm: 4.28108
step: 209, loss: 0.910603, main: 0.638854, kl: 0.271749, grad norm: 3.93599
step: 210, loss: 0.856151, main: 0.648001, kl: 0.20815, grad norm: 6.37664
step: 211, loss: 0.850377, main: 0.61437, kl: 0.236007, grad norm: 7.31585
step: 212, loss: 0.863436, main: 0.646037, kl: 0.217399, grad norm: 5.04922
step: 213, loss: 0.778189, main: 0.556563, kl: 0.221627, grad norm: 5.66892
step: 214, loss: 0.861478, main: 0.638397, kl: 0.223081, grad norm: 5.70816
step: 215, loss: 0.825117, main: 0.630403, kl: 0.194714, grad norm: 4.48742
step: 216, loss: 0.863204, main: 0.635596, kl: 0.227608, grad norm: 5.20432
step: 217, loss: 0.870574, main: 0.632281, kl: 0.238293, grad norm: 3.42467
step: 218, loss: 0.832414, main: 0.642523, kl: 0.189891, grad norm: 3.18777
step: 219, loss: 0.840397, main: 0.60303, kl: 0.237366, grad norm: 4.67698
step: 220, loss: 0.861115, main: 0.6599, kl: 0.201215, grad norm: 10.4975
step: 221, loss: 0.943023, main: 0.701069, kl: 0.241954, grad norm: 5.66923
step: 222, loss: 0.77756, main: 0.558376, kl: 0.219184, grad norm: 4.33427
step: 223, loss: 0.878781, main: 0.702892, kl: 0.175889, grad norm: 6.66057
step: 224, loss: 0.843075, main: 0.639142, kl: 0.203933, grad norm: 3.65358
step: 225, loss: 0.801829, main: 0.619363, kl: 0.182466, grad norm: 6.26604
step: 226, loss: 0.8729, main: 0.634984, kl: 0.237916, grad norm: 3.59832
step: 227, loss: 0.846244, main: 0.627102, kl: 0.219142, grad norm: 3.67322
step: 228, loss: 0.790568, main: 0.633162, kl: 0.157405, grad norm: 5.2734
step: 229, loss: 0.850614, main: 0.659368, kl: 0.191246, grad norm: 6.87217
step: 230, loss: 0.909869, main: 0.753636, kl: 0.156234, grad norm: 5.16127
step: 231, loss: 0.802224, main: 0.572811, kl: 0.229413, grad norm: 4.48902
step: 232, loss: 0.879354, main: 0.649054, kl: 0.2303, grad norm: 4.23407
step: 233, loss: 0.813974, main: 0.652164, kl: 0.161809, grad norm: 3.76151
step: 234, loss: 0.843717, main: 0.663807, kl: 0.17991, grad norm: 4.65179
step: 235, loss: 0.854998, main: 0.667401, kl: 0.187596, grad norm: 5.11513
step: 236, loss: 0.84516, main: 0.621206, kl: 0.223954, grad norm: 3.92402
step: 237, loss: 0.799004, main: 0.599416, kl: 0.199588, grad norm: 6.50804
step: 238, loss: 0.867131, main: 0.691036, kl: 0.176095, grad norm: 6.63364
step: 239, loss: 0.91429, main: 0.754642, kl: 0.159648, grad norm: 10.0886
step: 240, loss: 0.814486, main: 0.630341, kl: 0.184145, grad norm: 5.24555
step: 241, loss: 0.900134, main: 0.660989, kl: 0.239145, grad norm: 2.40262
step: 242, loss: 0.864426, main: 0.641768, kl: 0.222659, grad norm: 4.12127
step: 243, loss: 0.858423, main: 0.671887, kl: 0.186536, grad norm: 4.7945
step: 244, loss: 0.783986, main: 0.599851, kl: 0.184135, grad norm: 4.03596
step: 245, loss: 0.909997, main: 0.720258, kl: 0.189739, grad norm: 6.20216
step: 246, loss: 0.789768, main: 0.575453, kl: 0.214315, grad norm: 4.42041
step: 247, loss: 0.906485, main: 0.726728, kl: 0.179757, grad norm: 7.72051
step: 248, loss: 0.778779, main: 0.6502, kl: 0.128578, grad norm: 4.68306
step: 249, loss: 0.848904, main: 0.621803, kl: 0.227101, grad norm: 5.72267
======
My cat is so cute, but I almost can't breathe. I sigh so constantly. You can't
stay strong, can you? You couldn't wait to have a baby. I have tears coming on
after all these years, but I am so sorry for my blue eyes
===
I was watching TV, and I knew Heath was upset with me. I knew every word his
voice had to say.  I looked at him and I didn't understand what I had just said
to him. I could not stop. I felt so cold and then
===
She looked in the mirror and she cried, her mouth agape.  I felt her laugh and I
sobbed his name, my tears stained the blue of her hurt again, and I love her. I
knew it was wrong, but I was so proud of
===
Alice said, "Is she all right? Is she vomiting?" She stared at her father's
face,"  collages that my mother might appear to me," she cried, and she filled
the room with tears. He saw her heart was broken, but she
======
Saving model...
/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
step: 250, loss: 0.844432, main: 0.636052, kl: 0.20838, grad norm: 5.13436
step: 251, loss: 0.834747, main: 0.705996, kl: 0.128751, grad norm: 3.05515
step: 252, loss: 0.833399, main: 0.648139, kl: 0.18526, grad norm: 6.13281
step: 253, loss: 0.852229, main: 0.662515, kl: 0.189715, grad norm: 2.85615
step: 254, loss: 0.855343, main: 0.649461, kl: 0.205882, grad norm: 5.55255
step: 255, loss: 0.920089, main: 0.733486, kl: 0.186604, grad norm: 4.46302
step: 256, loss: 0.780528, main: 0.55351, kl: 0.227018, grad norm: 3.77799
step: 257, loss: 0.802005, main: 0.652717, kl: 0.149288, grad norm: 4.13499
step: 258, loss: 0.83591, main: 0.643928, kl: 0.191983, grad norm: 5.21502
step: 259, loss: 0.786623, main: 0.623587, kl: 0.163036, grad norm: 4.79143
step: 260, loss: 0.794928, main: 0.586273, kl: 0.208655, grad norm: 3.08316
step: 261, loss: 0.80119, main: 0.644832, kl: 0.156359, grad norm: 5.61957
step: 262, loss: 0.827681, main: 0.640254, kl: 0.187427, grad norm: 3.26213
step: 263, loss: 0.814879, main: 0.662655, kl: 0.152224, grad norm: 2.73095
step: 264, loss: 0.839077, main: 0.639786, kl: 0.19929, grad norm: 5.33338
step: 265, loss: 0.756724, main: 0.544465, kl: 0.212259, grad norm: 4.90672
step: 266, loss: 0.887461, main: 0.690615, kl: 0.196846, grad norm: 5.45011
step: 267, loss: 0.846296, main: 0.675145, kl: 0.171151, grad norm: 2.51692
step: 268, loss: 0.791987, main: 0.582321, kl: 0.209665, grad norm: 5.82875
step: 269, loss: 0.813921, main: 0.634384, kl: 0.179537, grad norm: 5.43108
step: 270, loss: 0.802394, main: 0.637188, kl: 0.165206, grad norm: 4.95447
step: 271, loss: 0.808219, main: 0.632147, kl: 0.176072, grad norm: 5.76822
step: 272, loss: 0.879018, main: 0.704316, kl: 0.174701, grad norm: 4.35493
step: 273, loss: 0.80699, main: 0.694465, kl: 0.112526, grad norm: 6.59606
step: 274, loss: 0.778757, main: 0.551358, kl: 0.227399, grad norm: 3.66921
step: 275, loss: 0.892068, main: 0.690982, kl: 0.201086, grad norm: 4.43466
step: 276, loss: 0.791539, main: 0.633247, kl: 0.158292, grad norm: 3.93414
step: 277, loss: 0.80811, main: 0.601239, kl: 0.206871, grad norm: 3.29892
step: 278, loss: 0.905722, main: 0.701352, kl: 0.20437, grad norm: 4.7832
step: 279, loss: 0.813505, main: 0.637303, kl: 0.176203, grad norm: 5.59431
step: 280, loss: 0.842677, main: 0.600231, kl: 0.242445, grad norm: 3.39378
step: 281, loss: 0.879731, main: 0.722287, kl: 0.157445, grad norm: 6.00599
step: 282, loss: 0.772233, main: 0.613985, kl: 0.158248, grad norm: 6.08551
step: 283, loss: 0.881969, main: 0.698135, kl: 0.183833, grad norm: 5.57688
step: 284, loss: 0.831848, main: 0.718179, kl: 0.11367, grad norm: 5.34206
step: 285, loss: 0.91119, main: 0.763296, kl: 0.147893, grad norm: 11.4075
step: 286, loss: 0.902349, main: 0.704917, kl: 0.197432, grad norm: 5.62886
step: 287, loss: 0.846402, main: 0.615951, kl: 0.230451, grad norm: 5.29632
step: 288, loss: 0.897085, main: 0.717549, kl: 0.179537, grad norm: 4.99142
step: 289, loss: 0.795584, main: 0.663503, kl: 0.132081, grad norm: 4.06424
step: 290, loss: 0.839013, main: 0.678135, kl: 0.160878, grad norm: 4.67055
step: 291, loss: 0.781763, main: 0.611235, kl: 0.170528, grad norm: 4.63736
step: 292, loss: 0.839143, main: 0.641001, kl: 0.198142, grad norm: 3.83048
step: 293, loss: 0.830093, main: 0.654343, kl: 0.17575, grad norm: 3.66337
step: 294, loss: 0.802489, main: 0.608051, kl: 0.194439, grad norm: 3.70129
step: 295, loss: 0.877925, main: 0.692648, kl: 0.185276, grad norm: 4.07263
step: 296, loss: 0.820092, main: 0.571989, kl: 0.248103, grad norm: 4.68927
step: 297, loss: 0.852152, main: 0.611552, kl: 0.240599, grad norm: 3.84164
step: 298, loss: 0.736205, main: 0.54707, kl: 0.189134, grad norm: 4.52216
step: 299, loss: 0.905849, main: 0.73951, kl: 0.166338, grad norm: 7.02715
======
......

Code link

Large Model Technology Sharing


Online advanced seminar: "Enterprise Generative AI LLM Technology, Algorithms, and Hands-On Case Studies"

Module 1: Generative AI fundamentals, technical internals, and the engineering practice lifecycle
Module 2: Industrial-grade prompting techniques and an end-to-end LLM-based meeting assistant project
Module 3: The three Llama 2 models in detail and building a safe, reliable conversational AI system
Module 4: Five core problems of GenAI/LLMs in production and building robust applications
Module 5: Large model application development: agentic application techniques and case studies
Module 6: LLM fine-tuning and model quantization techniques with case studies
Module 7: Advanced parameter-efficient fine-tuning (PEFT) algorithms, techniques, workflows, and code
Module 8: LLM alignment techniques and workflows, with hands-on text toxicity analysis
Module 9: Core techniques for building secure GenAI/LLMs: red teaming in practice
Module 10: Building trustworthy, private, and secure enterprise large models with Responsible AI

In-Depth Analysis of Llama 3 Key Technologies and Building Responsible AI: Algorithms and Hands-On Development

1. The Llama open-source model family: technology, tools, and multimodality. Participants gain a deep understanding of Meta Llama 3's innovations, such as its breakthroughs in language model technology, and learn how to build trust-and-safety AI with Llama 3. They study Llama 3's five technical branches and tools in detail, along with a hands-on case of Llama instruction fine-tuning on AWS.
2. Demystifying the Llama 3 foundation model architecture: distinctive techniques and code implementation. Dive into the techniques used in Llama 3, such as Tiktokenizer, KV Cache, and Grouped Multi-Query Attention. Project 2 walks through the Llama 3 source code line by line to deepen understanding.
3. Demystifying the Llama 3 foundation model architecture: core techniques and code implementation, including the SwiGLU activation function, the FeedForward block, and the Encoder block. Project 3 studies Llama 3's inference code to reinforce practical understanding.
4. Hands-on Responsible AI with LangGraph on Llama 3. Project 4 builds a LangGraph-based Responsible AI project on Llama 3, covering LangGraph's three core components, its execution mechanism, and its workflow steps to strengthen practical Responsible AI skills.
5. Inside the Llama model family's techniques for building safe and trustworthy enterprise AI applications, covering key technologies such as Code Llama and Llama Guard. Project 5 builds an upgraded, safe, and reliable conversational AI project to reinforce safety in practice.
6. Fine-tuning techniques and algorithms for the Llama model family: Supervised Fine-Tuning (SFT), reward models, PPO, DPO, and more. Project 6 implements PPO and DPO by hand to strengthen understanding and application of the algorithms.
7. Demystifying reinforcement learning from AI feedback for the Llama model family, with an in-depth look at RLAIF and RLHF. Project 7 builds Constitutional AI based on RLAIF.
8. DPO in Llama 3: principles, algorithm, components, concrete implementation, and advanced topics. Learn how PPO and DPO are combined in Llama 3, dissect DPO's principles and mechanics, and analyze its key algorithmic components. Comprehensive Project 8 implements and tests DPO from scratch, and the course also covers the advanced techniques Iterative DPO and IPO.
9. Safety design and implementation in the Llama model family, including safety in pretraining and safety fine-tuning, toward building safe and reliable GenAI/LLM projects.
10. Building a trustworthy, private, and secure enterprise Responsible AI system with Llama 3, mastering Llama 3's Constitutional AI and red teaming.

Decoding Sora: Architecture, Technology, and Applications

1. Why is Sora a milestone on the road to AGI?
1. Explore the key shift from large language models (LLMs) to large vision models (LVMs) and its role in achieving artificial general intelligence (AGI).
2. Showcase successful cases of combining visual data and text data, and analyze the key role Sora plays in this process.
3. Explain in detail how Sora generates video content with 3D consistency from text instructions.
4. Analyze the technical path by which Sora generates high-fidelity content from images or video.
5. Discuss Sora's practical value across application scenarios, along with its challenges and limitations.

2. Decoding the Sora architecture
1. The DiT (Diffusion Transformer) architecture in detail
2. How does DiT help Sora produce consistent, realistic, and imaginative video content?
3. Why a Transformer, rather than an architecture such as U-Net, was chosen as the diffusion backbone.
4. The principle and workflow of DiT's patchification, and its importance for handling video and image data.
5. The conditional diffusion process in detail and its role in content generation.

3. Decoding Sora's key technologies
1. How Sora uses Transformer and diffusion techniques to understand interactions between objects, and why this matters for simulating complex interaction scenes.
2. Why space-time patches are the core of Sora's technology and how they improve video generation.
3. Spacetime latent patches in detail and their key role in video compression and generation.
4. How the Sora simulator uses space-time patches to model digital and physical worlds and to simulate real-world change.
5. How Sora generates content faithfully from user text input, and the technology and innovation behind it.
6. Why Sora generates content from abstract concepts rather than raw pixels, and how that affects generation quality and diversity.
