LLaDA

Official PyTorch implementation for "Large Language Diffusion Models"

Concurrent Candidate Generation #47

Open · Opened 3/15/2025 · 0 comments · by djx2726889

In ARMs, we can set the beam size to generate K candidates in parallel. However, how can we sample the top-K candidates in LLaDA? Is the only way to achieve this to run the sampling process K times?
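For concreteness, here is a minimal sketch of the naive route the question alludes to: tile the prompt K times along the batch dimension and run one iterative unmasking pass over that batch, so the K candidates are sampled independently but share every forward call. This is not the repo's `generate.py`; the `mask_predictor` interface, the `mask_id` argument, and the simple linear unmasking schedule are assumptions to be adapted to the actual model.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_k_candidates(mask_predictor, prompt_ids, k, mask_id,
                        gen_length=64, steps=64, temperature=0.0):
    """Sketch: sample k completions in one batched reverse (unmasking) process.

    mask_predictor: callable mapping (B, L) token ids -> (B, L, V) logits
                    (hypothetical interface; wrap the actual LLaDA model call).
    prompt_ids:     (1, P) tensor of prompt token ids.
    mask_id:        the model's [MASK] token id (check the repo/tokenizer).
    """
    device = prompt_ids.device
    prompt_len = prompt_ids.shape[1]

    # Tile the prompt k times; generation positions start fully masked.
    x = torch.full((k, prompt_len + gen_length), mask_id,
                   dtype=torch.long, device=device)
    x[:, :prompt_len] = prompt_ids.expand(k, -1)

    per_step = max(gen_length // steps, 1)  # linear schedule (assumes divisibility)
    for _ in range(steps):
        mask = (x == mask_id)
        if not mask.any():
            break
        logits = mask_predictor(x)                       # (k, L, V)
        if temperature > 0:
            probs = F.softmax(logits / temperature, dim=-1)
            pred = torch.distributions.Categorical(probs=probs).sample()
        else:
            pred = logits.argmax(dim=-1)                 # (k, L)
        # Confidence of each prediction; only masked positions are eligible.
        conf = F.softmax(logits, dim=-1).gather(-1, pred.unsqueeze(-1)).squeeze(-1)
        conf = conf.masked_fill(~mask, float("-inf"))
        # Unmask the most confident positions per candidate, re-mask the rest.
        n = min(per_step, int(mask[0].sum()))
        idx = conf.topk(n, dim=-1).indices               # (k, n)
        x.scatter_(1, idx, pred.gather(1, idx))
    return x[:, prompt_len:]
```

Note this only gives K independent samples that happen to share compute; it is not a beam-search equivalent that tracks the top-K highest-scoring sequences, which is the harder part of the question.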

