Issues: facebookresearch/llama
AssertionError: Loading a checkpoint for MP=0 but world size is 2 (#182, opened Mar 12, 2023 by jamestch)
Plain pytorch LLaMA implementation (no fairscale, use as many GPUs as you want) (#179, opened Mar 11, 2023 by galatolofederico)
The first load of the model is very slow, and the second load is very fast (#174, opened Mar 10, 2023 by Valdanitooooo)
Run 65B on 2PC with 4 GPU each, distribution inference failed (#173, opened Mar 10, 2023 by sophieyl820)
To Meta: If I release an app with the weights embedded will you take me to court? 🤔 (#171, opened Mar 10, 2023 by knightofdoom)
An attempt to make LLaMA to act like ChatGPT - success! Amazing result from scratch! (#162, opened Mar 8, 2023 by randaller)
Not actually open source and incompatible with other GPL 3 projects (#161, opened Mar 8, 2023 by redhog)
Inquiry about the maximum number of tokens that Llama can handle (#148, opened Mar 7, 2023 by magicknight)
We have encountered some problems while trying to do the inference via two NVIDIA A10 GPUs (#146, opened Mar 7, 2023 by KUANWB)

