Issues: ggerganov/llama.cpp
Improve Alpaca integration to match its trained prompt syntax
#302 opened Mar 19, 2023 by nitram147
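Issue #302 is about making llama.cpp feed Alpaca the same instruction format the model was fine-tuned on. A minimal sketch of that template, based on the instruction prompt published with Stanford Alpaca; the helper name `build_alpaca_prompt` is illustrative and not part of llama.cpp:

```python
# Sketch of the Stanford Alpaca instruction template referenced by
# issue #302. The helper name is illustrative, not a llama.cpp API.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_alpaca_prompt(instruction: str) -> str:
    """Fill the Alpaca instruction template with a user request."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_alpaca_prompt("Name the first man on the moon."))
```

The point of the issue is that prompting Alpaca without this wrapper (e.g. with a bare chat transcript) degrades output quality, since the model only ever saw instructions in this framed form during fine-tuning.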
[Question] Can I load the huggingface llama model as well?
model (Model specific)
#708 opened Apr 2, 2023 by maxkraft7
Windows page fault disk I/O slow on first load
performance (Speed related topics), windows (Issues specific to Windows)
#705 opened Apr 2, 2023 by x02Sylvie
Update *-to-ggml.py scripts for new ggjt model format
script (Script related)
#704 opened Apr 2, 2023 by sw
Disk bottleneck in 65B model
linux (Issues specific to Linux), need more info (The OP should provide more details about the issue), performance (Speed related topics)
#702 opened Apr 2, 2023 by fgdfgfthgr-fox
How to convert old ALPACA q4_0 model into ggjt format?
#701 opened Apr 2, 2023 by multimediaconverter
Regression: "The first man on the moon was "
question (Further information is requested)
#693 opened Apr 1, 2023 by simplejackcoder
How to make llama.cpp return control to add additional context?
#692 opened Apr 1, 2023 by simplejackcoder
Alpaca model is running very slow in llama.cpp compared to alpaca.cpp
#677 opened Apr 1, 2023 by robinnarsinghranabhat (4 tasks done)
[User] examples/chat-13B.sh sometimes continues my question instead of answering
#667 opened Apr 1, 2023 by patrakov (4 tasks done)

