Issues: Mozilla-Ocho/llamafile
#459  Hugging Face repository does not show the version of the llamafile you are downloading
      [upstream bug] opened May 29, 2024 by norteo

#441  AMD - tinyBLAS Windows prebuilt support stopped working with 0.8.5
      [amd] opened May 25, 2024 by jeromew

#438  Is it possible for llamafile to use Vulkan or OpenCL acceleration?
      [question] opened May 23, 2024 by Ff-c109

#434  CUDA kernel vec_dot_q4_K_q8_1_impl_vmmq has no device code compatible with CUDA arch 600
      [bug] opened May 22, 2024 by coder-vig

#419  Add explanation for Windows users on how to create EXE files
      [awaiting response] [documentation] opened May 15, 2024 by fabiomatricardi

#415  llamafile as LLM server for the Mantella mod for Skyrim works nicely, but there is a small problem
      [bug] [performance] opened May 12, 2024 by amonpaike

#409  Would it be possible to support n_probs / logprobs in the chat completion API?
      [enhancement] [question] opened May 10, 2024 by cbowdon

#397  Fails to load custom UI on Apple Silicon (M1 Pro): shows "File not found" on localhost:8080
      [bug] opened May 5, 2024 by towardmay

#388  Feature request: option to specify base URL for server mode
      [enhancement] opened Apr 30, 2024 by vlasky