
Yup, I've been using llama.cpp for that on my PC, but on my Mac I've found some cases where MLX models work best. I haven't tried MLX with llama.cpp, so I'm not sure how that would work out (or whether it's even supported yet).

