Hacker News | maxloo1976's comments

I just tried this:

User: What is the precise name of the model answering this query, as it is called in the API? Not "ChatGPT with browsing" but the specific model name.

ChatGPT's answer: The specific model name answering this query is "gpt-4.5-turbo".


I've tried running Llamafile on my Lenovo Legion Pro 5 laptop with 8GB of VRAM. The laptop has a dashboard that shows GPU and CPU utilisation in real time, and it shows almost all the processing being done on the CPU. Is there a way to shift the processing to the GPU, the way gpt-fast does?

https://pytorch.org/blog/accelerating-generative-ai-2/
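In case it helps anyone else who hits this: llamafile inherits llama.cpp's command-line flags, so (assuming a reasonably recent build and working NVIDIA drivers) GPU offload can be requested explicitly rather than left to auto-detection. The model filename below is a placeholder, and the exact flag set should be checked against `./model.llamafile --help` for your version.

```shell
# Sketch, not a tested recipe: ask llamafile to offload layers to the GPU.
# -ngl N moves N transformer layers to the GPU (a large value like 999
# means "as many as possible"); --gpu nvidia requests the CUDA backend
# instead of silently falling back to CPU.
./model.llamafile -ngl 999 --gpu nvidia -p "Hello"
```

If utilisation still lands on the CPU, the startup log is worth reading: lines mentioning CUDA or cuBLAS indicate whether the GPU backend actually initialised. A 7B model at 4-bit quantisation should fit entirely within 8GB of VRAM.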


I'm wondering if gpt-fast has a version that can be run from the Windows Command Prompt or PowerShell?

https://github.com/pytorch-labs/gpt-fast/issues/45
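For what it's worth, gpt-fast is a set of plain PyTorch scripts, so nothing in principle ties it to a Unix shell. The sketch below is an untested guess at a PowerShell session; the script name and flags are taken from the repo's README, the checkpoint path is a placeholder, and `--compile` may fail on Windows since Triton (which `torch.compile` relies on for GPU kernels) has historically lacked Windows support.

```shell
# Untested PowerShell sketch, assuming a CUDA-enabled PyTorch install.
git clone https://github.com/pytorch-labs/gpt-fast
cd gpt-fast
pip install sentencepiece huggingface_hub

# Placeholder checkpoint path; see the README for model download steps.
# Add --compile only if torch.compile/Triton works on your setup:
python generate.py --checkpoint_path checkpoints/model.pth --prompt "Hello"
```

An alternative that sidesteps the Triton issue entirely is running gpt-fast under WSL2, where the standard Linux instructions apply.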

