PriNova's comments | Hacker News

For the sake of clarity: Sourcegraph's Cody did RAG-style context fetching. Amp, however, does not use RAG for context fetching.
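Neither tool's internals are shown here, but "RAG-style context fetching" generally means embedding the codebase, then retrieving the chunks most similar to the query before handing them to the model. A minimal sketch of that idea, with a toy bag-of-words "embedding" standing in for a real neural model (the names `embed`, `cosine`, and `fetch_context` are illustrative, not either tool's API):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": token counts. Real RAG uses a neural embedding model.
    return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fetch_context(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank code chunks by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "def parse_config(path): ...",
    "def fetch_user(session, user_id): ...",
    "def render_template(name, ctx): ...",
]
print(fetch_context("fetch_user session", chunks, k=1))
# -> ['def fetch_user(session, user_id): ...']
```

A non-RAG agent like Amp would instead read files directly (via tools such as grep or the filesystem) rather than retrieving pre-embedded chunks.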


This is not how Mojo works. If you copy-paste Python code into Mojo, you will not benefit from any optimizations. You need to refactor your Python code into Mojo code to gain compiler efficiency. But if you look at the refactored code in the end, the syntax is very ugly (this is my personal opinion and might change as Mojo evolves).


Look at Taichi on GitHub. This Python library seems not very popular or widely known, maybe because it is a Chinese development, but Taichi is simple, compiles directly down to kernels on CUDA, Metal, and Vulkan, and comes with batteries included. It beats the fastest Mojo implementation of the Mandelbrot set by about 260x. https://github.com/taichi-dev/taichi


> Beats the fastest Mojo implementation of Mandelbrot by x260 *on the GPU*.

Important detail: it runs on the GPU.


But the Chinese just steal American technology. Look at the Huawei Mate 60.

/sarcasm


Now we're talking!


They use Claude 2 as the AI backend, with a 100K-token context window. In my experience Claude is better at programming than GPT-4 because it has a more recent knowledge cut-off date and because the large context window lets it process more knowledge at once. Hallucination happens with all LLMs.

