I do writing with RAG, and it can work surprisingly well if the text is being generated from your own existing writing. FAQs etc. can be pretty easy when your content is the context for the AI.
After a few rounds of AI generating content from AI-generated content, I'm sure it could eventually turn into slop... basically model collapse.
Dear Notion employees: please, if you're advertising mail and calendar as features, add them into your app and don't make them open in new tabs. I want an all-in-one tool; why is that so hard when you already have tabs built in? Thanks!
Okay, but what about "c" being rendered nearly the same as "z"? Neither looks like the character it's supposed to be, and the two are nearly identical. Is our brain just supposed to figure it out?
With narrow spacing and poor kerning it can get much worse, especially if you're reading printed text. I've seen some extremely bad fonts used in print (usually in italics or titles, but sometimes in the body text as well):
m and rn, cl and d, lo and b, jo and p, ijl1, GC0OQ, italic Q2.
"A large new study published in the International Journal of Hygiene and Environmental Health provides evidence that exposure to certain workplace chemicals among parents may influence the severity of autism spectrum disorder (ASD) symptoms and contribute to behavioral, cognitive, and adaptive challenges in their children. The findings suggest that occupational exposures—especially to plastics, ethylene oxide, phenols, and pharmaceutical agents—may have broader developmental effects beyond autism diagnosis alone."
"The findings suggest that workplace exposures to several specific chemical classes were associated with worse outcomes in children with ASD. One of the strongest and most consistent patterns involved plastics and polymer chemicals. Fathers’ exposure to plastics was associated with lower scores across all cognitive and adaptive skill domains, including language, motor coordination, daily living skills, and overall functioning. When both parents were exposed, the deficits appeared to compound.
“I was surprised how strongly and consistently plastics and polymers stood out as being linked with multiple developmental and behavioral outcomes including irritability, hyperactivity, and daily living,” McCanlies told PsyPost.
Exposure to ethylene oxide—commonly used in hospital sterilization—was also linked to more severe autism symptoms, lower expressive language abilities, and poorer adaptive functioning. Similarly, parental exposure to phenol (used in construction, automotive, and some consumer products) and pharmaceuticals was associated with increased ASD severity and more pronounced behavioral challenges, especially hyperactivity and stereotyped behavior.
While the results do not imply that all children exposed to these chemicals will develop more severe symptoms, the patterns suggest that early life exposure to workplace toxicants may amplify certain developmental difficulties in children who already meet criteria for ASD. The study provides one of the most detailed looks to date at how parental occupation may relate not just to diagnosis, but to variation in how autism is expressed.
“Our findings suggest that certain parental workplace exposures may be related not just to autism, but to worse symptoms and autism behaviors,” McCanlies explained."
It's taking 21 GB of memory on my 64 GB MBP; I'm still tuning it and settling on context size, temperature, and other settings.
My comment from yesterday:
"Thanks OpenAI for being open ;) Surprised there are no official MLX versions and only one mention of MLX in this thread. MLX basically converts the models to take advantage of Mac unified memory for a 2-5x increase in performance, enabling Macs to run what would otherwise require expensive GPUs (within limits).
So FYI to anyone on a Mac, the easiest way to run these models right now is LM Studio (https://lmstudio.ai/); it's free. You just search for the model; usually the third-party groups mlx-community or lmstudio-community have MLX versions within a day or two of release. I go for the 8-bit quantizations (4-bit is faster, but quality drops). You can also convert to MLX yourself...
Once you have it running in LM Studio, you can chat in its built-in chat interface, or you can run it through an API that defaults to http://127.0.0.1:1234
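As a quick sketch of that API route: LM Studio exposes an OpenAI-compatible server, so you can hit it from plain Python. The endpoint path and the model name below are assumptions based on LM Studio's defaults; swap in whatever model identifier your instance reports.

```python
import json
import urllib.request

# Assumed default: LM Studio's OpenAI-compatible server on port 1234.
BASE_URL = "http://127.0.0.1:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,  # "local-model" is a placeholder name
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt):
    """POST a prompt to the local LM Studio server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (with LM Studio's server running):
#   reply = chat("Summarize MLX in one sentence.")
```

Because the server speaks the OpenAI chat-completions format, any OpenAI client library pointed at the local base URL should also work.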
You can run multiple models that hot-swap and load instantly, switch between them, etc.
It's surprisingly easy, and fun. There are actually a lot of cool niche models coming out, like this tiny high-quality search model released today (which also shipped an official MLX version): https://huggingface.co/Intelligent-Internet/II-Search-4B
Other fun ones are Gemma 3n, which is multi-modal; the new Qwen3 30B A3B (Coder and Instruct), a larger one that is actually a solid model but takes more memory; and Pixtral (Mixtral vision with full-resolution images). Looking forward to playing with this model and seeing how it compares."
The repo includes a Metal port they made, so that's at least something… I guess they didn't want to cooperate with Apple before the launch, but I'm sure it will be there tomorrow.
If you're on a Mac, you can download LM Studio and get the MLX version (edit: links below). I'm running it on a 64 GB M1 and it takes ~30 GB of RAM. I've been on the hunt for a local orchestrator model that interprets speech-to-text (STT) input from WhisperX and then decides what to do. I've only been running it for a day, but it may be overkill for my setup.
For simple tasks it can respond quickly, and it knows to use MCP servers for tasks and other things, while offloading all the heavy lifting to Claude Code via the SDK and CLI, then bringing the results back as a summary or as clarifying questions via text-to-speech (TTS). I'm playing with Kyutai TTS because they have great models that sound real and can do conversational streaming with VAD (though my MBP is too slow for it for now; see https://unmute.sh/ for a demo).
I'm looking for an orchestrator model that runs in 10-15 GB of RAM and can do really good tool calling and model routing. I will likely move to something even smaller designed specifically for this, like Jan Nano, and then spin up an intermediate model like Qwen if needed, or try a smaller Qwen. https://github.com/menloresearch/jan?tab=readme-ov-file
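The model-routing idea can be sketched in a few lines: a cheap heuristic decides whether the small orchestrator handles a prompt or a heavier model is spun up. Everything here is a hypothetical illustration; the model names and keyword list are assumptions, not the actual setup.

```python
# Hypothetical routing sketch: small orchestrator for quick replies and
# tool calls, heavier model for demanding work. Names are illustrative.
SMALL_MODEL = "jan-nano"        # assumed name for a low-RAM orchestrator
LARGE_MODEL = "qwen3-30b-a3b"   # assumed name for a heavier fallback

# Crude keyword signals that a prompt needs the bigger model.
HEAVY_HINTS = ("refactor", "write code", "analyze", "summarize this repo")

def pick_model(prompt: str) -> str:
    """Route to the large model if the prompt looks like heavy work."""
    text = prompt.lower()
    if any(hint in text for hint in HEAVY_HINTS):
        return LARGE_MODEL
    return SMALL_MODEL
```

In practice the routing decision would itself be a tool call the orchestrator makes, but a keyword gate like this is a reasonable first cut to avoid loading the big model for casual questions.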
Ultimately, I want something that can see my screen, know what's going on, have full context, and be live, so I was excited about Gemma 3n multi-modal, but it's not fully available yet, at least with vision for MLX. https://deepmind.google/models/gemma/gemma-3n/
The next 6 months in this area are going to be pretty wild, though.
"AI models collapse when trained on recursively generated data" - https://www.nature.com/articles/s41586-024-07566-y