It's completely wrecked. Previous versions were very usable, but the beta 3 update is broken across the board: extremely simple prompts trigger guardrails, even completely inert inputs like these:
What is the population of Sweden?
What is the size of New York?
What is the size of Sweden?
All of them fail to generate a response.
Yeah, all of these seem hyper-focused on "voice cloning." On Replicate, VoiceCraft doesn't even let you try normal TTS unless you provide a reference voice, so I noped out.
Usually, if I feel this, it means I have to get my AirPods cleaned (do it at the store, it's worth it). I have, subjectively, always felt that clean AirPods blow the Mac mic out of the water.
How do the multiple startups that import YouTube video into their platform work — the ones that need the actual video/audio files? There seem to be a lot of web apps supporting this, but I've always wondered whether they have a go-to API for it. I myself have used Microlink at some points for automated video source extraction.
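For reference, Microlink's API is just a GET endpoint that takes the target page as a query parameter. A minimal sketch of building such a request (the `video` parameter is per Microlink's docs, though whether it resolves a direct source may depend on your plan):

```python
from urllib.parse import urlencode

def microlink_request(target_url: str, want_video: bool = True) -> str:
    """Build a Microlink API request URL for extracting media from a page."""
    params = {"url": target_url}
    if want_video:
        # Asks Microlink to try to resolve the direct video source.
        params["video"] = "true"
    return "https://api.microlink.io/?" + urlencode(params)

# Example: request metadata + video source for a YouTube link.
req = microlink_request("https://youtu.be/xyz")
```

The response is JSON; the extracted media URLs live under the `data` key.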
Holy hell, I was shitting bricks — I JUST migrated most of our services to Azure OpenAI (unaffected by the outage) about 48 hours back, right before our launch. What a relief.
It is a separate service where you deploy your own selected models and get an API endpoint (a URL) to access them through. It is easy to set up, but access to the newest ChatGPT models lags behind a bit.
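Concretely, each deployment gets its own REST path under your resource's endpoint. A sketch of how that URL is shaped (resource and deployment names here are placeholders; the api-version is one of the published ones):

```python
def azure_chat_url(resource: str, deployment: str,
                   api_version: str = "2024-02-01") -> str:
    """Build the Azure OpenAI chat-completions URL for a model deployment.

    Unlike openai.com, the model is addressed by YOUR deployment name,
    not the underlying model name.
    """
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

# Example with hypothetical resource/deployment names:
url = azure_chat_url("my-resource", "my-gpt4-deployment")
```

You POST the same chat-completions payload to that URL, authenticating with an `api-key` header instead of a bearer token.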
But with different, separate content filtering and moderation. I have deployed it in prod and managed a migration from OpenAI to Azure OpenAI, and had to work through content-filter issues along the way.
It absolutely blows my mind that we've all just shrugged and accepted that we're not permitted to use LLMs to generate swearing or fiction that contains violence. What happened to treating users like adults instead of toddlers!? Actually, thinking about it, a typical Grimm fairytale has more death and violence in it than either Azure or OpenAI will allow!
Just today I wanted to translate a news article about the war in Gaza and Microsoft refused because the content was "too violent" for my delicate human brain.
Really cool to see the Assistants API's nuanced document retrieval methods. Do you index over the text besides chunking it up and generating embeddings? I'm curious about the indexing and the depth of analysis for longer docs, like assessing an author's tone chapter by chapter—vector search might have its limits there. Plus, the process to shape user queries into retrievable embeddings seems complex. Eager to hear more about these strategies, at least what you can spill!
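For anyone following along, the baseline these questions go beyond is plain fixed-window chunking before embedding. A minimal sketch of that baseline (window size and overlap are arbitrary illustrative choices, not the Assistants API's actual parameters):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    Overlap keeps sentences that straddle a boundary retrievable from
    at least one chunk; each chunk would then be embedded and indexed.
    """
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

# Example: a 500-char document yields 3 overlapping chunks.
chunks = chunk_text("a" * 500)
```

Pure vector search over chunks like these is exactly where chapter-level questions (e.g. tone across a whole chapter) tend to break down, since no single chunk carries that context.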
- Switches out the `requests` and `aiohttp` packages for `httpx`.
- Switches to explicit client instantiation.
Feel free to discuss what you expect this to fix (or not). The many Cloudflare 502s and bad-gateway errors, which we still get billed for, have made the OpenAI dev experience rather poor. The async/sync request timeouts don't even work as you'd expect. These may be fixed in this beta.