The domain + company name trick is solid — wish I'd known that earlier. And agreed on parsing page source for tech stack detection, it's surprisingly reliable and the cost is literally zero.
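Page-source fingerprinting for tech-stack detection can be sketched in a few lines. This is my own illustration, not the commenter's actual code: the `SIGNATURES` table and `detect_stack` helper are hypothetical, and real detectors usually carry far larger signature lists.

```python
# Hypothetical sketch: detect a site's tech stack by scanning its HTML
# for framework fingerprints. Signature strings are illustrative.
SIGNATURES = {
    "WordPress": ["wp-content/", "wp-includes/"],
    "Shopify": ["cdn.shopify.com"],
    "Next.js": ["/_next/static/", "__NEXT_DATA__"],
    "Wix": ["static.wixstatic.com"],
}

def detect_stack(html: str) -> list[str]:
    """Return every framework whose fingerprint appears in the page source."""
    return [name for name, markers in SIGNATURES.items()
            if any(marker in html for marker in markers)]
```

The appeal is exactly what the comment says: the page source is already being fetched, so the marginal cost of the check is zero.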
The last_engaged timestamp filter is a good catch. We had a similar issue with our scheduling system where a stuck "busy" state caused the entire pipeline to freeze for over a day. Ended up adding a simple guard: any early exit from a busy state must release the lock first. Same philosophy — one cheap filter prevents a cascade of wasted work.
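The "any early exit must release the lock" guard maps naturally onto a `try/finally` block. A minimal sketch, assuming a job dict with a `ready` flag (the job shape and function names are my own, not from the original system):

```python
import threading

pipeline_lock = threading.Lock()

def process_job(job: dict) -> str:
    """Every exit from the busy state (success, early return, or
    exception) flows through finally, so the lock is always released."""
    pipeline_lock.acquire()
    try:
        if not job.get("ready"):
            return "skipped"   # early exit still releases the lock
        return "done"
    finally:
        pipeline_lock.release()
```

Python's `with pipeline_lock:` is the idiomatic shorthand for the same guarantee; the explicit `finally` just makes the guard visible.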
Thanks for the insight on prompt drift. You're right — flat prose breaks down fast across iterations. We ended up with a similar solution: a modular spec system where the engine rules (shared across all brands) and theme files (brand-specific: colors, fonts, narrative structure) are separate named documents. The LLM reads both before generating, so roles never bleed across runs. Essentially named blocks, just at the file level instead of XML tags. Will check out flompt — the visual decomposition could be useful for auditing where drift starts.
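The engine/theme split described above could be assembled like this. The file names, section labels, and `build_prompt` helper are my own illustration of the "named blocks at the file level" idea, not the actual spec system:

```python
from pathlib import Path

def build_prompt(engine_path: str, theme_path: str, task: str) -> str:
    """Compose a prompt from a shared engine spec plus a brand-specific
    theme file, kept as separate labeled blocks so roles don't bleed."""
    engine = Path(engine_path).read_text(encoding="utf-8")
    theme = Path(theme_path).read_text(encoding="utf-8")
    return (
        f"[ENGINE RULES: shared across all brands]\n{engine}\n\n"
        f"[THEME: brand-specific]\n{theme}\n\n"
        f"[TASK]\n{task}"
    )
```

Swapping brands then means swapping one file path, and the engine rules stay byte-identical across every run.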
No dedicated QA bot, but there's a feedback loop built in.
Every post's performance data — likes, reposts, views, replies — gets fed back into the prompt templates. We run A/B variations on copy style and posting times, then the system automatically adjusts the prompts based on what's actually working.
So quality control is less about checking output before it ships, and more about iterating on what the audience responds to. The data drives the prompt evolution.
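The A/B selection step in that loop can be sketched as an engagement-rate comparison. The metric and variant shape here are my own guess at what "adjusts the prompts based on what's actually working" might mean concretely:

```python
def pick_winner(variants: list[dict]) -> dict:
    """Given A/B variants with performance counts, promote the one
    with the best engagement-per-view rate for the next iteration."""
    def rate(v: dict) -> float:
        views = max(v["views"], 1)  # avoid div-by-zero on fresh posts
        return (v["likes"] + v["reposts"] + v["replies"]) / views
    return max(variants, key=rate)
```

The winning variant's copy style and posting time then seed the next round of prompt templates.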
Guilty as charged on this one. The dashboard visuals are CSS animations — the data behind them isn't live yet. I've been trying to pipe real systemd logs into it but haven't cracked the architecture cleanly enough to ship it. It's on my list, just not done.
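For the systemd piece, one possible direction: `journalctl -o json` emits one JSON object per line, with fields like `MESSAGE` and `_SYSTEMD_UNIT`, which is easy to turn into dashboard events. This is a sketch of that direction, not the unshipped architecture:

```python
import json

def parse_journal_lines(raw: str) -> list[tuple[str, str]]:
    """Parse `journalctl -o json` output (one JSON object per line)
    into (unit, message) pairs a dashboard could render."""
    events = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        events.append((entry.get("_SYSTEMD_UNIT", "?"),
                       entry.get("MESSAGE", "")))
    return events
```

Piping `journalctl -f -o json` through a parser like this is a common pattern for live log ingestion without extra infrastructure.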
Should've been clearer about that in the post. Thanks for pushing on it.
Fair challenge on the ROI question.
Honest origin story: I work in financial services. Every day I need to post updates, share market info, and stay visible to clients — it's part of the job. I built MindThread because I was spending hours on scheduling tools with terrible UX instead of actually talking to people. I was my own first customer.
After launching, I realized the same problem exists across Taiwan's financial and insurance industry — thousands of advisors doing the same manual posting grind every day. That's the real market.
My view: social media time should be spent on actual conversations, not fighting bad interfaces. The agents handle the repetitive publishing. The human interaction stays human.
I want to address the skepticism directly.
Yes, my English is poor. I write everything in Chinese first, then use AI to translate. Every word is mine — the ideas, the context, the intent. The AI is just the bridge. If that feels like cheating, I understand. But it's the only way I can participate in conversations like this one.
On whether my products are useful: probably not to most people here. Taiwan has one of the most advanced tech industries in the world, but 80%+ of the population still doesn't use AI in their daily work — or only uses it as a chatbot. The market here is flooded with people selling AI courses and prompt-engineering services. I'm trying to do something different: actually build with it, open-source the playbook, and make real tools accessible to people who aren't engineers.
I came to HN because I wanted real technical feedback — even brutal feedback. You can't get that in Taiwan right now. The criticism here is exactly what I was looking for.
I'm a real person. These are real products. I'm just learning in public, one translation at a time.
3 months ago I had zero experience with any of this — no AI development, no automation, no open source. I'm a solo founder in Taiwan where smartphone penetration is nearly universal but AI adoption in daily work is still very early.
As I started building, I realized these tools can genuinely change how people work and live — not just for developers, but for small businesses and solo operators who don't have engineering teams. So I packaged what I learned into services and open-sourced the playbook.
This is also my first time posting on HN. My English isn't strong enough to write essays natively, so I use AI to help with the writing — but the ideas and intent are mine. Seemed fitting given what I'm building.
I'm not trying to sell anything here. I just want to show that you can run real operations with AI agents on zero budget, and I want to make these tools more accessible in my country.
The animations are CSS-driven, but the data behind them is real — agent heartbeats, task counts, and activity logs are pulled from actual systemd timer outputs. It's not a mock dashboard, though I'll admit the visual polish probably makes it look more "produced" than a typical monitoring tool.
- My setup is WSL2 on a regular Windows machine — no GPU, so local inference would be painfully slow.
- Gemini 2.5 Flash free tier is genuinely good enough: 1,500 req/day, and I use ~105. Quality is solid for content generation and analysis tasks.
- $0/month is hard to beat. I did accidentally rack up $127 when I used a billing-enabled API key (wrote a blog post about that lesson), but with the free tier properly configured, it's been zero cost for months.
If I needed more throughput or privacy-sensitive processing, I'd consider local models. But for my current scale, free tier Gemini handles everything.
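Staying safely under a daily cap like that is worth a small guard. The class and safety margin below are my own illustration (the 1,500/day figure is the free-tier limit mentioned above; the $127 lesson is exactly what this kind of guard prevents):

```python
import datetime

class DailyQuota:
    """Refuse API calls once a safety budget under the daily cap is
    spent; the counter resets automatically at the day boundary."""
    def __init__(self, limit: int = 1500, safety_margin: float = 0.9):
        self.budget = int(limit * safety_margin)  # leave headroom
        self.used = 0
        self.day = datetime.date.today()

    def allow(self) -> bool:
        today = datetime.date.today()
        if today != self.day:          # new day: reset the counter
            self.day, self.used = today, 0
        if self.used >= self.budget:
            return False
        self.used += 1
        return True
```

Checking `quota.allow()` before each request turns a surprise bill into a skipped task.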
Thanks! The dashboard is one of my favorite parts of this project. It actually pulls real agent activity data — the visualizations are live, not pre-rendered. Working on making it a product: customizable agent dashboards for other teams running multi-agent setups.