Hacker News | piterrro's comments

To what extent is this a Metabase alternative? I'm a heavy Metabase user and there's really nothing comparable in this product.

We (https://www.definite.app/) have replaced quite a few Metabase accounts now, and we have a built-in lakehouse using DuckDB + DuckLake, so I feel comfortable calling us a "DuckDB-based Metabase alternative".

When I see the title here, I think "BI with an embedded database", which is what we're building at Definite. A lot of people want dashboards / AI analysis without buying Snowflake, Fivetran, and a BI tool and stitching them all together.


Not open source though?

Hi, dev building Shaper here. Both Shaper and Metabase can be used to build dashboards for business intelligence and embedded analytics, but the use cases are different: Metabase is feature-rich, with lots of self-serve functionality that lets non-technical users easily build their own dashboards and drill down as they please. With Shaper you define everything as code in SQL. It's much more minimal in terms of what you can configure, but if you like the SQL-based approach, treating dashboards as code can be pretty productive.

Sorry, so it isn't an alternative in any way. It's like saying a bicycle is an alternative to an airplane: both have seats...

My mental model is to ignore people who complain about free stuff.


Ohhhh it's free! Let's shove it up the arse!!!!

Yeah yeah, like someone is doing charity here.


True. How free is something, really, when it's full of advertisements, trackers, and popups?


Who remembers Graphite and Carbon? This was 2010 era…


Is it beneficial for log compression, assuming you log JSON but don't know the schema upfront? I'm working on a log compression tool [0] and I'm wondering whether OpenZL fits there.

[0] https://logdy.dev/logdy-pro



I've been developing AI apps for the past year and encountered a recurring issue. Non-technical people often asked me to adjust the prompts, seeking a more professional tone or better alignment with their use case. Each request meant diving into the code, changing hardcoded prompts, and then testing and deploying the updated version. I also wanted to experiment with different AI providers, such as OpenAI, Claude, and Ollama, but switching between them required additional code changes and deployments, making the whole process cumbersome.

The existing solutions I explored were too complex and geared towards enterprise use, which didn't match my lightweight requirements. So I created Hypersigil, a user-friendly UI for prompt management that enables centralized prompt control, facilitates input from non-technical users, allows prompt updates without app redeployment, and supports testing prompts across various providers simultaneously.

GH: https://github.com/hypersigilhq/hypersigil

Docs: hypersigilhq.github.io/hypersigil/introduction/
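To make the core idea concrete, here is a minimal sketch of "prompts live outside the code": the app looks prompts up in a store at runtime, so editing the store changes behavior without a redeploy. All names here (the store, `renderPrompt`, the template syntax) are illustrative, not Hypersigil's actual API.

```typescript
type Provider = "openai" | "anthropic" | "ollama";

interface PromptRecord {
  template: string;
  provider: Provider;
}

// In a real setup this would be the central prompt service that
// non-technical users edit through a UI; here it's just a map.
const promptStore = new Map<string, PromptRecord>([
  [
    "support-reply",
    { template: "Reply professionally to: {{input}}", provider: "openai" },
  ],
]);

// Fill {{placeholders}} in the stored template with runtime values.
function renderPrompt(name: string, vars: Record<string, string>): string {
  const record = promptStore.get(name);
  if (!record) throw new Error(`unknown prompt: ${name}`);
  return record.template.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? "");
}

const prompt = renderPrompt("support-reply", { input: "refund request" });
console.log(prompt); // "Reply professionally to: refund request"
```

The point is that changing the template or the provider is a data edit, not a code change, so it needs no redeploy.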


It's worth taking a look at the prompts in the repo if you're keen to understand how apps like these work. It's interesting to see that I feed a basically similar process/rules to the LLM when building locally. I also have a similar process for the backend and a nice flow for connecting FE and BE with API contracts; it works perfectly.


Nice tool! I'm working on something similar, but focused on repeatability and testing across multiple models/test data points.


Do you have a link? I'd like to see it.

Any specific feedback so far?


After building several full-stack applications, I discovered that Large Language Models (LLMs) face significant challenges when implementing features that span both backend and frontend components, particularly around API interfaces.

The core issues I observed:

- API Contract Drift: LLMs struggle to maintain consistency when defining an API endpoint and then implementing its usage in the frontend

- Context Loss: Without a clear, shared contract, LLMs lack the contextual assistance needed to ensure proper integration between client and server

- Integration Errors: The disconnect between backend definitions and frontend consumption leads to runtime errors that could be prevented

The Solution: Leverage TypeScript's powerful type system to provide real-time feedback and compile-time validation for both LLMs and developers. By creating a shared contract that enforces consistency across the entire stack, we eliminate the guesswork and reduce integration issues. It's a small NPM module whose only dependency is Zod:

https://github.com/PeterOsinski/ts-typed-api

I've already used it in a couple of projects and so far so good. LLMs don't get lost even when implementing changes to APIs with dozens of endpoints. I can share the prompt I'm using that instructs the LLM how to leverage the definitions and find the implementations.
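For readers unfamiliar with the shared-contract pattern, here is a rough sketch of the idea in plain TypeScript. This is not ts-typed-api's actual API (which builds on Zod for runtime validation too); the names `Contract`, `makeHandler`, and `callEndpoint` are made up for illustration. The point is that both sides import the same type, so a contract change breaks compilation on whichever side falls out of sync.

```typescript
// One place where an endpoint's request and response shapes are defined.
interface Contract {
  request: unknown;
  response: unknown;
}

interface GetUser extends Contract {
  request: { id: string };
  response: { id: string; name: string };
}

// Server side: the handler implementation must match the contract exactly,
// or it fails to type-check.
function makeHandler<C extends Contract>(
  fn: (req: C["request"]) => C["response"]
): (req: C["request"]) => C["response"] {
  return fn;
}

const getUser = makeHandler<GetUser>((req) => ({ id: req.id, name: "Ada" }));

// Client side: the caller is typed against the very same contract, so a
// wrong request shape or a misuse of the response is a compile error.
function callEndpoint<C extends Contract>(
  handler: (req: C["request"]) => C["response"],
  req: C["request"]
): C["response"] {
  return handler(req);
}

const user = callEndpoint<GetUser>(getUser, { id: "42" });
console.log(user.name); // "Ada"
```

In a real client/server split the call would go over HTTP rather than a direct function call, and a Zod schema would validate the payload at runtime, but the compile-time feedback loop is what keeps the LLM (and the human) honest.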

Let me know what you think, feedback welcome!


who are you?


He's a "Growth Engineer" from ElevenLabs. I'm not sure what that entails, but then I'm not familiar with that area of tech, so maybe someone else can explain it.


