
Very cool! Would this also support a moderations API for self-hosted LLMs, by any chance?


Hi HN!

I'm preparing to relaunch my AI App development tool, OpenLIT, on Product Hunt. During our first attempt, we made it to #10 on Product of the Day. While that was a solid outcome, there's room for improvement, and we can aim for the top spot.

Many users have recommended that we focus more on marketing, but my expertise is in coding, not marketing. I'd love to hear your thoughts and advice on how to effectively relaunch OpenLIT and reach #1 on Product Hunt. What strategies, tips, or tricks would you suggest for maximizing our impact?

Thanks for sharing your insights!


Love this blog, and I'm really glad you found OpenLIT useful for it!


Last week we showcased our open-source project, OpenLIT (https://github.com/openlit/openlit), here, and thanks to this incredible community, we hit 300 stars in just a couple of days!

One of my mentors, a core lead on OpenTelemetry, suggested we consider adding a Contributor License Agreement (CLA) to our project, similar to what has been done with OpenTelemetry.

I understand the potential legal benefits a CLA offers, such as ensuring contributions can be freely used and distributed, which could be crucial for the project's long-term viability and to avoid legal complications.

However, I’m equally concerned about the potential downsides, especially regarding community contributions. I worry that a CLA might stop new contributors who prefer to avoid legal hurdles or are reluctant to sign documents. Since OpenLIT aims to be truly open-source and community-driven, keeping the contribution process as straightforward as possible is essential to me.

So, I’m turning to you, HN community, for guidance:

- Have you implemented a CLA for your project? What impact did it have on contributions?
- As a contributor, do you find CLAs off-putting? Why or why not?
- What recommendations do you have for a CLA that isn't too restrictive but still provides the necessary legal protections?

I'm also open to tool suggestions for managing CLAs or examples from your own open-source projects that I can learn from.

Thanks in advance for your wisdom and advice!


Yeah, 100%, I couldn't agree more on this!

Though if you'd like to see something more, let me know!

Thanks!


@xyst

https://discord.gg/RbNPvG54

Thanks for the suggestion to use Discord rather than Slack for the community. I've created a Discord server now; feel free to send over questions and suggestions. Thanks for the feedback!


Yup, you are on point! We do see the UI/dashboarding layer finding more usage as we add more tools (Compare/Evaluate) to help the overall LLM developer experience.

Also, I totally get that existing tools like Grafana are great, which is why we built our SDK in a vendor-neutral way using OTel, so that you can send these traces and metrics to your choice of OTLP backend.

Thanks!


Yup, the dashboard is entirely powered by OTel traces, and yeah, you can see full prompts and responses, including cost and request metadata.

The UI is pretty easy to run; it's a single container image.

Let me know if you need any help with instrumentation. If you have a PR, let me know and I'll try to assist with it.


Yup, that's correct. I love seeing more tools come up! We, though, tried to follow the actual OpenTelemetry Semantic Conventions for GenAI, which the others don't.

Additionally, none of these libraries seemed to send a cost attribute at the time (worth checking again), which was what most of our users asked about, so we added that.
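For reference, here's a minimal sketch (in Python, with illustrative values) of the kind of span attributes the OTel GenAI semantic conventions define, plus a non-standard cost attribute like the one described above. The attribute names should be checked against the current semconv spec, and the cost key is a hypothetical custom name, not part of the spec:

```python
# Sketch of span attributes following the OTel GenAI semantic conventions.
# "gen_ai.usage.cost" is a custom (non-spec) addition, named illustratively.
def build_genai_attributes(model, input_tokens, output_tokens, cost_usd):
    return {
        "gen_ai.system": "openai",             # which LLM provider served the call
        "gen_ai.request.model": model,         # model requested by the application
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        "gen_ai.usage.cost": cost_usd,         # custom attribute, not in the spec
    }

attrs = build_genai_attributes("gpt-4o", 1200, 350, 0.0065)
```

A backend that understands the semconv can then aggregate and filter on these keys without caring which instrumentation library emitted them.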


Langtrace core maintainer here. We capture input/output/total tokens for both streaming and non-streaming calls across various LLM providers. For streaming, we calculate usage with the tiktoken library, since OpenAI does not include token counts in streamed outputs. Cost is calculated on the client side, as prices may keep changing; all you need is the cost table for the LLM vendor, and you can derive the cost from the token usage.
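As a rough sketch of that client-side approach (the per-million-token prices below are made up for the example; a real tool would load an up-to-date cost table from the vendor's pricing page):

```python
# Illustrative client-side cost calculation from token usage.
# Prices are hypothetical, USD per 1M tokens.
PRICE_PER_MILLION = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    p = PRICE_PER_MILLION[model]
    return (prompt_tokens / 1_000_000) * p["prompt"] \
         + (completion_tokens / 1_000_000) * p["completion"]

cost = estimate_cost("gpt-4o", 1000, 500)  # 0.0025 + 0.0050 with these example prices
```

Keeping the table client-side means a price change is a data update, not a code release.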


Hey! On the metrics front, we do basic metrics like requests, tokens, and cost for now, as all of the useful information about the LLM call is included directly in the span attributes of the trace.

What a lot of users have told me is that they want to track their usage (cost/tokens) and keep an eye on user interactions like prompts and responses. We are looking to add more accuracy-based metrics too, which are highly requested.

Also, re the UI: the UI is optional. You can use the SDK to send OTel traces and metrics directly to your preferred destination: https://docs.openlit.io/latest/connections/intro


Cost on the span is an interesting attribute to view, but had you considered using OTel meters/metrics for that instead?

I think cardinality in span attributes can be a problem, and meters are better for aggregating and graphing.


We do OTel metrics for the main things needed in dashboarding:

1. Requests (Counter)
2. Tokens (Counter; separate metrics for prompt, completion, and total tokens)
3. Cost (Histogram)

I did attach a Grafana dashboard too (works for Grafana Cloud; I'll get something for OSS Grafana this week): https://docs.openlit.io/latest/connections/grafanacloud

Since OTel doesn't yet support a synchronous gauge, and users wanted a trace of their LLM/RAG application, we opted for traces, which is quite standard now in LLM observability tools.
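To make the three instruments concrete, here's a dependency-free sketch of what they record; plain Python stands in for the OTel counter/histogram instruments, and the class and field names are illustrative, not the SDK's actual API:

```python
# Dependency-free sketch of the three dashboard metrics described above:
# a request counter, token counters, and a per-request cost histogram.
class LLMMetrics:
    def __init__(self):
        self.requests = 0                                         # Counter
        self.tokens = {"prompt": 0, "completion": 0, "total": 0}  # Counters
        self.cost_samples = []                                    # Histogram samples

    def record(self, prompt_tokens, completion_tokens, cost_usd):
        self.requests += 1
        self.tokens["prompt"] += prompt_tokens
        self.tokens["completion"] += completion_tokens
        self.tokens["total"] += prompt_tokens + completion_tokens
        self.cost_samples.append(cost_usd)

m = LLMMetrics()
m.record(1000, 500, 0.0075)
m.record(200, 50, 0.0011)
```

Counters only ever go up, which is what makes rate-of-change dashboards cheap; the cost histogram keeps individual samples so percentiles survive aggregation.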

Let me know if you had something more in mind. I love getting feedback from amazing folks like you!


Hey! Kudos to them, firstly! It's the same problem statement but with different solutions.

They seem to use tracing, and tracing generally adds a lot of overhead to the application. Our method tries to avoid that; the added latency is ~0.01s in our case. Tracing seems really useful when using a RAG-based approach, so we are working on adding it as an option too, but for a simple fine-tuned approach, we believe Doku should do a good job.

We prioritise self-hosting a lot more. Setting up Doku is very simple (two Doku components plus ClickHouse), both for Docker and, via our Helm chart, for Kubernetes, which I generally found a bit tough in the other tools. Self-hosting, IMO, also means data regulations are not a big headache.

We also allow you to add multiple ClickHouse instances (where we store the LLM monitoring data) to Doku, so that you can easily separate staging and production data. If you are already using ClickHouse for other purposes, you can easily connect that too!

And as you said, our connections: IMO everyone is already using some sort of observability tool, and we don't want to force users to learn the Doku UI in those cases. Just connect and use your existing observability platform (so if you don't want to use the Doku UI, that's one less component you need to run).

The rest, I think, is the difference in which LLMs we can monitor, but that's not a big deal, as most users I've talked to are still at a stage where they are experimenting with OpenAI.

Would love for you to try it out and hear what you feel we should add!

Thanks!

