I do like Antigravity a lot and I use it on a daily basis. Are you referring to that or something else? I do find their marketing weird though, and it seems like their internal teams are not yet fully aligned. (Jules, Antigravity, Gemini CLI, ...)
I've come across only a few of those. I'm not familiar with Antigravity. That's what I mean by poor marketing. I've tried out AI Studio, which is alright but not quite Codex-level good as far as I can see.
Honestly, this smells a bit like the situation they had a few years ago, with a lot of different teams at Google inventing new chat and video call tools, all of which then flopped and got discontinued. Google exporting their own internal chaos. How many coding tools can you have in one company?!
OpenAI and Anthropic seem more focused currently. I mostly focus on OpenAI's Codex because I don't want to maintain two subscriptions. Switching tools is a bit disruptive and not something I want to do every few days/weeks.
Wouldn’t it make a lot of sense to have a single subscription that gives access to all coding agents, with the ability to switch between them based on preference or task? Constantly juggling tools and subscriptions is pretty disruptive.
20 euro per month is pretty hard to beat for ChatGPT Plus (which includes Codex). Right now, a lot of the stuff in this space is highly experimental and constantly changing. I've been using Codex web since before the summer, Codex CLI for the last few months, and the Codex desktop app for the last few days. Before that, I was copy-pasting blobs of code from ChatGPT and only looking at single code files.
The whole agentic coding revolution didn't really start moving until Claude Code rolled out, almost a year ago now. Anthropic deserves kudos for that. Initially it was limited by small context sizes and wasn't that effective on large code bases. So, I stuck with ChatGPT and OpenAI; mainly because Claude for Desktop was a bit underwhelming and I felt OpenAI had their shit together a bit more in terms of UX, which I think matters at least as much as model quality for being able to use AI effectively. Arguably, since about the summer, Codex and Claude Code have been well matched in terms of features/capabilities. Some prefer one or the other, or use both. Codex had a reputation of being maybe slightly better with larger code bases; I don't think that's valid anymore as of the last few model releases.
Since about the GPT-5 generation of models, released around last summer, I'm able to work on whole git repositories with Codex. Our backend is about five years old and about 85K lines of code. Not huge, but big enough that there was no way in hell LLMs were able to make sense of it before that. I only did the first big pull requests with Codex on this backend in the last two months. This stuff is still very new.
I'm sure that with a lot of tool juggling and experimentation I could have gotten there a few weeks/months earlier, but not much more. And I don't actually have time to be constantly fiddling with tools and trying out a lot of stuff. I don't need to be the first to try everything out.
I'm trying to do a deep dive on RAG/LLM-based apps since I haven't had the chance yet.
I'm building a chat assistant that will talk with you to find out more about your current work status and what you're looking for. Then it will suggest the best-suited roles for you from the HN Who's Hiring threads.
It's still very early; I've managed to index the latest thread, and there's a CLI chat tool to discuss with the LLM.
It's been great because I've learned a ton already about LLM deployments, RAG evaluation, prompt engineering, LLM internals, etc.
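For anyone curious, the retrieval side is roughly this. A minimal sketch in Python, assuming the posts were embedded up front with OpenAI's embeddings API; `top_matches` and the `posts` layout are my own naming for illustration, not from the actual project:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # needs OPENAI_API_KEY in the environment

    def embed(text: str) -> np.ndarray:
        # One embedding vector per input string.
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(resp.data[0].embedding)

    def top_matches(question: str, posts: list[dict], k: int = 5) -> list[dict]:
        # posts: [{"text": ..., "embedding": np.ndarray}, ...], embedded with the same model.
        q = embed(question)
        # OpenAI embeddings are unit-length, so a dot product is cosine similarity.
        ranked = sorted(posts, key=lambda p: float(np.dot(q, p["embedding"])), reverse=True)
        return ranked[:k]

The top-k posts then get pasted into the prompt as context for the chat model.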
I'm thinking that GDP is best used over longer periods, not for a single government's short term of 4-10 years. For example, a government could be recklessly spending money in a way that props up GDP, but that doesn't show the whole picture.
Regardless, GDP is always useful, so it's definitely one of the right metrics for economic performance in general.
That's very interesting; I've never been exposed to such a development environment. Is there maybe a GH repository or something where I can see the above in action? Thank you
I can see you have a good suggestion, but I haven’t really grasped what the problem is. Is it the weekly release cadence? You mentioned branch hell, but tbh I’m not familiar with it; what does it look like? What are the biggest pain points?
I’m only trying to make sure I understand the problem, otherwise I can’t evaluate the solution. Plus, it would be very hard to sell it to me if I were the one making the decision.
I'd say every third deployment or so, especially when new features are added, there is a lot of stress around cutting a release and deploying. People on the team become extra critical of those who break the release build, don't have their features completed, or haven't submitted enough tickets. As a manager I appreciate the peer pressure, but I doubt the stress is good for the morale of the members on the receiving end of it, and I think the same feedback without the urgency behind it would be pedagogically more valuable.
Second, when a release is made and a bug is found, branches are made from main (not develop) to fix the bugs, which must then be merged into develop (referred to as a merge-back). That increases the surface area for errors, as well as the time spent untangling the emerging web of branches. Now, in the hands of a capable engineer this isn't a massive lift, but there are processes to automate all of this, and that capability would be better spent daydreaming about a side project as far as I'm concerned.
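To give a concrete picture, the merge-back itself is mechanical enough to script. A minimal sketch in Python shelling out to git, using the branch names from above; `merge_back` and the exact flags are just one way to do it, not our actual tooling:

    import subprocess

    def git(*args: str) -> None:
        # Run a git command, raising if it fails.
        subprocess.run(["git", *args], check=True)

    def merge_back(fix_branch: str) -> None:
        # The fix was branched from main and already merged there;
        # propagate it into develop so the branches don't drift apart.
        git("checkout", "develop")
        git("pull", "--ff-only", "origin", "develop")
        git("merge", "--no-ff", "main", "-m", f"Merge-back: {fix_branch}")
        git("push", "origin", "develop")

Run something like this in CI after every merge to main and nobody has to untangle branches by hand.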
It’s all about “listening” to the users, with quotes because it doesn’t necessarily mean face-to-face communication. Although f2f is best for relationship building, it can’t really scale.
Data is everywhere these days and you should be gathering as much as you can (emphasis on "can", as it can be expensive). Take Google Analytics added to a blog, for example: you can see your most-read articles and write more of those, see where people exit and take action, and set up interaction tracking to see which buttons work best. You can do A/B tests and compare results between two comment forms.
Same logic applies to a product as well.
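For the A/B comparison specifically, you don't need much: a two-proportion z-test tells you whether one variant actually beats the other. A sketch in Python; the numbers are made up:

    from math import sqrt

    def ab_z_score(conv_a: int, visitors_a: int, conv_b: int, visitors_b: int) -> float:
        # Two-proportion z-test on conversion rates.
        p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
        p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
        return (p_b - p_a) / se

    # Form A: 48/1000 submitted, form B: 70/1000 submitted.
    print(ab_z_score(48, 1000, 70, 1000))  # ~2.1; above 1.96 is significant at ~95%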
But then it’s not just analytics; valuable information can be extracted from everywhere. Look at your DB and see what people have been storing (being cautious about privacy laws). Gather NPS scores and see what feedback you’re getting from there.
Big companies usually have departments focused on data gathering and analysis, but even for smaller products, I would say the best thing you can do is be data-driven. Try to base your decisions on data that makes sense.
Thank you for your insightful response and good wishes. I completely agree that "listening" to user behavior is essential for success. We need to collect and make sense of all the data we can; being data-driven can lead our product to greater success. All the points you've mentioned are pretty valuable. Easy to say, but not so easy to do.
Same experience for me! Never worked as a barista but it was actually a super fun and educational journey to learn all things espresso and coffee.
I bought a Mignon Zero grinder and a Delonghi Dedica with a non-pressurized portafilter for about $500 for both. Both are beginner-friendly and meant for home use. I've learned all about dialing in an espresso: how much coffee to put in, the duration of the extraction, how much espresso to get out, and what role the grind plays.
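If it helps anyone starting out, a common starting recipe is roughly 18 g of ground coffee in and ~36 g of espresso out in 25-30 seconds (a 1:2 ratio), adjusting the grind finer or coarser until you land in that window.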
I've calculated that the investment will pay for itself in a few months, ~5-6.
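(That's assuming something like one café espresso a day: if each drink saves you roughly $3 versus buying out, that's about $90 a month, and $500 / $90 ≈ 5-6 months.)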