computomatic's comments | Hacker News

Have you tried nothing at all? Had great success with this on a 150+ dev team. Much preferred it to Jira. Admittedly it does require a different approach to work than a Jira-centric team will be familiar with.


I 100% sincerely agree that "nothing at all" is really a great option compared to Jira. I actually hired a whole person onto my project to make them responsible for ticket tracking, telling them they could use whatever they wanted so long as I never had to look at Jira again. They used Jira for a while because it's what other people were pressuring them to use, but ultimately they started using MS Planner. MS Planner! I mean, Planner is garbage too, but at least it's not Jira.


Where do you put the story points????


> I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked and it was a good summary.

> Would that be OK or would that count as an AI written comment?

The rule seems written to answer this directly.

Absolutely nobody cares what Perplexity has to say about the case - summary or otherwise. If you mention what the case is, I can ask Claude myself if I'm interested.

Better yet, post a link to an authoritative source on the case (helpful but not required).

At minimum, verify your info via another source. The community deserves that much at least.

An AI-generated summary adds nothing positive and actually detracts from the conversation.


I did post a link to the Supreme Court's decision in the Cornell Law School Legal Information Institute's archive of Supreme Court decisions.

I looked at the decision itself enough to confirm that it was the case I remembered and that my recollection of the facts and the decision was correct.

I just didn't include a summary because I didn't find a good one I could link to. Normally I'd write a brief one myself but I found that hard to do when Perplexity's summary was sitting right there in the next window and it was embarrassingly better than what I would have written.


Not the parent but I infer “fresh” as meaning a new approach to an old problem (with the benefits of experience baked in). A synonym of “modern” without the baggage.


Fair. I changed the tagline on the website to "A modern alternative to Protocol Buffers". Thanks for the feedback.


100%, thanks


Sorry, we’re only considering applicants with a proven track record of failing as CEO


Can't risk hiring an amateur who might accidentally succeed.


I've heard it's called falling, er, failing upwards.


16 random bytes is not a valid UUIDv4. I don’t think it needs to be in the standard library, but implementing the spec yourself is also not the right choice for 99% of cases.


Well, that depends on your luck; it could be a valid one about 1/16th of the time.


1/64, actually: an RFC-compliant (variant 1) UUIDv4 fixes the four-bit version nibble (a 1/16 chance for random bytes) and two bits of the variant field (a 1/4 chance), so random bytes satisfy both only 1/16 × 1/4 = 1/64 of the time.

The fact that we're discussing this at all is a reasonable argument for using a library function.
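
A quick sanity check in Python (standard library only; byte offsets per RFC 4122 - treat this as an illustrative sketch) shows exactly which bits are pinned:

```python
import uuid

u = uuid.uuid4()
b = u.bytes

# Byte 6: the high nibble is the version and must be 4
# (random bytes hit this 1 time in 16).
assert b[6] >> 4 == 4

# Byte 8: the top two bits are the variant and must be 0b10
# (random bytes hit this 1 time in 4).
assert b[8] >> 6 == 0b10

# Combined odds for purely random bytes: 1/16 * 1/4 = 1/64.
```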


While it might be invalid, will most libraries choke if you give them a pseudo UUIDv4?


What do you mean? Are you talking about validation of UUIDs?


If you generate random bytes, which are unlikely to conform to the UUIDv4 spec, my guess is that most libraries will silently accept the id. That is, generating random bytes will probably just work.
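
As one data point (Python's standard library, which I'm using here purely as an example - other libraries may behave differently), the constructor doesn't validate the version or variant at all; it happily wraps any 16 bytes:

```python
import uuid

# 16 bytes that are definitely not a valid v4 UUID.
u = uuid.UUID(bytes=b'\x00' * 16)

print(u)          # 00000000-0000-0000-0000-000000000000
print(u.version)  # None - version bits only mean anything for the RFC 4122 variant
```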


But what libraries are you talking about? What is their purpose?


Nice, thanks and I agree.


I didn't say just 16 random bytes, but you're almost there: you generate 16 random bytes and perform a few bitwise operations to set the version and variant bits correctly.
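
Sketched in Python (offsets per RFC 4122; this is illustrative, not a substitute for your platform's UUID library):

```python
import os
import uuid

raw = bytearray(os.urandom(16))
raw[6] = (raw[6] & 0x0F) | 0x40  # set the version nibble to 4
raw[8] = (raw[8] & 0x3F) | 0x80  # set the variant bits to 0b10

u = uuid.UUID(bytes=bytes(raw))
assert u.version == 4
```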

Not that it matters. I don't even think there's a single piece of software in the world that would actually care about these bits rather than treating the whole byte array as an opaque thing.


Let's call it a valid UUIDv0 - all bits randomized including the version bits :)


What if I generate 16 random bytes and use that as id?


No problem, just don't call it UUID


Seems to me the most obvious answer is to return to sundials, no?


Only works during the day? Which, come to think of it, I'm not entirely clear on how humans kept time at night long ago. I'm assuming they learned roughly where some constellations were?


I don't have a source to cite, but I'm fairly certain the Canadian government is adopting (and presumably encouraging provinces to adopt) a general policy of explicitly not allowing US preferences to dictate our domestic policy moving forward. Of course, that is indeed in response to recent actions from the US. And in that light, this time change was an obvious early move, as the only thing preventing it was the trigger based on the US states.


It's going to be hard for Toronto, home of our banking and stock exchange, to differ materially in operating hours from NYC.


Cable at least made sense on paper (if not obvious to the consumer). The channels were independent companies, they pay for the rights to content and get paid by ads. But they had the problem of how to actually get their feed into your home (over the air broadcast was the only D2C option).

The cable provider was just a delivery mechanism. So you pay them to deliver the feeds. But they didn’t get any revenue from the content providers (or their ads).

In other words, two different companies, two different services (content vs delivery), and two different revenue models.


If I say “you are our domain expert for X, plan this task out in great detail” to a human engineer when delegating a task, 9 times out of 10 they will do a more thorough job. It’s not that this is voodoo that unlocks some secret part of their brain. It simply establishes my expectations and they act accordingly.

To the extent that LLMs mimic human behaviour, it shouldn’t be a surprise that setting clear expectations works there too.


From reading tea leaves, I’m fairly convinced Microsoft sees streaming games as the future of Xbox.

There’s a clear line from AI to that when you consider how many slightly outdated but not useless GPUs will be in Microsoft data centres within the next couple of years.


Since the A100, the x00-series GPUs don't have the ability to run graphics drivers. The hardware is optimized for low-precision matrix math (FP16/TF32, and FP8 from the H100 on) and not much else. That's how they've gotten 'faster' for AI workloads: by dumping graphics.


Microsoft can really get it wrong, but getting streaming wrong after Stadia and Luna would be very, very impressive.

