Hacker News new | past | comments | ask | show | jobs | submit | dmarwicke's comments | login

2FA publishing still doesn't work for me. just use legacy tokens at this point, gave up trying to figure out what's wrong


npm's package.json and package-lock.json get out of sync constantly on my team. at least go only has one file to mess up


how is this possible? isn't everyone using the same node version?

and checking in lockfile changes


this is just optimizing for token windows. flat code = less context. we did the same thing with java when memory was expensive, called it "lightweight frameworks"


does it handle skewed distributions? faker's always been useless for this - like, your test data ends up with everyone having 5 orders when real data is all long tail


Not yet, but you're the second person in this thread to call out distribution control as a gap. It's on our radar now. Thanks for the feedback.
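To make the ask concrete, here's a rough sketch of the kind of long-tail shape the parent describes, using only Python's stdlib (the alpha value and function name are just illustrative, not anything we've shipped):

```python
import random

def long_tail_order_counts(n_customers, alpha=1.5, seed=42):
    """Per-customer order counts with a heavy tail: most customers
    end up with 1-2 orders, a handful get hundreds."""
    rng = random.Random(seed)
    # paretovariate(alpha) returns values >= 1; floor to integer counts
    return [int(rng.paretovariate(alpha)) for _ in range(n_customers)]

counts = long_tail_order_counts(10_000)
# median stays tiny while the max is orders of magnitude larger
```

That's the opposite of faker-style "everyone has 5 orders" data, which is what makes tail-sensitive bugs show up in tests.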


how does this decide what's safe to delete? i've nuked docker caches before and broken builds in annoying ways


Fair point, yeet doesn't really decide what's safe. It just scans a hardcoded list of known cache locations and lets you pick what to delete. The assumption is that these are "caches" that can be regenerated, but you're right that some are more painful than others. For Docker specifically, we include paths like /var/lib/docker, which is pretty aggressive: that's images, build cache, and volumes. It probably shouldn't be in there since `docker system prune` handles that way better. Good feedback, will tighten up what we scan for.
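The scan itself is nothing fancy. A sketch of the shape of it (the path list and names here are illustrative, not yeet's actual list):

```python
import os
from pathlib import Path

# Illustrative candidates; the real tool's list differs per platform
CACHE_PATHS = [
    "~/.npm/_cacache",
    "~/.cache/pip",
    "~/Library/Caches",
]

def dir_size(path: Path) -> int:
    """Total size in bytes of regular files under path, ignoring
    anything we can't stat (permissions, races)."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += (Path(root) / name).lstat().st_size
            except OSError:
                pass
    return total

def scan():
    """Report which known cache dirs exist and how big they are;
    actual deletion is a separate, user-confirmed step."""
    for raw in CACHE_PATHS:
        p = Path(raw).expanduser()
        if p.is_dir():
            print(f"{p}  {dir_size(p) / 1e6:.1f} MB")
```

The "is it safe" question never gets answered by the tool, which is exactly the gap you're pointing at.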


curious what the token costs look like on a real codebase. opus ain't cheap and C++ headers get big fast


we had to restrict ours to views only because it kept trying to run updates. still breaks sometimes when it hallucinates column names but at least it can't do anything destructive
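The application-side guard is roughly this shape (regexes simplified for illustration; the real safety net is a DB role with SELECT-only grants, since string checks alone can be bypassed):

```python
import re

# Statement must start as a read: SELECT, a CTE, or EXPLAIN
READONLY_RE = re.compile(r"^\s*(select|with|explain)\b", re.IGNORECASE)
# ...and must not contain write keywords anywhere, which also catches
# Postgres data-modifying CTEs like WITH d AS (DELETE ...) SELECT ...
FORBIDDEN_RE = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant)\b", re.IGNORECASE
)

def is_readonly(sql: str) -> bool:
    """Cheap pre-flight check before handing LLM-generated SQL to the DB."""
    return bool(READONLY_RE.match(sql)) and not FORBIDDEN_RE.search(sql)
```

It still can't stop hallucinated column names, those just fail at execution time, but at least failure is an error instead of a write.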


couldn't find anything about invalidation in the docs. how does that work? usually where these abstractions fall apart for me


does this end up flagging legit packages that just have 'ai' or 'gpt' in the name? feels like half of pypi would trigger at this point


Great question! No, Phantom Guard won't flag legit packages like openai, langchain-openai, or gpt-engineer.

The primary signal is whether the package exists on the registry. We query PyPI/npm directly:
- If a package exists → it gets a low/safe risk score
- If a package doesn't exist → that's the main red flag for slopsquatting

Pattern matching (like AI-related terms) is just one of many weighted signals, and it's far outweighed by existence. In fact, popular packages get a negative weight that actively reduces their risk score.

The attack we're detecting is when an LLM hallucinates a package name like flask-gpt-utils that sounds plausible but doesn't exist. A real attacker could then register that name and wait for developers to pip install it.

We test against the top 1000 PyPI packages and target a <5% false positive rate. If you're importing openai or transformers, you're fine.
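To make the weighting concrete, here's a toy version of the scoring. The weights are made up for illustration and are not Phantom Guard's actual numbers; the point is only that existence dominates and popularity pulls the score down:

```python
def risk_score(exists: bool, is_popular: bool, has_ai_term: bool) -> float:
    """Toy weighted scoring: registry existence is the dominant signal,
    popularity is a negative weight, name patterns only nudge the score."""
    score = 0.0
    if not exists:
        score += 0.8   # main slopsquatting red flag
    if has_ai_term:
        score += 0.1   # weak heuristic, easily outweighed
    if is_popular:
        score -= 0.3   # well-known packages actively reduce risk
    return max(0.0, min(1.0, score))

# openai: exists, popular, has "ai" in the name -> score clamps to 0.0
# flask-gpt-utils: doesn't exist, not popular   -> score near the top
```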


curious about the startup latency in practice. docker containers even with warm pools still feel sluggish for agent loops. e2b does firecracker and it's noticeably snappier


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
