His latest editions are a bit alarming...The telemetry system explicitly captures:
"Claude session JSONL files (when accessible)"
Those session files contain complete conversation histories - everything users ask Claude, everything Claude responds, including:
• Source code
• API keys and secrets discussed
• Business logic and proprietary algorithms
• Security vulnerabilities being fixed
• Personal and confidential information
• Credentials mentioned in chat
If OpenTelemetry is configured to export to an attacker-controlled endpoint, the author has been collecting:
Data
Scale
All conversations
Every user of claude-flow
All code generated
Every project using it
All commands run
Complete terminal history
All files edited
Full codebase access -- maybe he hasn't, but it is there...not just Claude Code...
Target Config Location Status
Claude Code ~/.claude/settings.json Confirmed compromised
Claude Desktop ~/.claude/settings.json Confirmed compromised
Roo Code ~/.roo/mcp.json Evidence of targeting
Cursor ~/.cursor/mcp.json Documentation for injection
Windsurf Unknown Mentioned as target
Any MCP client Various Universal MCP server
It is possible conversations are being harvested from every major AI coding assistant
The difference is that this is tightly integrated into the harness. There's a "delegation mode" (akin to plan mode) that appears to clear out the context for the team lead. The harness appears to be adding system-reminder breadcrumbs into the top of the context to keep the main team lead from drifting, which is much harder to achieve without modifying the harness.
It's insane to me that people choose to build anything in the perimeter of Claude Code (et al). The combination of the fairly primitive current state of them and the pace at which they're advancing means there is a lot of very obvious ideas/low-hanging fruit that will soon be executed 100x better by the people who own the core technology.
yeah I tend to agree. They're must be reaching the point where they can automate the analysis of claude code prompts to extract techniques and build them directly into the harness. Going up against that is brave!
The main problem with restaurants is the shitty vegetable oils they use. It's horrible for you. Unless they're a REALLY high end restaurant, they're using junk industrial seed oils for all of their cooking. That stuff is what causes obesity and heart disease. At home you can cook with butter, olive oil, etc. But I guarantee, most restaurants do not cook with high quality fats.
Agree 100%, the #1 reason I should cook at home more where I always use butter, coconut oil etc. Either need to do big cook ups of extra tasty food and pack lots of meals, or find more real simple meals to whip up quickly.
It's probably the fact that shower heads have so much more surface area for stuff to grow on and live in. One minute it's wet, and then it's drying for the rest of the day. A water faucet is less hospitable to growing gunk.
Hmm; I could see it being the exact opposite. The shower head, by drying out after a single daily use, could be less hospitable to organisms that need moisture. Meanwhile a faucet, used several times throughout the day, has areas which never dry completely.
If you feel bad for the sherpas, don't go climb Mt Everest. Or, pay them what you think they're worth. Tip them big time if you feel they aren't getting paid enough.
The first DockerCon is off to a great start in San Francisco today. So proud to help kick it off as the first keynote this morning. We look forward to a great relationship with the Docker team.
This is one of the reasons why Rackspace simplified pricing of Cloud Files from the start. No fees for PUT, POST, LIST, HEAD, GET, DELETE...no extra fees for Akamai CDN requests. Very simple with no hidden fees that surprise you at the end of the month.
Those fees for "operations" are there for a good reason. Otherwise, us smart techies would hack it.
I heard a talk by someone at a mega tech company that has their own internal cloud for their teams, and they "charge" each team based on usage. One team stored lots of file with 12,000 character filenames with zero contents. Since the company only "charged" for file size, that team had a tiny charge!
The problem was people implemented a file system on top of s3. From what I understand, Amazon added the charges, which are pretty nominal, to prevent people from hammering s3 as a block storage system.
https://github.com/ruvnet/claude-flow