If there's actually any proprietary rocketry data, maybe. Without knowing what data went into the fine-tune there's no way to tell. This could be an "internal procedures chatbot" or an "onboarding chatbot" where new people can ask where the coolest watercooler in the company is.
In my experience post-training mainly deals with "how" the model displays whatever data ("knowledge") it spits out. Having it learn new data (say, the number of screws in the new supersecretengine_v4_final_FINAL (1).pdf) is often hit and miss.
You'd get much better results by having some sort of RAG / MCP (tools) integration do the actual digging, with the model just synthesising / summarising the results.
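To make the distinction concrete, here's a minimal sketch of the RAG pattern: retrieve the relevant documents first, then hand only those snippets to the model to summarise. Everything here is hypothetical — the document store, the naive keyword scoring, and the prompt shape are stand-ins; a real setup would use embeddings for retrieval and an actual LLM client for the summarising step.

```python
def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    (A real system would use embedding similarity instead.)"""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Assemble the context-plus-question prompt the model would see.
    The model never needs to have *memorised* the screw count --
    it only has to summarise what retrieval hands it."""
    context = "\n---\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal docs:
docs = {
    "engine_spec.md": "The v4 engine uses 128 screws of type M6.",
    "lunch_menu.md": "Tuesday lunch is spaghetti bolognese.",
}
prompt = build_prompt("How many screws does the v4 engine use?", docs)
```

The point is that the fact ("128 screws") travels in the prompt at query time, so it's always current and auditable, instead of being baked (unreliably) into the weights by a fine-tune.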
Or, since we're apparently playing the game of maybes in this thread, maybe the LLM was only trained on the team's grandmothers' spaghetti recipes, so that new hires can learn to make the best bolognese sauce.
I didn't miss anything in the wordplay*, it was obvious. (As are the initials, an extra pun).
I put quote marks around "flamethrower" because that's what it was originally sold as, before the obvious and predictable legal issues with real flamethrowers, and because it was obviously mimicking the prop in Spaceballs.
My point is: neither weed burners nor actual flamethrowers have anything to do with digging tunnels nor any adjacent aspect of civil engineering.
Boring as the noun, not the adjective. Also, Tesla was named that before Musk was involved, so it's not his humor behind naming both. Nikola Tesla is known for a lot more than just Tesla coils.