They probably said something like "not possible at current hardware speeds," which then got compressed into "impossible," since the recollection is second-hand.
> The cost of turning written business logic into code has dropped to zero
It hasn't. Large enterprises are currently footing the bill, essentially subsidizing AI for now.
I constantly see comparisons between the $200/month Claude Code Max subscription and the six-figure (~$100k) salary of an engineer.
The comparison is, first of all, not apples-to-apples. Let's annualize the subscription first: 12 x $200 = $2,400/year. That is still a more-than-40x difference compared to the human engineer's salary.
Although, with the human engineer you are also paying for experience and responsibility, and you transfer some liability (especially once regulations come into play).
Moreover, work created by a human engineer, unless it was stolen IP or plagiarized, is owned by you as the employer/company. Meanwhile, whatever the AI generated is, by its nature, somewhat plagiarized in the first place, so ownership of the IP is questionable.
This is like the layman who complains when an electrician comes to the house, identifies a breaker problem, replaces the breaker (a $5 part), and charges $100 for a 10-minute job. That completely underestimates skill, experience, and safety. The wrong breaker can cause constant trips, leading to malfunctions across the household's electronics, or worse, a fire. You think you paid $100 for 10 minutes; in fact you paid for years of education, exams, certification, and experience, all for your future safety.
The same principle applies to AI. It seems to accumulate more and more experience, yet it fails at the first prompt injection. It seems to get better at benchmarks, but largely because the benchmarks are now part of its training data. These are hidden costs that 99% of people do not talk about, and hidden costs are liabilities.
You may save an engineer's yearly salary today, at the cost of losing ten times more to civil lawsuits tomorrow. (Depending on your field/business, of course.)
If your business was not critical enough to attract a civil lawsuit in the first place, then you probably didn't need to hire an engineer yourself anyway. You could have hired an agency/contractor to do it much more cheaply, while still sharing liability...
With the current scale and speed, it is not yet viable to make N+1 calls to other models with specialized prompts (or even to call multiple fine-tuned models).
However, even Google (and others) admit that some form of prompt injection is always possible, and hence treat it as out of scope for bug-bounty programs.
There are only two ways to fix this:
1. Ask multiple models, each with its own system prompt, to validate the input, the processing, and the output before showing the result to the user. This plausibly makes these kinds of indirect attacks 2x-3x, or Nx, more difficult (i.e., specialized checks and post-processing of the original model's output).
Note that this scales linearly, and it looks like a *nix shell (bash) pipeline: `input-sanitizer-llm | translation-llm | output-sanitizer-llm | security-guard-llm`
2. I hesitate to say "tiny LLMs", since the term itself is silly, but essentially: find a similar-but-different architecture that reuses the transformer and language-relationship machinery to build one-to-one models specialized for specific jobs.
Currently we take "general knowledge" LLMs and try to "specialize" their output. This is inefficient overall: the model encodes a lot of unnecessary material, which causes either hallucinations or exactly these kinds of attacks. Meanwhile, an LLM that knows nothing beyond the task it was trained for would be both better and safer (without requiring the linear scaling of point #1).
I also believe the tokenizer will require the most work to make point #2 possible. If point #2 becomes even slightly real, capacity constraints will drop significantly, yielding much higher efficiency for these agentic tasks.
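As a toy illustration of the pipeline shape in point #1 (the stages below are trivial text filters standing in for real model calls, and all the names are made up), each stage is an independent process that reads stdin and writes stdout, so adding another check is just another pipe segment:

```shell
#!/bin/sh
# Stand-ins for the pipeline stages; each reads stdin, writes stdout.
input_sanitizer()  { tr -d '<>'; }               # strip markup-ish chars
translation()      { sed 's/hello/bonjour/'; }   # the "real" task
output_sanitizer() { grep -v 'DROP TABLE'; }     # drop suspicious output

echo 'hello <world>' | input_sanitizer | translation | output_sanitizer
# → bonjour world
```

Swapping any stand-in for a real model CLI does not change the plumbing, which is exactly why the approach scales linearly: each extra check costs one more full model call.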
I originally saw this on Twitter/X, and the wording here is very confusing. The tl;dr is simply that they are going into KTLO (keep the lights on) mode.
Interesting approach. I like the Docker/Kubernetes style of secret mounts, where you can also limit user/group permissions.
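For reference, a minimal illustrative Kubernetes pod-spec fragment that mounts a secret read-only with restricted file permissions (the secret name, image, and mount path are all placeholders):

```yaml
# Illustrative fragment: mount secret "api-key" read-only,
# files created with mode 0400 (owner read-only).
spec:
  securityContext:
    fsGroup: 2000             # group ownership of the mounted files
  volumes:
    - name: api-key
      secret:
        secretName: api-key   # placeholder secret name
        defaultMode: 0400     # octal in YAML; use decimal 256 in JSON
  containers:
    - name: app
      image: example/app      # placeholder image
      volumeMounts:
        - name: api-key
          mountPath: /var/run/secrets/api-key
          readOnly: true
```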
Meanwhile, I was an avid user of the `echo secret | ssh consume` approach, specifically for Kerberos authentication.
In my workflow, I saved the Kerberos password to the macOS keychain, where `kinit --use-keychain` authenticated me seamlessly. However, this wasn't the case for remote machines.
Therefore, I implemented a quick script that essentially pipes the keychain password to `kinit` on the remote machine.
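A minimal sketch of what such a script can look like, assuming the password is stored in the macOS keychain under the (hypothetical) service name "kerberos", and that the remote `kinit` reads the password from stdin when stdin is not a tty (MIT Kerberos does):

```shell
#!/bin/sh
# Sketch only: keychain item name, host, and principal are placeholders.
krb_remote_login() {
  # `security ... -w` prints only the password; it travels over ssh's
  # stdin, so it never appears in the remote argv or shell history.
  security find-generic-password -s kerberos -w |
    ssh "$1" kinit "$2"
}
# usage: krb_remote_login dev-box user@EXAMPLE.COM
```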
I have been booting from external drives on different hardware since 2007. I was even able to trick Windows XP into booting off of a 12GB SanDisk thumb drive (although it was horribly slow!).
Coming back to the author's story: as others have pointed out, I do not think it is related to the DFU port itself. I think it depends on the BIOS/UEFI firmware that addresses those ports, and then on the bootloader, which is responsible for finding the system (root) volume.
Nowadays this happens via volume UUIDs, so in theory the port should not matter. But even GRUB adds a hint, since discovery by UUID alone may fail.
Since we cannot see what is actually happening or read the logs, I would simply say: "Always use the same port for booting and installation." That usually simplifies the process.
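For example, grub-mkconfig typically emits a search line like the following (the device hints and UUID here are placeholders), where the `--hint-*` options tell GRUB which device to try first before falling back to scanning every device for the filesystem UUID:

```
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 1234-ABCD
```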
I am quite certain "the undocumented DFU port" was the port the author initially used to install macOS on the external drive, maybe on another Mac/machine. When you change machines, the addressing/enumeration of ports may differ, because of how the boot process works. So if you used port 0x3 for the first install, on the new machine you need to find that same port 0x3. That is the "undocumented DFU port" the author mentions.
> P.S.: The DFU port is also for installing firmware (BIOS/UEFI) to the device before any boot occurs. For example, you connect one end of a USB cable to a working computer (the "master") and the other end to the DFU port of the target (the "slave") while that machine is off. A specific sequence of power-key combinations puts the target machine into DFU mode, where you can overwrite its firmware (UEFI/BIOS, etc.) from the working machine. That is the purpose of DFU; or at least, it lets you access the internal hard drive/SSD without actually booting the "slave" machine.
> I would simply say: "Always use the same port for booting and installation." Which usually simplifies the process.
That would be even worse than not being able to update macOS on the DFU port. I'm supposed to remember which of 3 ports I originally used, forever??
> Maybe on another Mac/machine. When they change the machine
No, I did not use another machine.
The entire purpose of this install on an external disk was to take screenshots from my MacBook Pro. My other machine, a Mac mini, has a non-retina display, which is not good for that purpose, not to mention that the Mac mini already has multiple boot volumes, so I wouldn't need an external disk with that machine.
> That would be even worse than not being able to update macOS on the DFU port. I'm supposed to remember which of 3 ports I originally used, forever??
I simply use the port closest to the MagSafe. I haven't experimented much with the M1 Macs, though, because my M1 Pro MBP was company-issued and had lots of restrictions on it.
> Under Prime Minister Modi's leadership, India has successfully built digital infrastructure that connects businesses to customers, small towns to global markets, and traditional enterprises to modern technology.