How does Apple improve its Bionic processor / Face ID if the embeddings never leave the Secure Enclave? Do they simply have to test this themselves at their offices?
If the in-game hard-coded AI just follows the ball, how do you ever beat it? I’d like to see the logic behind that 1970s opponent paddle AI, especially as it ramps up difficulty.
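For what it's worth, the usual description of that era's paddle AI is exactly this: the paddle tracks the ball's vertical position, but its per-frame movement speed is capped, so a fast or sharply angled ball can outrun it. Raising the speed cap is one simple way to ramp up difficulty. A minimal sketch (all names and numbers are my own assumptions, not from any actual 1970s source):

```python
def paddle_move(paddle_y: float, ball_y: float, max_speed: float) -> float:
    """Return the paddle's new y position after one frame of ball-following."""
    delta = ball_y - paddle_y
    # Clamp the step to the paddle's maximum speed in either direction,
    # so the paddle can fall behind a ball that moves faster than it can.
    step = max(-max_speed, min(max_speed, delta))
    return paddle_y + step

# A slow paddle (max_speed=2) closes only part of a 10-unit gap per frame...
print(paddle_move(50.0, 60.0, max_speed=2.0))  # 52.0
# ...while a "harder" paddle (max_speed=8) nearly reaches the ball.
print(paddle_move(50.0, 60.0, max_speed=8.0))  # 58.0
```

So you beat it by making the ball travel faster vertically than the paddle's cap allows, typically by hitting with the edge of your own paddle to steepen the angle.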
Looks like this task focused on binary sentiment analysis (positive or negative movie reviews) - have you tried this on something with a broader potential output space? This seems relevant for what you’re calling “neural tags” on your client’s customer conversations, which seems more open-ended than simply “positive” or “negative”.
Yes, great insight! The choice to focus on sentiment here was mainly to align with fast.ai's original research, hopefully maximizing the generalizability and accessibility of the results.
Internally we have made use of these results to improve a broad set of language tasks; hopefully we will be able to publish on those in the coming months as well.
I’d like to see this AI hooked up to an actual Atari machine somehow. Has anyone tried something like that? Could the model process the frames quickly enough to move a robotic arm up or down on a joystick?
Curious about differences in brain waves for desire versus intent. I might desire to say something awful to that person who just honked at me, but I probably shouldn’t - and right now wouldn’t. Would a BCI system be able to tell the difference?
Haha, that's a fantastic point! To get anywhere close to making such differentiations, surface EEGs (used for the results in this article) surely would not suffice. The technology and results we are collecting right now are just scratching the surface, though I'd suspect that if a BCI were capable of detecting intent as complex as the situation you described, it would probably also have the capacity to know whether it was something you planned to act on or not.