Sure, the history of NNs goes back a while, but nobody was attempting to build AI out of single-layer perceptrons, which were famously criticized for being unable to implement even an XOR function.
The modern era of NNs started with the ability to train multilayer neural nets using backprop, but the ability to train NNs large enough to actually be useful for complex tasks can arguably be dated to the 2012 ImageNet competition, when Geoff Hinton's team repurposed GPUs to train AlexNet.
But AlexNet was just a CNN, a classifier, which IMO is better considered ML rather than AI. So if we're looking for the first AI in this post-GOFAI world of NN-based experimentation, it seems we have to give the nod to transformer-based LLMs.