It's surprising to me that a programming language would have started out following engineering conventions rather than mathematical conventions, even if there's a strong overlap between EE and CS. That said (and speaking as a mathematician) I think it's foolish to use j over i, especially since in some cases (e.g. quaternions) it will be very confusing.
Physicists, on the other hand, have come to terms with the fact that they have fewer letters in the English and Greek alphabets than concepts they need to express, and so happily overload i and everything else, hoping to always figure it out from context.
Indeed, the use of aleph for transfinite numbers seems to be the only use of the Hebrew alphabet in the sciences that I know of.
Looking at it, it seems like it would be rather hard to introduce. Any symbol which isn't already a bit too close to a misdrawn version of a Greek or Roman letter (I see three letters which look like misdrawn πs) is pretty hard to draw without practice.
Not really. Python only supports the second square root of -1 instead of the first at the moment. They are indistinguishable from each other until you introduce the other two. It's just convention that the first one is what's in common use when you don't need all three.
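For anyone who hasn't played with it: Python spells the imaginary unit with a `j` suffix on a numeric literal, and whichever root of -1 you decide to call `j`, it behaves the same way. A quick sketch (plain stdlib, nothing exotic):

```python
import cmath

# Complex literals use the engineering-style "j" suffix:
z = 3 + 4j
print(abs(z))         # magnitude: 5.0
print(z.conjugate())  # (3-4j)

# Whichever square root of -1 "j" names, it squares to -1:
print((1j) ** 2)      # (-1+0j)

# cmath works with complex inputs, unlike math:
print(cmath.sqrt(-1)) # 1j
```

Writing `i` instead (e.g. `3 + 4i`) is a syntax error; the literal suffix is the only place the convention shows up, since you can always assign `j = 1j` (or `i = 1j`) and use whatever name you like afterward.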
The decision to use "j" was made quite a long time ago by Guido. There was a fairly short discussion about it and he made his decision. As other people have said, this is a bike-shed issue. BTW, Guido's MSc is in math.