I think it’d be safe to say that, at a very broad level, AI and ML systems “train” by recognizing (and remembering) patterns linking specific input conditions to outcomes. In effect, the model learns a representation of cause and effect, much as humans do. Neural networks are needed for this representation because simple Boolean logic won’t cut it: many patterns can’t be captured by a single logical rule.
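A concrete illustration of why a single Boolean/linear rule falls short is XOR: no single linear threshold unit can compute it, but a two-layer network with hand-picked weights can. This is a minimal sketch (the weights here are chosen by hand for illustration, not learned by training):

```python
def step(z):
    """Hard threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if z > 0 else 0

def single_unit(x1, x2, w1, w2, b):
    """One linear threshold unit -- the 'simple Boolean logic' case."""
    return step(w1 * x1 + w2 * x2 + b)

def two_layer_xor(x1, x2):
    """A tiny two-layer network that computes XOR.
    Hidden unit h1 acts like OR, h2 like AND; output fires when OR but not AND."""
    h1 = step(x1 + x2 - 0.5)   # OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)   # AND(x1, x2)
    return step(h1 - h2 - 0.5)  # OR and not AND = XOR

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_targets = [0, 1, 1, 0]

# The two-layer network matches XOR on all four inputs.
print([two_layer_xor(a, b) for a, b in inputs])  # [0, 1, 1, 0]

# Brute-force check over a grid of weights: no single unit matches XOR.
# (This is provably true for all real weights; the grid just demonstrates it.)
grid = [i / 2 for i in range(-6, 7)]
found = any(
    all(single_unit(a, b, w1, w2, bias) == t
        for (a, b), t in zip(inputs, xor_targets))
    for w1 in grid for w2 in grid for bias in grid
)
print(found)  # False
```

The same idea scales up: stacking layers of simple units lets a network represent patterns (like XOR) that no single condition-to-outcome rule can, which is what makes neural networks more expressive than flat Boolean logic.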