Are Abstractions Necessary for AI?

I am really enjoying listening to the AGI-21 presentations. I liked Ben's presentation and I learned something from it. I also got something out of the discussion about databases, even though that is more of a user-group thing. But I find myself heartily approving of much of what Linas Vepstas is saying. First of all, I agree with his 2013 ideas about hypergraphs (which Ben mentioned), though I am concerned because there isn't even a hint of a fantasy of fail-safe in them. (DBT: Dragons Be There.) Still, if you are going to rely on graph theory, then hypergraphs seem like a necessity for general intelligence that is capable of reflection and understanding.
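(To show what I mean by reflection, here is a tiny sketch in Python. It is only my own illustration, not Linas's or OpenCog's actual design; the point is that a hyperedge can connect any number of items, and an edge is itself something other edges can point at.)

```python
# A minimal hypergraph sketch: edges connect any number of items,
# and an edge is itself a first-class object that other edges can
# reference -- which is what lets a system represent statements
# about its own knowledge, i.e. reflection.

class Node:
    def __init__(self, name):
        self.name = name

class Hyperedge:
    def __init__(self, label, members):
        self.label = label
        self.members = members  # any mix of Nodes and Hyperedges

cat, eats, mice = Node("cat"), Node("eats"), Node("mice")
fact = Hyperedge("eval", [eats, cat, mice])  # "cat eats mice"

# An edge *about* an edge -- something a plain vertex/edge graph
# has no direct way to express.
belief = Hyperedge("asserted-by", [fact, Node("the-system")])
```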

But Linas's presentation on Interpretable Natural Language Processing (INLP) is really interesting to me because I have been thinking about something I call Artificial Artificial Neural Networks, and Linas had some ideas that could be relevant to what I would like to do. (I have only thought about it, and I do not see myself getting much of anything done because my life of luxury is quickly coming to an end.) For instance, his normalization technique for finding Similarity Scores is interesting. I have thought about things like that but never tried anything. I probably do not understand some of it, but I am in the ballpark. I did notice that he hasn't demonstrated that his Explainable Patterns INLP would really work in more complicated situations, but attempting something like he did is an important step.
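(I cannot reproduce his actual normalization from memory, so here is only a generic sketch of the kind of thing I mean by a normalized similarity score: words compared by the contexts they occur in, with cosine normalization so that raw frequency alone does not dominate. The details here are my assumptions, not the talk's method.)

```python
# A generic normalized similarity score: count each word's
# neighboring-word contexts, then compare words by cosine
# similarity so a frequent word doesn't win just by raw counts.

from collections import Counter
from math import sqrt

def context_counts(word, sentences):
    """Count the words seen immediately next to `word`."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                counts.update(tokens[max(0, i - 1):i] + tokens[i + 1:i + 2])
    return counts

def similarity(a, b):
    """Cosine similarity of two context-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sentences = ["the cat sat", "the dog sat", "a cat ran"]
print(similarity(context_counts("cat", sentences),
                 context_counts("dog", sentences)))  # ~0.71
```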

As I have been watching the presentations, I have wondered whether highly developed abstractions are really necessary for advancing toward AGI. (I think abstractions of 'expression' are necessary, but I have wondered about highly developed graph theory, logic, probability theory, and so on.)

There was another reason I wondered about highly developed abstraction theories. They are amazing in mathematics, but, as I have repeatedly mentioned, computational arithmetic is effective because the n-ary or base-n representation of numbers, where n > 1, is an effective compression of the 1-ary representation (i.e., making a mark for each item that is being counted). And the higher n-ary representations can then be used in computational arithmetic without being decompressed. It's amazing, and it is the reason that math is so important in science and in things like computers.
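(A quick illustration of what I mean by the compression, in Python:)

```python
# Writing N in base n > 1 takes about log_n(N) digits, while the
# 1-ary form takes N marks -- and arithmetic works directly on the
# short form, never decompressing it to unary.

def digit_count(n, base):
    """Number of base-`base` digits needed to write n."""
    count = 0
    while n:
        n //= base
        count += 1
    return count

N = 1_000_000
for base in (2, 10):
    print(f"base {base}: {digit_count(N, base)} digits vs {N} unary marks")
    # base 2: 20 digits; base 10: 7 digits

def add(a, b, base=10):
    """Add two numbers given as least-significant-digit-first lists,
    column by column with carries, on the compressed form itself."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % base)
        carry = s // base
    if carry:
        out.append(carry)
    return out

print(add([9, 9, 9], [1]))  # 999 + 1 -> [0, 0, 0, 1], i.e. 1000
```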

But when you use abstractions for AI, you do not get that spectacular compression of a (general) representation of objects, or of the functions on those objects. So are highly developed abstractions really going to be that useful for AGI? One thing I did not like about old-fashioned probability reasoning was that the probability resultants lose their association with the relevant sources of input. For a simple example, if we want to try various what-if cases, then the sources have to be found and reimplemented. Yes, they can be reimplemented, but then the efficiency of the system would be lost.
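(Here is the complaint in miniature. It is a toy case; I assume independent probabilities just to keep it short:)

```python
# Combine two probabilities eagerly and only the number survives;
# the inputs are gone from the resultant.

p_sensor = 0.9               # estimate from one source
p_model = 0.7                # estimate from another source
p_both = p_sensor * p_model  # ~0.63 -- a bare resultant

# Later question: "what if the sensor were only 80% reliable?"
# Nothing in p_both remembers p_sensor, so the whole derivation
# has to be hunted down and rebuilt before the what-if can be run.
```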

But I found an answer to my question: abstract representations might be useful when the abstractions refer to the reasoning, along with the sources of the 'objects' used in that reasoning, rather than just to some resultant that is stripped of the derivation of its sources. If an abstract form refers to some reasoning, then not only could it be used to represent a generalization of that reasoning, it could also be used to implement a tool for rapidly trying various what-if scenarios. There would be no need for the endless unraveling of the "what was I thinking" attempt to adjust various parameters of a system of reasoning, and then going through the whole process again, just to find which parameters would need adjusting to get a resultant closer to some goal.
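(A rough sketch of what I have in mind; the names and structure here are purely illustrative:)

```python
# A resultant that keeps its derivation: each node records the
# operation and its input nodes, so a what-if run is just a
# re-evaluation with one leaf overridden -- no unraveling.

class P:
    def __init__(self, name, value=None, op=None, args=()):
        self.name, self.value = name, value
        self.op, self.args = op, args

    def eval(self, overrides=None):
        """Recompute this node, substituting any overridden leaves."""
        overrides = overrides or {}
        if self.name in overrides:
            return overrides[self.name]
        if self.op is None:
            return self.value
        return self.op(*(a.eval(overrides) for a in self.args))

sensor = P("sensor", 0.9)
model = P("model", 0.7)
both = P("both", op=lambda a, b: a * b, args=(sensor, model))

print(both.eval())                 # ~0.63, the ordinary resultant
print(both.eval({"sensor": 0.8}))  # ~0.56 -- the what-if, sources intact
```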

(I realize that abstraction is an important part of writing computer programs. But my question concerned highly developed abstractions that would be explicitly designed for AI and AGI. I was wondering whether those highly developed abstractions are necessary for advancing AI in 2021-2022, or whether their development is really an example of putting the cart before the horse.)