Proposal for the Design and Testing of a Simple AGI Program

12/30/12

I started out by saying that you have to test your ideas about AGI and then you have to accept the results of those tests. But I realized that the tests cannot be fully defined in detail beforehand. So using the example of my planned project, I pointed out that if five months go by and I have not even started it, that would be an indication that I did not really have it figured out very well (unless there was a dramatic change in my life which prevented me from working on it). And if I started in on the project but was not able to show that it even seemed to work after a year, then that would be an indication that I did not have it figured out. Yes, you should give an experimenter some leeway, but if he keeps kicking the can down the road year after year, then questions about the credibility of his claims cannot be ignored. I also recognized that going through trial-and-error testing of the functions that you think will be needed is an important step of the scientific method. I was just saying that you also need to test simplified models of the essential qualities of the proposed project as those kinds of tests become feasible. This is also an essential part of the scientific method. I pointed out that the program I am going to work on would be limited, because AGI is still on the frontier of science, and I mentioned that some people would not be impressed with anything so limited. But other enthusiasts might be interested if I could actually get my program to do something that seemed intelligent, near to or beyond what other people have already done.

I gave my planned project a classification label: AGi, where the lower-case "i" represents the idea that it would be limited. I realized that I needed to define some ways that I could actually evaluate the results of my experiments, since I was saying that you have to accept the results of your experiments. I don't think that anything is absolutely falsifiable, but I do believe that there are ways we can gather empirical evidence on how well our ideas work if we are willing to make that effort. I have made the claim that my program should be able to learn a rudimentary human-like language. This would be pretty easy for some enthusiasts to accept if it actually occurred.

As I thought about what kinds of tests I could use to collect evidence that I was on the right track (or on a good track), I came up with an insight about language that I now recognize as an objective correlated with some of the fundamental qualities of intelligence. I have often said that concepts can play different roles in combination with other concepts. This theory can be extended to word-based representations of concepts. I then thought of the simple language (that I would want my program to learn) in terms of creating relations between words, as if this were occurring in a kind of simple database program. I realized that a very simple database instruction language would not qualify as a simplified human-like language, because words in human languages play different kinds of roles. For example, in a simple database instruction language you might create a classification for a record and then create definitions for each of the fields used in the record. With a human language your words can, to give a simple and obvious example, be used in sentences that instruct someone to make a new category, a new definition of a record, and so on. So human languages include the ability to create new codes that can actually be used to motivate someone to change the way he thinks about something. I felt that this idea might work as a definition of a minimal human-like language.

So I came up with the insight that words should be able to invoke new ways of defining and classifying the operations of a database on the collection of ideas it has learned about. For example, the language should not rely only on special instruction keywords to define a new category of records (in this database metaphor). Words should be able to invoke procedural modifications as well as simply add declarative values. That is, the same words (or phrases or sentences) which have a declarative value can be used to invoke a procedural action, without relying on a pre-designated keyword that is only used to represent an operation of the database.
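To make the database metaphor concrete, here is a minimal sketch in Python of what this might look like. Everything in it (the Lexicon class, the teach and interpret methods, the sample vocabulary) is a hypothetical illustration of the idea, not a design for the actual program. The point is that the word "category" is taught its procedural meaning the same way any word is learned, and an ordinary sentence can then create a new category that changes how later sentences are handled.

    class Lexicon:
        def __init__(self):
            self.facts = {}      # declarative store: category name -> list of records
            self.meanings = {}   # word -> procedure; learned entries, not reserved keywords

        def teach(self, word, procedure):
            # Associate a word with a procedural meaning.  Nothing marks such
            # a word as special; its procedural role is just another entry.
            self.meanings[word] = procedure

        def interpret(self, sentence):
            # The first word with a learned procedural meaning is invoked on
            # the rest of the sentence; otherwise the sentence is stored as a
            # plain declarative fact.
            words = sentence.split()
            for i, word in enumerate(words):
                if word in self.meanings:
                    return self.meanings[word](self, words[:i] + words[i + 1:])
            self.facts.setdefault("unsorted", []).append(words)

    def make_category(db, args):
        # Procedural effect of an ordinary sentence: create a new category
        # and make its name a word that files later records under it.
        name = args[0]
        db.facts.setdefault(name, [])
        db.teach(name, lambda d, rest, n=name: d.facts[n].append(rest))
        return name

    db = Lexicon()
    db.teach("category", make_category)  # "category" is taught, not built in
    db.interpret("category animals")     # a plain sentence creates a new category...
    db.interpret("animals dog barks")    # ...which changes how later sentences act
    print(db.facts)                      # {'animals': [['dog', 'barks']]}

Notice that nothing in interpret treats "category" as a reserved operation: its declarative and procedural roles live in the same learned vocabulary, which is the property I am after.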

There was some discussion about the difference between declarative and procedural knowledge in the old days of AI, but I suspect that the concept became bogged down in the traditional model of computer programming, which makes a strong distinction between data that merely refers to a procedure and the act of actually calling one.

So I defined an objective which may be tested (or at least examined) if I start testing the AGi program I have in mind: a human-like language has the ability to induce a novel encoding of symbols powerful enough to invoke new ways of thinking about something. This objective can be stated simply enough that the abstract nature of the idea could be examined in a carefully designed, controlled test. So even if my program did not work, I could fall back on the essential idea (of the objective) and work on that.
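A controlled version of that test could be as simple as running the same probe input on two copies of the system, one of which has first been given a teaching sentence, and checking whether the outcomes differ. The sketch below (reusing the hypothetical Lexicon example above) is one way such a comparison might be framed; it illustrates the criterion, not a full experimental design.

    def induces_new_encoding(make_db, teaching, probe):
        # Controlled comparison: run the same probe on a fresh system with
        # and without the teaching sentence; report whether outcomes differ.
        control = make_db()
        control.interpret(probe)

        treated = make_db()
        treated.interpret(teaching)
        treated.interpret(probe)

        return control.facts != treated.facts

    def fresh():
        # Hypothetical starting state: a Lexicon that already knows "category".
        db = Lexicon()
        db.teach("category", make_category)
        return db

    print(induces_new_encoding(fresh, "category animals", "animals dog barks"))
    # True: the teaching sentence alone changed how the probe was encoded.

The pass condition is that a purely verbal input, with no pre-designated operation keyword of its own, changed how the system encoded a later input.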

This idea makes a great deal of sense. We want an AGI program to be able to act on language in just this way. Now perhaps there are other qualities of intelligence that I haven't thought of. But in trying to design a presentation in which I defined a method to test my progress on the project, I have managed to put certain ideas together in a slightly novel way which I feel goes to the heart of what we think intelligence should be. Because I chose to take this route, I serendipitously discovered a way that the idea could be examined in a highly controlled abstract test. If my impressions are right, it should be easy to demonstrate the basics of how these mechanisms work in my AGi program (even though that program would be limited). I hope this makes sense to someone.

Jim Bromer