Let's say a situation calls for the AI to interact.
First, we narrow the response pool down by drilling down to the core issues of the situation.
Tommy asks the AI what its favorite color is.
The AI dissects the interaction thusly:
a) punctuation/inflection == question; the AI knows questions dictate a response
b) the subject is color
c) key words are favorite and color
d) question type == what, from the array of who, when, why, where, and what
At this point the AI knows it must respond with a color. We'll go generic here, as well as add some comic relief.
Color array 1 == CA1 == [ Red, Blue, Green, Purple, Yellow, Black, White, Pink ]
Comic color array == CCA == [ Translucent, Opaque ]
So the response would be randomly generated from a pool of responses gathered by combining the arrays associated with the subject-matter key words.
The AI bot sees color and favorite as key words; because it has two color arrays listed, it would combine them and randomly select one entry using an RNG.
Response == [CCA] + [CA1] / <RNG>
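The keyword-to-array lookup and `[CCA] + [CA1] / <RNG>` selection above could be sketched roughly like this; the `KEYWORD_ARRAYS` mapping and `respond` function are hypothetical names for illustration, not any real API:

```python
import random

# The two color arrays from the example above.
CA1 = ["Red", "Blue", "Green", "Purple", "Yellow", "Black", "White", "Pink"]
CCA = ["Translucent", "Opaque"]  # comic-relief colors

# Hypothetical mapping of subject-matter key words to their response arrays.
KEYWORD_ARRAYS = {
    "color": [CA1],
    "favorite": [CCA],
}

def respond(question: str) -> str:
    """Combine the arrays tied to any key words found, then pick one at random."""
    words = question.lower().rstrip("?").split()
    pool = []
    for word in words:
        for array in KEYWORD_ARRAYS.get(word, []):
            pool.extend(array)
    return random.choice(pool) if pool else "I don't know."

print(respond("What is your favorite color?"))
```

Every run draws a fresh answer from the combined CCA + CA1 pool, so the bot occasionally answers "Translucent" for the comic effect described above.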
This type of on-the-fly computing would allow for much better responses to more complex situations, once you take into consideration linguistic syntax, sentence structure, the probability of the supposed subject matter, and its related response pools.
e.g. If the question is asked: What do you think about the new BMW?
The arrays involved would sit under the what tree, based on car models, with each model carrying a "weight" value based upon popular car studies. The lower-weighted car models are stripped out, and then the RNG dictates the response from the remaining array pool.
[What Tree] -> [Cars array] -> [BMW models]
Where [BMW models] == [ Model 1.5, Model 2.3, ..., Model 7.9 ]
In this array the model numbers run 1 through 7, each with a popularity score beside it:
<model>[space]<number identifier>.<popularity weight 1-10>
So the AI construct would strip all but the top 3-5 ranked choices, and select a random one from that array.
This can be expressed like so:
Response == top <= 5 of [BMW models] / <RNG>
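The strip-and-select step could look like the following; the model names and weights are made up for illustration, following the `<model> <id>.<weight>` scheme above:

```python
import random

# Hypothetical (model, popularity weight 1-10) pairs.
BMW_MODELS = [("Model 1", 5), ("Model 2", 3), ("Model 3", 9),
              ("Model 4", 2), ("Model 5", 7), ("Model 6", 4), ("Model 7", 9)]

def pick_top(models, keep=5):
    """Strip all but the top `keep` ranked choices, then select one at random."""
    ranked = sorted(models, key=lambda m: m[1], reverse=True)[:keep]
    return random.choice(ranked)[0]

print(pick_top(BMW_MODELS, keep=3))
```

With `keep=3` only the three highest-weighted models survive the strip, so the RNG can never surface an unpopular answer.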
So the main question key words (who, when, where, what, and why) could each have their own hierarchy of nested arrays, built from available information pulled from the internet, or based on published polls and studies. Add to this method any known risk assessment and response behavior of a given population geographically, then attempt to determine the geographically based linguistic propensities of the user. You can then correlate a response from the given terms, linguistic traits, and the popularity/risk behavior of that geo-located population's propensities for certain responses, based on assessed polls and information, to formulate a much more accurately portrayed response to the user's question.
e.g. If the user has a distinguishable vernacular of terms, or propensities in sentence structure found predominantly in the northeast, or drilled in even closer, in Boston, then the result arrays could be weighted based on that location's typical response behavior to the same or similar situations or questions.
If Joe from Boston asks "What car do you like most?"
the AI would first determine the question type, then determine the arrays of responses, then add "weight" to the responses based on locale and popularity.
The workflow would be thus:
[user = Joe] -> [punctuation/inflection: type = question] -> [Question = what] -> [What arrays] -> ( [cars array] / ([accent/vernacular = Boston/Mass] – <= [response popularity for Boston])) / <RNG> = Response based on random selection of the most popular/desired cars according to the population of Boston/Massachusetts.
Now, we could then create a new array, built up by the AI bot over time, that consists of the most popular questions and their derived responses, sorted by geolocation.
[Popular Queries] -> [Country] -> [State/province] -> [city] – > [county/district] -> [response]
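That country -> state -> city -> district nesting maps naturally onto nested dictionaries; here is a minimal sketch of the self-building cache, with `record`/`lookup` as hypothetical helper names:

```python
from collections import defaultdict

# A dict that creates deeper levels on demand, so any geolocation
# path (country -> state -> city -> district) builds itself as queries arrive.
def nested():
    return defaultdict(nested)

popular_queries = nested()

def record(country, state, city, district, query, response):
    """Store a derived response under its geolocation path."""
    popular_queries[country][state][city][district][query] = response

def lookup(country, state, city, district, query):
    """Return the cached response for this location, or None if unseen."""
    return popular_queries[country][state][city][district].get(query)

record("US", "MA", "Boston", "Suffolk", "What car do you like most?", "Subaru")
print(lookup("US", "MA", "Boston", "Suffolk", "What car do you like most?"))
```

Once a query has been answered for a location, later users from the same area can hit the cache instead of re-running the whole weighting pipeline.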
Just an idea.