Answers provided by the chatbot are limited by the quality of the questions asked

Dear Editor,

I refer to your editorial on artificial intelligence. Someone sent me an article from the Wall Street Journal (https://www.wsj.com/articles/ai-bot-chatgpt-needs-some-help-with-math-assignments-11675390552). The article contended that ChatGPT had problems doing math. The problem it used as an example to make the case was the following: “If a banana weighs 0.5 lbs and I have 7 lbs of bananas and nine oranges, how many pieces of fruit do I have?” The bot’s quick reply: “You have 16 pieces of fruit, seven bananas and nine oranges.”
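For what it is worth, the arithmetic the question apparently intends can be spelled out in a few lines (a sketch of my own, not taken from the article): 7 lbs of bananas at 0.5 lbs each is 14 bananas, so the answer the bot missed is 23 pieces of fruit, not 16.

```python
# Worked arithmetic for the WSJ banana question (my own sketch).
banana_weight_lb = 0.5     # each banana weighs half a pound
total_banana_lb = 7        # seven pounds of bananas in total
oranges = 9                # nine oranges

bananas = int(total_banana_lb / banana_weight_lb)  # 7 / 0.5 = 14 bananas
total_fruit = bananas + oranges                    # 14 + 9 = 23 pieces of fruit
print(total_fruit)  # prints 23, not the bot's 16
```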

Having had a pretty good primary school education, though I confess I had a rough time with maths at that level, I was somewhat perturbed that I could not figure out how the writer of the article had so glibly established that the chatbot was wrong. Something about my inability to solve the problem disturbed me. As I mulled it over, it dawned on me that there was something interesting about the wording of the question. What did the questioner mean by “pieces of fruit”? Could it be that by that phrase he simply meant how many fruits? Couldn’t be? Could it? So I decided to check, and lo and behold I found the following on an Australian website:

“What is a ratio? A ratio is a comparison between two numbers. Ratios are used to compare two different things. For example: a bowl may contain two kinds of fruit, 9 pieces of fruit in total, made up of five apples and four oranges. Suppose you want to express the ratio of apples to oranges in the bowl of fruit. Then the ratio of apples to oranges is 5 to 4, and can be written as: 5:4.” (https://students.flinders.edu.au/content/dam/student/slc/ratios-and-proportions.pdf) (I see now that the article was sent to me without the screenshot that appears in the original, which explains the same thing – that a piece of fruit means a whole fruit.) In all my life as a student in Guyana, and subsequently as a maths teacher, I had never seen the term “pieces of fruit” used to describe several types of fruit. The question being asked was simply, “How many fruits are there?”

Quite apart from the fact that opening the question with a reference to half a pound was bound to throw the solver off in the wrong direction, it is apparent that “natural language” is not so natural after all. When did this manner of stating the question develop? Has it reached Guyana yet? If this is what is happening, no wonder some people are convinced that they cannot do math. It is the old problem: ask an idiot question and you will get an idiot answer. What this brings us to is that the answer provided by the chatbot is limited by the quality of the question asked. Further, it suggests that even if the bot answers the question correctly, and the answer is worded “correctly”, establishing that the bot has delivered the right answer might prove a problem for the questioner. He needs a certain level of intelligence – let’s call it sufficient intelligence – to establish whether the answer he has got is correct. It seems to me that for a long while the jobs of a lot of people will be quite safe.

Sincerely,

Frederick Collins