|AI could contribute up to $15.7 trillion to the global economy by 2030 / Photo by: Phonlamai Photo via Shutterstock|
Artificial intelligence has come a long way, continuously transforming every industry it touches. A report by PwC, a global network of firms delivering assurance, tax, and consulting services for businesses, estimated that AI could contribute up to $15.7 trillion to the global economy by 2030. Of this figure, $9.1 trillion is likely to come from consumption-side effects, while $6.6 trillion is likely to come from increased productivity.
AI is also projected to drive greater product variety, with increased attractiveness, personalization, and affordability. China and North America stand to see the biggest economic gains from AI, with GDP boosts of 26.1% and 14.5% respectively. Together, these figures amount to $10.7 trillion, accounting for almost 70% of the global economic impact. Yet even as AI advances rapidly, its actual intelligence remains in question.
Researchers have a solution for this: an IQ test. An IQ, or intelligence quotient, test helps measure a person’s intellectual potential or diagnose intellectual disabilities. It measures an individual’s ability to reason and solve problems, and it indicates how intelligent a person is compared with other people of the same age. While tests vary, the average score on many of them is 100, and roughly 68% of scores fall between 85 and 115 — within one standard deviation of 15 on either side of the mean.
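The 68% figure follows from the fact that many IQ tests are normed to a normal distribution with mean 100 and standard deviation 15. A minimal Python check of that arithmetic, using only the standard library:

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative distribution of a normal(mean, sd) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

# Fraction of scores within one standard deviation of the mean (85 to 115)
fraction = normal_cdf(115) - normal_cdf(85)
print(f"{fraction:.4f}")  # ≈ 0.6827
```

The computed value, about 0.6827, matches the rule of thumb cited above.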
Previous IQ Tests for AI Models
Currently, AI is being used in hundreds of ways, from simple email filtering, voice recognition, and digital distribution to improving businesses, creating new tools, and even finding planets in space. However, these AI applications are usually point-solutions. They are developed for a single purpose and function. For instance, an AI algorithm that composes music is not the same code as the one that runs autonomous cars.
Given this narrowness, it is perhaps unsurprising that AI, for all its advances in recent years, performs quite poorly on IQ tests. In 2015, researchers from the University of Illinois conducted a series of tests designed to challenge some of the best AI systems in the world. According to Science Alert, a leading scientific publisher dedicated to publishing peer-reviewed significant research work, these systems were pitted against human performance on IQ tests.
One of the systems tested was ConceptNet, an MIT-developed AI system that academics had been working on since the 1990s. The researchers found that while the machine scored highly on vocabulary and similarities and reasonably well on information, it performed poorly on word reasoning and comprehension. Overall, they concluded that the system’s IQ sat at the level of a four-year-old child.
It doesn’t end there. In 2018, Google published a study titled “Measuring abstract reasoning in neural networks” detailing its attempt to measure the abstract reasoning capabilities of various AIs. The team created a program that could generate unique matrix problems, in the style of Raven’s Progressive Matrices, to test and train machine learning models.
The results were promising at first. Most of the models did well in testing, achieving performance as high as 75%. The researchers also discovered that model accuracy was strongly correlated with the ability to infer the underlying abstract concepts of the tasks, and the models’ performance improved significantly after they were trained to “reason” about their answers.
“[Some models] learned to solve complex visual reasoning questions and to do so, [they] needed to induce and detect from raw pixel input the presence of abstract notions such as logical operations and arithmetic progressions, and apply these principles to never-before-observed stimuli,” the authors wrote.
However, there were also negative results. According to the World Economic Forum, an independent international organization committed to improving the state of the world, the AI models performed very poorly if the testing set differed from the training set. The team’s IQ test showed that even some of today’s most advanced AIs can’t figure out problems we haven’t trained them to solve.
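The study’s evaluation regimes are not reproduced here, but the core idea — holding an attribute out of training so the test set is genuinely novel — can be sketched with toy progression problems. Everything below (the shapes, colors, and problem format) is an invented illustration, not the study’s actual generator:

```python
import itertools

SHAPES = ["circle", "square", "triangle"]
COLORS = ["red", "green", "blue"]

def make_problem(shape, color, start, step):
    # A toy "arithmetic progression" panel: the size grows by a fixed
    # step, and the answer is the size of the missing fourth panel.
    sizes = [start + i * step for i in range(3)]
    return {"shape": shape, "color": color, "sizes": sizes,
            "answer": start + 3 * step}

problems = [make_problem(s, c, start, step)
            for s, c in itertools.product(SHAPES, COLORS)
            for start in (1, 2)
            for step in (1, 2, 3)]

# Extrapolation-style split: one shape never appears during training,
# so test problems combine a familiar rule with an unseen attribute.
held_out = "triangle"
train = [p for p in problems if p["shape"] != held_out]
test = [p for p in problems if p["shape"] == held_out]
print(len(train), len(test))  # 36 18
```

A model that has truly learned the progression rule should still answer the held-out problems; one that has merely memorized surface patterns from training will not — which is the failure mode the study reported.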
New IQ Test for AI Systems
Recently, researchers from Washington State University announced that they are designing an IQ test that would grade AI systems based on how well they learn and adapt to new, unknown environments. "Previously, research on measuring intelligence in AI systems has been mostly theoretical. They didn't measure real-world performance in novel, previously unseen environments and didn't account for the complexity of the task,” Larry Holder, a professor in the School of Electrical Engineering and Computer Science, said.
According to Tech Xplore, an online site that covers the latest engineering, electronics, and technology advances, the researchers have received a grant of over $1 million from the Defense Advanced Research Projects Agency (DARPA) to create a framework for testing the “intelligence” of AI systems. The framework would consider several factors, such as a system’s correctness, accuracy, time taken, and the amount of data it needs to perform well.
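The WSU framework itself has not been published, but a score combining the four factors the article names might be normalized and weighted along these lines. Every name, weight, and scale below is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    correct: bool    # did the system solve the novel task at all?
    accuracy: float  # fraction of sub-tasks answered correctly (0..1)
    seconds: float   # wall-clock time taken
    samples: int     # training examples the system needed

def iq_style_score(r, max_seconds=3600.0, max_samples=100_000):
    """Hypothetical 0-100 score rewarding accurate, fast, data-efficient runs."""
    if not r.correct:
        return 0.0
    time_eff = max(0.0, 1.0 - r.seconds / max_seconds)  # faster is better
    data_eff = max(0.0, 1.0 - r.samples / max_samples)  # less data is better
    # Equal weights for accuracy, time efficiency, and data efficiency.
    return 100.0 * (r.accuracy + time_eff + data_eff) / 3.0

print(iq_style_score(RunResult(True, 0.9, 360.0, 10_000)))
```

The design choice worth noting is that data efficiency is scored at all: under such a metric, a system that needs fewer examples to reach the same accuracy is rated as “more intelligent,” which matches the transfer-learning goal Holder describes below.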
The researchers are focused on testing and improving systems that can help with our daily tasks. They want to see how well a system’s learning from one task carries over to a new, unseen task. "For example, you might want to learn checkers before chess because you can easily transfer your knowledge from one to the other," Holder said.
Knowing how intelligent an AI system is helps clarify what it is capable of doing, and such measurements can be used to continuously improve AI systems and models.
|Photo by: Have a nice day Photo via Shutterstock|