Artificial intelligence (AI) now matches or outperforms human intelligence in an astonishing array of games, tests, and other cognitive tasks that involve high-level reasoning and thinking. Many scholars argue that—due to human bias and bounded rationality—humans should be (or will soon be) replaced by AI in situations involving high-level cognition and strategic decision making. We disagree. In this paper we first trace the historical origins of the idea of artificial intelligence and of human cognition as a form of computation and information processing. We highlight problems with the analogy between computers and human minds as input-output devices, using large language models as an example. Human cognition—in important instances—is better conceptualized as a form of theorizing than as data processing, prediction, or even Bayesian updating. When it comes to cognition, our argument is that AI's data-based prediction differs from human theory-based causal logic. We introduce the idea of belief-data (a)symmetries to highlight the difference between AI and human cognition, using "heavier-than-air flight" as an illustrative example. Theories provide a mechanism for identifying new data and evidence, a way of "intervening" in the world, experimenting, and problem solving. We conclude with a discussion of the implications of our arguments for strategic decision making, including the role that human-AI hybrids might play in this process.