The centre’s research project on Artificial Intelligence (AI) aims to synthesise machine learning research with philosophical work on AI.
While a sustained dialogue remains difficult, as disciplinary allegiances impede the exchange of ideas, the project intends to enter this largely unexplored territory by developing a series of constituent syntheses that lay the ground for a structurally sound interconnection. The syntheses take up epistemological issues such as the reliability of inductive reasoning and the nature of modal knowledge, and converge on larger philosophy of science problems as these meet their counterparts in artificial representation learning, and at present Deep Learning in particular. Bringing long-standing epistemological issues to bear on machine learning also offers insights into related subjects, including security concerns over AI itself. Most pertinently, however, the synthesis provides a forward-looking middle way that joins philosophy and machine learning into one whole without reducing either part, and thereby clarifies what we might expect of AI in the future.
As our epistemic ambitions grow, both everyday and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm: the available data are split into a training set and a test set, and the latter is used to measure how well the trained ML model generalises to unseen samples.
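A minimal sketch of this paradigm, using scikit-learn with the Iris dataset and a logistic regression purely as illustrative stand-ins for any dataset and model:

```python
# Minimal illustration of the train/test paradigm described above.
# The dataset, model and split ratio are arbitrary choices for this sketch.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out part of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test-set accuracy serves as the estimate of generalisation to unseen samples.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```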
The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrant of realist theories against various anti-realist challenges.
DetailThe environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive inferences. The gratuitous generalisation capability refers to a discrepancy between the cognitive demands of a task to be accomplished and the performance (accuracy) of a used ML/AI model. If the latter exceeds the former because the model was optimised to achieve the best possible accuracy, it becomes inefficient and its operation harmful to the environment. The future dominated by the non-anthropic induction describes a use of ML/AI so all-pervasive that most of the inductive inferences become furnished by ML/AI generalisations. The paper argues that the present debate deserves an expansion connecting the environmental costs of research and ineffective ML/AI uses (the issue of gratuitous generalisation capability) with the (near) future marked by the all-pervasive Human-Artificial Intelligence Nexus.
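The mismatch described here can be made concrete with a simple selection rule: pick the cheapest model whose accuracy meets the task's actual demand rather than the most accurate one available. The candidate models, accuracies and energy figures below are invented solely for illustration:

```python
# Illustrative sketch: avoid "gratuitous" capability by choosing the least
# energy-hungry model that still satisfies the task's accuracy requirement.
# All numbers are hypothetical.
candidates = [
    {"name": "small",  "accuracy": 0.91, "energy_kwh": 0.5},
    {"name": "medium", "accuracy": 0.95, "energy_kwh": 4.0},
    {"name": "large",  "accuracy": 0.97, "energy_kwh": 45.0},
]

required_accuracy = 0.90  # cognitive demand of the task, assumed known

# Models exceeding the requirement offer surplus capability at extra energy cost.
adequate = [m for m in candidates if m["accuracy"] >= required_accuracy]
chosen = min(adequate, key=lambda m: m["energy_kwh"])

print("chosen model:", chosen["name"], "-", chosen["energy_kwh"], "kWh")
```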
The disjunction effect, introduced in the famous study by Shafir and Tversky (1992) and confirmed in subsequent studies, remains one of the key "anomalies" for the standard model of the Prisoner's Dilemma game. In the last ten years, a new approach has appeared that explains this effect using quantum probability theory. However, the existing results do not allow for a parameter-free test of these models. This article develops a simple quantum model of the Prisoner's Dilemma game as well as a new experimental design that enables one to test the predictions of this model. The results show the viability of the quantum model and a substantial difference between women's and men's representations of the game.
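For orientation, the way quantum probability accommodates a disjunction effect is usually expressed through an interference term added to the law of total probability; the formulation below follows the standard quantum-cognition literature and is not necessarily the specific parametrisation of the article's model:

\[
\Pr(C) \;=\; \Pr(D_o)\Pr(C \mid D_o) + \Pr(C_o)\Pr(C \mid C_o)
\;+\; 2\sqrt{\Pr(D_o)\Pr(C \mid D_o)\,\Pr(C_o)\Pr(C \mid C_o)}\;\cos\theta ,
\]

where \(C\) is the player's own cooperation, \(D_o\) and \(C_o\) are the opponent's defection and cooperation, and the interference angle \(\theta\) measures the deviation from classical reasoning; \(\theta = \pi/2\) recovers the classical law of total probability.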
Social norms can be understood as the grammar of social interaction. Like grammar in speech, they specify what is acceptable in a given context (Bicchieri in The grammar of society: the nature and dynamics of social norms, Cambridge University Press, New York, 2006). But what are the specific rules that direct human compliance with the norm? This paper presents a quantitative model of the interaction between self- and other-perspective based on a ‘quantum model of decision-making’, which can explain some of the ‘fallacies’ of the classical model of strategic choice. By (re)connecting two fields of social science research, norm compliance and strategic decision-making, we aim to show how the novel quantum approach to the latter can advance our understanding of the former. From the cacophony of different quantum models, we distill the minimal structure necessary to account for the known dynamics between the expectations and decisions of an actor. This model was designed for the strategic interaction of two players and successfully tested in the case of the one-shot Prisoner's Dilemma game. Quantum models offer a new conceptual framework for examining the interaction between self- and other-perspective in the process of social interaction, which enables us to specify how social norms influence individual behavior.
The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues regarding inductive reasoning and the reliability of generalisations.
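As an illustration of the kind of result statistical learning theory contributes here, and not a claim drawn from the paper itself, a standard Hoeffding-style bound ties a model's empirical test error to its true generalisation error:

\[
\Pr\!\left( \left| \widehat{R}_{\text{test}}(h) - R(h) \right| \ge \varepsilon \right) \;\le\; 2\exp\!\left( -2 n \varepsilon^{2} \right),
\]

where \(h\) is the trained model, \(\widehat{R}_{\text{test}}(h)\) its error on \(n\) held-out samples, and \(R(h)\) its true error on the underlying distribution; the bound holds because the held-out samples were not used in training.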
There are two kinds of often intertwined arguments accounting for innovative appraisals of current developments in the scientific landscape. The first maintains that science is not in any way different from other social realms and can be characterized by an unprecedented dynamization (or acceleration) observable on various levels and in the different dimensions that constitute scientific activities. The second position, often stemming from the first, is exemplified in our analysis through critical engagement with Dick Pels’s notion of ‘unhastening science’.