Read our research on artificial intelligence, security order, and radicalisation
Human Induction in Machine Learning: A Survey of the Nexus
Artificial Intelligence · Petr Špelda · 17/4/2021
As our epistemic ambitions grow, common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training set and a testing set, and using the latter to measure how well the trained ML model generalises to unseen samples.
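A minimal sketch of the train/test paradigm the abstract describes, using scikit-learn and the Iris dataset purely as illustrative choices (neither is taken from the paper): accuracy on a held-out set stands in for generalisation to unseen samples.

```python
# Illustrative sketch of the train/test split paradigm; dataset and model
# are assumptions for demonstration, not the paper's experimental setup.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Hold out part of the data: the model never sees the test set during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out accuracy is the standard estimate of generalisation to unseen samples.
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```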
Autonomous Weapon Systems (AWS), The Palgrave Encyclopedia of Global Security Studies
Security Order · Anzhelika Solovyeva · 5/9/2020
By Anzhelika Solovyeva & Nik Hynek - This chapter comprehensively defines autonomous weapon systems (AWS), discusses their military utility and strategic importance, and draws attention to related normative and legal considerations.
Operations of Power in Autonomous Weapon Systems: Ethical Conditions and Socio‐Political Prospects
Security Order · Anzhelika Solovyeva · 29/8/2020
By Nik Hynek & Anzhelika Solovyeva - The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence.
What Can Artificial Intelligence Do for Scientific Realism?
Artificial Intelligence · Petr Špelda · 7/4/2020
The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrants of realist theories against various anti-realist attempts.
The Future of Human-Artificial Intelligence Nexus and its Environmental Costs
Artificial Intelligence · Petr Špelda · 1/3/2020
Environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion of the environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the Green AI initiative, our argument considers two interlinked phenomena: the gratuitous generalisation capability and a future in which ML/AI performs the majority of quantifiable inductive inferences. The gratuitous generalisation capability refers to a discrepancy between the cognitive demands of a task and the performance (accuracy) of the ML/AI model used to accomplish it. If the latter exceeds the former because the model was optimised to achieve the best possible accuracy, the model becomes inefficient and its operation harmful to the environment. The future dominated by non-anthropic induction describes a use of ML/AI so all-pervasive that most inductive inferences are furnished by ML/AI generalisations. The paper argues that the present debate deserves an expansion connecting the environmental costs of research and ineffective ML/AI uses (the issue of gratuitous generalisation capability) with the (near) future marked by the all-pervasive Human-Artificial Intelligence Nexus.
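To make the gratuitous generalisation capability concrete, a hedged numerical sketch follows; the function name, the required-accuracy threshold, and the energy figures are illustrative assumptions, not quantities defined in the paper.

```python
# Hypothetical illustration of "gratuitous generalisation capability":
# a model whose accuracy exceeds what the task demands has been
# over-optimised, at a needless energy cost. All numbers are invented.

def gratuitous_margin(model_accuracy: float, required_accuracy: float) -> float:
    """Return how far a model's accuracy exceeds the task's demands.

    A positive margin suggests optimisation beyond the task's cognitive
    demands, which the paper links to environmentally harmful operation.
    """
    return model_accuracy - required_accuracy

# Assume a task that is adequately served at 90% accuracy.
REQUIRED = 0.90
for name, accuracy, kwh in [("compact model", 0.91, 2.0),
                            ("large model", 0.97, 450.0)]:
    margin = gratuitous_margin(accuracy, REQUIRED)
    print(f"{name}: accuracy {accuracy:.2f}, margin {margin:+.2f}, "
          f"training energy {kwh} kWh")
```

On this toy reading, the large model's extra six points of accuracy over the task's demands are "gratuitous": they do not improve task adequacy but multiply the energy spent.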
What is (not) asymmetric conflict? From conceptual stretching to conceptual structuring
Radicalization · Emil Aslan Souleimanov · 4/11/2019
In the second half of the 1990s, the label "asymmetric" conflict rose to prominence among scholars and strategists as a term for capturing the rising challenge that violent non-state actors posed to the liberal world order. However, the concept soon became a catchphrase for a range of disparate phenomena, and other buzzwords arose to describe the threats of concern to decision-makers. Conceptual confusion beset the field. This article dissects the notion of asymmetric conflict and distinguishes between asymmetries involving differences in (1) status, (2) capabilities, or (3) strategies between belligerents. It argues that "asymmetric" conflicts can take numerous forms depending on the combination of differences present, and offers a blueprint for keeping track of the meaning of this concept in the hope of bringing greater precision to future debates.