PhilSci Archive

Items where Subject is "Specific Sciences > Artificial Intelligence > Machine Learning"

Number of items at this level: 101.

A

Alvarado, Ramón (2022) AI as an Epistemic Technology. [Preprint]

Alvarado, Ramón (2024) Challenges for Computational Reliabilism. [Preprint]

Anderson, Michael L and Champion, Heather (2021) Some dilemmas for an account of neural representation: A reply to Poldrack. [Preprint]

Andrews, Mel (2023) The Devil in the Data: Machine Learning & the Theory-Free Ideal. [Preprint]

Andrews, Mel (2022) Making Reification Concrete: A Response to Bruineberg et al. [Preprint]

B

Babic, Boris and Gerke, Sara and Evgeniou, Theodoros and Cohen, Glenn (2019) Algorithms on Regulatory Lockdown in Medicine. Science.

Bagwala, Abbas (2024) On Informational Injustice and Epistemic Exclusions. [Preprint]

Barrett, Jeffrey A. and Gabriel, Nathan (2021) Reinforcement with Iterative Punishment. [Preprint]

Beisbart, Claus and Räz, Tim (2022) Philosophy of science at sea: Clarifying the interpretability of machine learning. Philosophy Compass.

Birch, Jonathan (2023) Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness. [Preprint]

Boge, Florian J. and Grünke, Paul (2019) Computer simulations, machine learning and the Laplacean demon: Opacity in the case of high energy physics. [Preprint]

Buchholz, Oliver (2023) The Deep Neural Network Approach to the Reference Class Problem. [Preprint]

Buckner, Cameron (2019) Deep Learning: A Philosophical Introduction. [Preprint]

Buijsman, Stefan (2024) Machine Learning models as Mathematics: interpreting explainable AI in non-causal terms. [Preprint]

C

Cabrera, Frank (2020) Correlation Isn’t Good Enough: Causal Explanation and Big Data. Metascience. ISSN 0815-0796

Casacuberta, David and Estany, Anna (2019) Convergence between experiment and theory in the processes of invention and innovation. THEORIA. An International Journal for Theory, History and Foundations of Science, 34 (3). pp. 373-387. ISSN 2171-679X

Climenhaga, Nevin (2019) The Structure of Epistemic Probabilities. Philosophical Studies. pp. 1-30. ISSN 0031-8116

Creel, Kathleen A. (2019) Transparency in Complex Computational Systems. [Preprint]

D

Davies-Barton, Tyeson and Raja, Vicente and Baggs, Edward and Anderson, Michael L (2022) Debt-free intelligence: Ecological information in minds and machines. [Preprint]

Dotan, Ravit (2020) Theory Choice, Non-epistemic Values, and Machine Learning. [Preprint]

Duede, Eamon (2022) Instruments, Agents, and Artificial Intelligence: Novel Epistemic Categories of Reliability. [Preprint]

Duran, Juan Manuel (2025) Beyond transparency: computational reliabilism as an externalist epistemology of algorithms. [Preprint] (Submitted)

Durt, Christoph and Froese, Tom and Fuchs, Thomas (2023) Large Language Models and the Patterns of Human Language Use: An Alternative View of the Relation of AI to Understanding and Sentience. [Preprint]

F

Facchin, Marco (2021) Are generative models structural representations? [Preprint]

Facchin, Marco (2021) Troubles with mathematical contents. [Preprint]

Facchin, Marco and Zanotti, Giacomo (2024) Affective artificial agents as sui generis affective artifacts. [Preprint]

Facchini, Alessandro and Termine, Alberto (2022) Towards a Taxonomy for the Opacity of AI Systems. [Preprint]

G

Gerke, Sara and Babic, Boris and Evgeniou, Theodoros and Cohen, Glenn (2020) The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. Nature Digital Medicine.

Gonzalez-Cabrera, Ivan (2022) A Lineage Explanation of Human Normative Guidance: The Coadaptive Model of Instrumental Rationality and Shared Intentionality. [Preprint]

Gopnik, Alison (2024) Empowerment as Causal Learning, Causal Learning as Empowerment: A bridge between Bayesian causal hypothesis testing and reinforcement learning. In: UNSPECIFIED.

Grimsley, Christopher (2020) Causal and Non-Causal Explanations of Artificial Intelligence. In: UNSPECIFIED.

Grote, Thomas and Buchholz, Oliver (2024) Machine Learning in Public Health and the Prediction-Intervention Gap. [Preprint]

Grujicic, Bojana and Illari, Phyllis (2023) Using deep neural networks and similarity metrics to predict and control brain responses. [Preprint]

H

Hudetz, Laurenz and Crawford, Neil (2022) Variation semantics: when counterfactuals in explanations of algorithmic decisions are true. [Preprint]

J

Jebari, Karim and Lundborg, Joakim (2019) Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. [Preprint]

Jebeile, Julie and Lam, Vincent and Majszak, Mason and Räz, Tim (2023) Machine learning and the quest for objectivity in climate model parameterization. Climatic Change, 176 (101).

Jebeile, Julie and Lam, Vincent and Räz, Tim (2020) Understanding Climate Change with Statistical Downscaling and Machine Learning. [Preprint]

Johnson, Gabbrielle (2020) Algorithmic Bias: On the Implicit Biases of Social Technology. [Preprint]

K

K. Yee, Adrian (2023) Information Deprivation and Democratic Engagement. [Preprint]

Kasirzadeh, Atoosa and Klein, Colin (2021) The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.

Kieval, Phillip Hintikka and Westerblad, Oscar (2024) Deep Learning as Method-Learning: Pragmatic Understanding, Epistemic Strategies and Design-Rules. In: UNSPECIFIED.

Korbak, Tomasz (2019) Unsupervised learning and the natural origins of content. [Preprint]

L

LaCroix, Travis (2022) The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial. [Preprint]

LaCroix, Travis (2022) Moral Dilemmas for Moral Machines. [Preprint]

Landgrebe, Jobst and Smith, Barry (2019) Making AI meaningful again. Synthese. ISSN 1573-0964

López-Rubio, Ezequiel (2020) The Big Data razor. [Preprint]

López-Rubio, Ezequiel (2020) Throwing light on black boxes: emergence of visual categories from deep learning. [Preprint]

López-Rubio, Ezequiel and Ratti, Emanuele (2019) Data science and molecular biology: prediction and mechanistic explanation. [Preprint]

M

Meskhidze, Helen (2024) Beyond Classification and Prediction: The Promise of Physics-Informed Machine Learning in Astronomy and Cosmology. [Preprint]

Miller, Ryan (2021) Does Artificial Intelligence Use Private Language? [Preprint]

Miller, Ryan (2023) Holding Large Language Models to Account. Proceedings of the AISB Convention 2023. pp. 7-14.

Mussgnug, Alexander (2022) The Predictive Reframing of Machine Learning Applications: Good Predictions and Bad Measurements. [Preprint]

N

Noichl, Maximilian (2019) Modeling the Structure of Recent Philosophy. [Preprint]

O

Otsuka, Jun and Saigo, Hayato (2022) The process theory of causality: an overview. [Preprint]

P

Patil, Kaustubh R. and Heinrichs, Bert (2022) Verifiability as a Complement to AI Explainability: A Conceptual Proposal. [Preprint]

Peters, Uwe (2022) Algorithmic political bias in artificial intelligence systems. [Preprint]

Peters, Uwe and Ojea Quintana, Ignacio (2024) Are Generics and Negativity about Social Groups Common on Social Media? – A Comparative Analysis of Twitter (X) Data. [Preprint]

R

Ratti, Emanuele (2022) Integrating Artificial Intelligence in Scientific Practice: Explicable AI as an Interface. Philosophy & Technology, 35.

Ratti, Emanuele (2024) Machine Learning and the Ethics of Induction. [Preprint]

Ratti, Emanuele (2020) What Kind of Novelties Can Machine Learning Possibly Generate? The Case of Genomics. [Preprint]

Ratti, Emanuele and Graves, Mark (2021) Cultivating Moral Attention: A Virtue-oriented Approach to Responsible Data Science in Healthcare. Philosophy & Technology.

Ratti, Emanuele and Graves, Mark (2022) Explainable machine learning practices: opening another black box for reliable medical AI. AI and Ethics.

Ratti, Emanuele and Russo, Federica (2024) Science and Values: A Two-way Direction. [Preprint]

Ratti, Emanuele and Termine, Alberto and Facchini, Alessandro (2024) Machine Learning and Theory-Ladenness: A Phenomenological Account. [Preprint]

Rosenstock, Sarita (2020) Learning from the Shape of Data. In: UNSPECIFIED.

Rushing, Bruce and Gomez-Lavin, Javier (2024) Is the Scaling Hypothesis Falsifiable? In: UNSPECIFIED.

Räz, Tim (2024) From Explanations to Interpretability and Back. [Preprint]

Räz, Tim (2024) ML interpretability: Simple isn't easy. Studies in History and Philosophy of Science. ISSN 0039-3681

Räz, Tim (2023) Methods for identifying emergent concepts in deep neural networks. Patterns, 4. pp. 1-7.

Räz, Tim (2020) Understanding Deep Learning With Statistical Relevance. [Preprint]

Räz, Tim (2022) Understanding risk with FOTRES? AI and Ethics.

Räz, Tim and Beisbart, Claus (2022) The Importance of Understanding Deep Learning. Erkenntnis. ISSN 0165-0106

S

Scorzato, Luigi (2024) Reliability and Interpretability in Science and Deep Learning. [Preprint]

Shech, Elay and Tamir, Michael (2023) Understanding from Deep Learning Models in Context. [Preprint]

Sprevak, Mark (2021) Predictive coding I: Introduction. [Preprint]

Sprevak, Mark (2021) Predictive coding II: The computational level. [Preprint]

Sprevak, Mark (2021) Predictive coding III: The algorithmic level. [Preprint]

Sprevak, Mark (2021) Predictive coding IV: The implementation level. [Preprint]

Sterkenburg, Tom F. (2023) Statistical Learning Theory and Occam's Razor: The Argument from Empirical Risk Minimization. [Preprint]

Sterkenburg, Tom F. and De Heide, Rianne (2021) On the Truth-Convergence of Open-Minded Bayesianism. The Review of Symbolic Logic.

Sterkenburg, Tom F. and Grünwald, Peter D. (2020) The No-Free-Lunch Theorems of Supervised Learning. [Preprint]

Sterkenburg, Tom F. and Grünwald, Peter D. (2021) The No-Free-Lunch Theorems of Supervised Learning. Synthese.

Stinson, Catherine (2019) From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence. [Preprint]

Streppel, Yeji (2024) Demarcating value demarcation in ML. In: UNSPECIFIED.

Sullivan, Emily (2023) Do ML models represent their targets? [Preprint]

Sullivan, Emily (2022) How Values Shape the Machine Learning Opacity Problem. In: Scientific Understanding and Representation, eds. Insa Lawler, Kareem Khalifa and Elay Shech. pp. 306-322.

Sullivan, Emily (2022) Inductive Risk, Understanding, and Opaque Machine Learning Models. [Preprint]

Sullivan, Emily (2022) Link Uncertainty, Implementation, and ML Opacity: A Reply to Tamir and Shech. In: Scientific Understanding and Representation, eds. Insa Lawler, Kareem Khalifa and Elay Shech. pp. 341-345.

Sullivan, Emily (2024) SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. [Preprint]

Sullivan, Emily (2019) Understanding from Machine Learning Models. British Journal for the Philosophy of Science. ISSN 1464-3537

Sullivan, Emily and Kasirzadeh, Atoosa (2024) Explanation Hacking: The perils of algorithmic recourse. [Preprint]

T

Tamir, Michael and Shech, Elay (2023) Machine Understanding and Deep Learning Representation. Synthese, 201 (51). ISSN 1573-0964

Thompson, Jessica A. F. (2018) Towards a common theory of explanation for artificial and biological intelligence. [Preprint]

W

Weinstein, Galina (2023) The Neverending Story of the Eternal Wormhole and the Noisy Sycamore. [Preprint]

Woodward, James (2022) Flagpoles anyone? Causal and explanatory asymmetries. THEORIA. An International Journal for Theory, History and Foundations of Science, 37 (1). pp. 7-52. ISSN 2171-679X

Y

Yee, Adrian K. (2023) Machine Learning, Misinformation, and Citizen Science. [Preprint]

Z

Zednik, Carlos and Boelsen, Hannes (2020) The Exploratory Role of Explainable Artificial Intelligence. In: UNSPECIFIED.

Zerilli, John (2020) Explaining machine learning decisions. [Preprint]

Zhang, Jianqiu (2024) What is Lacking in Sora and V-JEPA’s World Models? A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination. [Preprint]

This list was generated on Wed Oct 16 04:06:36 2024 EDT.