2024-03-28T13:52:12Z
http://philsci-archive.pitt.edu/cgi/oai2
oai:philsci-archive.pitt.edu:779
2010-10-07T15:20:41Z
oai:philsci-archive.pitt.edu:781
2010-10-07T15:11:02Z
subjects=spec:chaos-theory
subjects=spec:computer-science
subjects=spec:physics:fields-and-particles
subjects=spec:mathematics
subjects=spec:physics:quantum-mechanics
types=pittpreprint
https://philsci-archive.pitt.edu/781/
Spacetime Memory: Phase-Locked Geometric Phases
Binder, Bernd
Complex Systems
Computer Science
Fields and Particles
Mathematics
Quantum Mechanics
Spacetime memory is defined with a holonomic approach to information processing, where multi-state stability is introduced by a non-linear phase-locked loop. Geometric phases serve as the carrier of physical information and geometric memory (of orientation), given by a path-integral measure of curvature that is periodically refreshed. Regarding the resulting spin-orbit coupling and gauge field, the geometric nature of spacetime memory suggests assigning intrinsic computational properties to the electromagnetic field.
2002-08
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/781/1/berrycomp06.pdf
Binder, Bernd (2002) Spacetime Memory: Phase-Locked Geometric Phases. [Preprint]
oai:philsci-archive.pitt.edu:1164
2010-10-07T15:11:48Z
subjects=spec:cognitive-science
subjects=spec:computer-science
subjects=gen:confirmation-induction
subjects=gen:conventionalism
subjects=gen:logical-positivism-empiricism
subjects=spec:mathematics
types=pittpreprint
https://philsci-archive.pitt.edu/1164/
Formal Systems as Physical Objects: A Physicalist Account of Mathematical Truth
E. Szabó, László
Cognitive Science
Computer Science
Confirmation/Induction
Conventionalism
Logical Positivism/Logical Empiricism
Mathematics
This paper is a brief formulation of a radical thesis. I start with the formalist doctrine that mathematical objects have no meanings; we have marks and rules governing how these marks can be combined. That's all. Then I go further by arguing that the signs of a formal system of mathematics should be considered as physical objects, and the formal operations as physical processes. The rules of the formal operations are or can be expressed in terms of the laws of physics governing these processes. In accordance with the physicalist understanding of mind, this is true even if the operations in question are executed in the head. A truth obtained through (mathematical) reasoning is, therefore, an observed outcome of a neuro-physiological (or other physical) experiment. Consequently, deduction is nothing but a particular case of induction.
2003-05
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/1164/1/formfiz_preprint.pdf
E. Szabó, László (2003) Formal Systems as Physical Objects: A Physicalist Account of Mathematical Truth. [Preprint]
oai:philsci-archive.pitt.edu:1799
2010-10-07T15:12:35Z
status=inpress
subjects=spec:computer-science
types=other
https://philsci-archive.pitt.edu/1799/
The Logic of Quantum Programs.
Baltag, Alexandru
Smets, Sonja
Computer Science
We present a logical calculus for reasoning about information flow in quantum programs. In particular we introduce a dynamic logic that is capable of dealing with quantum measurements, unitary evolutions and entanglements in compound quantum systems. We give a syntax and a relational semantics in which we abstract away from phases and probabilities. We present a sound proof system for this logic, and we show how to characterize by logical means various forms of entanglement (e.g. the Bell states) and various linear operators. As an example we sketch an analysis of the teleportation protocol.
2004-06
Other
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/1799/1/Baltag-Smets2.pdf
Baltag, Alexandru and Smets, Sonja (2004) The Logic of Quantum Programs. UNSPECIFIED. (In Press)
oai:philsci-archive.pitt.edu:1891
2010-10-07T15:12:43Z
subjects=spec:physics:cosmology
subjects=spec:mathematics
subjects=spec:computer-science
subjects=spec:physics:relativity-theory
subjects=spec:physics
types=pittpreprint
https://philsci-archive.pitt.edu/1891/
Universe creation on a computer
McCabe, Gordon
Cosmology
Mathematics
Computer Science
Relativity Theory
Physics
The purpose of this paper is to provide an account of the epistemology and metaphysics of universe creation on a computer. The paper begins with F. J. Tipler's argument that our experience is indistinguishable from the experience of someone embedded in a perfect computer simulation of our own universe, hence we cannot know whether or not we are part of such a computer program ourselves. Tipler's argument is treated as a special case of epistemological scepticism, in a similar vein to 'brain-in-a-vat' arguments. It is argued that the hypothesis that our universe is a program running on a digital computer in another universe generates empirical predictions, and is therefore a falsifiable hypothesis. The computer program hypothesis is also treated as a hypothesis about what exists beyond the physical world, and is compared with Kant's metaphysics of noumena. It is proposed that a theory about what exists beyond the physical world should be formulated with the precise concepts of mathematics, and should generate physical predictions. It is argued that if our universe is a program running on a digital computer, then our universe must have compact spatial topology, and the possibilities of observationally testing this prediction are considered. The possibility of testing the computer program hypothesis with the value of the density parameter Omega_0 is also analysed. The informational requirements for a computer to represent a universe exactly and completely are considered. Consequent doubt is thrown upon Tipler's claim that if a hierarchy of computer universes exists, we would not be able to know which 'level of implementation' our universe exists at. It is then argued that a digital computer simulation of a universe cannot exist as a universe.
However, the paper concludes with the acknowledgement that an analog computer simulation can be objectively related to the thing it represents, hence an analog computer simulation of a universe could, in principle, exist as a universe.
2004-08
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf
McCabe, Gordon (2004) Universe creation on a computer. [Preprint]
oai:philsci-archive.pitt.edu:2015
2010-10-07T15:12:57Z
status=unpub
subjects=spec:computer-science
types=other
https://philsci-archive.pitt.edu/2015/
The Functional Account of Computing Mechanisms
Piccinini, Gualtiero
Computer Science
This paper offers an account of what it is for a physical system to be a computing mechanism—a mechanism that performs computations. A computing mechanism is any mechanism whose functional analysis ascribes it the function of generating output strings from input strings in accordance with a general rule that applies to all strings. This account is motivated by reasons that are endogenous to the philosophy of computing, but it may also be seen as an application of recent literature on mechanisms. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power. This makes it ideal for grounding the comparison and assessment of computational theories of mind and brain.
2004
Other
NonPeerReviewed
doc
en
https://philsci-archive.pitt.edu/2015/1/The_Functional_Account_of_Computing_Mechanisms_6.doc
Piccinini, Gualtiero (2004) The Functional Account of Computing Mechanisms. UNSPECIFIED. (Unpublished)
oai:philsci-archive.pitt.edu:2016
2010-10-07T15:12:58Z
status=unpub
subjects=spec:computer-science
types=other
https://philsci-archive.pitt.edu/2016/
Computers
Piccinini, Gualtiero
Computer Science
According to some philosophers, there is no fact of the matter whether something is either a computer, or some other computing mechanism, or something that performs no computations at all. On the contrary, I argue that there is a fact of the matter whether something is a calculator or a computer: a computer is a calculator of large capacity, and a calculator is a mechanism whose function is to perform one out of several possible computations on inputs of nontrivial size at once. This paper is devoted to a detailed defense of these theses, including a specification of the relevant notion of “large capacity” and an explication of the notion of computer.
2004
Other
PeerReviewed
doc
en
https://philsci-archive.pitt.edu/2016/1/Varieties_of_Computers_11.doc
Piccinini, Gualtiero (2004) Computers. UNSPECIFIED. (Unpublished)
oai:philsci-archive.pitt.edu:2374
2010-10-07T15:13:29Z
subjects=spec:computer-science
subjects=spec:physics:statistical-mechanics-thermodynamics
types=pittpreprint
https://philsci-archive.pitt.edu/2374/
The Connection between Logical and Thermodynamical Irreversibility
Short, Tony
Ladyman, James
Groisman, Berry
Presnell, Stuart
Computer Science
Statistical Mechanics/Thermodynamics
There has recently been a good deal of controversy about Landauer's Principle, which is often stated as follows: The erasure of one bit of information in a computational device is necessarily accompanied by a generation of kT ln 2 heat. This is often generalised to the claim that any logically irreversible operation cannot be implemented in a thermodynamically reversible way. John Norton (2005) and Owen Maroney (2005) both argue that Landauer's Principle has not been shown to hold in general, and Maroney offers a method that he claims instantiates the operation reset in a thermodynamically reversible way. In this paper we defend the qualitative form of Landauer's Principle, and clarify its quantitative consequences (assuming the second law of thermodynamics). We analyse in detail what it means for a physical system to implement a logical transformation L, and we make this precise by defining the notion of an L-machine. Then we show that logical irreversibility of L implies thermodynamic irreversibility of every corresponding L-machine. We do this in two ways. First, by assuming the phenomenological validity of the Kelvin statement of the second law, and second, by using information-theoretic reasoning. We illustrate our results with the example of the logical transformation 'reset', and thereby recover the quantitative form of Landauer's Principle.
2005-07
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/2374/1/irreversibility.pdf
Short, Tony and Ladyman, James and Groisman, Berry and Presnell, Stuart (2005) The Connection between Logical and Thermodynamical Irreversibility. [Preprint]
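The kT ln 2 bound quoted in the abstract above can be checked numerically; a minimal sketch (the function name and constant name are mine, not from the paper):

```python
import math

# Landauer bound: erasing one bit dissipates at least k*T*ln(2) of heat,
# the quantitative form of Landauer's Principle stated in the abstract.
BOLTZMANN_K = 1.380649e-23  # J/K (exact, 2019 SI definition)

def landauer_bound(temperature_kelvin: float, bits: float = 1.0) -> float:
    """Minimum heat (joules) dissipated when erasing `bits` bits at temperature T."""
    return bits * BOLTZMANN_K * temperature_kelvin * math.log(2)

# At room temperature (300 K), erasing one bit costs at least ~2.9e-21 J.
room_temp_cost = landauer_bound(300.0)
```

The bound is linear in temperature and in the number of bits erased, which is why the abstract's generalisation to arbitrary logically irreversible operations tracks the information discarded.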
oai:philsci-archive.pitt.edu:2689
2010-10-07T15:14:00Z
subjects=spec:computer-science
subjects=spec:physics:statistical-mechanics-thermodynamics
types=pittpreprint
https://philsci-archive.pitt.edu/2689/
The Connection between Logical and Thermodynamic Irreversibility
Ladyman, James
Presnell, Stuart
Short, Anthony J.
Groisman, Berry
Computer Science
Statistical Mechanics/Thermodynamics
There has recently been a good deal of controversy about Landauer's Principle, which is often stated as follows: The erasure of one bit of information in a computational device is necessarily accompanied by a generation of kTln2 heat. This is often generalised to the claim that any logically irreversible operation cannot be implemented in a thermodynamically reversible way. John Norton (2005) and Owen Maroney (2005) both argue that Landauer's Principle has not been shown to hold in general, and Maroney offers a method that he claims instantiates the operation Reset in a thermodynamically reversible way. In this paper we defend the qualitative form of Landauer's Principle, and clarify its quantitative consequences (assuming the second law of thermodynamics). We analyse in detail what it means for a physical system to implement a logical transformation L, and we make this precise by defining the notion of an L-machine. Then we show that logical irreversibility of L implies thermodynamic irreversibility of every corresponding L-machine. We do this in two ways. First, by assuming the phenomenological validity of the Kelvin statement of the second law, and second, by using information-theoretic reasoning. We illustrate our results with the example of the logical transformation 'Reset', and thereby recover the quantitative form of Landauer's Principle.
2006-03
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/2689/1/irreversibility%28rev2%29.pdf
Ladyman, James and Presnell, Stuart and Short, Anthony J. and Groisman, Berry (2006) The Connection between Logical and Thermodynamic Irreversibility. [Preprint]
oai:philsci-archive.pitt.edu:2756
2010-10-07T15:14:06Z
status=unpub
subjects=spec:physics:classical-physics
subjects=gen:models-and-idealization
subjects=spec:computation-information:Classical
subjects=spec:computer-science
subjects=spec:chaos-theory
types=conference_item
https://philsci-archive.pitt.edu/2756/
Computing the Uncomputable, or, The Discrete Charm of Second-Order Simulacra
Parker, Matthew W.
Classical Physics
Models and Idealization
Classical
Computer Science
Complex Systems
We study an especially attenuated application of “mediating models”, in which computer simulations suggest that a certain dynamical system exhibits non-computable behaviour. These simulations are defended by reference to a simpler model of the model (hence “second-order simulacra”). We will see that this defence is problematic, but there are general reasons to believe the simulations are accurate. And though these models do not prove anything specific about an actual physical system, they influence our general expectations, and provide an essential component for any complete explanation of why and how the qualitative behaviour of some actual systems may be non-computable.
2006
Conference or Workshop Item
NonPeerReviewed
doc
en
https://philsci-archive.pitt.edu/2756/1/Discrete_Charm_a.doc
Parker, Matthew W. (2006) Computing the Uncomputable, or, The Discrete Charm of Second-Order Simulacra. In: UNSPECIFIED. (Unpublished)
oai:philsci-archive.pitt.edu:2783
2010-10-07T15:21:13Z
oai:philsci-archive.pitt.edu:2884
2010-10-07T15:14:19Z
subjects=spec:computer-science
subjects=gen:reductionism-holism
subjects=spec:chaos-theory
subjects=spec:physics
types=pittpreprint
https://philsci-archive.pitt.edu/2884/
Why diachronically emergent properties must also be salient
Imbert, Cyrille
Computer Science
Reductionism/Holism
Complex Systems
Physics
In this paper, I criticize Bedau's definition of 'diachronically emergent properties' (DEPs), which says that a property is a DEP if it can only be predicted by a simulation (simulation requirement) and is nominally emergent. I argue at length that this definition is not complete because it fails to eliminate trivial cases. I discuss the features that an additional criterion should meet in order to complete the definition and I develop a notion, salience, which together with the simulation requirement can be used to characterize DEPs. In the second part of the paper, I sketch this notion. Basically, a property is salient when one can find an indicator, namely a descriptive function (DF), such that its fitting description shifts from one elementary mathematical object (EMO) to another when the property appears. Finally, I discuss restrictions that must be placed on what can count as DFs and EMOs if the definition of salience is to work and be non-trivial. I conclude that salience (or a refined version of it) can complete the definition of DEPs.
2005-09
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/2884/1/papierpourPhilsciArchive.pdf
Imbert, Cyrille (2005) Why diachronically emergent properties must also be salient. [Preprint]
oai:philsci-archive.pitt.edu:3180
2010-10-07T15:14:53Z
subjects=spec:computation-information:computation-information-quantum
subjects=spec:computation-information:Classical
subjects=spec:computer-science
subjects=spec:physics:quantum-mechanics
types=pittpreprint
https://philsci-archive.pitt.edu/3180/
Quantum Hypercomputation - Hype or Computation?
Hagar, Amit
Korolev, Alex
Quantum
Classical
Computer Science
Quantum Mechanics
A recent attempt to compute a (recursion-theoretic) non-computable function using the quantum adiabatic algorithm is criticized and found wanting. Quantum algorithms may outperform classical algorithms in some cases, but so far they retain the classical (recursion-theoretic) notion of computability. A speculation is then offered as to where the putative power of quantum computers may come from.
2007-01
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/3180/1/Quantum_Hype.pdf
Hagar, Amit and Korolev, Alex (2007) Quantum Hypercomputation - Hype or Computation? [Preprint]
oai:philsci-archive.pitt.edu:3600
2010-10-07T15:21:26Z
oai:philsci-archive.pitt.edu:3777
2013-01-01T21:13:54Z
oai:philsci-archive.pitt.edu:3800
2010-10-07T15:21:30Z
oai:philsci-archive.pitt.edu:4037
2010-10-07T15:16:37Z
subjects=gen:laws-of-nature
subjects=spec:computer-science
subjects=gen:reductionism-holism
subjects=spec:chaos-theory
types=pittpreprint
https://philsci-archive.pitt.edu/4037/
The reductionist blind spot
Abbott, Russ
Laws of Nature
Computer Science
Reductionism/Holism
Complex Systems
Can there be higher level laws of nature even though everything is reducible to the fundamental laws of physics? The computer science notion of level of abstraction explains how there can be.
2008-05
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/4037/1/The_reductionist_blind_spot-08-05-01-2300.pdf
Abbott, Russ (2008) The reductionist blind spot. [Preprint]
oai:philsci-archive.pitt.edu:4075
2010-10-07T15:16:43Z
subjects=spec:computer-science
types=pittpreprint
https://philsci-archive.pitt.edu/4075/
Understanding Epistemic Relevance
Floridi, Luciano
Computer Science
Agents require a constant flow, and a high level of processing, of relevant semantic information, in order to interact successfully among themselves and with the environment in which they are embedded. Standard theories of information, however, are silent on the nature of epistemic relevance. In this paper, a subjectivist interpretation of epistemic relevance is developed and defended. It is based on a counterfactual and metatheoretical analysis of the degree of relevance of some semantic information i to an informee/agent a, as a function of the accuracy of i understood as an answer to a query q, given the probability that q might be asked by a. This interpretation of epistemic relevance vindicates a strongly semantic theory of information, according to which semantic information encapsulates truth. It accounts satisfactorily for several important applications and interpretations of the concept of relevant information in a variety of philosophical areas. And it interfaces successfully with current philosophical interpretations of causal and logical relevance.
2008-06
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/4075/1/uer.pdf
Floridi, Luciano (2008) Understanding Epistemic Relevance. [Preprint]
oai:philsci-archive.pitt.edu:4076
2010-10-07T15:16:43Z
subjects=spec:computer-science
types=pittpreprint
https://philsci-archive.pitt.edu/4076/
Against Digital Ontology
Floridi, Luciano
Computer Science
The paper argues that digital ontology (the ultimate nature of reality is digital, and the universe is a computational system equivalent to a Turing Machine) should be carefully distinguished from informational ontology (the ultimate nature of reality is structural), in order to abandon the former and retain only the latter as a promising line of research. Digital vs. analogue is a Boolean dichotomy typical of our computational paradigm, but digital and analogue are only “modes of presentation” of Being (to paraphrase Kant), that is, ways in which reality is experienced and/or conceptualised by an epistemic agent at a given level of abstraction. A preferable alternative is provided by an informational approach to structural realism, according to which knowledge of the world is knowledge of its structures. The most reasonable ontological commitment turns out to be in favour of an interpretation of reality as the totality of structures dynamically interacting with each other. The paper is the first part (the pars destruens) of a two-part piece of research. The pars construens, entitled “A Defence of Informational Structural Realism”, is forthcoming in Synthese.
2008-06
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/4076/1/ado.pdf
Floridi, Luciano (2008) Against Digital Ontology. [Preprint]
oai:philsci-archive.pitt.edu:4353
2010-10-07T15:17:28Z
subjects=spec:computer-science-artificial-intelligence
subjects=spec:computation-information:Classical
subjects=spec:computer-science
subjects=spec:chaos-theory
subjects=spec:cognitive-science
types=pittpreprint
https://philsci-archive.pitt.edu/4353/
Lurching Toward Chernobyl: Dysfunctions of Real-Time Computation
Wallace, Rodrick
Artificial Intelligence
Classical
Computer Science
Complex Systems
Cognitive Science
Cognitive biological structures, social organizations, and computing machines operating in real time are subject to Rate Distortion Theorem constraints driven by the homology between information source uncertainty and free energy density. This exposes the unitary structure/environment system to a relentless entropic torrent compounded by large deviations causing increased average distortion between intent and impact, particularly as demands escalate. The phase transitions characteristic of information phenomena suggest that, rather than graceful decay under increasing load, these structures will undergo punctuated degradation akin to spontaneous symmetry breaking in physical systems. Rate distortion problems, that also affect internal structural dynamics, can become synergistic with limitations equivalent to the inattentional blindness of natural cognitive processes. These mechanisms, and their interactions, are unlikely to scale well, so that, depending on architecture, enlarging the structure or its duties may lead to a crossover point at which added resources must be almost entirely devoted to ensuring system stability -- a form of allometric scaling familiar from biological examples. This suggests a critical need to tune architecture to problem type and system demand. A real-time computational structure and its environment are a unitary phenomenon, and environments are usually idiosyncratic. Thus the resulting path dependence in the development of pathology could often require an individualized approach to remediation more akin to an arduous psychiatric intervention than to the traditional engineering or medical quick fix. Failure to recognize the depth of these problems seems likely to produce a relentless chain of the Chernobyl-like failures that are necessary, but often insufficient, for remediation under our system.
2008-11
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/4353/1/rdcomp18.pdf
Wallace, Rodrick (2008) Lurching Toward Chernobyl: Dysfunctions of Real-Time Computation. [Preprint]
oai:philsci-archive.pitt.edu:4539
2010-10-07T15:21:38Z
oai:philsci-archive.pitt.edu:4540
2010-10-07T15:17:50Z
subjects=gen:laws-of-nature
subjects=spec:computer-science
subjects=gen:reductionism-holism
subjects=spec:chaos-theory
types=pittpreprint
https://philsci-archive.pitt.edu/4540/
The reductionist blind spot
Abbott, Russ
Laws of Nature
Computer Science
Reductionism/Holism
Complex Systems
Can there be higher level laws of nature even though everything is reducible to the fundamental laws of physics? The computer science notion of level of abstraction explains how there can be. The key relationship between elements on different levels of abstraction is not the is-composed-of relationship but the implements relationship. I take a scientific realist position with respect to (material) levels of abstraction and their instantiation as (material) entities. They exist as objective elements of nature. Reducing them away to lower-order phenomena produces a reductionist blind spot and is bad science.
2009-02
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/4540/1/The_reductionist_blind_spot-09-03-30.pdf
Abbott, Russ (2009) The reductionist blind spot. [Preprint]
oai:philsci-archive.pitt.edu:9944
2013-08-23T13:48:22Z
subjects=gen:causation
subjects=spec:chaos-theory
subjects=spec:computer-science
subjects=spec:computer-science-artificial-intelligence
subjects=gen:explanation
subjects=gen:laws-of-nature
subjects=gen:models-and-idealization
subjects=spec:probability-statistics
subjects=gen:reductionism-holism
subjects=gen:structure-of-theories
subjects=gen:technology
types=conference_item
https://philsci-archive.pitt.edu/9944/
Big Data – The New Science of Complexity
Pietsch, Wolfgang
Causation
Complex Systems
Computer Science
Artificial Intelligence
Explanation
Laws of Nature
Models and Idealization
Probability/Statistics
Reductionism/Holism
Structure of Theories
Technology
Data-intensive techniques, now widely referred to as 'big data', allow for novel ways to address complexity in science. I assess their impact on the scientific method. First, big-data science is distinguished from other scientific uses of information technologies, in particular from computer simulations. Then, I sketch the complex and contextual nature of the laws established by data-intensive methods and relate them to a specific concept of causality, thereby dispelling the popular myth that big data is only concerned with correlations. The modeling in data-intensive science is characterized as 'horizontal'—lacking the hierarchical, nested structure familiar from more conventional approaches. The significance of the transition from hierarchical to horizontal modeling is underlined by a concurrent paradigm shift in statistics from parametric to non-parametric methods.
2013-08-23
Conference or Workshop Item
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/9944/1/pietsch-bigdata_complexity.pdf
Pietsch, Wolfgang (2013) Big Data – The New Science of Complexity. In: UNSPECIFIED.
oai:philsci-archive.pitt.edu:10124
2013-12-08T16:12:30Z
subjects=spec:computer-science
subjects=gen:determinism-indeterminism
subjects=gen:models-and-idealization
types=published_article
https://philsci-archive.pitt.edu/10124/
Branching Space-Times and Parallel Processing
Wronski, Leszek
Computer Science
Determinism/Indeterminism
Models and Idealization
There is a remarkable similarity between some mathematical objects used in the Branching Space-Times framework and those appearing in computer science in the fields of event structures for concurrent processing and Chu spaces. This paper introduces the similarities and formulates a few open questions for further research, hoping that both BST theorists and computer scientists can benefit from the project.
Springer
2012-02-24
Published Article or Volume
NonPeerReviewed
application/pdf
en
cc_by
https://philsci-archive.pitt.edu/10124/1/LWAzory_corrected_pitt.pdf
Wronski, Leszek (2012) Branching Space-Times and Parallel Processing. H. Andersen et al. (eds.), New Challenges to Philosophy of Science, The Philosophy of Science in a European Perspective, 4. pp. 135-148.
http://link.springer.com/chapter/10.1007/978-94-007-5845-2_12
10.1007/978-94-007-5845-2_12
oai:philsci-archive.pitt.edu:10316
2014-02-03T15:03:31Z
subjects=spec:computer-science
subjects=spec:computer-science-artificial-intelligence
types=pittpreprint
https://philsci-archive.pitt.edu/10316/
Turing on the Integration of Human and Machine Intelligence
Sterrett, S. G.
Computer Science
Artificial Intelligence
Philosophical discussion of Alan Turing’s writings on intelligence has mostly revolved around a single point made in a paper published in the journal Mind in 1950. This is unfortunate, for Turing’s reflections on machine (artificial) intelligence, human intelligence, and the relation between them were more extensive and sophisticated. They are seen to be extremely well-considered and sound in retrospect. Recently, IBM developed a question-answering computer (Watson) that could compete against humans on the game show Jeopardy! There are hopes it can be adapted to other contexts besides that game show, in the role of a collaborator of, rather than a competitor to, humans. Another, different, research project - an artificial intelligence program put into operation in 2010 - is the machine learning program NELL (Never Ending Language Learning), which continuously ‘learns’ by ‘reading’ massive amounts of material on millions of web pages. Both of these recent endeavors in artificial intelligence rely to some extent on the integration of human guidance and feedback at various points in the machine’s learning process. In this paper, I examine Turing’s remarks on the development of intelligence used in various kinds of search, in light of the experience gained to date on these projects.
2014-02-03
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/10316/1/SterrettTuring100February2.pdf
Sterrett, S. G. (2014) Turing on the Integration of Human and Machine Intelligence. [Preprint]
oai:philsci-archive.pitt.edu:10777
2014-06-24T14:56:05Z
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:6368616F732D7468656F7279
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
7375626A656374733D67656E:6578706572696D656E746174696F6E
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D636F6E666572656E63655F6974656D
https://philsci-archive.pitt.edu/10777/
Aspects of theory-ladenness in data-intensive science
Pietsch, Wolfgang
Causation
Complex Systems
Computer Science
Artificial Intelligence
Confirmation/Induction
Experimentation
Models and Idealization
Probability/Statistics
Structure of Theories
Technology
Recent claims, mainly from computer scientists, concerning a largely automated and model-free data-intensive science have been countered by critical reactions from a number of philosophers of science. The debate suffers from a lack of detail in two respects, regarding (i) the actual methods used in data-intensive science and (ii) the specific ways in which these methods presuppose theoretical assumptions. I examine two widely used algorithms, classificatory trees and non-parametric regression, and argue that these are theory-laden in an external sense, regarding the framing of research questions, but not in an internal sense concerning the causal structure of the examined phenomenon. With respect to the novelty of data-intensive science, I draw an analogy to exploratory as opposed to theory-directed experimentation.
2014-03-01
Conference or Workshop Item
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/10777/1/pietsch_data-intensive-science_psa.pdf
Pietsch, Wolfgang (2014) Aspects of theory-ladenness in data-intensive science. In: UNSPECIFIED.
oai:philsci-archive.pitt.edu:11351
2019-11-17T03:31:12Z
oai:philsci-archive.pitt.edu:11591
2015-07-28T15:30:44Z
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:6368616F732D7468656F7279
7375626A656374733D73706563:636F6D70757465722D736369656E6365
74797065733D6F74686572
https://philsci-archive.pitt.edu/11591/
Causality, computing, and complexity
Abbott, Russ
Causation
Complex Systems
Computer Science
I discuss two categories of causal relationships: primitive causal interactions of the sort characterized by Phil Dowe and the more general manipulable causal relationships as defined by James Woodward. All primitive causal interactions are manipulable causal relationships, but there are manipulable causal relationships that are not primitive causal interactions. I’ll call the latter constructed causal relationships, and I’ll argue that constructed causal relationships serve as a foundation for both computing and complex systems.
Perhaps even more interesting are autonomous causal relationships. These are constructed causal relationships in which the causal mechanism resides primarily in the effect. A typical example is a software execution engine. Software execution engines are on the effect side of a cause-effect relationship in which software is the cause and the behavior of the execution engine is the effect. The mechanism responsible for that causal relationship resides in the execution engine.
2015-07-26
Other
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/11591/1/Constructed_causal_relationships.pdf
Abbott, Russ (2015) Causality, computing, and complexity. UNSPECIFIED.
oai:philsci-archive.pitt.edu:11603
2015-08-06T14:26:49Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D7068696C6F736F7068792D6F662D736369656E6365
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/11603/
Understanding scientific study via process modeling
Luk, Robert
Computer Science
History of Philosophy of Science
Models and Idealization
Structure of Theories
This paper argues that scientific studies distinguish themselves from other studies by a combination of their processes, their (knowledge) elements and the roles of these elements. This is supported by constructing a process model. An illustrative example based on Newtonian mechanics shows how scientific knowledge is structured according to the process model. To distinguish scientific studies from research and scientific research, two additional process models are built for such processes. We apply these process models: (1) to argue that scientific progress should emphasize both the process of change and the content of change; (2) to chart the major stages of scientific study development; and (3) to define “science”.
Springer
2010-02
Published Article or Volume
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/11603/1/LUKUSS.pdf
Luk, Robert (2010) Understanding scientific study via process modeling. Foundations of Science, 15 (1). pp. 49-78. ISSN 1233-1821
http://link.springer.com/article/10.1007%2Fs10699-009-9168-9
10.1007/s10699-009-9168-9
oai:philsci-archive.pitt.edu:11717
2019-11-17T03:32:26Z
oai:philsci-archive.pitt.edu:11927
2016-02-25T04:04:43Z
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/11927/
Foundations for a Probabilistic Theory of Causal Strength
Sprenger, Jan
Causation
Computer Science
Confirmation/Induction
Explanation
Probability/Statistics
This paper develops axiomatic foundations for a probabilistic-interventionist theory of causal strength. Transferring methods from Bayesian confirmation theory, I proceed in three steps: (1) I develop a framework for defining and comparing measures of causal strength; (2) I argue that no single measure can satisfy all natural constraints; (3) I prove two representation theorems for popular measures of causal strength: Pearl's causal effect measure and Eells' difference measure. In other words, I demonstrate that these two measures can be derived from a set of plausible adequacy conditions. The paper concludes by sketching future research avenues.
2016
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/11927/1/GradedCausation-v2.pdf
Sprenger, Jan (2016) Foundations for a Probabilistic Theory of Causal Strength. [Preprint]
oai:philsci-archive.pitt.edu:11985
2016-03-21T15:13:24Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:70687973696373:7175616E74756D2D6D656368616E696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/11985/
What if we have only one universe and closed timelike curves exist?
Kim, Minseong
Computer Science
Quantum Mechanics
David Deutsch provided us with one possible solution to the grandfather paradox: Deutsch's closed timelike curves, or simply the Deutsch CTC. Deutsch states that this gives us a tool to test the many-worlds (Everettian) hypothesis, since the Deutsch CTC requires an Everettian understanding. This paper explores the possibility of the co-existence of the Deutsch CTC with a contextual/epistemic understanding of quantum mechanics. The paper then presents the irrelevance hypothesis and its hypothetical application to quantum complexity theory.
2016-03-21
Preprint
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/11985/1/deutsch-ctc-program-everettian.pdf
Kim, Minseong (2016) What if we have only one universe and closed timelike curves exist? [Preprint]
oai:philsci-archive.pitt.edu:12060
2016-04-24T19:03:40Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7068696C6F736F70686572732D6F662D736369656E6365
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/12060/
Behavior, Organization, Substance: Three Gestalts of General Systems Theory
De Florio, Vincenzo
Computer Science
Philosophers of Science
The term gestalt, when used in the context of general systems theory, assumes the value of “systemic touchstone”, namely a figure of reference useful to categorize the properties or qualities of a set of systems. Typical gestalts used, e.g., in biology, are those based on anatomical or physiological characteristics, which correspond respectively to architectural and organizational design choices in natural and artificial systems. In this paper we discuss three gestalts of general systems theory: behavior, organization, and substance, which refer respectively to the works of Wiener, Boulding, and Leibniz. Our major focus here is the system introduced by the latter. Through a discussion of some of the elements of the Leibnitian System, and by means of several novel interpretations of those elements in terms of today’s computer science, we highlight the debt that contemporary research still owes to this Giant among the giant scholars of the past.
IEEE
2014-06-24
Published Article or Volume
NonPeerReviewed
application/pdf
en
https://philsci-archive.pitt.edu/12060/1/bare_conf.pdf
De Florio, Vincenzo (2014) Behavior, Organization, Substance: Three Gestalts of General Systems Theory. Proc. of the 2014 Conference on Norbert Wiener in the 21st Century.
oai:philsci-archive.pitt.edu:12429
2016-09-14T12:45:10Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/12429/
Solomonoff Prediction and Occam's Razor
Sterkenburg, Tom F.
Computer Science
Confirmation/Induction
Probability/Statistics
Algorithmic information theory gives an idealized notion of compressibility, which is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam's razor. This paper explicates the relevant argument, and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. It is this assumption that is the characterizing element of Solomonoff prediction, and wherein its philosophical interest lies.
The University of Chicago Press
2016-10
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12429/1/soloccam.pdf
Sterkenburg, Tom F. (2016) Solomonoff Prediction and Occam's Razor. Philosophy of Science, 83 (4). pp. 459-479.
http://www.journals.uchicago.edu/doi/pdfplus/10.1086/687257
10.1086/687257
oai:philsci-archive.pitt.edu:12626
2017-02-21T14:32:55Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/12626/
On The Hourglass Model, The End-to-End Principle and Deployment Scalability
Beck, Micah
Computer Science
Models and Idealization
Technology
The hourglass model is widely used as a means of describing the design of the Internet, and can be found in the introduction of many modern textbooks. It arguably also applies to the design of other successful spanning layers, notably the Unix operating system kernel interface, meaning the primitive system calls and the interactions between user processes and the kernel. The impressive success of the Internet has led to a wider interest in using the hourglass model in other layered systems, with the goal of achieving similar results. However, application of the hourglass model has often led to controversy, perhaps in part because the language in which it has been expressed has been informal, and arguments for its validity have not been precise. Making a start on formalizing such an argument is the goal of this paper.
2016-11-08
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12626/1/hourglass.pdf
Beck, Micah (2016) On The Hourglass Model, The End-to-End Principle and Deployment Scalability. [Preprint]
oai:philsci-archive.pitt.edu:12644
2016-11-18T19:43:12Z
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:6D65646963696E65
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/12644/
Foundations for a Probabilistic Theory of Causal Strength
Sprenger, Jan
Causation
Computer Science
Confirmation/Induction
Explanation
Medicine
Probability/Statistics
This paper develops axiomatic foundations for a probabilistic theory of causal strength. I proceed in three steps: First, I motivate the choice of causal Bayes nets as a framework for defining and comparing measures of causal strength. Second, I prove several representation theorems for probabilistic measures of causal strength---that is, I demonstrate how these measures can be derived from a set of plausible adequacy conditions. Third, I compare these measures on the basis of their characteristic properties, including an application to quantifying causal effect in medicine. Finally, I use the above results to argue for a specific measure of causal strength and I outline future research avenues.
2016
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12644/1/GradedCausation-v3.pdf
Sprenger, Jan (2016) Foundations for a Probabilistic Theory of Causal Strength. [Preprint]
oai:philsci-archive.pitt.edu:12725
2017-01-01T15:28:28Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:70687973696373:7175616E74756D2D6D656368616E696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/12725/
What if we have only one universe and closed timelike curves exist?
Kim, Bryce
Computer Science
Quantum Mechanics
David Deutsch provided us with one possible solution to the grandfather paradox: Deutsch's closed timelike curves, or simply the Deutsch CTC. Deutsch states that this gives us a tool to test the many-worlds (Everettian) hypothesis, since the Deutsch CTC requires an Everettian understanding. This paper explores the possibility of the co-existence of the Deutsch CTC with a contextual/epistemic understanding of quantum mechanics. The paper then presents the irrelevance hypothesis and its hypothetical application to quantum complexity theory.
2016-03-21
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12725/1/deutsch-ctc-program-everettian.pdf
Kim, Bryce (2016) What if we have only one universe and closed timelike curves exist? [Preprint]
oai:philsci-archive.pitt.edu:12818
2017-02-12T18:41:06Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:636F6D7075746174696F6E2D696E666F726D6174696F6E2D7175616E74756D
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/12818/
Universality, Invariance, and the Foundations of Computational Complexity in the light of the Quantum Computer
Cuffaro, Michael E.
Classical
Quantum
Computer Science
Structure of Theories
Technology
2017-02-11
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12818/1/tech_complex.pdf
Cuffaro, Michael E. (2017) Universality, Invariance, and the Foundations of Computational Complexity in the light of the Quantum Computer. [Preprint]
oai:philsci-archive.pitt.edu:12824
2017-02-16T15:15:16Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:6D617468656D6174696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/12824/
A Meaning Explanation for HoTT
Tsementzis, Dimitris
Computer Science
Explanation
Mathematics
The Univalent Foundations (UF) offer a new picture of the foundations of mathematics largely independent from set theory. In this paper I will focus on the question of whether Homotopy Type Theory (HoTT) (as a formalization of UF) can be justified intuitively as a theory of shapes in the same way that ZFC (as a formalization of set-theoretic foundations) can be justified intuitively as a theory of collections. I first clarify what I mean by an “intuitive justification” by distinguishing between formal and pre-formal “meaning explanations” in the vein of Martin-Löf. I then explain why Martin-Löf’s original meaning explanation for type theory no longer applies to HoTT. Finally, I outline a pre-formal meaning explanation for HoTT based on spatial notions like “shape”, “path”, “point” etc. which in particular provides an intuitive justification of the axiom of univalence. I conclude by discussing the limitations and prospects of such a project.
2017-02-15
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12824/1/A.Meaning.Explanation.for.HoTT.pdf
Tsementzis, Dimitris (2017) A Meaning Explanation for HoTT. [Preprint]
oai:philsci-archive.pitt.edu:12973
2017-04-06T14:33:03Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:70687973696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/12973/
How is there a Physics of Information? On characterising physical evolution as information processing.
Maroney, O J E
Timpson, C G
Computer Science
Physics
We have a conundrum. The physical basis of information is clearly a highly active research area. Yet the power of information theory comes precisely from separating it from the detailed problems of building physical systems to perform information processing tasks. Developments in quantum information over the last two decades seem to have undermined this separation, leading to suggestions that information is itself a physical entity and must be part of our physical theories, with resource-cost implications. We will consider a variety of ways in which physics seems to affect computation, but will ultimately argue to the contrary: rejecting the claims that information is physical provides a better basis for understanding the fertile relationship between information theory and physics. Instead, we will argue that the physical resource costs of information processing are to be understood through the need to consider physically embodied agents for whom information processing tasks are performed. Doing so sheds light on what it takes for something to be implementing a computational or information processing task of a given kind.
2017-04-06
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/12973/1/beholder.pdf
Maroney, O J E and Timpson, C G (2017) How is there a Physics of Information? On characterising physical evolution as information processing. [Preprint]
oai:philsci-archive.pitt.edu:13082
2017-05-30T20:19:05Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D67656E:536369656E74696669635F4D65746170687973696373
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7068696C6F736F70686572732D6F662D736369656E6365
74797065733D636F6E666572656E63655F6974656D
https://philsci-archive.pitt.edu/13082/
Minimal Information Structural Realism
Krzanowski, Roman
Classical
Scientific Metaphysics
Computer Science
Philosophers of Science
This paper presents Minimal Information Structural Realism (MISR). MISR claims that information (signified by I) is an ontologically and epistemologically objective physical entity (signified by R) and is perceived as, but not identical to, organization, form, or structure of nature (signified by S). There is a relatively significant body of literature claiming that the essential, if not fundamental, element of nature is information. Authors differ on the precise description of information conceived this way. However, they do agree that it would be a forming element in nature, a factor responsible for patterns observed in reality, apprehended through order, organization or structures. To express the fundamental ontological role of information in nature, a new kind of structural realism, or rather information structural realism (ISR), is needed. This paper proposes exactly this, in the form of minimal information structural realism (MISR). The basic claim of MISR is that information is a foundation of reality and it is perceived or apprehended through patterns or structures. This claim embodies basic intuitions regarding the role of information in nature. MISR is not associated with the structural realism SR of the ontic or epistemic kinds, and is only remotely related to the concept of information structural realism (ISR) defined by Floridi.
2017-04
Conference or Workshop Item
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/13082/1/MinimalISR_NNPS2007_Krzanowski_AWABSMay.pdf
Krzanowski, Roman (2017) Minimal Information Structural Realism. In: UNSPECIFIED.
oai:philsci-archive.pitt.edu:13175
2017-07-03T14:35:52Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:70687973696373:636C6173736963616C2D70687973696373
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/13175/
Undecidability in Rn: Riddled Basins, the KAM Tori, and the Stability of the Solar System
Parker, Matthew W.
Classical
Logic
Classical Physics
Computation/Information
Computer Science
Some have suggested that certain classical physical systems have undecidable long-term behavior, without specifying an appropriate notion of decidability over the reals. We introduce such a notion, decidability in μ (or d-μ) for any measure μ, which is particularly appropriate for physics and in some ways more intuitive than Ko’s (1991) recursive approximability (r.a.). For Lebesgue measure λ, d-λ implies r.a. Sets with positive λ-measure that are sufficiently “riddled” with holes are never d-λ but are often r.a. This explicates Sommerer and Ott’s (1996) claim of uncomputable behavior in a system with riddled basins of attraction. Furthermore, it clarifies speculations that the stability of the solar system (and similar systems) may be undecidable, for the invariant tori established by KAM theory form sets that are not d-λ.
University of Chicago Press
2003-04-01
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/13175/1/parker2003.pdf
Parker, Matthew W. (2003) Undecidability in Rn: Riddled Basins, the KAM Tori, and the Stability of the Solar System. Philosophy of Science, 70 (2). pp. 359-382.
http://www.journals.uchicago.edu/doi/abs/10.1086/375472
https://doi.org/10.1086/375472
oai:philsci-archive.pitt.edu:13181
2017-07-06T17:58:06Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:70687973696373:636C6173736963616C2D70687973696373
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:70687973696373
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/13181/
Undecidable Long-term Behavior in Classical Physics: Foundations, Results, and Interpretation
Parker, Matthew W.
Classical
Classical Physics
Computation/Information
Computer Science
Physics
The behavior of some systems is non-computable in a precise new sense. One infamous problem is that of the stability of the solar system: Given the initial positions and velocities of several mutually gravitating bodies, will any eventually collide or be thrown off to infinity? Many have made vague suggestions that this and similar problems are undecidable: no finite procedure can reliably determine whether a given configuration will eventually prove unstable. But taken in the most natural way, this is trivial. The state of a system corresponds to a point in a continuous space, and virtually no set of points in space is strictly decidable. A new, more pragmatic concept is therefore introduced: a set is decidable up to measure zero (d.m.z.) if there is a procedure to decide whether a point is in that set and it only fails on some points that form a set of zero volume. This connects volume and probability: we can ignore a zero-volume set of states because the state of an arbitrary system almost certainly will not fall in that set. D.m.z. is also closer to the intuition of decidability than other notions in the literature, which are either less strict or apply only to special sets, like closed sets. Certain complicated sets are not d.m.z., most remarkably including the set of known stable orbits for planetary systems (the KAM tori). This suggests that the stability problem is indeed undecidable in the precise sense of d.m.z. Carefully extending decidability concepts from idealized models to actual systems, we see that even deterministic aspects of physical behavior can be undecidable in a clear and significant sense.
University of Chicago
2005
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/13181/1/Parker%2005%20dissertation.pdf
Parker, Matthew W. (2005) Undecidable Long-term Behavior in Classical Physics: Foundations, Results, and Interpretation.
https://search.proquest.com/pqdtglobal/docview/305413272
oai:philsci-archive.pitt.edu:13217
2017-07-16T18:23:43Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:65636F6E6F6D696373
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/13217/
Role of information and its processing in statistical analysis
Kim, Bryce
Computation/Information
Computer Science
Economics
Probability/Statistics
This paper discusses how real-life statistical analysis/inference deviates from ideal environments. More specifically, there often exist models that have statistical power equal to that of the actual data-generating model, given only limited information and information processing/computation capacity. This means that misspecification actually poses two problems: first, misspecification around the model we wish to find; second, that the actual data-generating model may never be discovered. Thus the role that information - and this includes data - plays in statistical inference needs to be considered more heavily than is often done. A game defining pseudo-equivalent models is presented in this light. This limited-information nature effectively casts a statistical analyst as a decider in decision theory facing an identical problem: trying one's best to form credence/belief about some events, even if it may end up not being close to the objective probability. The sleeping beauty problem is used as a case study to highlight some properties of real-life statistical inference. Bayesian inference with prior updates can lead to wrong credence analysis when a prior is assigned to variables/events that are not (statistical-identification-wise) identifiable. The controversial idea that Bayesianism can get around identification problems in frequentist analysis is thus brought into further doubt. This necessitates re-defining how Kolmogorov probability theory is applied in real-life statistical inference, and which concepts need to be fundamental.
2017-07-15
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/13217/1/role_of_information_processing.pdf
Kim, Bryce (2017) Role of information and its processing in statistical analysis. [Preprint]
oai:philsci-archive.pitt.edu:13232
2017-07-20T14:52:22Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:65636F6E6F6D696373
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/13232/
Role of information and its processing in statistical analysis
Kim, Bryce
Computation/Information
Computer Science
Economics
Probability/Statistics
This paper discusses how real-life statistical analysis/inference deviates from ideal environments. More specifically, there often exist models that have statistical power equal to that of the actual data-generating model, given only limited information and information processing/computation capacity. This means that misspecification actually poses two problems: first, misspecification around the model we wish to find; second, that the actual data-generating model may never be discovered. Thus the role that information - and this includes data - plays in statistical inference needs to be considered more heavily than is often done. A game defining pseudo-equivalent models is presented in this light. This limited-information nature effectively casts a statistical analyst as a decider in decision theory facing an identical problem: trying one's best to form credence/belief about some events, even if it may end up not being close to the objective probability. The sleeping beauty problem is used as a case study to highlight some properties of real-life statistical inference. Bayesian inference with prior updates can lead to wrong credence analysis when a prior is assigned to variables/events that are not (statistical-identification-wise) identifiable. The controversial idea that Bayesianism can get around identification problems in frequentist analysis is thus brought into further doubt. This necessitates re-defining how Kolmogorov probability theory is applied in real-life statistical inference, and which concepts need to be fundamental.
2017-07-15
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/13232/1/role_of_information_processing.pdf
Kim, Bryce (2017) Role of information and its processing in statistical analysis. [Preprint]
oai:philsci-archive.pitt.edu:13244
2017-07-21T14:00:36Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:65636F6E6F6D696373
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/13244/
Role of information and its processing in statistical analysis
Kim, Bryce
Computation/Information
Computer Science
Economics
Probability/Statistics
This paper discusses how real-life statistical analysis/inference deviates from ideal environments. More specifically, there often exist models that have statistical power equal to that of the actual data-generating model, given only limited information and information processing/computation capacity. This means that misspecification actually poses two problems: first, misspecification around the model we wish to find; second, that the actual data-generating model may never be discovered. Thus the role that information - and this includes data - plays in statistical inference needs to be considered more heavily than is often done. A game defining pseudo-equivalent models is presented in this light. This limited-information nature effectively casts a statistical analyst as a decider in decision theory facing an identical problem: trying one's best to form credence/belief about some events, even if it may end up not being close to the objective probability. The sleeping beauty problem is used as a case study to highlight some properties of real-life statistical inference. Bayesian inference with prior updates can lead to wrong credence analysis when a prior is assigned to variables/events that are not (statistical-identification-wise) identifiable. The controversial idea that Bayesianism can get around identification problems in frequentist analysis is thus brought into further doubt. This necessitates re-defining how Kolmogorov probability theory is applied in real-life statistical inference, and which concepts need to be fundamental.
2017-07-15
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/13244/1/role_of_information_processing.pdf
Kim, Bryce (2017) Role of information and its processing in statistical analysis. [Preprint]
oai:philsci-archive.pitt.edu:14108
2017-11-10T16:38:40Z
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:6D65646963696E65
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/14108/
Foundations of a Probabilistic Theory of Causal Strength
Sprenger, Jan
Causation
Computer Science
Confirmation/Induction
Explanation
Medicine
Probability/Statistics
This paper develops axiomatic foundations for a probabilistic theory of causal strength as difference-making. I proceed in three steps: First, I motivate the choice of causal Bayes nets as an adequate framework for defining and comparing measures of causal strength. Second, I prove several representation theorems for probabilistic measures of causal strength---that is, I demonstrate how these measures can be derived from a set of plausible adequacy conditions. Third, I use these results to argue for a specific measure of causal strength: the difference that interventions on the cause make for the probability of the effect. I conclude by discussing my results and outlining future research avenues.
2017-11-01
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/14108/7/GradedCausation-v7.pdf
text
en
https://philsci-archive.pitt.edu/14108/8/Proofs_GradedCausation_appendix.pdf
Sprenger, Jan (2017) Foundations of a Probabilistic Theory of Causal Strength. [Preprint]
oai:philsci-archive.pitt.edu:14151
2018-05-02T20:32:15Z
oai:philsci-archive.pitt.edu:14300
2018-01-17T16:15:46Z
7375626A656374733D67656E:44617461
7375626A656374733D73706563:62696F6C6F6779
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D7068696C6F736F7068792D6F662D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
7375626A656374733D67656E:7468656F72792D6368616E6765
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/14300/
How to do digital philosophy of science
Pence, Charles H.
Ramsey, Grant
Data
Biology
Computation/Information
Computer Science
History of Philosophy of Science
Structure of Theories
Theory Change
Philosophy of science is beginning to be expanded via the introduction of new digital resources—both data and tools for its analysis. The data comprise digitized published books and journal articles, as well as heretofore unpublished and recently digitized material, such as images, archival text, notebooks, meeting notes, and programs. This growing bounty of data would be of little use, however, without quality tools with which to analyze it. Fortunately, the growth in available data is matched by the extensive development of automated analysis tools. For the beginner, this wide variety of data sources and tools can be overwhelming. In this essay, we survey the state of digital work in the philosophy of science, showing what kinds of questions can be answered and how one can go about answering them.
2018
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/14300/1/HowToDoDPoS_Preprint.pdf
Pence, Charles H. and Ramsey, Grant (2018) How to do digital philosophy of science. [Preprint]
oai:philsci-archive.pitt.edu:14599
2018-05-02T20:32:04Z
oai:philsci-archive.pitt.edu:14600
2019-03-02T17:02:21Z
oai:philsci-archive.pitt.edu:14612
2018-05-02T20:31:54Z
oai:philsci-archive.pitt.edu:14725
2018-05-30T22:13:45Z
7375626A656374733D73706563:70737963686F6C6F67792D70737963686961747279:62696F6C6F67792D65766F6C7574696F6E6172792D70737963686F6C6F6779
7375626A656374733D73706563:62696F6C6F6779:62696F6C6F67792D65766F6C7574696F6E6172792D7468656F7279
7375626A656374733D73706563:636F676E69746976652D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D67656E:666F726D616C2D6C6561726E696E672D7468656F7279
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/14725/
Hierarchical Models for the Evolution of Compositional Language
Barrett, Jeffrey A.
Skyrms, Brian
Cochran, Calvin
Evolutionary Psychology
Evolutionary Theory
Cognitive Science
Computer Science
Artificial Intelligence
Formal Learning Theory
We present three hierarchical models for the evolution of compositional language. Each has the basic structure of a two-sender/one-receiver Lewis signaling game augmented with executive agents who can learn to influence the behavior of the basic senders and receiver. With each game, we move from stronger to weaker modeling assumptions. The first game shows how the basic senders and receiver might evolve a compositional language when the two senders have pre-established representational roles. The second shows how the two senders might coevolve representational roles as they evolve a reliable compositional language. Both of these games impose an efficiency demand on the agents. The third game shows how costly signaling alone might lead role-free agents to evolve a compositional language.
2018-05-30
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/14725/1/compositional%20language%206%20imbs.pdf
Barrett, Jeffrey A. and Skyrms, Brian and Cochran, Calvin (2018) Hierarchical Models for the Evolution of Compositional Language. [Preprint]
oai:philsci-archive.pitt.edu:14742
2018-06-04T20:34:13Z
7375626A656374733D67656E:44617461
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:6D65646963696E65
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/14742/
Concealment and Discovery: The Role of Information Security in Biomedical Data Re-Use
Tempini, N
Leonelli, Sabina
Data
Computer Science
Medicine
Technology
This paper analyses the role of information security (IS) in shaping the dissemination and re-use of biomedical data, as well as the embedding of such data in the material, social and regulatory landscapes of research. We consider the data management practices adopted by two UK-based data linkage infrastructures: the Secure Anonymised Information Linkage, a Welsh databank that facilitates appropriate re-use of health data derived from research and routine medical practice in the region; and the Medical and Environmental Data Mash-up Infrastructure, a project bringing together researchers from the University of Exeter, the London School of Hygiene and Tropical Medicine, the Met Office and Public Health England to link and analyse complex meteorological, environmental and epidemiological data. Through an in-depth analysis of how data are sourced, processed and analysed in these two cases, we show that IS takes two distinct forms: epistemic IS, focused on protecting the reliability and reusability of data as they move across platforms and research contexts; and infrastructural IS, concerned with protecting data from external attacks, mishandling and use disruption. These two dimensions are intertwined and mutually constitutive, and yet are often perceived by researchers as being in tension with each other. We discuss how such tensions emerge when the two dimensions of IS are operationalised in ways that put them at cross purposes with each other, thus exemplifying the vulnerability of data management strategies to broader governance and technological regimes. We also show that whenever biomedical researchers manage to overcome the conflict, the interplay between epistemic and infrastructural IS prompts critical questions concerning data sources, formats, metadata and potential uses, resulting in an improved understanding of the wider context of research and the development of relevant resources. This informs and significantly improves the re-usability of biomedical data, while encouraging exploratory analyses of secondary data sources.
2018-05-31
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/14742/1/SSS_Concealment%26Discovery_2018.pdf
Tempini, N and Leonelli, Sabina (2018) Concealment and Discovery: The Role of Information Security in Biomedical Data Re-Use. [Preprint]
oai:philsci-archive.pitt.edu:14885
2019-03-02T17:02:46Z
oai:philsci-archive.pitt.edu:14894
2018-07-23T17:40:51Z
7375626A656374733D73706563:6D617468656D6174696373:504D6170706C69636162696C697479
7375626A656374733D73706563:6D617468656D6174696373:504D6570697374656D6F6C6F6779
7375626A656374733D73706563:6D617468656D6174696373:504D6578706C616E6174696F6E
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6D6574686F646F6C6F6779
7375626A656374733D73706563:6D617468656D6174696373:504D6F6E746F6C6F6779
7375626A656374733D73706563:636F676E69746976652D736369656E6365
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:636F6D7075746174696F6E2D696E666F726D6174696F6E2D7175616E74756D
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D73706563:65636F6E6F6D696373
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
7375626A656374733D73706563:736F63696F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/14894/
Postulating the theory of experience and chance as a theory of co~events (co~beings)
Vorobyev, Oleg Yu
Applicability
Epistemology
Explanation
Foundations
Methodology
Ontology
Cognitive Science
Computation/Information
Quantum
Computer Science
Artificial Intelligence
Economics
Probability/Statistics
Sociology
The aim of the paper is the axiomatic justification of the theory of experience and chance, one of the dual halves of which is Kolmogorov probability theory. The author's main idea was the natural inclusion of Kolmogorov's axiomatics of probability theory within the general concepts of the theory of experience and chance. The analogy between the measure of a set and the probability of an event has long been clear. This analogy also allows further development: the measure of a set is completely analogous to the believability of an event. To postulate the theory of experience and chance on the basis of this analogy, one need only add to Kolmogorov probability theory its dual reflection, the believability theory, so that the theory of experience and chance can be postulated as the certainty (believability-probability) theory on the Cartesian product of the probability and believability spaces; the central concept of the theory is the new notion of a co~event as a measurable binary relation on the Cartesian product of the sets of elementary incomes and elementary outcomes. Attempts to build the foundations of the theory of experience and chance from this general point of view are unknown to me, and the whole range of ideas presented here has not yet acquired popularity even in a narrow circle of specialists; in addition, there has been no complete system of postulates of the theory of experience and chance free from unnecessary complications. Postulating the theory of experience and chance can be carried out in different ways, both in the choice of axioms and in the choice of basic concepts and relations. If one tries to achieve the greatest possible simplicity of both the system of axioms and the theory constructed from it, then it is hardly possible to suggest anything other than the axiomatization of the concepts of a co~event and its certainty (believability-probability). The main result of this work is the axiom of co~event, intended for constructing a theory formed by the dual theories of believabilities and probabilities, each of which is itself postulated by its own Kolmogorov system of axioms. Of course, other systems for postulating the theory of experience and chance can be imagined; however, in this work preference is given to a system of postulates that describes in the simplest manner the results of what I call an experienced-random experiment.
2016-09-30
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/14894/1/XV-famems2016-ISBN-978-5-9903358-6-8-VorobyevOYu-25-43.pdf
Vorobyev, Oleg Yu (2016) Postulating the theory of experience and chance as a theory of co~events (co~beings). [Preprint]
https://www.academia.edu/34417203/Proceedings_of_the_XV_FAMEMS-2016_Conference_on_Financial_and_Actuarial_Math_and_Eventology_of_Multivariate_Statistics_and_the_EEC-H_s6P_Workshop_on_Hilberts_Sixth_Problem_Oleg_Vorobyev_ed._-_Krasnoyarsk_SFU_2016._-_261p
oai:philsci-archive.pitt.edu:14944
2018-10-31T17:51:52Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/14944/
The Meta-Inductive Justification of Induction
Sterkenburg, Tom F.
Computer Science
Confirmation/Induction
I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that is founded on results from the machine learning branch of prediction with expert advice.
2018
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/14944/1/metapredl.pdf
Sterkenburg, Tom F. (2018) The Meta-Inductive Justification of Induction. [Preprint]
oai:philsci-archive.pitt.edu:15034
2018-09-19T17:49:19Z
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D7072616374696365
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15034/
Univalent Foundations and the UniMath Library
Bordg, Anthony
Foundations
Logic
Practice
Proof
Computer Science
Structure of Theories
We give a concise presentation of the Univalent Foundations of mathematics outlining the main ideas (section 1), followed by a discussion of the large-scale UniMath library of formalized mathematics implementing the ideas of the Univalent Foundations, and the challenges one faces in designing such a library (section 2). This leads us to a general discussion about the links between architecture and mathematics where a meeting of minds is revealed between architects and mathematicians (section 3). Last, we show how the Univalent Foundations enforces a structuralist view of mathematics embodied in the so-called Structure Identity Principle (section 4). On the way our odyssey from the foundations to the "horizon" of mathematics will lead us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the architect Christopher Alexander and the philosopher Paul Benacerraf.
2018
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15034/1/Univalent_Foundations_and_the_UniMath_library.pdf
Bordg, Anthony (2018) Univalent Foundations and the UniMath Library. [Preprint]
oai:philsci-archive.pitt.edu:15051
2018-09-24T16:32:14Z
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D7072616374696365
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15051/
Univalent Foundations and the UniMath Library. The Architecture of Mathematics.
Bordg, Anthony
Foundations
Logic
Practice
Proof
Computer Science
Structure of Theories
We give a concise presentation of the Univalent Foundations of mathematics outlining the main ideas, followed by a discussion of the UniMath library of formalized mathematics implementing the ideas of the Univalent Foundations (section 1), and the challenges one faces in attempting to design a large-scale library of formalized mathematics (section 2). This leads us to a general discussion about the links between architecture and mathematics where a meeting of minds is revealed between architects and mathematicians (section 3). On the way our odyssey from the foundations to the "horizon" of mathematics will lead us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the architect Christopher Alexander.
2018
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15051/7/UF_and_UniMath.pdf
Bordg, Anthony (2018) Univalent Foundations and the UniMath Library. The Architecture of Mathematics. [Preprint]
oai:philsci-archive.pitt.edu:15052
2018-09-24T16:32:19Z
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D7072616374696365
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15052/
Univalent Foundations and the UniMath Library. The Architecture of Mathematics.
Bordg, Anthony
Foundations
Logic
Practice
Proof
Computer Science
Structure of Theories
We give a concise presentation of the Univalent Foundations of mathematics outlining the main ideas, followed by a discussion of the UniMath library of formalized mathematics implementing the ideas of the Univalent Foundations (section 1), and the challenges one faces in attempting to design a large-scale library of formalized mathematics (section 2). This leads us to a general discussion about the links between architecture and mathematics where a meeting of minds is revealed between architects and mathematicians (section 3). On the way our odyssey from the foundations to the "horizon" of mathematics will lead us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the architect Christopher Alexander.
2018
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15052/1/Univalent_Foundations_and_the_UniMath_library.pdf
Bordg, Anthony (2018) Univalent Foundations and the UniMath Library. The Architecture of Mathematics. [Preprint]
oai:philsci-archive.pitt.edu:15057
2018-09-25T17:55:39Z
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D7072616374696365
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15057/
Univalent Foundations and the UniMath Library. The Architecture of Mathematics.
Bordg, Anthony
Foundations
Logic
Practice
Proof
Computer Science
Structure of Theories
We give a concise presentation of the Univalent Foundations of mathematics outlining the main ideas, followed by a discussion of the UniMath library of formalized mathematics implementing the ideas of the Univalent Foundations (section 1), and the challenges one faces in attempting to design a large-scale library of formalized mathematics (section 2). This leads us to a general discussion about the links between architecture and mathematics where a meeting of minds is revealed between architects and mathematicians (section 3). On the way our odyssey from the foundations to the "horizon" of mathematics will lead us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the architect Christopher Alexander.
2018
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15057/1/UF_and_UniMath.pdf
Bordg, Anthony (2018) Univalent Foundations and the UniMath Library. The Architecture of Mathematics. [Preprint]
oai:philsci-archive.pitt.edu:15075
2018-09-29T16:03:11Z
7375626A656374733D73706563:62696F6C6F6779:62696F6C6F67792D66756E6374696F6E2D74656C656F6C6F6779
7375626A656374733D73706563:636F676E69746976652D736369656E6365
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15075/
Are There Teleological Functions to Compute?
Coelho Mollo, Dimitri
Function/Teleology
Cognitive Science
Computation/Information
Computer Science
Explanation
I analyse a tension at the core of the mechanistic view of computation, generated by its joint commitment to the medium-independence of computational vehicles, and to computational systems possessing teleological functions to compute. While computation is individuated in medium-independent terms, teleology is sensitive to the constitutive physical properties of vehicles. This tension spells trouble for the mechanistic view, suggesting that there can be no teleological functions to compute. I argue that, once considerations about the relevant function-bestowing factors for computational systems are brought to bear, the tension dissolves: physical systems can have the teleological function to compute.
2018-09-05
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15075/1/PreprintPhilSciCoelhoMollo
Coelho Mollo, Dimitri (2018) Are There Teleological Functions to Compute? [Preprint]
oai:philsci-archive.pitt.edu:15171
2018-10-20T15:20:35Z
7375626A656374733D73706563:6368616F732D7468656F7279
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D73706563:736F63696F6C6F6779
74797065733D636F6E666572656E63655F6974656D
https://philsci-archive.pitt.edu/15171/
Explaining Scientific Collaboration: a General Functional Account
Boyer-Kassem, Thomas
Imbert, Cyrille
Complex Systems
Computer Science
Explanation
Models and Idealization
Sociology
For two centuries, collaborative research has become increasingly widespread. Various explanations of this trend have been proposed. Here, we offer a novel functional explanation of it. It differs from accounts like that of Wray (2002) by the precise socio-epistemic mechanism that grounds the beneficialness of collaboration. Boyer-Kassem and Imbert (2015) show how minor differences in the step-efficiency of collaborative groups can make them much more successful in particular configurations. We investigate this model further, derive robust social patterns concerning the general successfulness of collaborative groups, and argue that these patterns can be used to defend a general functional account.
2018-10
Conference or Workshop Item
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15171/1/2018_03_01_Explaining_collaboration%3DPSA%3DV91_online%3DPhilsci_Archiv.pdf
Boyer-Kassem, Thomas and Imbert, Cyrille (2018) Explaining Scientific Collaboration: a General Functional Account. In: UNSPECIFIED.
oai:philsci-archive.pitt.edu:15230
2018-10-31T17:53:56Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
74797065733D636F6E666572656E63655F6974656D
https://philsci-archive.pitt.edu/15230/
The Meta-Inductive Justification of Induction: The Pool of Strategies
Sterkenburg, Tom F.
Computer Science
Confirmation/Induction
This paper poses a challenge to Schurz's proposed meta-inductive justification of induction. It is argued that Schurz's argument requires a notion of optimality that can deal with an expanding pool of prediction strategies.
2018
Conference or Workshop Item
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15230/1/metapredz.pdf
Sterkenburg, Tom F. (2018) The Meta-Inductive Justification of Induction: The Pool of Strategies. In: UNSPECIFIED.
oai:philsci-archive.pitt.edu:15349
2018-11-20T01:39:55Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:636F6D7075746174696F6E2D696E666F726D6174696F6E2D7175616E74756D
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15349/
Universality, Invariance, and the Foundations of Computational Complexity in the light of the Quantum Computer
Cuffaro, Michael E.
Classical
Quantum
Computer Science
Structure of Theories
Technology
2018-11-18
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15349/1/tech_complex.pdf
Cuffaro, Michael E. (2018) Universality, Invariance, and the Foundations of Computational Complexity in the light of the Quantum Computer. [Preprint]
https://doi.org/10.1007/978-3-319-93779-3_11
10.1007/978-3-319-93779-3_11
oai:philsci-archive.pitt.edu:15738
2019-02-23T22:09:19Z
7375626A656374733D73706563:6368656D6973747279
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706572696D656E746174696F6E
7375626A656374733D67656E:7468656F72792D6F62736572766174696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15738/
Instrumental Perspectivism: Is AI Machine Learning Technology like NMR Spectroscopy?
Mitchell, Sandra D.
Chemistry
Computer Science
Experimentation
Theory/Observation
The question, “Will science remain human?” expresses a worry that deep learning algorithms will replace scientists in making crucial judgments of classification and inference and that something crucial will be lost if that happens. Ever since the introduction of telescopes and microscopes, humans have relied on technologies to “extend” beyond human sensory perception in acquiring scientific knowledge. In this paper I explore whether the ways in which new learning technologies “extend” beyond human cognitive aspects of science can be treated instrumentally. I will consider the norms for determining the reliability of a detection instrument, nuclear magnetic resonance spectroscopy, in predicting models of protein atomic structure. Can the same norms that apply in that case be used to judge the reliability of Artificial Intelligence deep learning algorithms?
2019-02-12
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15738/1/Mitchell%20Instrumental%20Perspectivism%202019.pdf
Mitchell, Sandra D. (2019) Instrumental Perspectivism: Is AI Machine Learning Technology like NMR Spectroscopy? [Preprint]
oai:philsci-archive.pitt.edu:15770
2019-02-25T21:28:20Z
7375626A656374733D73706563:62696F6C6F6779
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706572696D656E746174696F6E
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15770/
Phronesis and Automated Science: The Case of Machine Learning and Biology
Ratti, Emanuele
Biology
Computation/Information
Computer Science
Experimentation
Technology
The applications of machine learning (ML) and deep learning to the natural sciences have fostered the idea that the automated nature of algorithmic analysis will gradually dispense with human beings in scientific work. In this paper, I will show that this view is problematic, at least when ML is applied to biology. In particular, I will claim that ML is not independent of human beings and cannot form the basis of automated science. Computer scientists conceive their work as being a case of Aristotle’s poiesis perfected by techne, which can be reduced to a number of straightforward rules and technical knowledge. I will show a number of concrete cases where, at each level of computational analysis, more is required of ML than just poiesis and techne, and that the work of ML practitioners in biology also requires the cultivation of something analogous to phronesis, which cannot be automated. But even if we knew how to frame phronesis in rules (which is inconsistent with its very definition), this virtue is still deeply entrenched in our biological constitution, which computers lack. Whether computers can fully perform scientific practice (which is the result of the way we are cognitively and biologically) independently of humans (and their cognitive and biological specificities) is an ill-posed question.
2019
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15770/1/Emanuele%20Ratti%20-%20Phronesis%20and%20Automated%20Science.pdf
Ratti, Emanuele (2019) Phronesis and Automated Science: The Case of Machine Learning and Biology. [Preprint]
oai:philsci-archive.pitt.edu:15855
2019-03-28T02:58:45Z
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:636C6173736963616C2D6169
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D67656E:7068696C6F736F70686572732D6F662D736369656E6365
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/15855/
Making AI meaningful again
Landgrebe, Jobst
Smith, Barry
Logic
Classical AI
Computer Science
Machine Learning
Philosophers of Science
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s, but this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.
Springer (Springer Science+Business Media B.V.)
2019
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15855/1/makingAImeaningfulagain.pdf
Landgrebe, Jobst and Smith, Barry (2019) Making AI meaningful again. Synthese. ISSN 1573-0964
oai:philsci-archive.pitt.edu:15978
2019-05-06T15:55:35Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:636C6173736963616C2D6169
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:63756C747572616C2D65766F6C7574696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15978/
For Cybersecurity, Computer Science Must Rely on the Opposite of Gödel’s Results
Hewitt, Carl
Classical
Foundations
Logic
Classical AI
Computer Science
Cultural Evolution
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the 1st-order theories of Gödel’s results necessarily leave the mathematical structures ill-defined, e.g., there are necessarily models with infinite integers.
Cyberattackers have severely damaged national, corporate, and individual security as well as causing hundreds of billions of dollars of economic damage. A significant cause of the damage is that current engineering practices are not sufficiently grounded in theoretical principles. In the last two decades, little new theoretical work has been done that practically impacts large engineering projects, with the result that computer systems engineering education is insufficient in providing theoretical grounding. If the current cybersecurity situation is not quickly remedied, it will soon become much worse because of the projected development of Scalable Intelligent Systems by 2025 [Hewitt 2019].
Gödel strongly advocated that the Turing Machine is the preeminent universal model of computation. A Turing machine formalizes an algorithm in which computation proceeds without external interaction. However, computing is now highly interactive, which this article proves is beyond the capability of a Turing Machine. Instead of the Turing Machine model, this article presents an axiomatization of a universal model of digital computation (including implementation of Scalable Intelligent Systems) up to a unique isomorphism.
2019-05-03
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15978/1/LICS-105.pdf
Hewitt, Carl (2019) For Cybersecurity, Computer Science Must Rely on the Opposite of Gödel’s Results. [Preprint]
oai:philsci-archive.pitt.edu:15982
2019-05-07T17:04:22Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:636C6173736963616C2D6169
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:63756C747572616C2D65766F6C7574696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15982/
For Cybersecurity, Computer Science Must Rely on the Opposite of Gödel’s Results
Hewitt, Carl
Classical
Foundations
Logic
Classical AI
Computer Science
Cultural Evolution
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the 1st-order theories of Gödel’s results necessarily leave the mathematical structures ill-defined, e.g., there are necessarily models with infinite integers.
Cyberattackers have severely damaged national, corporate, and individual security as well as causing hundreds of billions of dollars of economic damage. A significant cause of the damage is that current engineering practices are not sufficiently grounded in theoretical principles. In the last two decades, little new theoretical work has been done that practically impacts large engineering projects, with the result that computer systems engineering education is insufficient in providing theoretical grounding. If the current cybersecurity situation is not quickly remedied, it will soon become much worse because of the projected development of Scalable Intelligent Systems by 2025 [Hewitt 2019].
Gödel strongly advocated that the Turing Machine is the preeminent universal model of computation. A Turing machine formalizes an algorithm in which computation proceeds without external interaction. However, computing is now highly interactive, which this article proves is beyond the capability of a Turing Machine. Instead of the Turing Machine model, this article presents an axiomatization of a universal model of digital computation (including implementation of Scalable Intelligent Systems) up to a unique isomorphism.
2019-05-03
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15982/1/LICS-105.pdf
text
en
https://philsci-archive.pitt.edu/15982/7/LICS-107.pdf
Hewitt, Carl (2019) For Cybersecurity, Computer Science Must Rely on the Opposite of Gödel’s Results. [Preprint]
oai:philsci-archive.pitt.edu:15989
2019-05-09T20:57:50Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:636C6173736963616C2D6169
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:63756C747572616C2D65766F6C7574696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/15989/
For Cybersecurity, Computer Science Must Rely on the Opposite of Gödel’s Results
Hewitt, Carl
Classical
Foundations
Logic
Classical AI
Computer Science
Cultural Evolution
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the 1st-order theories of Gödel’s results necessarily leave the mathematical structures ill-defined, e.g., there are necessarily models with infinite integers.
Cyberattackers have severely damaged national, corporate, and individual security as well as causing hundreds of billions of dollars of economic damage. A significant cause of the damage is that current engineering practices are not sufficiently grounded in theoretical principles. In the last two decades, little new theoretical work has been done that practically impacts large engineering projects, with the result that computer systems engineering education is insufficient in providing theoretical grounding. If the current cybersecurity situation is not quickly remedied, it will soon become much worse because of the projected development of Scalable Intelligent Systems by 2025 [Hewitt 2019].
Gödel strongly advocated that the Turing Machine is the preeminent universal model of computation. A Turing machine formalizes an algorithm in which computation proceeds without external interaction. However, computing is now highly interactive, which this article proves is beyond the capability of a Turing Machine. Instead of the Turing Machine model, this article presents an axiomatization of a universal model of digital computation (including implementation of Scalable Intelligent Systems) up to a unique isomorphism.
2019-05-03
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/15989/1/LICS-105.pdf
text
en
https://philsci-archive.pitt.edu/15989/7/LICS-107.pdf
Hewitt, Carl (2019) For Cybersecurity, Computer Science Must Rely on the Opposite of Gödel’s Results. [Preprint]
oai:philsci-archive.pitt.edu:16001
2019-05-11T17:01:48Z
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D686973746F7279
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:6D617468656D6174696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16001/
For Cybersecurity, Computer Science Must Rely on Strong Types
Hewitt, Carl
Foundations
History
Logic
Proof
Computer Science
Mathematics
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the 1st-order theories of Gödel’s results necessarily leave the mathematical structures ill-defined, e.g., there are necessarily models with infinite integers.
Cyberattackers have severely damaged national, corporate, and individual security as well as causing hundreds of billions of dollars of economic damage. [Sobers 2019] A significant cause of the damage is that current engineering practices are not sufficiently grounded in theoretical principles. In the last two decades, little new theoretical work has been done that practically impacts large engineering projects, with the result that computer systems engineering education is insufficient in providing theoretical grounding. If the current cybersecurity situation is not quickly remedied, it will soon become much worse because of the projected development of Scalable Intelligent Systems by 2025 [Hewitt 2019].
Gödel strongly advocated that the Turing Machine is the preeminent universal model of computation. A Turing machine formalizes an algorithm in which computation proceeds without external interaction. However, computing is now highly interactive, which this article proves is beyond the capability of a Turing Machine. Instead of the Turing Machine model, this article presents an axiomatization of a universal model of digital computation (including implementation of Scalable Intelligent Systems) up to a unique isomorphism.
2019-05-10
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16001/1/LICS-110.pdf
Hewitt, Carl (2019) For Cybersecurity, Computer Science Must Rely on Strong Types. [Preprint]
oai:philsci-archive.pitt.edu:16008
2019-05-16T14:01:09Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/16008/
The Meta-Inductive Justification of Induction
Sterkenburg, Tom F.
Computer Science
Confirmation/Induction
I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice.
My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge.
The first qualification concerns the empirical success of induction. Even though, I argue, Schurz's argument does not need to spell out what inductive method actually consists in, it does need to postulate that there is something like the inductive or scientific prediction strategy that has so far been *significantly* more successful than alternative approaches. The second qualification concerns the difference between having a justification for inductive method and for sticking with induction *for now*. Schurz's argument can only provide the latter. Finally, the remaining challenge concerns the pool of alternative strategies, and the relevant notion of a meta-inductivist's optimality that features in the analytical step of Schurz's argument. Building on the work done here, I will argue in a follow-up paper that the argument needs a stronger *dynamic* notion of a meta-inductivist's optimality.
2019
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16008/7/metapredl.pdf
Sterkenburg, Tom F. (2019) The Meta-Inductive Justification of Induction. Episteme.
http://dx.doi.org/10.1017/epi.2018.52
oai:philsci-archive.pitt.edu:16024
2019-05-20T18:35:48Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:636C6173736963616C2D6169
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D7068696C6F736F7068792D6F662D736369656E6365
7375626A656374733D67656E:7068696C6F736F70686572732D6F662D736369656E6365
7375626A656374733D67656E:736F6369616C2D6570697374656D6F6C6F67792D6F662D736369656E6365
7375626A656374733D67656E:7468656F72792D6368616E6765
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16024/
For Cybersecurity, Computer Science Must Rely on Strongly-Typed Actors
Hewitt, Carl
Classical
Foundations
Logic
Proof
Classical AI
Computer Science
History of Philosophy of Science
Philosophers of Science
Social Epistemology of Science
Theory Change
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the 1st-order theories and computational systems which are not strongly-typed necessarily provide opportunities for cyberattack.
Cyberattackers have severely damaged national, corporate, and individual security as well as causing hundreds of billions of dollars of economic damage. [Sobers 2019] A significant cause of the damage is that current engineering practices are not sufficiently grounded in theoretical principles. In the last two decades, little new theoretical work has been done that practically impacts large engineering projects, with the result that computer systems engineering education is insufficient in providing theoretical grounding. If the current cybersecurity situation is not quickly remedied, it will soon become much worse because of the projected development of Scalable Intelligent Systems by 2025 [Hewitt 2019].
Gödel strongly advocated that the Turing Machine is the preeminent universal model of computation. A Turing machine formalizes an algorithm in which computation proceeds without external interaction. However, computing is now highly interactive, which this article proves is beyond the capability of a Turing Machine. Instead of the Turing Machine model, this article presents an axiomatization of a strongly-typed universal model of digital computation (including implementation of Scalable Intelligent Systems) up to a unique isomorphism. Strongly-typed Actors provide the foundation for tremendous improvements in cyberdefense.
2019-05-16
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16024/1/LICS-114.pdf
Hewitt, Carl (2019) For Cybersecurity, Computer Science Must Rely on Strongly-Typed Actors. [Preprint]
oai:philsci-archive.pitt.edu:16057
2019-05-30T04:46:36Z
7375626A656374733D73706563:62696F6C6F6779:62696F6C6F67792D6D6F6C6563756C61722D62696F6C6F67792D67656E6574696373
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16057/
Data science and molecular biology: prediction and mechanistic explanation
López-Rubio, Ezequiel
Ratti, Emanuele
Molecular Biology/Genetics
Computer Science
Artificial Intelligence
Explanation
Machine Learning
In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of typical outputs (i.e. models) of such a discipline. In this paper we will critically examine this claim. First, we identify the received view on models and their aims in molecular biology. Models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in particular). These lie mainly in the creation of predictive models whose performance increases as the data set increases. Next, we will identify a tradeoff between predictive and explanatory performance by comparing the features of mechanistic and predictive models. Finally, we show how this a priori analysis of machine learning and mechanistic research applies to actual biological practice. This will be done by analyzing the publications of a consortium, The Cancer Genome Atlas, which stands at the forefront in integrating data science and molecular biology. The result will be that biologists have to deal with the tradeoff between explaining and predicting that we have identified, and hence the explanatory force of the ‘new’ biology is substantially diminished if compared to the ‘old’ biology. However, this aspect also emphasizes the existence of other research goals which make predictive force independent from explanation.
2019-05-28
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16057/1/Data%20science%20and%20molecular%20biology.pdf
spreadsheet
en
https://philsci-archive.pitt.edu/16057/2/Supplementary%20Table%201.xls
text
en
https://philsci-archive.pitt.edu/16057/3/SupplementaryTableReferenceList.pdf
López-Rubio, Ezequiel and Ratti, Emanuele (2019) Data science and molecular biology: prediction and mechanistic explanation. [Preprint]
10.1007/s11229-019-02271-0
oai:philsci-archive.pitt.edu:16133
2019-06-22T18:11:51Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E:436C6173736963616C
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D686973746F7279
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D6D6574686F646F6C6F6779
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:64657465726D696E69736D2D696E64657465726D696E69736D
7375626A656374733D67656E:736369656E63652D706F6C696379
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16133/
For Cybersecurity, Computer Science Must Rely on Strongly-Typed Actors
Hewitt, Carl
Classical
Foundations
History
Logic
Methodology
Proof
Computer Science
Determinism/Indeterminism
Science and Policy
This article shows how fundamental higher-order theories of mathematical structures of computer science (e.g. natural numbers [Dedekind 1888] and Actors [Hewitt et al. 1973]) are categorical, meaning that they can be axiomatized up to a unique isomorphism, thereby removing any ambiguity in the mathematical structures being axiomatized. Having these mathematical structures precisely defined can make systems more secure because there are fewer ambiguities and holes for cyberattackers to exploit. For example, there are no infinite elements in models for natural numbers to be exploited. On the other hand, the 1st-order theories and computational systems which are not strongly-typed necessarily provide opportunities for cyberattack.
Cyberattackers have severely damaged national, corporate, and individual security as well as causing hundreds of billions of dollars of economic damage. [Sobers 2019] A significant cause of the damage is that current engineering practices are not sufficiently grounded in theoretical principles. In the last two decades, little new theoretical work has been done that practically impacts large engineering projects, with the result that computer systems engineering education is insufficient in providing theoretical grounding. If the current cybersecurity situation is not quickly remedied, it will soon become much worse because of the projected development of Scalable Intelligent Systems by 2025 [Hewitt 2019].
Kurt Gödel strongly advocated that the Turing Machine is the preeminent universal model of computation. A Turing machine formalizes an algorithm in which computation proceeds without external interaction. However, computing is now highly interactive, which this article proves is beyond the capability of a Turing Machine. Instead of the Turing Machine model, this article presents an axiomatization of a strongly-typed universal model of digital computation (including implementation of Scalable Intelligent Systems) up to a unique isomorphism. Strongly-typed Actors provide the foundation for tremendous improvements in cyberdefense.
2019-06-19
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16133/1/LFCS-010.pdf
Hewitt, Carl (2019) For Cybersecurity, Computer Science Must Rely on Strongly-Typed Actors. [Preprint]
oai:philsci-archive.pitt.edu:16139
2019-06-20T18:16:39Z
7375626A656374733D67656E:44617461
7375626A656374733D73706563:62696F6C6F6779
7375626A656374733D73706563:62696F6C6F6779:62696F6C6F67792D73797374656D6174696373
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7265616C69736D2D616E74692D7265616C69736D
7375626A656374733D67656E:736F6369616C2D6570697374656D6F6C6F67792D6F662D736369656E6365
7375626A656374733D67656E:7468656F72792D6368616E6765
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16139/
Alternatives to Realist Consensus in Bio-Ontologies: Taxonomic Classification as a Basis for Data Discovery and Integration
Sterner, Beckett
Witteveen, Joeri
Franz, Nico
Data
Biology
Systematics
Computer Science
Realism/Anti-realism
Social Epistemology of Science
Theory Change
Big data is opening new angles on old questions about scientific progress. Is scientific knowledge cumulative? If yes, how does it make progress? In the life sciences, what we call the Consensus Principle has dominated the design of data discovery and integration tools: the design of a formal classificatory system for expressing a body of data should be grounded in consensus. Based on current approaches in biomedicine and systematic biology, we formulate and compare three types of the Consensus Principle: realist, contextual-best, and coordinative. Contrasted with the realist program of the Open Biomedical Ontologies Foundry, we argue that historical practices in systematic biology provide an important and overlooked alternative based on coordinative consensus. Systematists have developed a robust system for referring to taxonomic entities that can deliver high quality data discovery and integration without invoking consensus about reality or “settled” science.
2019
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16139/1/AlternativesToConsensusOntologies-March12-2019.pdf
Sterner, Beckett and Witteveen, Joeri and Franz, Nico (2019) Alternatives to Realist Consensus in Bio-Ontologies: Taxonomic Classification as a Basis for Data Discovery and Integration. [Preprint]
oai:philsci-archive.pitt.edu:16276
2019-08-02T04:09:22Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D67656E:76616C7565732D696E2D736369656E6365
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/16276/
Understanding from Machine Learning Models
Sullivan, Emily
Computer Science
Explanation
Machine Learning
Models and Idealization
Values In Science
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
2019
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16276/1/Und_MLM_Sullivan_penultiamte.pdf
Sullivan, Emily (2019) Understanding from Machine Learning Models. British Journal for the Philosophy of Science. ISSN 1464-3537
oai:philsci-archive.pitt.edu:16326
2019-08-15T02:04:50Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D73706563:6E6575726F736369656E6365
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16326/
Deep Learning: A Philosophical Introduction
Buckner, Cameron
Computer Science
Explanation
Machine Learning
Neuroscience
Technology
Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs, and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally-accepted explanation as to why they work so well. This article provides an introduction to these networks, as well as an opinionated guidebook on the philosophical significance of their structure and achievements. It argues that deep learning neural networks differ importantly in their structure and mathematical properties from the shallower neural networks that were the subject of so much philosophical reflection in the 1980s and 1990s. The article then explores several different explanations for their success, and ends by proposing ten areas of research that would benefit from future engagement by philosophers of mind, epistemology, science, perception, law, and ethics.
2019
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16326/1/Deep%20learning%20-%20Phil%20compass%20-%20draft%203.pdf
Buckner, Cameron (2019) Deep Learning: A Philosophical Introduction. [Preprint]
10.1111/phc3.12625
oai:philsci-archive.pitt.edu:16336
2019-08-16T12:00:17Z
7375626A656374733D73706563:636F676E69746976652D736369656E6365
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6D7075746174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D67656E:6F7065726174696F6E616C69736D2D696E7374756D656E74616C69736D
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16336/
Against Computational Perspectivalism
Coelho Mollo, Dimitri
Cognitive Science
Computation
Computer Science
Explanation
Operationalism/Instrumentalism
Computational perspectivalism has been recently proposed as an alternative to mainstream accounts of physical computation, and especially to the teleologically-based mechanistic view. It takes physical computation to be partly dependent on explanatory perspectives, and eschews appeal to teleology in helping individuate computational systems. I assess several varieties of computational perspectivalism, showing that they either collapse into existing non-perspectival views; or end up with unsatisfactory or implausible accounts of physical computation. Computational perspectivalism fails therefore to be a compelling alternative to perspective-independent theories of computation in physical systems. I conclude that a teleologically-based, non-perspectival mechanistic account of physical computation is to be preferred.
2019-08
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16336/1/Against%20computational%20perspectivalism%20%28BJPS%29%20-%20Preprint%20-%20Coelho%20Mollo.pdf
Coelho Mollo, Dimitri (2019) Against Computational Perspectivalism. [Preprint]
https://doi.org/10.1093/bjps/axz036
10.1093/bjps/axz036
oai:philsci-archive.pitt.edu:16337
2019-08-16T12:01:25Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:70737963686F6C6F67792D70737963686961747279:6A7564676D656E742D616E642D6465636973696F6E2D6D616B696E67
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16337/
Nonrational Belief Paradoxes as Byzantine Failures
Miller, Ryan
Computer Science
Judgment and Decision Making
Probability/Statistics
David Christensen and others argue that Dutch Strategies are more like peer disagreements than Dutch Books, and should not count against agents’ conformity to ideal rationality. I review these arguments, then show that Dutch Books, Dutch Strategies, and peer disagreements are only possible in the case of what computer scientists call Byzantine Failures—uncorrected Byzantine Faults which update arbitrary values. Yet such Byzantine Failures make agents equally vulnerable to all three kinds of epistemic inconsistencies, so there is no principled basis for claiming that only avoidance of true Dutch Books characterizes ideally rational agents. Agents without Byzantine Failures can be ideally rational in a very strong sense, but are not normative for humans. Bounded rationality in the presence of Byzantine Faults remains an unsolved problem.
2019
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16337/1/byzantineirrationality.pdf
Miller, Ryan (2019) Nonrational Belief Paradoxes as Byzantine Failures. [Preprint]
oai:philsci-archive.pitt.edu:16344
2019-08-20T13:54:08Z
7375626A656374733D73706563:62696F6C6F6779:62696F6C6F67792D66756E6374696F6E2D74656C656F6C6F6779
7375626A656374733D73706563:636F676E69746976652D736369656E6365
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6D7075746174696F6E
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16344/
Are There Teleological Functions to Compute?
Coelho Mollo, Dimitri
Function/Teleology
Cognitive Science
Computation
Computation/Information
Computer Science
Explanation
I analyse a tension at the core of the mechanistic view of computation, generated by its joint commitment to the medium-independence of computational vehicles, and to computational systems possessing teleological functions to compute. While computation is individuated in medium-independent terms, teleology is sensitive to the constitutive physical properties of vehicles. This tension spells trouble for the mechanistic view, suggesting that there can be no teleological functions to compute. I argue that, once considerations about the relevant function-bestowing factors for computational systems are brought to bear, the tension dissolves: physical systems can have the teleological function to compute.
2019-07
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16344/1/Are%20there%20teleological%20functions%20to%20compute%20-%20Preprint%20%28Philosophy%20of%20Science%29%20-%20Dimitri%20Coelho%20Mollo.pdf
Coelho Mollo, Dimitri (2019) Are There Teleological Functions to Compute? [Preprint]
https://www.journals.uchicago.edu/doi/10.1086/703554
10.1086/703554
oai:philsci-archive.pitt.edu:16443
2019-09-18T23:53:40Z
7375626A656374733D67656E:44617461
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:6368616F732D7468656F7279
7375626A656374733D73706563:636C696D6174652D736369656E63652D616E642D6D6574656F726F6C6F6779
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6D70757465722D73696D756C6174696F6E
7375626A656374733D73706563:65636F6E6F6D696373
7375626A656374733D67656E:6578706C616E6174696F6E
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16443/
Big data and prediction: four case studies
Northcott, Robert
Data
Causation
Complex Systems
Climate Science and Meteorology
Computer Science
Computer Simulation
Economics
Explanation
Has the rise of data-intensive science, or ‘big data’, revolutionized our ability to predict? Does it imply a new priority for prediction over causal understanding, and a diminished role for theory and human experts? I examine four important cases where prediction is desirable: political elections, the weather, GDP, and the results of interventions suggested by economic experiments. These cases suggest caution. Although big data methods are indeed very useful sometimes, in this paper’s cases they improve predictions either limitedly or not at all, and their prospects of doing so in the future are limited too.
2019-09-18
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16443/1/Big%20data%2012.pdf
Northcott, Robert (2019) Big data and prediction: four case studies. [Preprint]
https://doi.org/10.1016/j.shpsa.2019.09.002
oai:philsci-archive.pitt.edu:16484
2019-10-03T23:47:36Z
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D7068696C6F736F7068792D6F662D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D736369656E63652D636173652D73747564696573
7375626A656374733D67656E:736369656E63652D616E642D736F6369657479
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/16484/
Scientific knowledge in the age of computation: Explicated, computable and manageable?
Efstathiou, Sophia
Nydal, Rune
Laegreid, Astrid
Kuiper, Martin
Computation/Information
Computer Science
History of Philosophy of Science
History of Science Case Studies
Science and Society
Technology
We have two theses about scientific knowledge in the age of computation. Our general claim is that scientific Knowledge Management practices emerge as second-order practices whose aim is to systematically collect, take care of and mobilise first-hand disciplinary knowledge and data. Our specific thesis is that knowledge management practices are transforming biological research in at least three ways. We argue that scientific Knowledge Management a. operates with founded concepts of biological knowledge as explicated and computable, b. enables new outputs and ways of knowing within biology, and c. risks enforcing objectivist epistemologies of knowledge as some one objective thing.
Euskal Herriko Unibertsitatea / Universidad del País Vasco
2019-05
Published Article or Volume
NonPeerReviewed
text
en
cc_by_nc_nd_4
https://philsci-archive.pitt.edu/16484/1/def_20045_Efstahiou_Theoria34-2.pdf
Efstathiou, Sophia and Nydal, Rune and Laegreid, Astrid and Kuiper, Martin (2019) Scientific knowledge in the age of computation: Explicated, computable and manageable? THEORIA. An International Journal for Theory, History and Foundations of Science, 34 (2). pp. 213-236. ISSN 2171-679X
https://www.ehu.eus/ojs/index.php/THEORIA/article/view/20045
10.1387/theoria.20045
oai:philsci-archive.pitt.edu:16602
2019-10-31T05:59:15Z
7375626A656374733D67656E:536369656E74696669635F4D65746170687973696373
7375626A656374733D73706563:636F676E69746976652D736369656E6365
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6D7075746174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D67656E:6E61747572616C2D6B696E6473
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16602/
From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence
Stinson, Catherine
Scientific Metaphysics
Cognitive Science
Computation
Computer Science
Explanation
Machine Learning
Models and Idealization
Natural Kinds
There is a vast literature within philosophy of mind that focuses on artificial intelligence, but hardly mentions methodological questions. There is also a growing body of work in philosophy of science about modeling methodology that hardly mentions examples from cognitive science. Here these discussions are connected. Insights developed in the philosophy of science literature about the importance of idealization provide a way of understanding the neural implausibility of connectionist networks. Insights from neurocognitive science illuminate how relevant similarities between models and targets are picked out, how modeling inferences are justified, and the metaphysical status of models.
2019
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16602/1/Artificial_Neurons_preprint.pdf
Stinson, Catherine (2019) From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence. [Preprint]
oai:philsci-archive.pitt.edu:16634
2019-11-13T12:47:32Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6E6669726D6174696F6E2D696E64756374696F6E
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/16634/
The Meta-Inductive Justification of Induction: The Pool of Strategies
Sterkenburg, Tom F.
Computer Science
Confirmation/Induction
This paper poses a challenge to Schurz's proposed meta-inductive justification of induction. It is argued that Schurz's argument requires a notion of optimality that can deal with an expanding pool of prediction strategies.
2019
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16634/1/metapredz.pdf
Sterkenburg, Tom F. (2019) The Meta-Inductive Justification of Induction: The Pool of Strategies. Philosophy of Science.
10.1086/705526
oai:philsci-archive.pitt.edu:16639
2019-11-18T16:52:22Z
7375626A656374733D73706563:6D617468656D6174696373:504D666F756E646174696F6E73
7375626A656374733D73706563:6D617468656D6174696373:504D6C6F676963
7375626A656374733D73706563:6D617468656D6174696373:504D7072616374696365
7375626A656374733D73706563:6D617468656D6174696373:504D70726F6F66
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:7374727563747572652D6F662D7468656F72696573
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/16639/
Univalent Foundations and the UniMath Library. The Architecture of Mathematics.
Bordg, Anthony
Foundations
Logic
Practice
Proof
Computer Science
Structure of Theories
We give a concise presentation of the Univalent Foundations of mathematics outlining the main ideas, followed by a discussion of the UniMath library of formalized mathematics implementing the ideas of the Univalent Foundations (section 1), and the challenges one faces in attempting to design a large-scale library of formalized mathematics (section 2). This leads us to a general discussion about the links between architecture and mathematics where a meeting of minds is revealed between architects and mathematicians (section 3). On the way our odyssey from the foundations to the "horizon" of mathematics will lead us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the architect Christopher Alexander.
Springer International Publishing
2019
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16639/1/UF_and_UniMath.pdf
Bordg, Anthony (2019) Univalent Foundations and the UniMath Library. The Architecture of Mathematics. in Reflections on the Foundations of Mathematics, Synthese Library, 407.
https://www.springer.com/gp/book/9783030156541
10.1007/978-3-030-15655-8
oai:philsci-archive.pitt.edu:16669
2019-11-29T05:41:07Z
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6D70757465722D73696D756C6174696F6E
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16669/
Transparency in Complex Computational Systems
Creel, Kathleen A.
Computer Science
Computer Simulation
Explanation
Machine Learning
Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have suggested treating opaque systems instrumentally, but computer scientists developing strategies for increasing transparency are correct in finding this unsatisfying. Instead, I propose an analysis of transparency as having three forms: transparency of the algorithm, the realization of the algorithm in code, and the way that code is run on particular hardware and data. This targets the transparency most useful for a task, avoiding instrumentalism by providing partial transparency when full transparency is impossible.
2019-11-28
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16669/1/Creel_TransparencyInComplexComputationalSystems.pdf
Creel, Kathleen A. (2019) Transparency in Complex Computational Systems. [Preprint]
oai:philsci-archive.pitt.edu:16916
2020-02-16T16:45:45Z
7375626A656374733D73706563:6E6575726F736369656E6365:636F676E69746976652D6E6575726F736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6D70757465722D73696D756C6174696F6E
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6E63657074732D616E642D726570726573656E746174696F6E73
7375626A656374733D73706563:6E6575726F736369656E6365:73797374656D732D6E6575726F736369656E6365
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16916/
The physics of representation
Poldrack, Russell A.
Cognitive Neuroscience
Computer Science
Computer Simulation
Concepts and Representations
Systems Neuroscience
The concept of “representation” is used broadly and uncontroversially throughout neuroscience, in contrast to its highly controversial status within the philosophy of mind and cognitive science. In this paper I first discuss the way that the term is used within neuroscience, in particular describing the strategies by which representations are characterized empirically. I then relate the concept of representation within neuroscience to one that has developed within the field of machine learning (in particular through recent work in deep learning or “representation learning”). I argue that the recent success of artificial neural networks on certain tasks such as visual object recognition reflects the degree to which those systems (like biological brains) exhibit inherent inductive biases that reflect on the structure of the physical world. I further argue that any system that is going to behave intelligently in the world must contain representations that reflect the structure of the world; otherwise, the system must perform unconstrained function approximation which is destined to fail due to the curse of dimensionality, in which the number of possible states of the world grows exponentially with the number of dimensions in the space of possible inputs. An analysis of these concepts in light of philosophical debates regarding the ontological status of representations suggests that the representations identified within both biological and artificial neural networks qualify as first-class representations.
2020
Preprint
NonPeerReviewed
text
en
cc_by_4
https://philsci-archive.pitt.edu/16916/1/PhysicsOfRepresentation.pdf
Poldrack, Russell A. (2020) The physics of representation. [Preprint]
oai:philsci-archive.pitt.edu:16947
2020-02-26T00:38:20Z
7375626A656374733D73706563:6D617468656D6174696373:504D6F6E746F6C6F6779
7375626A656374733D73706563:636F6D70757465722D736369656E6365
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/16947/
Invention, Intension and the Extension of the Computational Analogy
Greif, Hajo
Ontology
Computer Science
This short philosophical discussion piece explores the relation between two common assumptions: first, that at least some cognitive abilities, such as inventiveness and intuition, are specifically human and, second, that there are principled limitations to what machine-based computation can accomplish in this respect. In contrast to apparent common wisdom, this relation may be one of informal association. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing: Maintaining a principled difference between the processes involved in human cognition, including practices of computation, and machine computation will crucially depend on the requirement of intensional equivalence. However, this requirement was neither part of Turing's expressly extensionally defined analogy between human and machine computation, nor is it pertinent to the domain of computational modelling. Accordingly, the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation.
2020-02-24
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/16947/1/Greif_CiE_2020%20v2.pdf
Greif, Hajo (2020) Invention, Intension and the Extension of the Computational Analogy. [Preprint]
oai:philsci-archive.pitt.edu:17008
2020-03-18T04:35:10Z
7375626A656374733D73706563:62696F6C6F6779:62696F6C6F67792D6D6F6C6563756C61722D62696F6C6F67792D67656E6574696373
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D67656E:7468656F72792D6368616E6765
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17008/
What Kind of Novelties Can Machine Learning Possibly Generate? The Case of Genomics
Ratti, Emanuele
Molecular Biology/Genetics
Computer Science
Artificial Intelligence
Machine Learning
Theory Change
Machine learning (ML) has been praised as a tool that can advance science and knowledge in radical ways. However, it is not clear exactly how radical the novelties that ML generates are. In this article, I argue that this question can only be answered contextually, because outputs generated by ML have to be evaluated on the basis of the theory of the science to which ML is applied. In particular, I analyze the problem of novelty of ML outputs in the context of molecular biology. In order to do this, I first clarify the nature of the models generated by ML. Next, I distinguish three ways in which a model can be novel (from the weakest to the strongest). Third, I dissect the way ML algorithms work and generate models in molecular biology and genomics. On these bases, I argue that ML is either a tool to identify instances of knowledge already present and codified, or to generate models that are novel in a weak sense. The notable contribution of ML to scientific discovery in the context of biology is that it can aid humans in overcoming potential bias by exploring more systematically the space of possible hypotheses implied by a theory.
2020
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17008/1/Novelties%20in%20Machine%20Learning%20%28penultimate%29%20-%20Emanuele%20Ratti.pdf
Ratti, Emanuele (2020) What Kind of Novelties Can Machine Learning Possibly Generate? The Case of Genomics. [Preprint]
oai:philsci-archive.pitt.edu:17027
2020-03-27T01:33:56Z
7375626A656374733D67656E:44617461
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
7375626A656374733D67656E:746563686E6F6C6F6779
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17027/
The Big Data razor
López-Rubio, Ezequiel
Data
Computer Science
Artificial Intelligence
Machine Learning
Probability/Statistics
Technology
Classic conceptions of model simplicity for machine learning are mainly based on the analysis of the structure of the model. Bayesian, Frequentist, information theoretic and expressive power concepts are the best known of them, which are reviewed in this work, along with their underlying assumptions and weaknesses. These approaches were developed before the advent of the Big Data deluge, which has overturned the importance of structural simplicity. The computational simplicity concept is presented, and it is argued that it is more encompassing and closer to actual machine learning practices than the classic ones. In order to process the huge datasets which are commonplace nowadays, the computational complexity of the learning algorithm is the decisive factor to assess the viability of a machine learning strategy, while the classic accounts of simplicity play a surrogate role. Some of the desirable features of computational simplicity derive from its reliance on the learning system concept, which integrates key aspects of machine learning that are ignored by the classic concepts. Moreover, computational simplicity is directly associated with energy efficiency. In particular, the question of whether the maximum possibly achievable predictive accuracy should be attained, no matter the economic cost of the associated energy consumption pattern, is considered.
2020-03
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17027/1/MLSimplicity_EJPS_Preprint.pdf
López-Rubio, Ezequiel (2020) The Big Data razor. [Preprint]
https://dx.doi.org/10.1007/s13194-020-00288-8
10.1007/s13194-020-00288-8
oai:philsci-archive.pitt.edu:17061
2020-04-10T04:20:09Z
7375626A656374733D73706563:6368656D6973747279
7375626A656374733D73706563:636F6D7075746174696F6E2D696E666F726D6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D736369656E63652D636173652D73747564696573
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D67656E:736369656E63652D616E642D736F6369657479
74797065733D7075626C69736865645F61727469636C65
https://philsci-archive.pitt.edu/17061/
“Only the Initiates Will Have the Secrets Revealed”: Computational Chemists and the Openness of Scientific Software
Hocquet, Alexandre
Wieber, Frederic
Chemistry
Computation/Information
Computer Science
History of Science Case Studies
Models and Idealization
Science and Society
Computational chemistry is a scientific field within which the computer is a pivotal element. This scientific community emerged in the 1980s and was involved with two major industries: the computer manufacturers and the pharmaceutical industry, the latter becoming a potential market for the former through molecular modeling software packages. We aim to address the difficult relationships between scientific modeling methods and the software implementing these methods throughout the 1990s. Developing, using, licensing, and distributing software leads to multiple tensions among the actors in intertwined academic and industrial contexts. The Computational Chemistry mailing List (CCL), created in 1991, constitutes a valuable corpus for revealing the tensions associated with software within the community. We analyze in detail two flame wars that exemplify these tensions. We conclude that models and software must be addressed together. Interrelations between both imply that openness in computational science is complex.
2017-10
Published Article or Volume
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17061/1/1811.12173.pdf
Hocquet, Alexandre and Wieber, Frederic (2017) “Only the Initiates Will Have the Secrets Revealed”: Computational Chemists and the Openness of Scientific Software. IEEE Annals of the History of Computing, 39 (4). pp. 40-58. ISSN 1058-6180
http://doi.org/10.1109/MAHC.2018.1221048
10.1109/MAHC.2018.1221048
oai:philsci-archive.pitt.edu:17063
2020-04-13T01:24:38Z
7375626A656374733D73706563:6368656D6973747279
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:686973746F72792D6F662D736369656E63652D636173652D73747564696573
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D67656E:736369656E63652D616E642D736F6369657479
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17063/
Computational Chemistry as Voodoo Quantum Mechanics: Models, Parameterization, and Software
Wieber, Frederic
Hocquet, Alexandre
Chemistry
Computer Science
History of Science Case Studies
Models and Idealization
Science and Society
Computational chemistry grew in a new era of "desktop modeling", which coincided with a growing demand for modeling software, especially from the pharmaceutical industry. Parameterization of models in computational chemistry is an arduous enterprise, and we argue that this activity leads, in this specific context, to tensions among scientists regarding the lack of epistemic transparency of parameterized methods and the software implementing them. To make these tensions explicit, we rely on a corpus which is suited for revealing them, namely the Computational Chemistry mailing List (CCL), a professional scientific discussion forum. We relate one flame war from this corpus in order to assess in detail the relationships between modeling methods, parameterization, software and the various forms of their enclosure or disclosure. Our claim is that parameterization issues are a source of epistemic opacity and that this opacity is entangled in methods and software alike. Models and software must be addressed together to understand the epistemological tensions at stake.
2018-12
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17063/1/1812.00995.pdf
Wieber, Frederic and Hocquet, Alexandre (2018) Computational Chemistry as Voodoo Quantum Mechanics: Models, Parameterization, and Software. [Preprint]
https://arxiv.org/abs/1812.00995
oai:philsci-archive.pitt.edu:17079
2020-04-28T03:17:19Z
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6D7075746174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17079/
Why do we need a theory of implementation?
Curtis-Trudel, Andre E
Computation
Computer Science
The received view of computation is methodologically bifurcated: it offers different accounts of computation in the mathematical and physical cases. But little in the way of argument has been given for this approach. This paper rectifies the situation by arguing that the alternative, a unified account, is untenable. Furthermore, once these issues are brought into sharper relief we can see that work remains to be done to illuminate the relationship between physical and mathematical computation.
2020
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17079/1/WDWN%20-%20Final.pdf
Curtis-Trudel, Andre E (2020) Why do we need a theory of implementation? [Preprint]
oai:philsci-archive.pitt.edu:17169
2020-05-11T03:07:31Z
7375626A656374733D67656E:44617461
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:61692D616E642D657468696373
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6D7075746174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D67656E:7068696C6F736F70686572732D6F662D736369656E6365
7375626A656374733D67656E:736F6369616C2D6570697374656D6F6C6F67792D6F662D736369656E6365
7375626A656374733D73706563:70737963686F6C6F67792D70737963686961747279:736F6369616C2D70737963686F6C6F6779
7375626A656374733D67656E:746563686E6F6C6F6779
7375626A656374733D67656E:76616C7565732D696E2D736369656E6365
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17169/
Algorithmic Bias: On the Implicit Biases of Social Technology
Johnson, Gabbrielle
Data
AI and Ethics
Computation
Computer Science
Machine Learning
Philosophers of Science
Social Epistemology of Science
Social Psychology
Technology
Values In Science
Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, making it difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics. I demonstrate these points in the case of mitigation techniques by presenting what I call 'the Proxy Problem'. One reason biases resist revision is that they rely on proxy attributes, seemingly innocuous attributes that correlate with socially-sensitive attributes, serving as proxies for the socially-sensitive attributes themselves. I argue that in both human and algorithmic domains, this problem presents a common dilemma for mitigation: attempts to discourage reliance on proxy attributes risk a tradeoff with judgement accuracy. This problem, I contend, admits of no purely algorithmic solution.
2020-05-10
Preprint
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17169/1/Algorithmic%20Bias.pdf
Johnson, Gabbrielle (2020) Algorithmic Bias: On the Implicit Biases of Social Technology. [Preprint]
oai:philsci-archive.pitt.edu:17359
2020-06-23T04:45:26Z
7375626A656374733D67656E:636175736174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
74797065733D636F6E666572656E63655F6974656D
https://philsci-archive.pitt.edu/17359/
Causal and Non-Causal Explanations of Artificial Intelligence
Grimsley, Christopher
Causation
Computer Science
Artificial Intelligence
Explanation
Machine Learning
Deep neural networks (DNNs), a particularly effective type of artificial intelligence, currently lack a scientific explanation. The philosophy of science is uniquely equipped to handle this problem. Computer science has attempted, unsuccessfully, to explain DNNs. I review these contributions, then identify shortcomings in their approaches. The complexity of DNNs prohibits the articulation of relevant causal relationships between their parts, and as a result causal explanations fail. I show that many non-causal accounts, though more promising, also fail to explain AI. This highlights a problem with existing accounts of scientific explanation rather than with AI or DNNs.
2020-03-06
Conference or Workshop Item
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17359/1/ExplanationAttention.pdf
Grimsley, Christopher (2020) Causal and Non-Causal Explanations of Artificial Intelligence. In: UNSPECIFIED.
oai:philsci-archive.pitt.edu:17449
2020-07-10T02:00:43Z
7375626A656374733D73706563:6D617468656D6174696373:504D6F6E746F6C6F6779
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17449/
Invention, Intension and the Limits of Computation
Greif, Hajo
Ontology
Computer Science
Artificial Intelligence
This is a critical exploration of the relation between two common assumptions in anti-computationalist critiques of Artificial Intelligence: The first assumption is that at least some cognitive abilities are specifically human and non-computational in nature, whereas the second assumption is that there are principled limitations to what machine-based computation can accomplish with respect to simulating or replicating these abilities. Against the view that these putative differences between computation in humans and machines are closely related, this essay argues that the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing and on an inquiry into the scope and nature of human invention in mathematics, and their respective bearing on theories of computation.
2020-06-30
Preprint
NonPeerReviewed
text
en
cc_by_nc_nd_4
https://philsci-archive.pitt.edu/17449/1/Greif_Intensionality_2020-06-30.pdf
Greif, Hajo (2020) Invention, Intension and the Limits of Computation. [Preprint]
oai:philsci-archive.pitt.edu:17455
2020-07-11T03:25:59Z
7375626A656374733D73706563:6E6575726F736369656E6365:636F676E69746976652D6E6575726F736369656E6365
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:636F6D70757465722D73696D756C6174696F6E
7375626A656374733D73706563:636F676E69746976652D736369656E6365:636F6E63657074732D616E642D726570726573656E746174696F6E73
7375626A656374733D73706563:6E6575726F736369656E6365:73797374656D732D6E6575726F736369656E6365
74797065733D706974747072657072696E74
https://philsci-archive.pitt.edu/17455/
The physics of representation
Poldrack, Russell A.
Cognitive Neuroscience
Computer Science
Computer Simulation
Concepts and Representations
Systems Neuroscience
The concept of “representation” is used broadly and uncontroversially throughout neuroscience, in contrast to its highly controversial status within the philosophy of mind and cognitive science. In this paper I first discuss the way that the term is used within neuroscience, in particular describing the strategies by which representations are characterized empirically. I then relate the concept of representation within neuroscience to one that has developed within the field of machine learning (in particular through recent work in deep learning or “representation learning”). I argue that the recent success of artificial neural networks on certain tasks such as visual object recognition reflects the degree to which those systems (like biological brains) exhibit inherent inductive biases that reflect the structure of the physical world. I further argue that any system that is going to behave intelligently in the world must contain representations that reflect the structure of the world; otherwise, the system must perform unconstrained function approximation, which is destined to fail due to the curse of dimensionality, in which the number of possible states of the world grows exponentially with the number of dimensions in the space of possible inputs. An analysis of these concepts in light of philosophical debates regarding the ontological status of representations suggests that the representations identified within both biological and artificial neural networks qualify as first-class representations.
2020
Preprint
NonPeerReviewed
text
en
cc_by_4
https://philsci-archive.pitt.edu/17455/1/PhysicsOfRepresentation.pdf
Poldrack, Russell A. (2020) The physics of representation. [Preprint]
oai:philsci-archive.pitt.edu:17680
2020-07-29T04:13:26Z
7375626A656374733D67656E:44617461
7375626A656374733D73706563:6D617468656D6174696373:504D7072616374696365
7375626A656374733D73706563:636F6D70757465722D736369656E6365
7375626A656374733D67656E:6578706C616E6174696F6E
7375626A656374733D73706563:636F6D70757465722D736369656E63652D6172746966696369616C2D696E74656C6C6967656E6365:6D616368696E652D6C6561726E696E67
7375626A656374733D67656E:6D6F64656C732D616E642D696465616C697A6174696F6E
7375626A656374733D73706563:70726F626162696C6974792D73746174697374696373
74797065733D636F6E666572656E63655F6974656D
https://philsci-archive.pitt.edu/17680/
Learning from the Shape of Data
Rosenstock, Sarita
Data
Practice
Computer Science
Explanation
Machine Learning
Models and Idealization
Probability/Statistics
This paper examines the epistemic value of using topological methods to study the "shape" of data sets. It is argued that the category theoretic notion of "functoriality" aids in translating visual intuitions about structure in data into precise, computable descriptions of real-world systems.
2020
Conference or Workshop Item
NonPeerReviewed
text
en
https://philsci-archive.pitt.edu/17680/1/TDA.pdf
Rosenstock, Sarita (2020) Learning from the Shape of Data. In: UNSPECIFIED.