dc.contributor.supervisor: Martens, Rhonda (Philosophy)
dc.contributor.author: Novelli, Nicholas
dc.date.accessioned: 2015-09-01T17:58:09Z
dc.date.available: 2015-09-01T17:58:09Z
dc.date.issued: 2015
dc.identifier.uri: http://hdl.handle.net/1993/30702
dc.description.abstract: In pop culture, artificial intelligences (AI) are frequently portrayed as worthy of moral personhood, and failing to treat these entities as such is often depicted as analogous to racism. The implicit condition for attributing moral personhood to an AI is usually passing some form of the "Turing Test", wherein an entity passes if it could be mistaken for a human. I argue that this is unfounded under any moral theory that uses the capacity for desire as the criterion for moral standing. Though the action-based theory of desire ensures that passing a rigorous enough version of the Turing Test would be sufficient for moral personhood, that theory has unacceptable results when used in moral theory. If a desire-based moral theory is to be made defensible, it must use a phenomenological account of desire, which would make the Turing Test fail to track the relevant property.
dc.language.iso: eng
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Philosophy
dc.subject: Ethics
dc.subject: Artificial intelligence
dc.subject: Desire
dc.subject: Phenomenology
dc.title: Adventures in space racism: going beyond the Turing Test to determine AI moral standing
dc.type: info:eu-repo/semantics/masterThesis
dc.type: master thesis
dc.degree.discipline: Philosophy
dc.contributor.examiningcommittee: Shaver, Robert (Philosophy); Hannan, Sarah (Political Studies)
dc.degree.level: Master of Arts (M.A.)
dc.description.note: October 2015

