Trevor Stone's Journal
Those who can, do. The rest hyperlink.
Philogeek talk. 
21st-Sep-2001 01:16 am
Feel free to ignore this post if you don't feel up to a Gödel-related post. I'm sitting on a ball, nekkid, in the dark, with low monitor contrast because I just jumped out of bed and feel like this is the best place to write this down...

Psychological determinism is the claim that, given full knowledge about a person's state, we can predict his actions. That is, given the total input to a person's mind, we can determine his actions. In other words, a person's mind is a program.

However, from theoretical computer science we know (this is Rice's theorem) that one cannot write a program which determines, with absolute certainty, an arbitrary nontrivial I/O property of another program. So even if someone's mental output is predictable with 100% certainty from mental input, we cannot describe a procedure that does the predicting in every case.

Suppose I had the ability to determine whether a person would answer a question in the affirmative or negative. What is my answer to "Will I answer this question in the negative?" If I will, then my answer is affirmative, and therefore incorrect. If I won't, then my answer is negative, and therefore incorrect.
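That self-defeating question is exactly the diagonalization that breaks the halting problem. A minimal sketch (all names here are hypothetical; `predicts_negative` stands in for the supposed prediction ability): any total predictor can be turned against itself.

```python
def make_contrarian(predicts_negative):
    """Given any claimed total predictor (a hypothetical oracle saying
    whether a mind will answer a question in the negative), build an
    answerer the predictor gets wrong on the self-referential question."""
    def contrarian(question):
        # Ask the predictor about ourselves, then do the opposite.
        if predicts_negative(contrarian, question):
            return "yes"  # predicted a negative answer, so answer affirmatively
        else:
            return "no"   # predicted an affirmative answer, so answer negatively
    return contrarian

q = "Will I answer this question in the negative?"
# Whatever concrete guess the predictor commits to, it is wrong:
assert make_contrarian(lambda prog, question: True)(q) == "yes"
assert make_contrarian(lambda prog, question: False)(q) == "no"
```

Either branch contradicts the prediction, so no such total predictor can exist -- the same contradiction spelled out in prose above.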

Hrm. That example implies that we don't even need to know how the determinism-evaluation process works; it's enough to have an I/O table regarding psychological I/O tables.

So does this prove free will? No. Humans may be psychologically determined to the fullest, but humans cannot predict with certainty the results of such determinism.

Does it mean we can't predict how other people's minds will react to something? No. The self-looping theorem is about certainty. It leaves open the possibility for very effective heuristics.

Do I have a good paper for this year's Rocky Mountain Philosophy Conference? Hells yeah.
Comments 
24th-Sep-2001 11:46 pm (UTC) - The Problem
This law only applies to unrestricted programs -- your everyday program running on your everyday computer is actually a finite state machine, although a really convoluted one. And we can know I/O properties of finite state machines. And predicting a mind that way would, most likely, involve knowing the exact status of all the neurons in the body. Impractical if not impossible. (Does this turn into a problem analogous to quantum state observation? Is it possible to measure the state of all neurons in a person's body without changing the state of some of them? I suppose you could keep track of them all after measurement.)
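The finite-state point can be made concrete: for a deterministic process with finitely many states, you really can decide I/O properties by brute-force simulation with cycle detection. A minimal sketch (the function name and the example machine are made up for illustration):

```python
def reaches_accepting(step, state, is_accepting):
    """Decide whether a deterministic finite-state process ever reaches
    an accepting state, by simulating until a state repeats.
    `step` must have finitely many reachable states for this to terminate."""
    seen = set()
    while state not in seen:
        if is_accepting(state):
            return True
        seen.add(state)
        state = step(state)
    return False  # a state repeated without accepting: it never will

# A counter stepping by 3 mod 7, started at 0: does it ever hit 5? (yes)
assert reaches_accepting(lambda s: (s + 3) % 7, 0, lambda s: s == 5) == True
# Does it ever leave 0..6? (no -- the cycle is detected, and we get a definite answer)
assert reaches_accepting(lambda s: (s + 3) % 7, 0, lambda s: s > 10) == False
```

The `seen` set is the whole trick: a finite-state machine must eventually revisit a state, so the simulation always terminates with a certain answer -- exactly what Rice's theorem forbids for unrestricted programs.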

But a human mind could be turned into an unrestricted program by being given a limitless supply of writing media so as to create an indefinite amount of virtual memory.

Furthermore, there's still the somewhat Gödelian problem of "Will I answer this question negatively?" Lucas cannot consistently assert this sentence.
19th-Jan-2003 11:36 pm (UTC)
Was thinking about this at lunch this afternoon, and this seems like one good place to post it.

A person cannot experience the state of all of his neurons, except in the state of normal operation (i.e. no neuron acts as a representation of the state of another neuron). A computer program cannot display the status of every bit of its memory, except in the same trivial sense. This has implications for people obsessed with first-person experience, I suppose.
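A toy illustration of that trivial-sense point (the variable names are made up): a snapshot of the whole memory is a faithful representation only while it lives outside the memory. The moment it is stored inside, it is stale, because storing it changed the very state it was supposed to describe.

```python
memory = [0, 0, 0, 0]       # the system's entire state
snapshot = list(memory)     # an external copy: an accurate representation
memory[0] = snapshot        # store the representation inside the system...
assert memory[0] != memory  # ...and it no longer matches the current state
```

Updating the internal copy to match the new state would change the state again, so the mismatch is unfixable: an infinite regress, not a bug.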