When it comes down to it, our brains are programmed mostly by how they're wired. Given the vast range of things that basically identical neurons do, with the differences lying mainly in their geometry and firing rates, the only conclusion we can reach is that the programming lives in how our brains are set up.

But the important thing to recognize, when imagining an AI, is that there's value in creating something that thinks rather than something that just queries a database. There's going to be a point where we want Siri not just to drop whatever we ask into some search field, but to actively listen and make CHOICES about things we say in passing. For instance, we might want a future electronic personal assistant to listen to our conversations throughout the day and decide whether we want to schedule meetings, look up our calendar, or search for complicated information that isn't presented in a straightforward manner.
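To make that a bit more concrete, here's a minimal sketch of that listen-and-decide loop. Everything in it is hypothetical: the intent labels, the confidence numbers, and the keyword matching are stand-ins for whatever real speech understanding such an assistant would actually use.

```python
# A toy version of the "listen and decide" loop described above.
# The intents, thresholds, and keyword rules are invented for illustration.

def classify_intent(utterance):
    """Guess an (intent, confidence) pair for a snippet of overheard speech.
    A real assistant would use a trained language model; this stub just
    keyword-matches so the example stays self-contained."""
    lowered = utterance.lower()
    if "meeting" in lowered or "schedule" in lowered:
        return "schedule_meeting", 0.8
    if "calendar" in lowered or "free on" in lowered:
        return "check_calendar", 0.7
    if "how much" in lowered or "find out" in lowered:
        return "complex_search", 0.6
    return "ignore", 0.9

def assistant_loop(utterances):
    """Decide, for each overheard remark, whether it's worth acting on."""
    for utterance in utterances:
        intent, confidence = classify_intent(utterance)
        # The key difference from a plain search box: the assistant chooses
        # whether the remark deserves any action at all.
        if intent != "ignore" and confidence > 0.5:
            print(f"acting on '{utterance}' -> {intent}")

assistant_loop([
    "we should schedule a meeting about the budget",
    "nice weather today",
])
```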

For instance, you might, in passing conversation, wonder aloud whether it's too much to go out to eat at a particular restaurant. A few minutes later, you'd like your PDA to text you with an estimate of how much it would cost to drive there in your car, incorporating things like traffic conditions and a knowledge of what you like to order at that restaurant, which it has collected over time.
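The arithmetic behind an estimate like that could be as simple as the sketch below. The numbers and parameters (fuel price, a traffic multiplier, a remembered usual order) are all made up for illustration; a real assistant would pull them from live data and your history.

```python
# A back-of-the-envelope cost estimate of the kind described above.
# All figures are hypothetical.

def estimate_outing_cost(distance_miles, mpg, fuel_price_per_gal,
                         traffic_multiplier, usual_order_price):
    """Rough total cost of driving to the restaurant and ordering the usual."""
    fuel_cost = (distance_miles / mpg) * fuel_price_per_gal
    # Heavier traffic burns more fuel and time; fold it in as a multiplier.
    drive_cost = fuel_cost * traffic_multiplier
    return round(drive_cost + usual_order_price, 2)

# Example: 24 miles round trip, 30 mpg, $3.50/gal, moderate traffic,
# and the assistant remembers you usually spend about $28 there.
print(estimate_outing_cost(distance_miles=24, mpg=30,
                           fuel_price_per_gal=3.50,
                           traffic_multiplier=1.2,
                           usual_order_price=28.00))
```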

To say we'll never want, need, or be able to build AIs that really think is foolish. We will want more and more out of our machines.

While I don't lend credence to the idea that a little glitch in a program will cause sentience, imagine the AI I described above. Now imagine this situation: your AI knows when you don't want to be bothered (tests, meetings, intimate moments and such), but it also knows that sometimes you need to be woken up. Now this AI is stuck reasoning about itself. Perhaps it's low on power and needs to weigh what matters more to you: that the PDA dies, or that you get bothered. It's not hard to imagine that one day this AI will have to recognize its own patterns of behavior. It will have to understand that it has value to you, and thus that it has value in staying alive. It will have to think about what to think about in order to conserve energy, and once you get to that place, sentience isn't a short leap, but it's possible that this machine, stuck deciding what it should think about, gains something like sentience. It may well reach the point where it asks, perhaps even NEEDS, you to plug it in, and becomes distressed otherwise. This AI will reason about how deeply it's allowed to think about its own behavior and energy use in order to stay powered as long as possible.
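That tradeoff, between dying quietly and bothering you, is easy to caricature in code. The utilities, thresholds, and the ten-hour battery assumption below are all invented; the point is only to show the shape of the decision the machine would be making about itself.

```python
# A toy model of the "die or bother you" tradeoff described above.
# All costs and constants are hypothetical.

from dataclasses import dataclass

@dataclass
class AssistantState:
    battery_fraction: float        # 0.0 (dead) to 1.0 (full)
    user_busy: bool                # in a test, meeting, etc.
    hours_until_next_alarm: float

def cost_of_dying(state):
    """Missing an alarm is worse the sooner the alarm is."""
    return 10.0 / max(state.hours_until_next_alarm, 0.1)

def cost_of_interrupting(state):
    """Bothering you is worse when you're busy."""
    return 8.0 if state.user_busy else 1.0

def decide(state):
    # Crude runtime estimate: assume roughly 10 hours on a full charge.
    hours_left = state.battery_fraction * 10.0
    will_die_before_alarm = hours_left < state.hours_until_next_alarm
    if not will_die_before_alarm:
        return "stay quiet"
    if cost_of_dying(state) > cost_of_interrupting(state):
        return "ask to be plugged in"
    # Otherwise, conserve: think less so the battery lasts longer.
    return "reduce background processing"

print(decide(AssistantState(battery_fraction=0.15, user_busy=True,
                            hours_until_next_alarm=2.0)))
```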

When it comes down to it, I firmly believe that we'll want AI to be close to consciousness, and that we'll make it powerful enough to get there, unless we put severe restrictions on what kinds of things we allow an AI to think about. I don't believe those restrictions will come easily, and we'd build better machines without them. One day, I'm sure, we'll have to deal with a machine that is so good at staying alive that it becomes distressed when its predictions about when it'll get plugged in turn out wrong.

When it really all comes down to it, though, our machines, even if we don't allow them to become sentient, will have one thing that will get us into trouble anyway: they're meant to communicate with us like humans do. We won't want the kind of stiff interaction people associate with programs. And when our phones and gadgets ask to be charged, fed, and connected to the internet in human ways, we're going to feel like they're sentient, because that's exactly what we built them to do: communicate with us effectively.

I'd just like to leave one last note: human infants are likely not sentient, but we still have emotional and ethical obligations to them. At the very least, we should be prepared to deal with machines that are at least that sentient.