https://dustycloud.org/blog/sussman-on-ai/
At some point Sussman expressed how he thought AI was on the wrong track. He explained that most AI directions weren't interesting to him, because they were about building up a solid AI foundation and then running the system as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court." (I know, that definitely sounds like out-there science fiction; bear with me... keeping that frame of mind is useful for the rest of this.)
And, Sussman explained, the propagator model carries its symbolic reasoning along with it.
"if a child knocks over a vase, you might be angry at them, and they might have done the wrong thing. But why did they do it? If a child can explain to you that they knew you were afraid of insects, and swung at a fly going by, that can help you debug that social circumstance so you and the child can work together towards better behavior in the future."
Cells' values are propagated from the results of other cells, but they also carry the metadata of how they achieved that result.
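To make that concrete, here is a minimal sketch of the idea in Python. It is not the actual Scheme propagator system from "The Art of the Propagator"; the `Cell` class, the `propagator` helper, and the provenance format are invented here for illustration. The point is only that a cell can store, alongside its value, a record of which operation and which input cells produced it, so you can ask the result to explain itself.

```python
# A toy sketch of propagator cells that carry their provenance.
# (Hypothetical names; not the real Scheme system from Radul & Sussman.)

class Cell:
    """Holds a value plus a record of how that value was derived."""
    def __init__(self, name, value=None, because=None):
        self.name = name
        self.value = value
        self.because = because or []   # list of (operation, input cells)
        self.listeners = []            # propagators to fire when value arrives

    def add_content(self, value, because):
        if self.value is None:
            self.value = value
            self.because = because
            for fire in self.listeners:
                fire()
        elif self.value != value:
            raise ValueError(f"contradiction in {self.name}: "
                             f"{self.value} vs {value}")

    def explain(self, depth=0):
        """Walk the provenance chain: 'why do you hold this value?'"""
        pad = "  " * depth
        print(f"{pad}{self.name} = {self.value}")
        for op, inputs in self.because:
            names = ", ".join(c.name for c in inputs)
            print(f"{pad}  because {op}({names})")
            for c in inputs:
                c.explain(depth + 2)

def propagator(op_name, fn, inputs, output):
    """Wire a function between cells; the output remembers its inputs."""
    def fire():
        if all(c.value is not None for c in inputs):
            output.add_content(fn(*(c.value for c in inputs)),
                               because=[(op_name, inputs)])
    for c in inputs:
        c.listeners.append(fire)
    fire()

# Usage: Fahrenheit to Celsius, then ask the answer to explain itself.
f = Cell("fahrenheit")
c = Cell("celsius")
propagator("f->c", lambda f_: (f_ - 32) * 5 / 9, [f], c)
f.add_content(212, because=[("user-input", [])])
c.explain()
# celsius = 100.0
#   because f->c(fahrenheit)
#     fahrenheit = 212
#       because user-input()
```

In the real propagator model the provenance machinery is far richer (truth maintenance, contradiction resolution, dependency-directed backtracking), but even this toy version shows how "why did you do that?" becomes an answerable question rather than a black box.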
Free software advocates have long known that if you can't inspect a system, you're held prisoner by it. Yet this applies not just to the layers that programmers currently code on, but also to new and more abstract frontiers. A black box that you can't ask to explain itself is a dangerous and probably poorly operating device or system.
Alexey Radul and Gerald Jay Sussman, "The Art of the Propagator", MIT CSAIL Technical Report MIT-CSAIL-TR-2009-002, 2009.
Gerald Jay Sussman, "We Really Don't Know How to Compute!", talk at Strange Loop, 2011.