Retrospective 2024-21

This is a retrospective of week 21, 2024 (2024-05-20–2024-05-26).

This week, I’ve continued reading The Living Classroom by Christopher Bache, who says in this interview:

That book [The Living Classroom]…was basically my attempt to understand the fields of consciousness that were connecting my students, and my courses, and me… I had been pondering the riddle for a long time, but it just came through clear in 10 minutes in one session—and took years, of course, to open it up and to do the research for it.

—Christopher Bache https://youtu.be/pYw8ZqCqx18?t=1078

My experience has been…a growing understanding of the porosity of mind, of the field nature of our individual consciousness, the field nature of the collective consciousness—the way that individuals, when they come together and focus their intention, they create a field.

—Christopher Bache https://youtu.be/pYw8ZqCqx18?t=2003

Daniel Siegel uses phrases such as “relational fields” and “generative social fields” to describe what happens in the classroom:

Some people would use the phrase relational sensing, or relational fields. … We’re trying to figure out how to study those things in a classroom. So we talk about generative social fields… And we’ve picked out several teachers that seem to create exactly what you’re talking about in a classroom. We’ve…brought them together, and then we ask them very directly, what are you doing? because everyone realizes, when they walk in that room, they can feel it. And the teachers say the same thing, they have no idea. They just do it! So we’re making films of them, and then we’re going to study the films because they can’t articulate it.

—Daniel Siegel https://youtu.be/pYw8ZqCqx18?t=2319

I’ve also read this article by Edward Frenkel, which is a chapter from Artificial Intelligence Safety and Security, edited by R. Yampolskiy. Edward Frenkel emphasizes that the first-person perspective is highly relevant to AI safety (my emphasis in italics):

For centuries, scientists viewed the world as a collection of objects interacting with each other but independent of the observer, and for this reason any hints of the subjective were criticized and rejected. However, since the beginning of the twentieth century, with such landmark achievements as quantum mechanics, general relativity, and Gödel’s incompleteness theorems, science has been teaching us that we can’t really disentangle the observer and the observed. …there is still a long way to go for us scientists to welcome and integrate the subjective and the first-person perspective into our science-informed worldview. Furthermore, I think we urgently need to do that today precisely because of the challenges posed by AI safety issues.

—Edward Frenkel, AI Safety: A First-Person Perspective

ultimately, it is the humans who program the machines. … Therefore, if an AI researcher or programmer is not fully aware of what’s driving them, he or she may not be able…to ensure the safety of the system under one’s control and the safety of the people who are affected by it.

—Edward Frenkel, AI Safety: A First-Person Perspective

Clearly, no safety protocol can be made 100% proof if the people who design and implement these protocols do not act in good faith. Especially, if they are driven by their personal “metaphysical ideas” (for lack of better word)…which are in direct contradiction with the Asilomar AI Principles (such as “Value Alignment” and “Human Values”…).

—Edward Frenkel, AI Safety: A First-Person Perspective

our past experiences, our fears, insecurities, and other issues inform and influence our beliefs and our behavior. So, am I proposing that all AI researchers learn more about themselves? Yes, I do, but not in a way in which we are used to learning things. And that’s because those psychological issues may well be “under the radar” of our thinking and rational reason

—Edward Frenkel, AI Safety: A First-Person Perspective

Edward Frenkel then gets personal and writes:

I wanted to rely only on logic and rational reason, refusing to trust my intuition. I approached my life’s situations as though they were mathematical problems, and would feel deeply disappointed and blame myself, or others, if my purported “solutions” didn’t work out as planned. Naturally, this led to suffering and misery.

—Edward Frenkel, AI Safety: A First-Person Perspective

our logical mind, working extra hard to shield us from pain, actually prevents us from knowing the truth. So, the more we rely on the mind in these matters, the farther we are from the truth. Paradoxical, isn’t it? This is especially troubling for us scientific types, because we rely on our logical thinking even more than other people (and of course, it’s not by chance).

—Edward Frenkel, AI Safety: A First-Person Perspective

Edward Frenkel concludes:

We have to listen to our heart every once in a while… And the good news is that if we are willing to do it, then we can overcome our fears… We can learn who we are, we can be whole again. … We can then connect to each other anew and create a better world together.

—Edward Frenkel, AI Safety: A First-Person Perspective

when we share our stories, we will help each other to heal and to become more aware of who we are.

—Edward Frenkel, AI Safety: A First-Person Perspective

I’ve also played with ServiceSpace’s AI this week. I asked when Robert S. Hartman immigrated to the United States (since I happen to know the answer). The chatbot answered 1938, but the correct answer is 1941. How am I supposed to trust the chatbot unless I already know the answers?

Subbarao Kambhampati writes in Scientific American:

The reality is: there’s no way to guarantee the factuality of what is generated…

—Subbarao Kambhampati, a Computer Science Professor at Arizona State University https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/

Although Microsoft justifies AI hallucination as “usefully wrong”,1 I fully agree with Stephen Farrugia that it’s an essential problem:

The elephant-sized problem in the generative AI room is the unpredictable veracity of the responses. As a problem, it should be considered more important than ease-of-use… It’s not just more important, it’s essential.

—Stephen Farrugia https://fasterandworse.com/known-purpose-and-trusted-potential/

I’m not impressed by the empty shine of ChatGPT.

Notes:
1. Baba Tamim, Microsoft introduces Copilot, justifies AI-hallucination as ‘usefully wrong’. https://interestingengineering.com/culture/microsoft-justifies-ai-hallucination. Published: Mar 17, 2023. Retrieved: May 29, 2024.

Related posts:
Playing with ChatGPT
A test of ChatGPT
Book Review: Freedom to Live by Robert S. Hartman

