Saturday, January 17, 2009

You Should've Known Better

I wasn't real happy with myself this evening.

The MindShift workshop -- the first one of the new year -- wasn't going as well as I would've liked.  We had a lot of new members, but the topic I'd chosen was a rather advanced one, and I found myself backing up and sidetracking a lot, to make sure that everyone could follow along.

That seemed to be working okay, but then the downside was that the workshop was running over its allotted time, and I wasn't going to be able to fit in everything I'd planned to.

Worst of all, just as I was getting close to finishing a working demonstration of how to change "have to" feelings into "choices", my DSL connection cut out and dropped everybody off the seminar before I could get back online.

Ouch!

It took a while for me to clear my head afterwards, and to realize that -- irony of ironies -- my own negative reaction to the problems was itself an example of the subject I was teaching!

You see, the workshop was about modal operators: words like "can", "should", "have to" and so on.  More precisely, it was about the frames of mind that underlie these words, and how our beliefs and "rules" determine which frame of mind we use for different tasks.

In particular, we were looking at rules like, "If I have to, I don't want to" and "If I put it on my to-do list, then I have to"...  and I was in the middle of showing how to change them into more useful rules when the connection went out.

Anyway, what I realized was this: my reaction to the workshop problems was being created by a rule.  One that worked something like, "If I was able to prevent it, then...

I Should Have!

Because as I was going back over how things went, I was second-guessing everything I did, seeing how I could have made different choices that would've improved things.

Like, if I'd called on more experienced students for examples, I could've finished a simple, successful example first, and then backtracked for the newbies later.  Or, if I hadn't sidetracked at this one point, we wouldn't have hit that bit of confusion later and had to backtrack.

And, if I'd signed up for cable a couple weeks ago when the first DSL outage happened, I'd have had another way to get on the internet.

Yeah, I know, it's all a bit silly.  And yet, a big piece of my career in computers has been about being good at preventing bad things from happening!

However, just because a compulsion can cause you to develop a skill, it doesn't mean you need to keep the compulsion around!  So, it's time to get rid of this one.

And how do I do that?

Mental Debugging and Disassembly

The trick to making any change in your brain's programming is to remember that abstract ideas do not have any direct power to change you.  Your brain is designed to build abstractions from sensory data -- not the other way around!

Thus, to change an automatic mental rule like "If I was able to prevent it, then I should have", I must first turn it back into sensory data.  Because in its present, abstract form, it represents an analysis or summary of the "program's" operation, rather than being the actual code of the program.

You might say that, in programming terms, we want to "view source" or "decompile" the abstraction, by looking at where the rule came from.

And you do that by asking questions like, "How do I know X?", "How did I learn X?", or "What's the classic example of X for me?"  Then, you pay attention to what responses come up in your mind.
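If you wanted to sketch this "decompile" metaphor in actual code, it might look something like the toy model below.  To be clear, everything here -- the class name, the fields, the sample strings -- is my own invention for illustration; it's a cartoon of the idea, not a claim about how brains literally work.

```python
# Toy model of "decompiling" a mental rule: the abstraction is just a
# compiled summary, and the sensory memory is the "source code".

class MentalRule:
    def __init__(self, abstraction, source_memory):
        self.abstraction = abstraction      # the abstract, summarized rule
        self.source_memory = source_memory  # the sensory data it came from

    def view_source(self):
        # Arguing with the abstraction changes nothing; to change the
        # rule, you have to go back to the sensory data it was built from.
        return self.source_memory

rule = MentalRule(
    abstraction="If I was able to prevent it, then I should have",
    source_memory="(auditory) mother yelling: 'You should have known better!'",
)
print(rule.view_source())
```

The point of the sketch is just the direction of the arrow: the questions above are the `view_source()` call, taking you from the summary back to the data.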

In my case, the immediate response is an auditory memory of my mother yelling at my brother and me, about how we (or maybe just he) "should have known better" than to do something or other...  and it's accompanied by a feeling of guilt and shame.

Now...  here's the important bit about what happens next:

I Don't Analyze This!

It's not important for me to know if this is something that really happened, or whether my brother or I really did anything bad, or whether it's my mother's fault I'm messed up or any crap like that.  None of that is in the least bit important, because it just leads to more abstractions.

And abstractions don't help you find the bug.

In fact, any programmer can tell you that the more abstract your view of the program, the less likely you are to find a bug that's right in front of your face!

So, I don't analyze this mental response.  It's not important how it got there or whether it's logically or philosophically valid in some way.  All that matters is that this is a piece of information my brain is using in its calculation of how I should feel about certain classes of situation.

And if I change the "program" so that this piece of information is no longer considered relevant in that calculation, then...

I will change!

My actual personality will change, because that's what a personality is: a collection of rules that define what is -- or isn't -- "you".  And when those rules change, you change.

Automatically, and without effort.

So let's do this thing... I will change my personality, right here and now, for your edification and amusement (not to mention my own personal improvement)!

But it won't take long.  (Heck, I could've been done changing this half an hour ago, if I hadn't been typing all this stuff!)

I recognize the general pattern in an instant: something my fellow mind hackers in the Guild would quickly assess as a "judgment" or "Fourgiveness target".  Specifically, it's the sort of situation where you come to a general conclusion about what is or isn't acceptable behavior for you.

In my case, I came to the conclusion that if you don't prevent a problem that you could have prevented -- regardless of whether you were actually aware at the time your behavior might create that problem -- then...

You're A Bad Person!

Now, I could sit here and argue with myself that that's "not really true".  I could call that kind of thinking childish and stupid.

But that wouldn't help.  Again -- it's just verbal abstraction, and that's simply not how the brain works, on the "gut" level where confidence and motivation -- or their opposites -- actually live.

I also sometimes explain it this way:  just because I-the-40-year-old know that this reasoning is wrong, it doesn't fix the reasoning of I-the-8-year-old who created this rule in my head.  My knowing that this rule is stupid doesn't make him know that the rule is stupid, and thus the rule stays put.

No, there isn't really an "inner child" who must be convinced of anything here.  It's just that this memory is structured in such a way that, once the threat has been logged as existing, it remains cataloged in my brain until it is specifically disconfirmed.

It's like, if you think you see a tiger stalking you, then you're going to worry about it... until and unless you get some specific indication that you were wrong, like seeing it was just a deer, or a branch moving in the wind.

But simply arguing with yourself that there's no tiger there, doesn't help, because...

It's Not Changing The Sensory Data!

So to convince my brain to let go of this rule, I have to present it with new sensory data regarding the subject of the rule.  But luckily, this "new" data can be entirely imaginary...  as long as it's also believable.

See, if I try to just "make up" an imaginary scenario in which it's perfectly okay to make mistakes that are stupid in hindsight, then this will not jibe with my brain's expectations.  I will feel "incongruent" -- i.e., disbelieving, skeptical, etc.

So, what I do is this: I get my brain to come up with the scenarios for me, using various questions.  (Like the ones in my "Seven Laws of Belief" handout over at Thinking Things Done.)

And these questions are "leading" questions.  Questions that (like "have you stopped beating your wife yet?") literally force my brain to imagine a world in which certain conditions are true...  and generate the matching sensory data by combining other memories.

For example, I can ask something as simple as, "Does my mother saying it necessarily mean it's so?"  And this forces my brain to...

Perform A Consistency Check!

You see, by default, our brains don't do any consistency checking on our beliefs.  We just believe whatever we believe, until and unless circumstances require us to do such a check.  But the machinery is there in our heads to do this checking, and it will automatically update any inconsistent beliefs...

As long as you ask it to.

And the checker works by comparing sensory data.  So on a check like "does my mother saying it mean it's true", I'm basically asking my brain to pull up all the cases where my mother said things, and see which ones were true.

And if enough things turned out not to be true, then this could immediately lower my brain's credibility rating for this particular belief!
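That "credibility rating" idea can be sketched as code, too.  The function and numbers below are entirely made up for illustration -- the only point being modeled is that credibility comes from tallying remembered cases, not from abstract argument.

```python
# A toy "consistency checker": score a belief by how often the
# remembered cases supporting it actually turned out to be true.

def credibility(cases):
    """Fraction of remembered statements that turned out true."""
    if not cases:
        return 1.0  # no evidence either way: the belief stands unchallenged
    return sum(1 for turned_out_true in cases if turned_out_true) / len(cases)

# Hypothetical memories: did things Mom said turn out to be true?
remembered_cases = [True, False, True, False, False]

score = credibility(remembered_cases)
print(f"credibility of 'Mom said it, so it's so': {score:.0%}")
```

Notice that the check is driven entirely by the specific cases you feed it -- which is exactly why the questions have to dredge up actual memories rather than opinions.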

Now -- it would not necessarily update any other beliefs I got from my mother, because the consistency check machinery is very "local" in its design.

It appears to have been intended to update beliefs like "where the best place to find food is" -- if you go there and there's none, the mechanism notices an inconsistency and seeks to update it.

But it doesn't go and wonder if you should reconsider where everything else is!  After all, if anything else had moved...

You'd Discover That Pretty Quick, Too!

So this is the big reason why we don't do what we intend, or change the way we think we should, just by having an "idea" about something.  Ideas don't connect to the consistency checker, which means they don't update your "gut-level" beliefs!

But I'm digressing a bit, as I said I was going to make this change now.  While "does her saying it mean it's true?" seems to have lowered the credibility level of this belief a bit, it wasn't sufficient to make a change in my gut-level reactions.  I still feel like I "should have" done better.

So I'll try some other questions now, like "Is it really true that having a preventable problem means you're a bad person?" and "Do good people ever have preventable problems?"

In the first case, I'm asking for a general consistency check -- and the answer comes back, "Probably not."  (But it still feels true.)

In the second case, I'm asking for specific counterexamples, which would provide good sensory data for the consistency checker to use in updating my beliefs -- kind of like seeing where the new best place to get food is, or noticing that the supposed tiger was really just a branch!

The second question brings up something interesting: I'm seeing that when good people do have preventable problems...

I've Tended To Treat Them Badly!

This is an interesting facet, by the way, of how belief systems work.  The raw sensory data is used as a pattern template for behavior in more than one role.  It encodes the roles of both the accuser and the accusee.

That is, it tells me not only what to expect if I fail to prevent a problem, but also how I should behave towards others who do the same thing!  (A valuable learning-through-imitation pattern that's found in many social animals besides humans.)

Okay, so I still have the pattern.  And at this point, I've pretty much confirmed that these tentative probes (which have taken all of 30 seconds out of this last hour-or-so of typing) are not going to do the job, and that this meets all the criteria for the use of the Fourgiveness technique (a systematic way of dropping this kind of social-relations rule).

I'm not going to go into the whole thing here, though, because even though it'll probably take me less than a minute to do it, describing all the possible nuances (not to mention the theory) would probably take me hours more typing...  and even then, it's better learned by practice and example.

But one of the questions I'll be using is...

"Can I Forgive Myself
(For Whatever My Mother Was Angry About)?"

Now, I wish I had the time and space to go into all the evolutionary psychology and biology behind that question and how it works, but I've already done that in previous workshops at some length, and this article is already a lot longer than I'd intended it to be!

So for right now, I'm going to stop all this typing, and fix the damn problem in my head already, then come back and only write down what I did.  (This typing and hacking at the same time just isn't working very well.)

Okay.  That took about a minute and a half -- because there were some complications!

The complicating factor was that this "memory" was actually two memories.  One, of laughing at my brother for getting in trouble over something he could've prevented, and one where I was getting in trouble myself.  And the earlier memory validated the later one, in the sense that, "I argued that he was responsible then, therefore I'm responsible now."

So this is what made the second memory seem "true" to me, which is probably why the consistency check on "reliability of statements by my mother" didn't have any real effect.

Because, it wasn't my mother's judgment that I was following to create this belief...

It Was My Own!

And in order to untangle this mess, I had to first step back and acknowledge that I was being a jerk for taunting my brother and gloating when he got in trouble...  and then forgive myself for that.

Because until I did so, my brain simply wouldn't let go of the judgment that I was responsible in the second case, even when I tried to forgive myself for it.  (And trying to sort all that out, while typing about it and trying to make it all into a nice neat linear story, just wasn't working for me.)

Now, I've never encountered a pattern quite like that before -- where a judgment made about someone else then cements the apparent "truth" of the reversed situation.  But the situation still followed both the rules of the Fourgiveness process in particular, and those of the general mental troubleshooting methods I teach.

Because one of those rules is, "you follow what comes up".  And in this case, as I tried to forgive myself for whatever I felt responsible for, the memory of taunting my brother kept popping up instead.

So I eventually took the hint, and realized that what I needed to forgive -- but first acknowledge -- was my jerky behavior.

Because after all, if I didn't acknowledge that busting someone's chops over theoretically-preventable mistakes was a bad thing to do, then of course it would only seem right to beat myself up for my own preventable mistakes!

My Work Here... Is Done

I've now done a few seconds of additional follow-up, to install the more useful pattern of thinking ahead to the next time a mistake could happen, and focusing on how I'll prevent it in the future, instead of dwelling on the (unfixable) past.

That installation was easy to do now, of course, since the conflicting rule is gone.  Right after the workshop, though, I had tried to think that way...  but couldn't get myself to actually do it.

And when your brain doesn't obey you instantly in such a matter, it's a positive indicator that you've got a conflicting rule or belief in your head.

And it's time to go hunting.

So, I hope that this article has helped to illustrate for you one of the least-known truths in personal development:

Fixing the bugs in your brain is easy.

It's finding them, that's the hard part!

Good luck in your searches.

 

Monday, January 05, 2009

Your Top 3 New Year's Resolution Mistakes

I was originally planning to write this as a regular blog article, but since one of my resolutions is to do more videos this year, here it is in video form: the top 3 mistakes that cause 88% of all new year's resolutions to end in failure:

 

If you can't see the video here, you can watch it here on Google video.  And, if you want to get the ebook and other stuff I mention near the end, you can find it here.