Yesterday, per my plan, I got in there and tore off some big chunks of the method combination work for generic functions. Just now, I was about to start in on the next phase, which was going to involve some additions to the current doctest to begin explaining and using some not-yet-written functionality. And I noticed that I was hesitating to start, because I was trying to figure out where in the documentation this particular functionality should be explained and demonstrated.
I had already noticed yesterday that it really didn’t matter where I put something; any spot would do, because it was going to get moved and edited as things went on, no matter how good or bad of a place I chose initially. But today, my mind was rejecting every place I initially considered as a candidate for placing the new functionality, as not being the “right” place to put it. As soon as I realized this, it was like finding the cause of a difficult-to-locate, longstanding bug in a program. Duh! So that’s where the problem is!
I was previously aware, in a general way, that my impossibly high standards for myself can get in the way of accomplishing things, and the other evening I blogged about precisely that. What I was missing was that this is actually something I can get my hands around, as it were. It’s not just some sort of abstract concept; it’s a concrete, specific behavior that occurs in a particular context: when considering options for doing something, I’m validating them against criteria.
Well, that’s not actually the problem. The problem is what criteria I’m using. For “other people’s” projects, the criteria are whatever they’ve said is important, and any consequent criteria that arise therefrom. For my own projects, the criteria are whatever I happen to think of at that moment, or have previously thought of – which means that there is an ever-growing set of criteria to which my personal efforts are subjected.
Thus, over time, the more I learn about how to do things well, the less effective I actually become in my personal life! (On other people’s projects, I tend to filter these additional criteria according to their impact on the client’s criteria, so the problem doesn’t interfere nearly as much.)
I feel so stupid, because Ty and I have actually talked about this issue before, and it was right in front of me all along. We knew that defining criteria for personal projects was an issue, we knew that excess personal criteria were a problem, and that we needed to change that.
What we didn’t see – or at least I didn’t – was how to do that. The mind isn’t like a program whose source code you can inspect; to work on a piece of “brain code” you sometimes have to be able to “break into the debugger” when the target code is actually running. I did that today, so I now know precisely where the bug occurs.
Better yet, I think I know how to fix it. The primary inhibition code I found in my head is, “don’t do the wrong thing”. This is a simplified form of the actual code, of course; it contains a mixture of ideas such as not making mistakes, not redoing work, doing what is justifiably correct with reference to external criteria, and so on.
But the primary intent is to “avoid wrong action”, where “wrong” is defined as “not right”, and “right” is a function call to everything I know about what “right” might be, be it with respect to “right for business”, “right morally”, “right technically”, etc. (Luckily, the mind runs on massively parallel hardware, or this code would be slow, in addition to its other issues!)
Anyway, the fix is ridiculously simple: just bump down the priority on those criteria, putting a filter in place that only informs me of issues with potentially serious or costly consequences that cannot be undone. Cutting and pasting documentation and rephrasing it don’t count as serious consequences. Another way of thinking about it is this: don’t tell me what’s wrong, tell me if there’s something to do that’s right. (With the exception of serious irreversible consequences, of course.)
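Keeping with the mind-as-program metaphor, the before-and-after of this fix might be sketched like so. (This is purely illustrative; every name here is my own invention for the sake of the metaphor, not actual code from the project mentioned above.)

```python
def allowed_before_fix(option, criteria):
    """Before the fix: an option survives only if it passes EVERY known
    test of 'rightness' -- so the ever-growing criteria set vetoes almost
    everything."""
    return all(test(option) for test in criteria)

def allowed_after_fix(option, is_serious_and_irreversible):
    """After the fix: an option is vetoed only when its consequences are
    both serious and impossible to undo; everything else goes through."""
    return not is_serious_and_irreversible(option)

# Moving a paragraph of documentation is cheap and reversible,
# so the new filter waves it through:
print(allowed_after_fix("move the docs section",
                        lambda opt: False))
```

The point of the sketch is that the old code runs an open-ended conjunction over every criterion ever acquired, while the new code checks exactly one thing: reversibility and severity.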
Of course, I’m going to also have to “write some new brain code” for that latter functionality. It’s sort of weird to realize that I have this enormous database of “shoulds” and “shouldn’ts”, but that neither one is directly very useful for deciding what to actually do; they’re just criteria for evaluating possible solutions, not generating them. However, at least in most low-cost/low-pressure circumstances, it will suffice to just do anything, then use my shoulds and shouldn’ts to improve whatever I did until it’s good enough to leave alone.
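That “just do anything, then improve it” loop can also be sketched in the same playful spirit. (Again, `generate`, `improve`, and `good_enough` are placeholder names of my own; the only claim is the shape of the loop: the shoulds and shouldn’ts evaluate a draft after the fact instead of gatekeeping the start.)

```python
def just_do_it(generate, improve, good_enough, max_passes=10):
    """Do *anything* first, then apply the shoulds/shouldn'ts as
    editing criteria until the result is good enough to leave alone."""
    draft = generate()                 # no up-front validation at all
    for _ in range(max_passes):
        if good_enough(draft):         # criteria evaluate solutions...
            break
        draft = improve(draft)         # ...they don't generate them
    return draft

# Toy usage: start with an empty outline, improve by adding sections,
# stop once there are at least three.
outline = just_do_it(lambda: [],
                     lambda d: d + ["section"],
                     lambda d: len(d) >= 3)
print(outline)  # -> ['section', 'section', 'section']
```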
So now, as with any bug fix, I also want to do a post-mortem. How did this bug arise? How can I avoid creating such a bug in the future? Were there warning signs I missed? Did I have any knowledge that I did not apply here?
In retrospect, I’ve read plenty of things that advise “painting badly” or “writing badly” in order to get past one’s inner Editor or Critic. However, I always viewed this as some sort of ploy for getting started or getting past a block, and even when I’ve resorted to it in the past, I’ve always discarded it immediately afterward. After all, it wasn’t the “right” way to do it; it was “cheating”. Thus, it never occurred to me to change the thought process that would lead to such a conclusion in the first place!
There’s an Emo Phillips joke that goes something like, “I used to think that the brain was the most interesting part of the body. Then I thought, ‘Wait, what part is telling me that?’”. This is the most annoying thing about brains: your thinking is filtered through your current perspective, so you can’t directly detect any systematic bias or bugs; it just seems like it’s the rest of the world that’s messed up! It’s almost like having compiler bugs: the code actually being run is not the code you think you wrote. Only the program’s output will tell the tale, but only if you look closely enough – and not necessarily in a way that points to the compiler bug! – leaving you with no alternative but to single-step your way through to finding it.