User:Three-quarter-ten/Fleeting thoughts

2010-12-03: Retronymy suggestion: New Yorkshire

Thought sparked by this edit chain. They should have named the city "New York" and its provincial countryside (that is, the rest of the colony) "New Yorkshire" (pronounced /new YORK-shər/). It would have been a much cleaner nomenclature convention. The problem with today's nomenclature is that countless people say "New York" every day without further specifier and rely on the context to make clear whether they mean the city or the state. That's OK, usually. But if the state had been named "New Yorkshire", people would not be able to "get away with" ambiguity in everyday speech. It would be precise without even trying. If retronymy could be consistently enforced in natural language, we could even still revise it now. But retronymy can't be consistently enforced in natural language, and that's probably for the best, because systems in which it could be thus enforced have ugly side effects, and the cure is worse than the ill.

Update, 2010-12-04: Speaking of this very topic, I just happened to rehash a rare example in life where linguistic prescription actually *can* be enforced with the threat of legal and financial penalty. It's the prescriptive taking back of a brand name from genericization, with the latter being a process that, etically and descriptively speaking, happens in the wild (not under anyone's control). Did you suspect that the thoughtcrime-fighting culprit would be Corp rather than Gov? I'm not picking on the folks at that company. It's not their fault. I'm just pointing out what can go wrong when humankind thinks that law controls natural language. It produces an environment in which some humans can punish other ones for thoughtcrimes that aren't actually criminal in nature, from a fair perspective.

2010-12-04: Clue-getting suggestion: Shoot for drill-down-ability

(This idea is not new, but I've never recorded my own iterations of it till now.) In the information age, most people can clearly tell, and often complain to each other, that there is information overload, and that sifting through the vast amounts of information is a resource drain (of time, effort, mental energy, attention span, etc.). But what lessons do people draw from that?

  • The smart ones figure out that the answer is to continually work toward a structuring (via linking, tagging, taxonomy, expert systems, and other tools) whereby the information is still all easily available; it's just easy to filter and hide based on the view that any particular user is interested in getting at any particular time. Another name for this is drill-down-ability. (See the sketch after this list.)
  • The dumb ones assume that the answer is to delete the information.
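
To pin down what I mean by drill-down-ability, here's a minimal sketch (Python; the Note class, the tags, and the sample titles are all made up for illustration): keep the whole corpus, and let each view filter it, rather than deleting anything.

```python
# Minimal sketch of drill-down-ability: keep everything, filter the view.
# The Note class, the tags, and the sample titles are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    tags: set[str] = field(default_factory=set)

corpus = [
    Note("Collet vs. split bushing", {"machine-elements", "musing"}),
    Note("Quad Cities manufacturing", {"history", "manufacturing"}),
    Note("Retronymy: New Yorkshire", {"language", "musing"}),
]

def drill_down(notes: list[Note], wanted: set[str]) -> list[Note]:
    """Return only the notes whose tags cover the requested view.
    Nothing gets deleted; the full corpus stays available for other views."""
    return [n for n in notes if wanted <= n.tags]

print([n.title for n in drill_down(corpus, {"musing"})])
# ['Collet vs. split bushing', 'Retronymy: New Yorkshire']
```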

2010-12-13: Retronymy suggestion: Arabic numerals for world war numbers

The convention of using Roman numerals to number the world wars really needs to be scrapped. It's merely an outdated practice inherited uncritically from the pre-IT era's quaint little fetish for Neo-Classical trappings. Arabic would be so much better, especially given that the more world wars that occur, the more annoying Roman numerals will get for writing about them, for the small group of Mad Max assholes who are still alive to do it. Remember, World War 4 will be fought with sticks and stones. The other great thing about Arabic is that it's interlingually [near-]universal among humans. [Virtually] everyone the world over uses those same number symbols, with pre-Kindergarten familiarity, no matter which language they speak. It's a rare [near-]complete triumph of i18n. (That's internationalization for those who didn't know. Now you do.)
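
As a toy illustration of how the two notations scale, here's a standard greedy integer-to-Roman conversion (the function and the sample numbers are just mine for illustration; there's nothing canonical about the world-war numbering in them):

```python
# Toy comparison: Arabic numbering stays compact while Roman numerals balloon.
def to_roman(n: int) -> str:
    """Standard greedy integer-to-Roman-numeral conversion."""
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

for war in (2, 4, 38):   # sample numbers, purely hypothetical
    print(f"World War {war}  vs.  World War {to_roman(war)}")
# World War 2  vs.  World War II
# World War 4  vs.  World War IV
# World War 38  vs.  World War XXXVIII
```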

2010-12-22: WP:NOTHOWTO is overrated, or at least frequently misinterpreted

I've always thought so. But then, I'm just an inclusionist like that. Once again, from m:Inclusionism: "If someone finds something interesting enough to write about, then chances are that someone else will (one day) find it interesting enough to read about, so it should be in Wikipedia." Now, actually, I must admit, I don't think that *all* how-to info belongs in Wikipedia. I just think that the threshold is not as high as some people think. So I guess what I really should say here is not that WP:NOTHOWTO is wrong—just that Wikipedians sometimes engage in "NOTHOWTO nazism". They fail to differentiate the instances successfully—that is, to judge which how-to bits belong and which are too digressive or otherwise inappropriate. Cast a wide net for content. Let it flow on in. You can always toss bits later.

2010-12-26: On intellectual property protection lifespan choices

Discuss: Patent lifespan is too short for some products, given the R&D that goes into them; meanwhile, copyright lifespan is way too long for most written works. The responsibilities to be balanced in deciding intellectual property (IP) protection lifespan choices (i.e., picking appropriate expiry timeframes) are (1) the need for innovators to recoup their investment (of time, effort, and/or money) and collect a reward for their behavior; and (2) the need for society to be able to use an innovation affordably after a while—especially given that after it disseminates far enough throughout the material culture, you reach the point where everyone "needs" it, in the sense that regular people can't continue living the "normal" lifestyle without it being affordable, and those who do without it are at risk of becoming economic and social outcasts. I'm not sure that 17 years for a patent gives the right balance (sometimes too short), nor that 70-120 years for copyright does either (usually way too long). How about 3 decades for both? Dare we get smart?

2010-12-26: How many years before there will be no low-hanging fruit left in WP content development? A lot.

There's always more to be found; usually it waxes and wanes in spurts. Some people have commented that WP already contains everything it could, or ought to. That is so far from true that it's almost insane; at the moment it's striking me as embarrassingly ignorant. Seems to me that if you surf around WP and don't see any content gaps, you're either not looking in the right places, or you're kind of ignorant about what's out there in life that hasn't been adequately covered yet.

2010-12-30: Learned a new lesson about being a Wikipedian this week: NEVER take the bait of flames, even once

This is because even one instance of temporary counter-incivility (forgetting WP:CIVIL in the heat of a moment of defensiveness) drags you down into "their world", and you may make impressions on some unsavory dwellers therein. Afterwards, it's hard to wash all the dirt off, because some unsavories may have taken note of you as supposedly being one of their own (i.e., a fellow dirt-flinger, a fair future target or peer). Symptoms of the bottom-feeders include poor reading comprehension, refusal to read at all (or to read rather than skim), and maliciously intentional speciousness in counterargument. And even the cops who deal with them need to be careful not to turn into them. Ugh. To be avoided in future. Those of us who truly have better things to do with our time (i.e., advancing content development, as opposed to getting our social interaction [especially of the pathological variety] from WP even at the expense of content development) should avoid the whole scene, like smart people avoid neighborhoods where ex-cons concentrate. In other words, I was re-taught the old lesson of WP:DFTT.

2011-01-12: On what you can't kill because it just won't die

I read today (via the Signpost or its outbound links) some comments from an academic who was bitterly bemoaning Wikipedia as the academic analog of a sweatshop. A place that academics would wade into in response to "extortion". (Basically, "This is where humanity will usually come for its info, whether you like it or not, so if you want it to be good, and accurate, then you better come work on it.") And then they find themselves toiling away nonremuneratively forevermore, battling idiots and reverting vandalism. Well. He's not wrong that it's not ideal; but it's going to happen anyway, because you can't kill it because it just won't die. Additionally, we would not be better off if it did die, despite what he may think. In an internet-pervaded world, things like Wikipedia are just the baseline of free info. Before Wikipedia it was any person or dead-tree publication you happened to have handy. Now it's Wikipedia. Guess what? Wikipedia is orders of magnitude better for the purpose than the old alternatives were (i.e., nearby people and dead trees). Is one of the side effects of it the problem that he mentioned? Yes. But that won't stop Wikipedia from existing; (and here's the part he'd probably find scandalous) nor should it. Life is full of choices between relative levels of evil. They come bundled; side effects can't always be filtered out. Now, before anyone accuses me of espousing the problem, what I'm saying is that we need the medication despite its adverse effects; and now (next step after that), let's work on mitigating the adverse effects. One thing I would offer to seriously mitigate the side effect that he bemoans is the endowments for good Wikimedia editing. Then some university-type people (profs, graduate students, good undergrads) could get paid for doing some/much of the work instead of only being milked for pro bono contributions. Another channel for this funding could come from here. Give it some thought, world.

2011-01-15: Keep gradually getting yourself ready for what's coming, because it's not not coming

I'm casually skimming Republic of Korea Navy, and I once again think a thought that I've often thought before while reading Wikipedia recreationally: the fact that U.S. copyright law dictates that photos taken in the course of U.S. government work are generally public domain by default is a *huge* boon to people around the world, most especially now that Wikipedia and Wikimedia Commons exist. Look at how much and how often PD-US (public domain–U.S. government) photos have enriched thousands and thousands of Wikipedia articles, in scores of languages (i.e., not just en.wikipedia). Many of the most informative photos on Wikipedia are PD-US work taken by the soldiers, sailors, marines, and airmen of the U.S. military. Especially now that even average people (of developed economies) can have a great-quality, affordable digital camera in their pocket. I'm generally not, philosophically speaking, an advocate of having the MIC be an extremely powerful force in our peacetime lives; that can have some unpleasant (and existentially risky) side effects that come along with it. *Nevertheless*, I think any objective person has to admit that the deep pockets of the U.S. DoD are one of the sharper double-edged swords that exists in this world. They pose a serious and perennial risk of misuse (e.g., "oops, we done started a war"; "oops, we done overspent ourselves out of our own international creditworthiness"); yet they also accomplish positive things in life. Of course, that very duality is what makes the MIC so complexly double-edged to begin with. Humans dare not unilaterally disarm themselves, because the other side will certainly take advantage of that. Given that fact, you've got to have an MIC; yet then, since it exists, you've got to manage its side effects. Hmmm. Kind of like medication for a chronic illness. You've got to have it; yet you've got to mitigate its side effects (medical and economic both). And like any knife in the kitchen. Can't be a chef without it; yet better watch out not to cut your own damn fingers off in an instant. Humans ride the line every day just by existing. All living things do. (Too risk-averse, they starve; too foolhardy, they get eaten.) Being alive is just a risky business, at the irreducible bottom of the analysis. Well, as long as we've got to have an MIC, I'm proud of some of the things that the U.S. MIC has gotten right (e.g., earthquake relief; flood relief; vocational training and discipline for young people who needed that to turn their lives in the right direction [e.g., Antwone Fisher's experience is a noteworthy extreme example; but other less dramatic but equally valuable examples abound as well]); even though I shake my head at its various mistakes. You wanted black and white in this world? Welcome to graysville, baby. Population: all of us, whether we like it or not. May as well make the best of it; and actively, intelligently, and perennially try to guard against the worst.

2011-01-26: wikis plus online office suites (office as web app, office as SaaS): time to converge?

A thought on some technological convergence that might happen soon—that is actually already happening, but only at a few companies, and may (?) be moot—I haven't determined the exact significance yet.

What is the history of wikis, when you think about it from a certain perspective, except "the concept of word processing forked into a native online-collaboration version several years before word processors themselves were ready to take that leap"? To put it another way: If you were going to start Wikipedia today instead of in 2001, would you perhaps use something more like Google Docs or Microsoft Office 365 than like MediaWiki? (Leaving aside the login requirement as irrelevant at the conceptual brainstorming level. Think "Google Docs but nonprofit and with anonymous editing available".) Think about that one. Maybe get some coffee and let that one simmer on the back burner meantime.

Now think about this: sometimes at work I want to collaborate with a (remotely distributed) group on a spreadsheet; but my company and my colleagues are too ignorant and behind the times to use a Google Docs spreadsheet on their own initiative, without being dragged into it and whining about having to create a Google Account (dinosaur users) or whining that they demand to limit/control what third-party IT we use (corporate IT folks), but they can't get off their asses spending-wise enough to provide an authorized equivalent instead (the senior management who control the budget of the corporate IT folks). (With that attitude and outdatedness, they're fixing to have something more substantial to whine about in a few years, namely, unemployment; but anyway, I digress.)

So anyway, back to my point. I want my existing private wiki, but with a Google Docs spreadsheet capability available within it. Aha! Who does that right now? Well, Google Apps does, although they don't call themselves a wiki, nominally; but they provide Google Sites (wiki equivalent) with Google Docs [and spreadsheets] (online-collaboration office suite). Office 365 soon will (playing catch-up to Google Apps). Who else does it? Wikipedia? Wikia? Wikispaces? PBworks? AFAIK, none of the above. Which is interesting, because it's the next logical step, in my view. A wiki, i.e., online collaboration of word processing, is "just Google Apps without the spreadsheet capability" in terms of "what it can do for me". But while it was once sufficient just to provide online collaboration of word processing—that alone was a neat trick, and is what allowed something like Wikipedia to spring into existence—what about the rest of an office suite? E.g., what about spreadsheets? With the dawn of Google Docs, Google Apps, and MSO365, the one-trick pony of a wiki is probably not enough anymore, I might imagine. That's probably going to catch some people with their trousers down pretty soon, no? I think companies like Wikia, Wikispaces, and PBworks will have to come up with a collaborative spreadsheet app to integrate into their services in the next few years, or at least license one from a third party like Google or Microsoft and port it into their services. *If* Google or Microsoft are interested. But will they be? Isn't that like Toyota inventing a killer new car paradigm, and then GM and Honda asking to license it for their own cars? Hmmm, lemme think. Maybe I gouge you for it; or maybe instead I just tell you that you're SOL and your customers will probably be flocking to me pretty soon.

Especially in a future where the apps themselves are free to use. No one can compete trying to sell them. So how does a corporation like Google or Microsoft make money? By having an audience of users who can be served ads. Right? What do they want, then, from their competitors? License fees? Perhaps; but perhaps more so, they want to poach users—semi-captive ones, ideally. People who come to the motherland (Google/Microsoft/Facebook/whoever) and then stay all day. And look at/listen to some targeted ads here and there while going about their business using free apps that are awesome in their performance, features, and reliability.

Am I off-base with this train of thought? Or on track? Not sure yet.

2011-01-29: Quad Cities

I never really realized until now that the Quad Cities of Iowa and Illinois, straddling the Mississippi River, were a rather important industrial center during many of America's "golden years" of manufacturing (years when U.S. manufacturing, despite cyclical recessions and depressions, had recurring eras of generally limitless potential, at least psychologically—a span that I'd call 1840 to 1970, roughly). The amount of metalworking that went on in the Quad Cities region between 1870 and 1970 was enormous—Deere, Caterpillar, Case, International Harvester, Minneapolis-Moline, and the ecosystem of smaller companies that they no doubt engendered.

2011-01-29: Interesting reading for today

  • Hind, Dan (2010-12-14), "Time to democratise science", New Scientist, 208 (2790): 26–27, doi:10.1016/S0262-4079(10)63044-8. Interesting argument; interesting counterarguments, too, from the commenters. Lots to think about. Themes touched on by the article and its comments: Science funding never untouched by human interests, and conflicts of interest. MIC is a primary driver of science investment choices. As are other business fields, e.g., pharmaceutical corporations. What might be improved if the public could vote on some of the funding choices? On the other hand, what of the fact that the public includes a gigantic dose of, well, idiots—or at least adults who behave like children cognitively and emotionally? And in a representative democracy, non-idiots already have opportunities to influence the congressional committees that steer funding choices. Etc.
  • Austen, Ben (2011-01-13), "The terminator scenario: are we giving our military machines too much power?", Popular Science, 278 (1). Lots to think about. Themes touched on: Human-machine interactions. Transferring control on the fly. Making machines smarter. Making humans less flaky. Multitasking. Avoiding robot rebellion. Avoiding overreliance and undermanning. Autonomy or lack thereof. Being "in the loop" versus "on the loop". Ways in which machines outperform humans, including even maybe in avoiding wartime atrocities? Etc.
  • Onion News Network, In The Know: Are We Giving The Robots That Run Our Society Too Much Power? Panelists discuss whether controversial decisions by the Robot Congress and President Executron indicate robots have too much control over our lives. Because the Popular Science article's title reminded me of it.
  • Billman, Jeffrey C. (2011-01-28), "David Urban: The Patriot. He's a decorated war hero, an accomplished lawyer and an all-around good guy. So what's David Urban doing in a racket like lobbying?", Philadelphia Magazine (February 2011). Funny I read this today, because one of the themes touched on is that one can make a counterargument based on the fact that "in a representative democracy, non-idiots already have opportunities to influence the congressional committees that steer funding choices" [compare nearby above—New Scientist op-ed piece]. In this case, the counterargument is against the idea that all lobbying is evil, which is tied to the argument that the public currently doesn't have proper control over funding choices made via legislation [because "the system is broken" according to the argument, which is where "lobbying=always_evil" fits in]. In the case above [New Scientist op-ed piece], the counterargument is against the idea that the public currently doesn't have control over funding choices made via legislation [because "the system is broken" according to the argument, which is where "MIC=too_evil" fits in]. I think the common denominator here is that the popular themes among the general public—that the system is rigged, and that the public has no power, and that the public isn't dumb, and that those who do have power are pure evil—are facile arguments called out on their distortion by an intelligent and objective look at actual reality. To expand in my own words, the truth of reality is less glamorous than that—i.e., less speciously appealing to the human storytelling instinct and naturally selected cognitive biases (self-aggrandizing; self-excusing; rationalizing; outgroup demonizing; grape sour-izing) than that. It's more of a big, messy, complex, mixed, imperfect morass of good and bad coexisting in inextricable tangles. And the ways to make things better and fight injustices are much more difficult and tedious and unattractive than the ones that people tend to speciously imagine. If you want to improve the world, throwing a violent temper tantrum may not be the correct way to do it successfully. The correct way may be to do more tedious things—more mundane yet less instantly gratifying—such as educating yourself, being circumspect, and bothering to participate in the channels that are available if you can shift yourself to apply yourself. So goes the analysis. I don't know. I think mad-as-hell people often have a point in many cases. So I would never dismiss them as idiots in a wholesale, offhand way. And yet ... a lot of people choose not to think responsibly. Not that they're stupid [lacking IQ] so much as just willfully ignorant or lazy or childish. It's very easy—facile, in all senses of the word—to simply blame The Man for all of your problems. It takes maturity to differentiate the details of problems and to engineer better systems.

2011-02-04: Taking refuge in the mental company of smart people

At the end of a long day, when ignorant people tell me that my brief cross-references to various pieces of knowledge are irrelevant (because ignorant people aren't capable of judging true relevance), it brings comfort to read Wikipedia recreationally and come across the ideas of smart people, written about and cited by other smart people. And to read a book like Sloan 1964, written by a smart person (actually by several smart people working together, and I plan to read about that, too, when I'm done with Sloan 1964)—and thereby to take refuge in the mental company of non-idiots. Sloan was far from saintly, like most humans; from what I've inferred so far outside of his book, he was probably more of a corporate jerk, social Darwinist, plutocrat, and entitled snob than most of us would want to be around; but my point here today is that at least he wasn't defiantly ignorant or dull-witted. Please understand that I'm not calling him a good guy—his philosophy on the WWII era was cynical and amoral to the point that it's a stretch not to call it immoral. But as with many smart humans, you have to hold your nose against the bad aspects while you're paying attention long enough to learn from the aspects that are worth learning from. At the end of a long day of drowning in ignorance and cognitive myopia, I thank fate that I was born in a time and place where I had the privilege of learning to read. To have literacy, and to have books (and, in recent years, Wikipedia) within affordable reach for reading. To bask for a while on an island of intelligence surrounded by an ocean of dullards praising dullness. I'm not big on God-talk, but the phrase that comes to mind is "God be thanked." Or at the very least "thank heaven for small miracles."

2011-02-11: One man's privacy is another man's secrecy; both are overrated

Both are essentially just variants of information hiding (blinding). The parameter values are different, of course. Privacy is what we call it when we perceive positive results coming from it. Which tends to mean this: privacy is what we call it when we're the ones benefiting from it. But of course forget even thinking to ask who might be being taken advantage of. "That's irrelevant." (There's that word again, the refuge of the ignorant). Secrecy is what we tend to call it when we see someone else unfairly taking advantage of it to screw others. I've been thinking lately about how the generation before mine grew up in a pre-Web era and overvalues it (information hiding/blinding) in some ways. Before recent information technology advances (in the past 15 years), hiding information from others was easy by default. Meanwhile, the very point of information technology is making information easier to capture, store, retrieve, and share. Is it any surprise, then, that both privacy and secrecy are getting harder to achieve in an era of improving information technology? Privacy chisel = Facebook, for example; and secrecy chisel = Wikileaks, for example. Next question: Is it really all as evil as the older folks tend to believe it to be? They're not the first generation of humans to grow older in a changing world and perceive it as all going to hell in a handbasket. "The world was better in my day." And just like previous iterations (generations) of that theme, they're only half right, and what they're getting wrong—what they're overvaluing, undervaluing, ignoring, fearing, exaggerating, downplaying, or distorting—is important to pay attention to for objective thinkers (while they're busy ignoring it or denying it or failing to be able to see it). Here's an idea that I'll throw out there tentatively that people of my parents' generation will find heretical: privacy is overrated. In fact, so is secrecy. When humans can hide shit from each other, that's when the abuses and mental weirdnesses are at their most pathological. The lies, the crimes, the embarrassment, the shame, the awkwardness, the uncertainty, the ignorance, the wondering, the not knowing, the not caring, the taking advantage, the screwing over, the getting away with, the abusing. When humans are in situations where they can hide very little information from each other, they still often treat each other like shit, of course, being humans; but at least no one tends to have as much of a leg up on the others. It tends to be that no one can get away with screwing the others over as much as they otherwise would. Do you think that that's a bad thing? Maybe the degree of badness that you perceive has something to do with your age, ethnicity, and social and economic comfort. There's that underlying parameter again—it's good when I do it, but it's bad when you do it. Information hiding/blinding: When I do it, it's privacy; when you do it, it's secrecy. News flash: perhaps the best answer is "mutual assured destruction" of a sort. If neither of us can count on getting away with shit, maybe neither of us will attempt shit. Perhaps Marge Simpson said it best: "You know, the courts might not work any more, but as long as everybody is videotaping everyone else, justice will be done." I'm not saying that privacy has no value. Just like anyone else, I wouldn't want to live a Truman Show. But I don't need privacy of the type that lets me get away with screwing my fellow humans over. 
For example, the "privacy" to commit tax evasion (perhaps only rich jerks need extreme "financial privacy"?), or the "privacy" to commit sexual abuse (perhaps only pedophiles need extreme "interpersonal privacy"?). I don't know. I'm not saying this fleeting thought is fully baked. But it seems like there's at least something to this line of thinking.

2011-02-12: Interesting reading for today

2011-02-15: Interesting reading for today

  • Krugman, Paul (2011-02-13), "Eat the Future [op-ed column]", New York Times, New York, NY, USA. I know what some of you are saying. "These damn leftists", right? Well aren't you glad that the reactionaries have all the "right answers" then? There, there, ostrich—keep your eyes closed, you'll get less sand in them.

2011-03-03: horse cavalry : mechanized cavalry :: mounted police : powered exoskeleton police? robot-mounted police?

Interesting insight today. Perhaps horse cavalry : mechanized cavalry or mechanized infantry :: mounted police : powered exoskeleton police or robot-mounted police. Thought train prompted by Wikipedia:Picture_of_the_day/March_2011#March_2_-_Wed, showing mounted police, with the caption "Mounted police are often employed in crowd control because of their mobile mass and height advantage." Just as mechanization helped existing cavalry units fill their existing role with new, non-animal tools, so would it for mounted police. As with military, so with paramilitary. And as with the cavalry example, the transition would take decades, and the traditional form would never really go out of style, although its context of implementation would narrow down to only ceremonial or special-purpose. My first thought prompted by the caption quoted above was of powered exoskeletons ("employed in crowd control because of their mobile mass and height advantage"). But then I realized that maybe it won't be human exoskeletons alone, but rather in various combinations with a robotic horse descended from efforts like BigDog. (Cool stuff here if you haven't yet seen it.)

2011-05-27: Is X amazing and special, or are you just cognitively impaired?

After setting Sloan 1964 aside for a while when higher priorities intervened, I returned to it in recent days. I'm now about 80% through the book. I have to agree with Sloan (per Drucker's 1990 introduction) that Drucker was maybe not quite on target in calling the book "enjoyable". It's not entertaining in the bread-and-circuses or car-chases-and-explosions sense. But it is very interesting, and edutaining, even though dry in some passages. So it's in line with Sloan's expectations; it's not exactly "enjoyable" so much as "cognitively satisfying" and "interesting" and "educational". I don't doubt, though, that Drucker was one of the few souls who would find it downright enjoyable, being as endlessly fascinated by management theory and practice as he was. I have to admit that, having just gotten to the chapter on Electro-Motive and other nonautomotive GM businesses, I am downright enjoying reading about the development of practical small diesels by Winton, Electro-Motive, Kettering, and colleagues in the late 1920s and early 1930s.

The jacket flap copy (well, I guess it should be called the inside back cover copy, since that's where it's printed in this paperback edition) set me to thinking again about a theme that's been bugging me in the past year or two. The copy essentially says, in so many words, that you should read this book to learn the magic of how Sloan grew GM into the giant that it became. That you can then apply these amazing techniques to your own organization. I'm paraphrasing here from memory—the book is not at my elbow for quoting as I jot this down, but an exact quote is not the point; it's the overall theme and emotional tone.

On the surface, you might say, sure, what's wrong with that; of course teaser copy would say that sort of thing. Yeah, OK, no doubt. But my problem with it is that most people probably believe it uncritically, whereas it's actually wrong. Which is to say that I truly believe that its theme is specious and only *seems* true to people who are too cognitively impaired to see reality one or two levels above the specious one. And that would be most people.

What's my problem, you ask. Why do I say this? Here's the rub: What Sloan and colleagues did with GM in the 20 years between 1908 and 1928, as well as their accomplishments afterward, was not magic at all, as a careful reading of this book makes quite clear. It was simply what happens when smart, hardworking, circumspect, prepared people get lucky enough to catch the right sequence of related opportunities and are wise enough to recognize it, grab it, develop it, and keep from fucking it up too badly.

But that, of course, is what makes it seem magical to most people. Because most people can't even do that shit. Which is to say that most people can't manage to do the minimum that needs to be done in order to say that one has simply at least done what ought to have been done. I'm not sure that I'm successfully getting across (into my readers' minds) how this strikes me in my own mind. I guess you could sum it up with the old theme that I've wrestled with before in various avatars. The nature of humans. A negative image; a negative-space carving. Where incompetence is the norm, competence is the exception. In objective reality, this doesn't make the competent one amazing; it only means that the incompetent ones are inadequate. But the trick, the funhouse-mirror twist, is that (1) through the eyes of the incompetents, they themselves are viewed as competent and the unusual ones as amazing/magical/remarkable; and (2) it challenges many of the definitions themselves. When most samples are inadequate, what is the meaning of adequacy? From within this system, adequacy is defined with an internally consistent definition as being the level of OK-ness that most samples have. Which means that the concept of inadequacy is reserved only for that narrow class that's even below *that*. But from an external view, that is, from outside the system, this is seen to be bullshit. Well-intended and non-self-aware, sure—but still bullshit in the sense of "not reality". In the sense of "a distorted model through which to try to understand reality". The model is as good as the modelers were capable of making it. It's not their fault that it's not good enough. They were just incompetent. Yet it's still a flawed model, nonetheless. But what action can be taken on that point, from within the system? Especially given that it might require that the system dwellers, those who dwell exclusively within the system, understand and appreciate the external perspective? My experience is that when you try to explain it to them, they get annoyed and angry, and either they fail to comprehend, or they fail to *allow* themselves to comprehend—because to admit, and to accept, and to accept the corollaries, would be no fun.

This is only the ancient theme of the land of the blind where the one-eyed man is king. Or stoned to death for being a witch. You pick. A third option is that he just burns out. At any rate, the point for me tonight is the riddle that it presents: Are they adequately sighted and he amazing and special? Or are they just blind, and he halfway to what ought to be? And they not close to what ought to be? If what ought to be is defined as sightedness, and this is defined as adequate, then what are they—what could they possibly logically be considered—other than inadequate?

This is one of my problems with typical human bitching. People bitch because things aren't as they should be. And this is defined as a problem, which is why it is considered bitching-worthy, because problems are bitching-worthy. But let's get real here: Are you sure you want to define adequacy—absence of problem—as what ought to be? Don't start something you can't finish, my friend. Here on planet earth, if you wanted to be true to the real truth, and to act on your convictions, you would have to recalibrate your sense of adequacy. Else stamp an "F" on your whole species, in terms of overall, final grade (individual tests, quizzes, and homework assignments notwithstanding—some of which, no doubt, were A's). Only logic. Nothing personal. But I know you'll probably take it that way anyway.

"The old theme that I've wrestled with before in various avatars." I'm reminded of that Rammstein song, "Das alte Leid". I've always been drawn to it. No idea what inspired the lyrics. No idea what it's "about", in the sense of the artist's (writer's) intentions. For me the song is somewhat tainted, reduced from perfection, by that one single line, „Ich will ficken“. Ist *das* das alte Leid? Wirklich? I can think of older and sadder sorrows. You should be so lucky that that's your biggest problem. But I'm talking out of my ass, because the rest of the song suggests that I'm simply misunderstanding the relationship (if any) of that line to the rest of the song. The rest of the song suggests too strongly to me that it's about something real—something substantive—not just „Ich will ficken“. All of this is probably senseless pondering anyway. The point of such things (songs and such) is that they're sufficiently abstract to have many traces and echoes of meaning, of interpretation, overlaid onto them. It's part of what makes art art. In this sense, the artist's intention is irrelevant anyway. So I should just sit back and enjoy the song, and hear in it what I want to hear. Which I think I will go do right now. Then off to bed, too late.

Some minutes later. Just spent some time chilling and listening, nodding off and coming back again. "Das alte Leid" is good, but another one that I've always been drawn to is "Sonne". Gute Nacht.

PS: A few days later. I looked again. The paragraph that galled me most was this: "Draft your own blueprint for organizational success by using the tools found in this book. Learn the valuable lessons that only Sloan could teach." I realize that this was good-faith writing by someone assigned to churn out some back-cover marketing copy, and I'm not faulting the writer for not knowing any better; I'm just faulting the alleged ideas themselves, and the fact of questionable competence in general, across many people. Consider that at least 5 people who saw this before it got printed had the authority to object and revise, but probably none did object. Probably none of them even realized that it was so flawed. There are serious problems with these alleged points. (1) They say "the tools found in this book." What "tools"? What Sloan presents is not "tools" in the sense of "special techniques that the readers had to be handed because they couldn't independently reinvent them based on logic and common sense". Except for readers who are idiots, I guess. There are no "tools" in this book—only smart, sensible people analyzing the reality around them in straightforward ways, forming committees to study its details, and authorizing decision makers to set policy based on it and middle managers to administer the policy. These are not "tools" like Joe Blow's ingenious improved mousetrap or Jane Doe's new type of pliers, deserving of a patent for their creativity or something. THIS IS ONLY LOGIC AND COMMON SENSE. Now let's turn to the second sentence, "Learn the valuable lessons that only Sloan could teach." How is it that, given that the lessons are only logic and common sense, only Sloan could teach them? No one else on earth, you claim? Isn't that like saying that there's only one musician in the whole world? It's like, "listen to this recording, then use the miracle found therein, called a piece of music, to give you ideas about how you might make use of some barrels (drums) and cane reeds (wind instruments). Learn the lesson (what music is) that only this one man on the whole planet, this one musician, was capable of teaching to the rest of humanity." That might make sense if music weren't an inherent talent that is distributed widely throughout humanity. But since it is (inherent, distributed), it of course obviously doesn't (make sense). Just because *you're* not a musician doesn't mean that there's only one musician on earth. Just because *you're* an idiot doesn't at all mean that everyone on earth is an idiot except the one dude that you're lionizing and trying to fit into your Great Man theory (which requires you to distort the rest of reality all around him). I don't know. I realize this all sounds cranky and no one cares. Part of my point is just that GM could still have arisen if Alfred Sloan had never existed. Just like electric appliances could still have arisen (albeit maybe a few decades later than otherwise) without particular individuals, such as Faraday or Tesla or Edison or Westinghouse. But it just bugs me that adequate competence is treated like a miracle just because incompetent ignorance is so common and is considered the baseline for comparison. In other words, maybe incompetence is the wrong thing to pick as a baseline for passing judgment on the nature of what competence is, dimwit. Maybe I'm not crazy, maybe the whole damn system is crazy.
PPS: A few days later still. I just have to note another instance of the same theme. What do humans call a human who's consistently nice, honest, generous, unselfish, fair, and so on? They call such a person a (literal or figurative) saint. The concept of a saint is of the special/amazing/one-in-a-million archetype mentioned above; yet, as with the material above, I see that same discrepancy again. If <consistently nice, honest, generous, unselfish, fair, and so on> are defined as what ought to be—slipping below which minimum puts you in "shouldn't-have" territory—and their opposites are defined as bad and annoying and evil and bitching-worthy and problematic—yet humans readily point out that they are special/amazing/rare—then what are humans obviously pointing out about themselves? That most normal humans are failures, in important/non-negligible ways. As someone who manages to live while hardly ever employing white lies and never cheating or stealing or lying his ass off, or feeling the need to do those things, I find that this special/amazing/one-in-a-million thing just seems self-evidently problematic. Again, I'll just repeat: When the adequate is unusual, that means that the usual is inadequate. Yet humans have a very uneven way of acknowledging this reality. Often, the same guy who will say that people suck is patently obviously one of the sucking people himself. Yet he would angrily resist any school curriculum that calmly explained to schoolchildren, such as his kid, that the human species is incompetent on the whole, with a heavy sprinkling of exceptions. I don't know, something about that just reveals the incompetence yet again. Yet that man would disagree with the assertion that he himself is incompetent. Oh well, enough musing, past time for bed. — ¾-10 04:51, 6 June 2011 (UTC)

2011-06-03: Machine element musings

It had previously occurred to me that the concepts of a split bushing and a collet converge depending on which parameters you vary and what values you give them.

Today it occurred to me that, similarly, the concepts of a split bushing and an annular gib converge depending on which parameters you vary and what values you give them.

Regarding the earlier observation, I've often found it interesting to ponder the fact that the same object, a split bushing, can be either an excellent bearing to encourage shaft rotation or an excellent brake against shaft rotation, merely depending on the parameter value for degree of clamping force. (That is, how hard you squeeze the kerf shut.)
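
To put rough numbers on that, here's a minimal sketch using a plain Coulomb-friction model; the friction coefficient, shaft radius, and clamping forces are made-up illustration values, not design data:

```python
# One part, one parameter: a split bushing as bearing or brake depending only
# on clamping force. Plain Coulomb-friction model; all numbers are made up.
MU = 0.15               # assumed friction coefficient
SHAFT_RADIUS_M = 0.010  # assumed shaft radius, meters

def friction_torque(clamp_force_n: float) -> float:
    """Approximate torque (N*m) resisting rotation: T = mu * F * r."""
    return MU * clamp_force_n * SHAFT_RADIUS_M

def role(clamp_force_n: float, applied_torque_nm: float) -> str:
    """Same bushing, different behavior, depending only on how hard the kerf is squeezed shut."""
    if friction_torque(clamp_force_n) >= applied_torque_nm:
        return "brake (holds the shaft)"
    return "bearing (shaft rotates, with some drag)"

for force_n in (20, 200, 2000, 20000):
    print(f"{force_n:>6} N clamp -> {role(force_n, applied_torque_nm=1.0)}")
```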

What is a gibbed box way, in the abstract? From a thematic perspective, what is it except a bizarro collet—square instead of round, and loose instead of tight?

Start imagining some practical applications of the abstract ideas above.

Instead of adjusting your box way gibs with a row of 4 screws on each gib, you could make the way round and adjust your one gib by turning one nut to drive your annular gib into a conic segment, then locking it down at the perfect spot. That is, not too loose, and not too tight, but just right. A sliding fit.
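
Back-of-envelope geometry for that one-nut adjustment, with a hypothetical thread pitch and taper half-angle: the radial take-up per turn is just the axial advance times the tangent of the half-angle.

```python
# Back-of-envelope: radial take-up per turn of the nut driving an annular gib
# onto a cone. Thread pitch, taper angle, and target clearance are hypothetical.
import math

THREAD_PITCH_MM = 1.5       # axial advance of the gib per full turn of the nut
TAPER_HALF_ANGLE_DEG = 4.0  # half-angle of the assumed conic segment

def radial_change_per_turn_mm() -> float:
    """Radial closing of the gib per turn: axial advance * tan(half angle)."""
    return THREAD_PITCH_MM * math.tan(math.radians(TAPER_HALF_ANGLE_DEG))

target_take_up_mm = 0.02  # "not too loose, not too tight" -- a sliding fit
turns_needed = target_take_up_mm / radial_change_per_turn_mm()
print(f"~{radial_change_per_turn_mm():.3f} mm radial change per turn; "
      f"about {turns_needed:.2f} turn(s) to take up {target_take_up_mm} mm")
```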

Drill presses already have round columns that serve as ways for the table "Z-axis motion" (such as it is, to give it an overdignified name). But one difference is that there is no annular gib. The table locks by clamping one shoe or pair of shoes against the column via screwing them toward each other. There would be no reason to build a drill press with the annular-gib/split-bushing model, because it's a needless expense for the application. However, imagine an entire milling machine constructed on this model. Each way is a round way with a collet-style gib.

The gib adjustment and the axis locking can occur via the same motion—squeezing the collet shut—but merely with a different amount of force. Compare this to traditional old-school milling machine dovetail ways, where the gib adjustment and the axis locking can (and often do) occur via the same motion—pushing the gib toward the dovetail. Just loosely in the former case, and tightly in the latter case.

A pair of round-column ways would establish an axial-motion axis that couldn't spin (no rotary axis).

Some machine elements whose neighborhoods this line of thinking approaches are the sliding headstock bearing (that is, collet) of a Swiss-turn, and the twin round rams/overarms on some old horizontal mills.

What practical innovation is that train of thought leading toward? One might expect that if I'm writing this here, I must be building to such an interesting answer. But nope. I couldn't think of the answer today. So this is just a bunch of abstract musing for today. Entertaining although "pointless" (in the short term). No such thinking is pointless in the long term, though. It's just the groundwork for a later applications-related insight. In other words, the R in R&D.

So no creative application brainstorm today. Nevertheless, some confident predictions can be made:

  1. A creative application brainstorm is probably forthcoming, whether next week or next decade.
  2. I am not the first person ever to have these thoughts, or a similar, partially overlapping group of thoughts.

Both of these confident predictions get their confidence from empirical experience.

When I was a little kid, from the time I started becoming decently aware of machine elements—around 8 or 10 I guess—and for many years afterward—I had a hard time having confidence in the reliability of state in machine elements that relied on a "mere" state change—a "mere" parameter value dial tweak—to produce two "opposite" outcomes. (Of course, I didn't have the words to say that back then, but that's the unnamed feeling that bugged me, as I can now adequately verbalize. [Update: 2014-05-16: Yet another layer of "now I know better": I just found out that there are names, and Wikipedia articles, for these concepts: tipping point (physics) and hysteresis. I was already familiar with the phrase "tipping point" in its broad sense that any layperson would recognize, but it was interesting to read the description of a tipping point as shined through the physics lens.]) The example that sticks in my memory (easiest immediate recall) from back then is the concept of a friction clutch. Engaged or disengaged depending solely on the throwout bearing force. I was never comfortable with it back then. Oh sure, I "got" the concept; I just didn't trust the controllability of such an element from a predictive point of view, developed on a speculative basis (idea first, implementation second). Could you trust the clutch not to slip under a heavy-ish load? Could you trust it not to grab too hard when you're trying to slip it? What exact size of spring would you build into the pressure plate? Where exactly is the dividing line between reasonable slipping for smooth operation and "burning up the clutch by riding it"? I read some book or magazine article providing a layman's overview of types of clutches, and I knew that I "liked" dog clutches better than friction clutches on an intuitive level, because the "positive engagement" (as engineers call it) of the former provided certainty that the state—locked up or disengaged—could be relied upon to stay that way until you really meant to change it. Nevertheless, I also clearly knew that the reliable behavior of friction clutches was the truth (as successfully functioning cars and trucks proved empirically every day all around me) whether I was comfortable with it or not; and I knew that in order to get myself right—that is, to understand reality as it actually is, regardless of little old me and his mental blind spots—I needed to become comfortable with it. Well, it took years, but I'm long since there nowadays; and for having overlearned in order to learn, I think I may be more comfortable with it now than others who absorbed it more easily to begin with, because now I look for it. In other words, a tortoise-and-hare situation. The tortoise is the winner in the end, despite the hare's early lead. The difference now is that you can't surprise me with new instances of the abstraction anymore, because as soon as I see the abstraction poking through, I'm comfortable believing—I understand. It's just our old friend in his latest avatar, is all. Das alte Leid—but no, no grief; rather, just an old friend.
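
Since tipping points and hysteresis came up: here's a minimal sketch of a hysteresis band around the engage/disengage decision, with made-up threshold forces. The only point is that the state depends on history, not just on the present value of the parameter.

```python
# Hysteresis in one parameter: two thresholds instead of one tipping point,
# so the state doesn't chatter. Threshold forces are made-up illustration values.
ENGAGE_AT_N = 900      # clamping force at or above which the clutch grabs
RELEASE_BELOW_N = 600  # force below which an engaged clutch lets go again

def next_state(engaged: bool, clamp_force_n: float) -> bool:
    """Whether the clutch is engaged depends on history, not just the present force."""
    if engaged:
        return clamp_force_n >= RELEASE_BELOW_N
    return clamp_force_n >= ENGAGE_AT_N

state = False
for force_n in (500, 800, 950, 800, 700, 550, 800):
    state = next_state(state, force_n)
    print(f"force={force_n:>4} N -> {'engaged' if state else 'slipping'}")
# Note: 800 N keeps it engaged on the way down, but won't engage it on the way up.
```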

It seems to me that being at ease with this old friend is part of what it takes to design insightfully simple mechanisms—things that make you say, "Wow, it's so simple, I can't believe the useful behavior that it provides." It's the thing that prompts you to consider that maybe your generator should also be your starter motor. Or that regenerative braking makes more sense, cosmic-justice-wise, than non-. Why waste the "no-longer-wanted" energy ["previously I wanted to be going fast, but now I don't"], by frittering it away as waste heat, when you can recapture it for reuse?

The other thing you need to put your mind at rest is the paradigm of establishing the theme speculatively/predictively, but being at ease because you know that you have the resources to refine the parameter value by mere blind (or at least "visually impaired") trial and error. Another way to say this is that you need not worry about hitting the parameter *value* on the head the first time out; it's enough "merely" to identify the parameter at first. Thus, in the friction clutch example, you could design a friction clutch into your machine with confidence even though you can't guess ahead of time (predictively/speculatively) what particular strength of coil spring to build into the clutch-disc-and-pressure-plate pack. But I was stymied in coming to grips with that because I lacked resources for monkeying around—the proverbial "taking things apart just to see how they worked and just to see if I could manage to put them back together again in working order." An old theme that you see repeatedly in biographical descriptions of people like Thomas Edison and Henry Ford. Those lucky kids, they had the resources and the leisure, or at the very least, they had parents who had rational priorities, and even if they were somewhat poor, they knew that they should let the boy wreck a few cheap watches in the course of teaching himself to be a watch repairman. I never felt free to do that when I was a kid. If you have a box of assorted springs, you can simply swap various ones into your prototype friction clutch until you hit upon the right combo. Having had some opportunities for that sort of thing by now, I too have had a chance to have my learning curve, belated though it may have been. Of course, I live in a different era than Edison and Ford lived in. Some gadgets defy layperson methods more than others—for example, electronic as opposed to mechanical. I won't be taking apart any printed circuit boards to "see how they work". *And yet*, look at all the people in the last 40 years who taught themselves a lot about computers by kitbashing them, soldering gun in hand. Different, yes, but impossible, no. However, again, it helps to have a parent who can at least provide you with a succession of computers—even if only used, outdated ones—and will encourage you to rip into them even if it means that you'll not be putting them back together again.
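
A sketch of the "identify the parameter first, refine the value by trial and error" idea, using the standard uniform-wear clutch capacity approximation T = μ · F · r_mean · n; every number here is an assumed illustration value, and the "box of assorted springs" becomes a simple sweep:

```python
# Identify the parameter (spring force), then find a workable value by sweeping
# candidates -- the "box of assorted springs." Model and numbers are illustrative.
MU = 0.3               # assumed friction coefficient
R_MEAN_M = 0.09        # assumed mean friction radius, meters
N_SURFACES = 2         # assumed number of friction faces
DESIGN_LOAD_NM = 120   # torque the clutch must carry without slipping
GRAB_LIMIT_NM = 220    # above this it engages too harshly for smooth starts

def torque_capacity(spring_force_n: float) -> float:
    """Uniform-wear approximation: T = mu * F * r_mean * n_surfaces."""
    return MU * spring_force_n * R_MEAN_M * N_SURFACES

candidate_springs_n = range(1000, 5001, 250)
workable = [f for f in candidate_springs_n
            if DESIGN_LOAD_NM <= torque_capacity(f) <= GRAB_LIMIT_NM]
print([(f, round(torque_capacity(f), 1)) for f in workable])
```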

Of course, professional engineers will object that you can't engineer a Hoover Dam based on the idea of "oh, just monkey around with different flood gate sizes once you've already built the thing." There is a place in life for working every parameter value out beforehand. Yet even in those cases, take a close look at how they do it. They do R&D in an engineering laboratory, including cut-and-try experiments. So, for example, the only way that they can successfully say ahead of time (which they do) that the concrete for the dam has to have X.YZ percent of Portland cement, and sand that's not more than screening grade N, is because they did some applied-science experiments in a test tank with different grades of sand and they drew lessons from it, tweaking their earlier theories to incorporate empirically determined correction factors or to include a variable that had been missing earlier.
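
And a sketch of what folding test results back into the theory can look like in the simplest case: fitting a single empirical correction factor by least squares; the "measurements" here are invented for illustration.

```python
# Fit one empirical correction factor k so that prediction = k * theory,
# least-squares through the origin. The "lab data" below is invented.
theory   = [10.0, 20.0, 30.0, 40.0]  # first-pass theoretical predictions
measured = [ 9.1, 18.5, 27.2, 36.8]  # what the test-tank experiments gave

k = sum(t * m for t, m in zip(theory, measured)) / sum(t * t for t in theory)
print(f"empirical correction factor k = {k:.3f}")
print("corrected predictions:", [round(k * t, 1) for t in theory])
```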

I'll tell you one thing. People like Henry Ford and Walter Chrysler had a lot of monkeying-with-spare-parts under their belt. The stuff they accomplished was not all 3D-CGI-CAD/CAM simulated ahead of time. And that's OK. Swapping in different springs is a perfectly fine way to develop a working prototype. It's OK that there was a bit of "blind" cut-and-try to go along with the predictive theory.

That summation of the theme that I achieved in the previous sentence reminds me a little of this, too.

Back to clutches and so forth. The concept of "taking advantage of identity between objects with a mere state change to bring out a new behavior" (to give it a working title) can be viewed as either "natural" or "backwards", depending on how you look at it. Using the wheel rim as the brake drum/disc is not a new idea; moreover, it's not only old but ancient. And it's not just an ancient design cue; it was the original brake design. That's because starting from nothing and working forward on the historical timeline, first you invent the wheel and the cart it carries, and then later, the way that you invent a brake is to stick a shoe next to the wheel and jam on it when you want to slow down. But consider what it looks like with the machine-element-development video playing in reverse. If you think of modern automotive brakes as the default way to design a brake, then the brake on a bicycle (caliper-style) can be viewed as slightly "ingenious", in a way, for having "brilliantly" spared the expense of a separate brake disc by sticking the caliper directly straddling the wheel rim. Of course, that view is inside-out, historically speaking. But if you didn't know that, it might seem natural.

Machinery from the period of 1840 to 1940 often strikes our eyes today as cleverly simple. Of course, it was usually also devoid of any safety features (belt guards, wheel fenders, grounding conductors). But that was part of the philosophy of life back then, unconsciously imbibed through enculturation. "Who needs a damn belt guard? Damn waste of money making one. If you don't know to keep your fingers outta there, then you deserve what you get." This was the prevalent worldview in Henry Ford's heyday, and one can empathize with his stance in the mid-1920s, when the world told him that electric starters and cute colorful paint jobs had now become "necessary" after having been extravagances, and he told the world that it didn't know what the hell it was about, and that it should listen to some damn sense and remain prudent (prudence involving frugality in large doses). And yes, there's a solid grain of truth and justice in all of the above (no damn belt guards, no damn electric starters). But of course, the people who overvalued that grain, say with the belt guards, also simultaneously undervalued the pondering of the fact that humans can't be hypervigilant all the time. Mistakes will happen. Why not put a guard in front, deflecting the occasional mistake harmlessly away? It'll pay for itself if that one event, even though rare, is horribly costly when it does finally happen. But there was a time when you couldn't compete by thinking that way. The guy who took his chances to a higher degree could underbid or undersell you. Later the culture became such that no one could get away with taking those inordinate chances, because if regulation, statute, and their enforcement didn't even the score, then litigation would. Thus one could now compete despite the extra expense of those "nonessential" features. Because the other guy's doing it too now.

Before you wail that we lost something by letting that pendulum swing too far the other way—which no doubt is true in some respects—remember that things got thrust in that direction for a reason. Because it used to be bad in the other direction. A factory worker shouldn't have to stick his fingers near an unguarded belt and an ungrounded circuit just because those pieces of equipment were 10% cheaper than otherwise for the factory owner to buy.

Notice, though, that the pendulum of life in the U.S. has been swinging back that way again, to some extent, driven in part by the competition introduced by globalization. If the guy in China doesn't have to pay for a belt guard because the Chinese version of OSHA has fewer teeth than the U.S. version, well now that matters back in Peoria, too, because Peoria man has to sell his product in direct competition with Shenzhen man.

In all fairness to Henry Ford or to the Shenzhen factory manager, it is possible to take things too far in the risk-averse direction, and I believe that some Americans did that in the post-WWII era. Well, two areas, in fact: not just risk aversion, but also socioeconomic entitlement. Not only did many an American arrive at the mindset that no litigation was too frivolous for our judicial system to entertain, no union demand too cushy to cede, no product safety feature unwarranted on a cost-benefit basis informed by actuarial risk and the need to keep your damn fingers out of the hole and your damn eyes on the road; but also, they arrived at a mindset that The Man's pockets were limitlessly deep, till the end of time, and there was no economic demand that wasn't feasible to obtain if The Man's selfish hand could just be forced enough, and the developing world is always somehow inferior to America, till the end of time, and non-IT-geeks are entitled not to have to understand *anything* about IT, till the end of time, and they should still be paid top salaries, though, because they're entitled to top salaries by birthright, even if they can't even manage to download and rename a file or reboot the fucking computer.

When U.S. labor unions were weak in the 1880s through 1920s, the industrialists bent the workers over and shafted em good in many (too many) cases. When the Great Depression and its cultural aftereffects powered U.S. labor unions up to get industrialists over a barrel to a halfway decent extent in the 1930s through 1970s (which was really only fair for the have-nots, given that the haves had largely caused the magnitude of the Depression with their own stupidity, greed, and myopia), they ended up putting cushiness above economic efficiency and managerial propriety, at all costs, in many cases. (For example, by the 1970s it was the era [at many a company though by no means all] of louts that couldn't get fired no matter how drunk they showed up for work [or didn't show up]; the company lacked the dominance in the pure power struggle even to make their own firing-for-incompetence decisions. Was that a proper balancing point?) When a combination of Reagan revolution and then globalization ripped that rug out from under them, they were a little shocked in some ways, tending in some cases to take it as the world reaching its end times. Pure insanity, hell in a handbasket, no local stops. But is that what it truly has been? Or has it been just another big pendulum in life swinging back, rebalancing, misinterpreted by a generation of people who mistook the historical moment of their birth and upbringing as an eternal birthright entitlement rather than a mere temporary cosmic coincidence? But on the other hand, in all fairness to them, I think it's clear that the Reagan/laissez faire/deregulation pendulum long since swung past its midpoint and headed off into territories of wretched excess. Time for the next swing back. This time, though, there's at least one difference: the world's economy is so globalized now that the swinging is going to have to be international in character. No more separate national destinies, economically speaking, as far as I can tell.

Almost done reading Sloan 1964 now. In recent days I read the chapter on personnel management and labor relations. Sloan points out, as someone who was alive to see it (which no one today is anymore), that General Motors never even had to deal with labor unions to any appreciable extent until the 1930s. Yet by the 1950s, only 20 years later (a timespan that doesn't feel so very long to those of us who've got some decades under our belts), many Americans, out of sheer ignorance and historical amnesia, mistaking a historical moment for eternity, viewed the twin institutions of Big Industry and Big Labor as timeless monoliths. As if it had been going on forever. But large corporations, other than early anomalies like the Dutch East India Company and the British East India Company, never existed as an institution of life—in profusion—until the late 19th and early 20th centuries. Big Labor didn't really exist until the 1930s, yet by the 1940s and 1950s it was a huge force to be reckoned with. That didn't last forever; by the 2000s we were kind of back to unions-on-the-sidelines in many respects of life if not all. America as economic superpower didn't exist until the 1890s, I'd say roughly, and as a military superpower not until the 1940s. Yet a generation of people born the next day (historically speaking) allowed their minds to be entirely boxed in by the concept that America as it was when they first woke up is America as it had long since been and forever would be no matter what.

Are the preceding 3 paragraphs unfair? Inaccurate? Unreasonable? Or are they somewhat on track? I would never claim that they're masterfully omniscient analyses so much as offhand glimpses attempting to sketch, however roughly, a complex landscape. Does everyone (left, right, up, and down) hate them, because each can find parts of them that call out the excesses/stupidities/myopias of their faction? Are they equal-opportunity reality checks?

I'm overdue to go to bed. There's no grand lesson or political slant to be given here at the bottom line of this long musing session. Just the realization that life is full of forces that must be continually rebalanced if comfort and sustainability are to be had and maintained.

2014-10-09: Something pretty much anyone can do

[edit]

I was skimming Quora tonight. The following resonated in several spots. It's a chunk quoted from an answer written by a Quoran named Matt Johnson. In his post, he goes on to give links citing the antecedents.

"The classic Einstein epiphany went like this (apologies for mixing metaphors):

1. Get the lay of the land.

2. Take note of the "knots" - the contradictions & dead ends.

3. Trace back the threads leading to the knots.

4. Cut the faulty thread, and attach an already existing thread in its place.

5. Point to the new thread connection, and pause for the world to collectively gasp 'Oh, that's simple, why didn't I think of that?'.

"Einstein didn't have the epiphany because he was a genius better able to function through complexity (which is more your typical 200-ish IQ person), he got there by simplifying. He was phenomenal at recognizing the key set of assumptions relating to a problem, and then creating thought experiments that re-arranged (or even left out) some of the assumptions.

"I find this uniquely powerful because it seems like something pretty much anyone can do with a wide variety of problems [...]"

If you've read my 2011 post above, you'll see, I hope, why these spots resonated. Compare "[...] part of what it takes to design insightfully simple mechanisms—things that make you say, 'Wow, it's so simple, I can't believe the useful behavior that it provides.'" Also, there's resonance in the idea that some way of thinking that's so "amazing" or "mind-blowingly clever" (or whatever) and that was employed by a supergenius such as Einstein is actually just a smart, simple approach that any of us mere non-geniuses can take, if we're open enough cognitively to take it. I would add, too, that the latter (being cognitively capable of taking it) is also simple as dirt (speaking again, on a level behind the foreground, of smart and simple) for anyone not neurologically hobbled by being too much of a dick to think completely. Oh, wait, that's right—sorry, half or so of humanity, guess you'll just have to sit this one out on the sidelines and take the others' word for it regarding how simple it is.

Speaking of thinking completely instead of incompletely, that is, holistically instead of in poorly linked fragments—while I'm looking at the same Quora answer, I also note the resonance from another snippet in a nearby answer written by a Quoran named Satvik Beri. He says, while trying to explore some patterns of thinking shown by unusually smart people, that "They're extremely talented at going up and down levels of abstraction. Geniuses tend to be able to fit seemingly unrelated facts into the big picture almost instantly, and drill down to any level of detail. On a related note, when learning they tend to learn at every level of abstraction at once, rather than simply building from the bottom up or top down like most people." Good points—ones that I, too, have thought about before. He also brings up the concept of being able to free one's rational thought processes from being constrained and hobbled by one's emotions. That concept is interesting and it's one that I've often pondered before, but I see two different kinds of it, one with high potential to be detrimental in the grand scheme (long-term net effect) and one much more positive (much more valuable to "dystopia avoidance"). He mentions the detrimental one—that some smart people can put aside any moral qualms while working on developing or unleashing, for example, horrible weapons. Well, yeah, and that was good for the Manhattan Project, no doubt; but I would point out that it's easy to put aside moral qualms for people whose brains possess only rudimentary moral qualms anyway. And plenty of supersmart humans over the centuries have been plenty sociopathic/psychopathic. Meanwhile, the more valuable type, in terms of net value in the grand scheme of existence, is when smart people can put aside those of their own emotions that would make them dicks if they couldn't control them—impediments such as arrogance, jealousy, impatience, egotism, poor empathy or lack of empathy—when they can put those aspects of emotion aside to let rational analysis and synthesis proceed unhindered. For example, you can't synthesize the best ideas from a wide range of thinkers if your brain is busy trying to keep you from acknowledging someone else's smartness and insight (because, for example, you have a hypertrophied but fragile ego that is built on feeling like you are better than everyone else [and thus cannot spend time pondering and duly acknowledging the achievements of others], or because you can't tease apart any valid bits of thought from the thinking of some other dick that you hate so feverishly for his dickish components that you can't let yourself admit that there could be any single thing that he was right about).

Oh well—I promised myself that I would go to bed early tonight for some nice extra sleep, and it's already too late for that. Enough capturing of fleeting thoughts for tonight.

— ¾-10 02:56, 10 October 2014 (UTC)

2015-01-06: Ramachandran developed a brilliantly simple device

[edit]

"Ramachandran developed a brilliantly simple device—an oblong wooden box with its left and right sides divided by a mirror, so that looking into the box from one side or the other, one would get an illusion of seeing both hands, where in reality one was seeing only one hand and its mirror image. Ramachandran tried this device on a young man who had had a partial amputation of his left arm ... [which was] subject to painful cramping that his doctors could do nothing about.... [Ramachandran said], 'This immediately gave him the startling visual impression that the phantom [limb] had been resurrected.... He cried out, "It's like it's plugged back in!" Now he not only had a vivid impression that the phantom was obeying his commands, but to his amazement, it began to relieve his painful phantom spasms for the first time in years....' This extremely simple procedure (which was devised only after much careful thinking...) can easily be modified for dealing with phantom legs and a variety of other conditions involving distortion of [body schema]." —Oliver Sacks, 2012

2011-06-16: du Pont : GM :: Pratt & Whitney Machine Tool : Pratt & Whitney Aircraft

[edit]

This analogy has been flitting about the edges of my cognition for a couple of months now, but today was the first time that it moved forward into my conscious thought. Having just read Sloan 1964 and Fernandez 1983 in overlapping succession, finishing Sloan last week and Fernandez today, I found that the analogy finally took definite shape. Explanation follows.

du Pont : GM :: Pratt & Whitney Machine Tool : Pratt & Whitney Aircraft

In both cases, you find an established industrial corporation, in the early 20th century, that looks around at the commercial landscape and says to itself, "Where can we invest our lump of capital that we've built up? What's an area where the growth and development prospects look promising?"

In both of these cases, the established corporation is one that has made its fortune in weapons, one way or another. Either making them or making the equipment or ingredients that are used to make them. Why? Well, how you answer that "why" depends on which scale of abstraction you're focusing on. But at the highest level, looking at true root causes, it is simply because humans like to invest resources into weapons as a way to dominate other humans; and there are always some humans ready to do the supplying, for a price, if other humans are able to do the demanding. It's what humans do. Which is to say, more accurately, it has long been one of the highest priorities of humans' resource allocation choices.

To be fair, though, looking at du Pont and P&W Machine Tool in these moments, the growth industries they're now eyeing are not just all about war. Motor vehicles and aeroplanes are every bit as much about peacetime—civilian use, commercial opportunity, private-citizen use, agriculture, leisure—as they are about military applications. Of course, no one's disputing that motor cars and motor trucks and motor cycles and caterpillar tractors and aeroplanes are going to make boy-howdy nifty military equipment. And boy what a market it's going to be selling to ministries of war/defense as well as to civilian customers. But humanity has a choice. Swords or plowshares, it's up to humankind in general; this is dual-use technology, and "we're just the engineers and business executives and bankers and shareholders who are going to supply it. If not us, someone else like us." So they (of the early 20th century) say to us (of today). Of course, therein lies great rationalization potential; but in fairness, we ask, are they wrong? No, you can't honestly say that they are. If you're going to look at culpability in warmongering, you're going to have to dig deeper than simply finding out the names of the people who work at certain companies. That's the specious level at which one might imagine one can stop. But for every prostitute there are 500 johns, and whose fault is the existence of prostitution? Where's the root cause, my friends? Jane's behavior is a symptom; John's vices are the pathogen. Now, does that make Jane not a sex worker? No. Would Rachel step in if Jane left the market to her competitors? Probably. Does that make Jane wholly innocent? No. But Jane is not the beginning and the end of the story. If you truly seek to understand reality, as opposed to just claiming that you do—a questionable premise among humans, but let's entertain it for the moment—you have to look at all of it; you must account for all of it. This is what I love about most engineering. Most engineers, at least while they're on the clock building a new mousetrap, can't live in make-believe world where they only account for the parts of reality that they find convenient. You can't build a jet engine unless the applied physics is correct in every detail. If not? Kaboom. This is a much more honest way of life than many other spheres of human affairs, where people get away with ignoring some (or many) aspects of reality. Now, I'm not extending, in this thought, to the question of who's buying the jet engine and what nasty things they might be planning to do with it. I realize that that's the next step that must be taken. But my point in this paragraph is that at least sometimes, some humans are forced to deal with reality in toto, at least for the moment. I'm finding that even that, when it happens, is quite an achievement in humans' mentally challenged relationship to reality.

Now, it's what? 1909? 1918? I forget the exact timeline offhand, but it's somewhere right in that decade that du Pont looks around at young industries pursuing new technologies with large growth potential and looking to borrow capital from someone, whether bankers or investors (not always the same thing, although there's a lot of overlap). Aha. Billy Durant's General Motors is in trouble and needs rescuing. du Pont puts up large amounts of money, becoming a principal stockholder in GM. Most of this is the du Pont corporation investing its profits, although some of it is the du Pont family investing its personal fortunes (which, of course, came from the same egg basket in the grand view—the actuality-oriented view if not the nominality-oriented one). Besides capital, du Pont also injects some management know-how to keep GM from imploding of its own mistakes.

Now, it's what? 1925? Yeah, 1925, and Niles Bement Pond looks around at young industries pursuing new technologies with large growth potential and looking to borrow capital from someone, whether bankers or investors (not always the same thing, although there's a lot of overlap). Aha. Fred Rentschler has plans to cherry-pick George Mead and the other best people from Wright Aeronautical. Mead's got this whack new engine he's gonna build. It promises to be a huge money-minting machine if all goes well. He's one of these Kettering type of people. Damn genius that's gonna invent new stuff for us to manufacture and sell, and the whole world's gonna eat it up, because this is useful stuff, and Progress to boot. About Mead's engine idea: Armies (/newfangled army air forces), navies (/newfangled naval aviation forces), postal monopolies (/newfangled postal monopoly air mail services), private-sector passenger airlines (imagine!) and cargo air transport (paging Jules Verne, this is some air-castle shit!). Now, it's 1925—all this shit is already well underway, mind you; it's not "new"; but it's still clearly infantile compared to what any intelligent, informed, street-smart businessman can see that it's going to become over the next 10, 20, 30 years. And I ask you, are they wrong? I ask you, oh reader of 2011 who knows a little something about how the rest of the 20th century turned out, are these guys of 1925 wrong about this investment opportunity? Hell no! The 1925–1955 period proved them right. Now, never mind what might happen after that, as technology becomes complex and expensive, and as markets become mature and even saturated, and as the rest of the world catches up with the U.S.'s manufacturing capabilities. That's all in the distant future of the 1960s, 70s, 80s, 90s—hell, even the 21st century—but forget that pie-in-the-sky stuff of the year 2000 (and beyond) for the moment; God knows we'll all be wearing silver jumpsuits and flying to the moon in our family sedans for Sunday picnics by then. Let's talk about ROI for the 1920s, the 30s, the 40s! Gad, what a time to be alive!

By 1950, although the Reds are very scary, America is on top of the world. Man, this feeling's gonna last forever! What a beautiful world this will be / What a glorious time to be free ... As long as the Reds don't decide to push the button down, we'll be A-OK.

By the 1990s, some of that euphoria(/myopia) is over, yet even so, some people think that maybe this time we've finally arrived at "the end of history". This time is different ...

Silly rabbit, Trix are for kids.

I guess I have one more thing to say tonight about Fernandez 1983, which is that, in a way, it's a shame that he titled it as he did; he probably did himself a disservice. The title probably makes many people assume right off the bat that the book is a Merchants of Death–style morality tale, a haranguing of evil capitalist warmongering. (I say this having not yet read Merchants of Death. Really I should read that before I buy into that image of it.) But that's not what Fernandez 1983 is at all, in my assessment. It is simply an NPOV, evenhanded examination of the inherent challenges of having a military–industrial complex of a non-nationalized character, creating a dance between government and for-profit corporations. The particular steps of the dance must be figured out, and the figuring out can be painstaking. I admit that it seems fairly clear between the lines, only as you reach the end of the book (the part about the late 1970s and early 1980s), that Fernandez was a little disgruntled by the way UTC (like some other conglomerates) had eventually become a forced-merger M&A vacuum cleaner (or shark perhaps) that in some cases milked federal money where it wasn't deserved. But the first 90% of the book is an entirely evenhanded, rational look at the inherent difficulties of for-profit R&D with dual-use applications—the parts that are unfair to the public, sure, but equally the parts that are unfair to the engineers and businessmen (e.g., being called warmongering profiteers one decade (WWI) and patriotic, heroic nation-saviors a decade or two later (WWII), both for the same business activity).

2011-06-26: People who are less famous than they ought to be: number N+1: Edward A. Deeds

[edit]

When you realize that Edward Andrew Deeds was instrumental in various aspects of automobile and aircraft technology and business, being a cofounder of Delco with Charles Kettering and a cofounder of what became United Aircraft and Transport with Fred Rentschler, it becomes clear that he is not nearly as famous as he ought to be based on his impact on history. It would be neat to start a list of people who are underfamous—whose fame is disproportionately small compared to their historical impact. Of course the list criteria could never be objective. But the list would still be neat.

The other thing I would add here is that Dayton, Ohio (actually the Dayton metropolitan area) is another interesting hub of industrial activity in the 20th century that, like the Quad Cities, is not as famous for that fact as it really ought to be. But when you look at a map and realize that the Cincinnati metro area and the Dayton metro area form one big corridor, it makes a lot of sense. People underestimate the influence of machine tools, and the social networks of bright engineers and tool and die makers who develop them, on history. It's not just the mere technological chains of connection. It's more especially through the social network chains of connection. You can export technology to anywhere, but that's only one batch of airplanes landing in an undeveloped country. A spark landing on water; a seed landing on poor soil. Creating a nexus of aircraft development is another matter. You stand a higher chance if the area is dotted with a social network of people and companies immersed in engineering and R&D and machining (particularly tool and die work) and fabricating. Even if you classified the fact that the Wright brothers were from the Dayton area as a random meaningless starting data point (although it's not as random or meaningless as it may seem, I believe), the data points downstream are not so random. You don't develop the new aircraft industry in an engineering and manufacturing vacuum. It's kind of like a spark and a tinderbox. Sparks may arise randomly anywhere, but fires only flare up and spread where there's a tinderbox to feed them.

Incidentally, that last point is why economic underdevelopment is such a waste of potential. Who knows how many Wrights or Charles Ketterings are born every year in underdeveloped economies and never get a chance to do the development work that they're talented enough to do. The sparks are arising everywhere, over both land and sea; the seeds are continually landing everywhere, on poor soil and good soil. Thus only an improved environment will result in maximized fire/plants/whatever your metaphor is. Notice that this is also the bane of the culture-growing bacteriologist or pathology lab tech. Some species are notoriously difficult to grow in the laboratory. It's not because the techs don't have wild samples with which to seed the medium. It's because the medium or temperature or light isn't quite right for proliferation.

2011-06-29: Human motivations for complaining

[edit]

Some humans complain because a problem needs to have attention and resolution applied to it. Other humans complain because their ego is built partially on the finding of fault as a general principle, even divorced of specific instances, and serving as some kind of pathologically overused defense mechanism.

The former is referred to as "constructive" complaining. It is based in rationality and responds to rationality in kind. It also tends to be used sparingly, because its users tend to realize that the world is never going to be perfect and one has to pick one's battles—prioritize one's use of bandwidth, socially speaking—prioritize one's taxing of others' attention and problem-solving burden.

The latter is referred to as incompetence. It is not based in rationality, but rather on the specious assertion of rationality; and it does not respond to rational reply with rational counter-reply. It only squirts ink to muddy the waters and scurries away. It never says "good point", except to prove that it has and can. That proof is worthless because hollow.

——

There are (at least) 2 kinds of people with a sharp eye for noticing subtle flaws (latent errors, imperfections). The first is the pedant, who's motivated to find and point out flaws as a way to make himself feel superior by keeping others from being superior, which is to say, turned inside-out, as a way to keep himself from feeling inferior by making sure that others are made to look inferior. The second is the victim of the pedant, who's motivated to find and fix flaws as a way to try to deprive the pedant of ammunition. This effort is possibly futile and certainly past the point of diminishing returns, because the pedant does not need a sound foundation for finding fault; just a sufficiently specious one. And because humans generally have a great talent for sniffing out specious arguments that will advance their own interests, and because the world is so full of flaws, this (specious grounding) is seldom hard to achieve, no matter how much anyone tried to sweep the place clean before the inspector arrived. In this universe, fault can usually be found, fairly or unfairly, reasonably or unreasonably, if one's goal in life is to find fault.

2011-07-04: Humans reducing their make-believe quotient in the internet era

[edit]

Here's a fleeting thought that's currently still very much half-baked, but worth jotting down. It's sometimes kind of comical (although also annoying) watching occasional random editors trying to censor, rephrase, or bitch about Wikipedia in spots where it's too honest. We see the Wizard of Oz, rendered newly pathetic where once was specious grandeur, screaming frantically that everyone should pay no attention to the man behind the curtain. In many cases these editors grew up in the print-only era, when, because it was so easy to hide information and put on a fake show, there developed quite a strong culture of doing so. Participating in the implicit rules and mores of that culture was how to be successful as an individual within it—how to climb the social-rank ladder and be "superior" to others. This culture infused even the very tone of published content—writing, audiovisual (TV, films, radio), and so on. It was the culturally ingrained dishonesty that declared that colloquial register was embarrassing and inappropriate to ever find sprinkled into published writing (the implicit "because" was that mindsets such as cynicism, iconoclasm, and "breaking character in the stage show" [for lack of better words here] were how to get fired from the cast of actors). And even today we still find people messing with Wikipedia "to try to put the curtain back over the man" in some cases. In some cases they think they're doing Wikipedia a favor by rescuing its credibility via increasing its cover-up-reality factor; but what they don't realize is that they're enshrining their own shrill, pathetic discredited-Wizard-of-Oz mindset ("Nooo! Put the curtain back up, quick! Everyone erase their memories of having seen the truth!!") within the permanent contribution record of Wikipedia, where future generations will pity it or laugh at it for its ridiculousness. A ridiculousness that to their eyes will be quite patent, and whose apparent lack of patency to some eyes of today will seem puzzling to them. "Didn't you realize that you were embarrassing yourself? How could you not, from any logical, rational perspective?" In searching for an analogue from previous human experience, with which I could help people of today to understand how this will probably seem to people of tomorrow, the analogues that I keep coming up with are things like Soviet propaganda of the 1970s and 1980s. "Look at how great our economy is! Look at how much equality and liberty our people have!" It was clear to everyone, even the people shouting it, that it was an overstretched half-truth (well, actually, a smaller fraction than half, which is why so many people considered it a lie—at an 80/20 ratio of lie to truth, it got classed as a lie in a truth/lie dichotomy classification scheme). So why did they keep shouting it, then? "Didn't you realize that you were embarrassing yourself?" I think the answer is that, from the view inside a culture looking out, for many people (not all), the culturally endorsed lie is more comforting than allowing oneself to face the truth squarely and consciously. Why? Maybe in part because the truth has been built up in the [collective and individual] imagination to be something much more horrible than it really is. Maybe in part because disassembling the ingroup's collective fabrication (pseudoreality) involves stripping its high achievers of their ranks and badges. 
From the viewpoint of those achievers, it's better to scream at the rest of humanity to preserve the now-discredited illusion than to accept its discrediting, because one must accept one's own social-rank demotion in the latter case. Implicitly and semi-consciously, the attitude is, "Yeah, I know it was just a game, but it was the game that *I* was winning, so I insist that we all keep playing it." But the thing to understand is that, from the objective, outside view, which will win in the end anyway, it is already too late once the curtain has been yanked off. The curtain removal's corollaries are already set in motion, and can't be reversed any more than the breaking of a ceramic dish can be reversed. Sure, you can glue the shards back together, but only as a museum artifact whose seams are apparent. Not as a return to a lost culture. OK, I guess I've daydreamed enough for today. Back to productive tasks, whittling down the to-do list.

2011-07-05: If half the people who run their gums about Wikipedia actually made substantial contributions to it, I might respect their opinions

[edit]

You see it reported in the Signpost frequently. So-and-so talked about Wikipedia, managed within only one sentence to get one or more of the simplest basic facts wrong about how it works, then made some sort of generalized statement about its effect on the world. Meanwhile it's obvious that they have nary a clue about it and haven't done much more editing than a test edit here or there—including every watchlister's favorite, the subtle and/or verisimilitudinous fake edit just to see if anyone would notice and revert it. In many cases, yes, fool, I did, and I did. Maybe it doesn't matter anyway. They don't seem that bright, or knowledgeable; perhaps they have nearly nothing, in terms of useful, accurate knowledge, to contribute anyway; so no loss to WP. Yes I'm pissed off today.

2011-07-21: The power of evolution

[edit]

This is interesting, about mice evolving resistance to warfarin mouse poison products: DeNinno, Nadine (2011-07-21), "Mutant mouse resistant to poison", International Business Times, retrieved 2011-07-21. I've always wondered if/when that would happen. I always leaned toward "when", not "if", but I'm too remedial in microbiology to have ever claimed to authoritatively know better than any experts who may have said that it would never happen, or not for centuries, or whatever they may have said. It just seemed inevitable to me, just given how natural selection works. It doesn't create a statue by adding marble in the right spots with intelligent design; it creates one by removing everything that's not part of the sculpture, via undesigned erosion guided only by the mindless, nonsentient principle that the only thing that doesn't die is the one thing that randomly was resistant to the killing mechanism. It's a negative-space kind of thing. The sculpture isn't there; what's there is simply everything that turned out not to be not the sculpture. Woah, man, don't explode my head, you may complain. So—the only thing that doesn't die is the one thing that randomly was resistant to the killing mechanism. And eventually, two such freaks will be the only ones standing, happening to be in proximity to each other by dumb luck. Then, just like the billions before them, they breed like flies. This is the same reason why even though I know I have read somewhere in the past 8 or 10 years that experts confidently assure us that pumping our rivers, lakes, and oceans full of triclosan will never come back to bite us in the ass, I find it too difficult to believe them. To have religious faith in their religion, basically. I know people say that antiseptics are different from antibiotics in this respect, because it's much "harder" to evolve resistance to them. But again, that's ignoring how natural selection works. To speak in cryptography metaphor, natural selection is not the cryptogram in the newspaper, solved by intelligent minds with idle time. It is a dumb-as-rocks brute-force attack with a supercomputer—or, actually, a mere big-enough botnet will do just as well. The shorter the gestation and maturation of the relevant creature—the more rapid the passing of generations—the more monkeys are banging away on countless typewriters, so to speak. The sooner that one freaky mutant will breed with the only other partner left standing, who happens to be his freaky mutant sister. And bam. Give it 3 more years, and the world is jam-packed with founder effect–amplified freaks, immune to your precious antiseptics that would "never" become ineffective. Your Titanic that could "never" sink. Bam. First time out of Belfast, baby! At the risk of trotting out a line that's a little trite by now, I couldn't help it; I had to say it: "Amateurs built the ark; professionals built the Titanic."
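If you want to feel the brute-force flavor of it in your hands, here's a toy simulation (a minimal sketch with made-up numbers; it illustrates differential survival plus dumb luck, and is not a real population-genetics model):

    # Toy illustration of selection as brute force: nothing designs anything;
    # the poison just kills, and whatever happens to survive breeds.
    # All numbers below are made up purely for illustration.
    import random

    POP_SIZE = 10_000          # carrying capacity of the mouse population
    START_RESISTANT = 0.001    # 1 in 1,000 starts with the lucky mutation
    KILL_SUSCEPTIBLE = 0.95    # poison kills 95% of ordinary mice each generation
    KILL_RESISTANT = 0.05      # but only 5% of the resistant freaks

    def one_generation(pop):
        # pop is a list of booleans: True means resistant
        survivors = [m for m in pop
                     if random.random() > (KILL_RESISTANT if m else KILL_SUSCEPTIBLE)]
        # survivors breed back up to carrying capacity; offspring inherit the
        # parent's trait (an asexual shortcut, to keep the toy model tiny)
        return [random.choice(survivors) for _ in range(POP_SIZE)]

    pop = [random.random() < START_RESISTANT for _ in range(POP_SIZE)]
    for gen in range(1, 11):
        pop = one_generation(pop)
        print(f"generation {gen:2d}: resistant fraction = {sum(pop) / len(pop):.3f}")

Run it and the resistant fraction goes from roughly nothing to roughly everything within a handful of generations, and the loop never once "solved" anything; it only killed and bred. That's the botnet, not the cryptogram.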

2011-07-28: Hey, here's some pretty fucked-up shit! Water on a moon causes rain on its planet

[edit]

International Business Times (2011-07-27), "Moon rains water on saturn in baffling phenomenon", International Business Times.

OK, now I really must get to bed ... Good night.

2011-07-28: Do normal humans need help from abnormal ones? Consider.

[edit]

2011-07-30: Don't have to settle for IBM Watson OR neural-network AI. Just give the self-same robot both at the same time, the latter querying the former.

[edit]

Of course this can't possibly be a novel thought. But just now was the first time it consciously occurred to *me*, anyway. I feel like I've now read a dozen articles touching on the question of "which architecture can really achieve strong AI? This one has pro X but con Y. Will this other one beat it? Is the first-mentioned architecture a technological blind alley?" The question, I just realized for the first time, is stupid. What's to stop one robot from having both available? From using one's output as the other's input, all in real time?
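Just to make the half-baked thought a hair more concrete, here's a minimal sketch of the wiring I mean; every name in it is hypothetical (stand-in stubs, not any real Watson API or neural net):

    # Hypothetical sketch: one agent wired to both a structured question-answering
    # engine and a learned controller, the latter querying the former in real time.
    # Both classes are stand-in stubs, not real systems.

    class KnowledgeEngine:
        """Stand-in for a Watson-style QA system: takes a question, returns an answer."""
        def answer(self, question):
            canned = {"what is in front of me?": "a closed door",
                      "how do doors open?": "turn the handle and push"}
            return canned.get(question.lower(), "unknown")

    class LearnedController:
        """Stand-in for the neural-network side: decides what to ask and what to do."""
        def next_question(self, goal):
            return "what is in front of me?" if goal == "explore" else "how do doors open?"
        def act(self, answer):
            return "plan motor actions based on: " + answer

    def control_loop(goal, steps=2):
        engine, controller = KnowledgeEngine(), LearnedController()
        for _ in range(steps):
            q = controller.next_question(goal)   # learned side formulates a query
            a = engine.answer(q)                 # symbolic side answers it
            print(controller.act(a))             # learned side acts on the answer
            goal = "open"                        # toy state change between steps

    control_loop("explore")

The point isn't the toy logic; it's that nothing in the wiring forces an either/or choice between the two architectures.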

Just another reason why meatbags like us had better merge with machines rather than try to compete against them.

2011-07-31: Remembering the truth about aviation: it's RECENT

[edit]

Here's a new collection that looks like a really cool read. I may not have time and money to get around to it for a while. But I'll put it on the queue-slash-wishlist.

2011-07-31: Relationship of trolling to lazy, ignorant drive-by tagging

[edit]

It's a sound principle to collect user feedback about Wikipedia article quality, and to allow tagging such as {{refimprove}}, {{expand}}, and so on. However, if you are a person whose "contributions" to Wikipedia lie mainly in tagging incomplete or underdeveloped articles, but doing nothing to help develop them, then you are more of a troll than a contributor. This is because we all already know that Wikipedia is incomplete and needs lots more development and reference citations. I swear, we should just take the bottom 80% of all articles and have a bot go through and slap {{refimprove}} and {{expand}} on all of them, just so dicks have their drive-by-tagging fun spoiled. "We already know, and we already tagged them all. If you don't like Wikipedia's quality level and you're not helping, or capable of helping, to work on improving it, then just go away, or be a passive reader. You're welcome to read if you choose to, but we don't need to hear from you until you have something to contribute that's actually useful."

2011-08-01: This is interesting because a year ago it was a cynical outsider suggestion, and now it's mainstream fact. The world changes.

[edit]

"Many so-called emerging countries, which have typically been charged higher interest rates because of their perceived risk, are now paying as little to borrow as developed nations, if not less." — Wirz, Matthieu; Phillips, Matt (2011-08-01), "Sea change in map of global risk: with crises shaking confidence in U.S. and Europe, investors turn to debt of emerging lands", The Wall Street Journal.

I specifically remember reading within the last 6 to 12 months where some cynical private citizen was saying in a post somewhere online that the global bond markets were 180° backwards in their logic [true], because the concepts that G7 countries are good credit risks [false], and that developing nations are poor ones [often now false], were antiquated fossils left over from an era lost to history [yeah, about true, I'd say]. And the fact that bond markets were still treating the concepts as truth [true] just went to show that You Can't Trust The Man, and the man on the street sees through the evil corporate-government empire, which is so evil that it can never be saved and must be overthrown, blah blah blah, etc, etc. [OK, that's where you started to lose me.] Here's the thing: Corporations and governments are run by humans. In countries with republics and halfway-decently free and fair elections, and a tendency for bad news to end up in the media *eventually* (usually several years too late), such as the G7 countries, you've already got the best basic framework for governance among humans that you're gonna get. From there it's just a matter of bending over backwards to call out and punish the perennially arising wrong-doings. You're not going to improve the world by violently overthrowing governments like those of the US, UK, western Europe, Japan, and others. Just keep on cynically pointing out the stupidities that you see—not just among the elected officials and corporate executives, but also among the greedy, semianonymous, amoral investors, many of whom are mere private citizens—and give it a few years for the pendulum to swing way too far into the ridiculous-wretched-disgusting-embarrassing-braindead-excess range. Keep pointing to the man behind the curtain and pointing out what a greedy drooling clown he is. Next thing you know, you're reading a boring, mainstream, ho-hum story in a paper like the Wall Street Journal, duly reporting on the emerging trend whereby bond markets have a sea change and begin turning their logic in the direction of actual reality. For the sad fact is, you see, this is about the most you can expect from the neurologically incompetent, morally/ethically/empathetically retarded species known as Homo sapiens. They're simply not capable of much more than this. The few who are are neutralized in their net effect by the neurologically challenged majority. Just be glad in the certainty that all shit will eventually hit a fan. I know it sucks that people had to suffer first. We'll keep trying to engineer ways to minimize that, increase transparency, etc.

As with everything in the universe, a balance of opposing forces is needed. When it comes to G7 countries, and probably quite a few non-G7 ones, you, as man-on-the-street, can't just roll over and take the abuse without complaint; but you also are misguided if you think that you see a landscape of pure evil and perfect goodness, where the one must overthrow the other. Life is a lot grayer and muddier than that. What one should aspire to is being a filter that takes murkier water and darker-gray hats and washes them into less-cloudy water and lighter-gray hats. Like an oyster, among countless others, in the oyster beds of bays. People talk about "the world being your oyster." But in this view, you must be the world's oyster. One of billions. Aggregate result, whole bays' and seas' worth of water being exchanged/filtered each year.

2011-08-10: Earworm group of the month

[edit]

Occasionally you rediscover an album that you've been aware of for years (whether 2 or 10) but never particularly cared about previously. Or at least *I* do. Do you?

This week my brain latched onto Black Gives Way to Blue and now it won't let go. I think this is my favorite AIC album now. I think it's the highest quality of any of them. I think "Check My Brain" is the weakest track, but at least it's got a hook and doesn't need skipping. Various other tracks really hit the nail on the head. "Private Hell", "Your Decision", "When the Sun Rose Again", "All Secrets Known", "Acid Bubble", ... hell, all of them, really, although some more than others.

When this happens (earworm group of the month) it's always interesting to wonder why one's brain is fixating on this particular music at this particular time. I don't think it's always as obvious as you'd expect it to be.

Back in late 2008 it was Toxicity for me. I had been aware of the singles since they were new. I even bought the thing around 2003 or so, and listened a few times, but then I went 5 or 6 years without feeling any desire to listen to it. Then, in late 2008, my brain freaking exploded. It was all I listened to for about 6 or 8 weeks. Kind of insane that I didn't get tired of hearing it so much.

In early 2009 I bought Mezmerize, and got hooked on that one for a month or two. Since then I haven't felt much desire to listen to either one except occasionally. But one thing that I did find is that "Lost in Hollywood" has some sort of interesting neurological effect on me. It calms me like a drug, for some reason. Literally like I swallowed some kind of anxiolytic pill or something. Very strange. It seems to have something to do with the tempo, melody/harmony, breath pattern encouraged by one's absent-minded subvocal singing along, and his voice timbre itself. They all just work together to totally short-circuit some of my brainwaves—fortunately, the anxious ones, not the rational-thinking ones.

Anyway.

No one plans to take the path that brings you lower
And here you stand before us all and say it's over
It's over

It seems you prophesized
All of this would end
Were you burned away
When the sun rose again?
Hate, long wearing thin
Negative, all you've been
Time to trade in never-befores
Selling out for the score
It seems you prophesized
All of this would end
Were you burned away
When the sun rose again?

And I always paid attention to all the lines you crossed
Forgive this imperfection it shows and know
I am the child that lives and cries in a corner
Dies in a corner
Unloved inside your mind
Hides in a corner
Dies in a corner
Unloved inside your mind
Intend obsolescence
Built into the system

Just another brick you toss
Stone the number one deceiver
Multiply the added cost
Easy to become a believer
Nowhere to buy in
Most of us hiding
Others are shining
You know when you find it
In your darkest hour, you strike gold
A thought clicks, not the be-all end-all
Just another lesson learned

2011-08-13: On laziness among some Wikipedians

[edit]

My comment at Wikipedia talk:Wikipedia Signpost/2011-08-08/News and notes, pasted here.

Regarding the reversion of newbies, I realize that it is a large, multivariate topic, but I know from experience that there is a tendency of laziness among plenty of experienced Wikipedians to just delete a newbie contribution rather than bother to fix the problems with what is clearly a good-faith contribution motivated by a valid content-creation or -improvement motive. One can counter that the experienced Wikipedians are too overworked to take on the added effort of bothering to fix rather than delete. I know that many people have validly discussed that aspect. But my argument is that it doesn't matter what the complication is—what matters is the end result. If we don't stop this laziness, we are gradually killing the project, by killing its community of volunteers. It will be a gradual death of natural attrition failing to be offset by natural growth. If you see a complication that is standing in the way tactically to a strategic goal, you either suck it up and deal with it, or you allow the strategic goal to be defeated. Look at this from a soldier's perspective. Specifically, a soldier who believes that he is fighting for the "right side", and that the war needs to be won. When a tactical obstacle gets in his strategic way, he doesn't cave in to laziness and say, "Well, I'm too lazy to ford that stream or scale that concrete wall, so I guess I'll just turn around and wander off in another direction." That's the way to lose the war, regardless of whether it's cosmically fair or unfair. If building a properly constructed, comprehensive, dynamic, powerful, positive-influence Wikipedia while swimming against the current of apathy and entropy is the war that we are choosing to fight, then we can't win it unless we overcome our own laziness and fatigue. — ¾-10 14:51, 13 August 2011 (UTC)

2011-10-09: How it is nowadays?

[edit]

I've been really busy in non-Wikipedia life these days and have barely had time just to catch up on my watchlist. But I'm bummed today by the realization that it feels lately like the only editors left on Wikipedia in recent months are people whose "contribution" is to delete things. Including in articles on topics that they know little to nothing about. How about *building* content, *adding* information, *augmenting* what someone else developed? But no, there's been a lot lately, it feels to me, of the "gee, I don't know much about nothin, but I'll see fit to delete what *someone else* contributed." — ¾-10 00:07, 10 October 2011 (UTC)

2011-10-24: Downshifting and upshifting

[edit]

Driving an underpowered vehicle was good practice for running not-exactly-small parts on fairly small machine tools (e.g., turning 3-inch diameters on a 1-hp machine). In both contexts, a common theme: one finds oneself downshifting and upshifting a lot. The pleasure of a machine with power to spare, whether it be truck, car, tractor, or machine tool, is that you don't care as much which gear you're in; it could be any of several—they all work. Now, shifting gears has its own appeal, too; it feels thrifty to put your equipment through its whole range of paces—to tax it through its whole range of capability. After all, good engineers know that if your application isn't fully taxing your equipment, then strictly speaking, you overpaid for too much equipment. But that theory has its practical limits. In practice I would rather drive a vehicle with some power to spare; it's easier, less annoying, more comfortable. Same with machine tools. It always feels great to take a part design that you've been fussing over and worrying about on a small machine tool, and make it on a larger one with ease. No chatter. No multiple light cuts because you couldn't take one deep one. Smooth and rigid.
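Incidentally, the arithmetic behind that 1-hp complaint is easy to sketch. Here's a minimal back-of-envelope version; the numbers (the roughly 1 hp per cubic inch per minute rule of thumb for mild steel, the 80% drive efficiency, the feeds and speeds) are illustrative assumptions, not gospel:

    # Back-of-envelope turning-power estimate. All numbers are illustrative assumptions.
    import math

    UNIT_HP = 1.0        # rough rule-of-thumb hp per cubic inch/min removed (mild steel)
    EFFICIENCY = 0.8     # assume ~80% of motor power actually reaches the cut

    def spindle_rpm(surface_fpm, diameter_in):
        return 12.0 * surface_fpm / (math.pi * diameter_in)

    def hp_required(depth_in, feed_ipr, surface_fpm):
        mrr = depth_in * feed_ipr * 12.0 * surface_fpm   # material removal rate, in^3/min
        return UNIT_HP * mrr / EFFICIENCY

    # Turning a 3-inch diameter at 100 sfm with a 0.010 in/rev feed:
    print(round(spindle_rpm(100, 3.0)))                  # about 127 rpm
    print(round(hp_required(0.100, 0.010, 100), 2))      # about 1.5 hp: too much for a 1-hp machine
    print(round(hp_required(0.040, 0.010, 100), 2))      # about 0.6 hp: hence several lighter passes

Same idea in the underpowered truck: when the power required nudges past the power available, you either slow down (downshift) or take a shallower bite.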

2011-11-15: A noncompetition agreement between humans and machines: Situationally appropriate, Yes-No? If yes, then what are its parameter IDs and their values? Some hints.

[edit]

New book just out, haven't had a chance to read it yet, and doubt I will get any chance in the next few months—busy (once again) doing what realistically ought to be the jobs of several people. (But then what's new in G7 countries? Old news. Still a pain in the ass, of course. Hyperproductivity pushed to the point of no longer being realistic, or strategically or tactically robust. But what can one do; it's logically the only way anymore to still get paid more than any non-G7 counterpart. And they think the U.S. real-estate-price bubble correction is already done, when the economic potential difference between the G7 and the rest, and the insulation that allowed it to exist, have fallen away.) Anyhow ...

The new book is "Race Against the Machine" by Erik Brynjolfsson and Andrew McAfee from the Massachusetts Institute of Technology. So far I've only had a chance to skim the reactions from a few people, namely Martin Ford at his Econfuture blog (2011-11-07 entry), and correspondent J.P. of The Economist's Babbage blog (2011-11-04 entry) (link to there courtesy of Econfuture), as well as some of the user comments at the latter (i.e., Babbage) post. One initial potential problem with the book's message that I preliminarily detect was also detected by a commenter who pointed out that the catchy phrase about competing not *against* machines but *with* them may be speciously problematic. I agree with the critique (pending reading and pondering that might change my mind) that there may be something hollow and possibly circular about it—it seems to ignore (1) the question of *how* to do that in the new, positive way that the authors suggest is possible (what are the tactical details of a system) and (2) the fact that, after all, that's already what's happening (in an old, negative way), and currently causing externality problems in the overall system. For after all, the people who currently do the best job of competing "with" the machines, as Brynjolfsson and McAfee put it, are forcing the competition-losers to compete "against" their (the winners') machines. The competition that is done "with" the machines, as currently done, leads directly to the competition that is done "against" the machines. The game-losers who did less well "with" the machines lose their jobs as the economic output is supplied by the machines; thus those people have now competed "against" machines. It seems like it may be a hollow or circular argument or something—at least that's how it seems to me up front, pending finding time to investigate and confirm this initial suspicion. I wouldn't even be venturing to type this fleeting thought tonight at all, except that I doubt I'll get time to follow up properly in coming months, and I want to get this half-formed thought out there in case it prompts anyone else to pick up where it (out of time-starved necessity) leaves off. Because I think this train of thought that I'm about to finish laying out may be important to share and to get out there for follow up by anyone who has time or inclination to extend it.

Here's the thing that I need to get out there tonight: Brynjolfsson and McAfee are absolutely correct that there is an element here, a theme, that the solution lies in humans somehow intentionally ceasing to compete against machines. But here's the part that they may have missed (I say prematurely before having read their book): It will involve a noncompetition agreement—*not* between humans themselves (who must continue to compete both with and against each other for the economy to stay healthy), but only between humans and machines. For that's exactly what the mirror-image new-market variant idea is about. It breaks the circuit in which humans all must compete against machines for the chance to earn a living (at all). But note, vitally, that it does *not* break the circuits in which humans (and teams of humans) must compete against *each other* for the chance to earn a living. Those circuits must remain hot. Without them we have nothing but dysfunction.

So the point here for tonight is that Brynjolfsson and McAfee are absolutely correct in one respect and may simply have an incomplete circuit in another respect. Which is *not* the same as to say that they're "wrong" in that respect—just that they may need to iterate to the next build to remove a bug or close a circuit. Which is no big deal, either as a general principle of life on earth or as a particular instance that needs revising. So they're not "wrong", they're just possibly incompletely correct currently, but that state could change. And we already know right now that they're already partially correct in an important way.

Then again, maybe this whole fleeting thought is off-base, because once I actually *read* the book, I'll know the whole story about their ideas, and I may find out that they are actually not missing the components that I suspected they were missing. Which is why I normally wouldn't even publish this draft until I had checked on that upshot before publishing. But in this case I just needed to bang out this typing session, because the idea, that engineering a noncompetition agreement between humans and machines (*not* between humans) is probably situationally appropriate, had already occurred to me independently back about a year ago, so I was pleased to see the theme echoed by others recently, and I just needed to bounce this initial follow-up (incomplete though it is) off the universe in case anyone else is in a position to catch this particular ball and run with it. Because I'll give you a hint: analysis and synthesis has so far pointed to the idea that the key to the kingdom is being fumbled with here, with this noncompetition-done-*right* idea (with the right parameter values in place to put limits on its potential to do harm via nasty side effects). It's just like was said earlier, and bears repeating: "The invisible hand by itself is an animal. It doesn't necessarily care who it bites, or how bad. But it's our animal, and it turns out that we can't live right without it. The trick is that we have to keep it under rein as our servant. We can't just cut it loose and let it make us its dinner. Wolf, or guide dog? Canine either way, but the devil is in the parameter values." Somewhere between lionizing and demonizing this dog there is the right degree of regulation—of calibration. Life is lived on slippery slopes. You think you'll bulldoze them all into leveled terraces if you can only just manage to whine loudly enough and often enough? The game was rigged before you got here, friend. The chips were long since carted off. Speaking of agreeing not to compete! Will you agree not to compete with—not to irrationally deny—the fact that life's not fair? Instead, regulate.

2011-11-17: Can't vs teach, and developing content versus deleting it

[edit]

The old criticism, albeit only partially true or fair, is that "those who can't, teach". It's a somewhat misleading criticism that is often trotted out and parroted as a defense mechanism by people who understand the world less than they like to believe that they do. By which I mean, it is often not fair to teachers, and it's often cited by idiots for ill-founded reasons. But to the extent that there *is* a grain of truth mixed in with all that exaggeration, I've realized what the Wikipedian corollary is. "Those who can't develop useful (or at least interesting or cognitively stimulating) content, delete others' content". In both cases, you see people with pedantic streaks who think that they're accomplishing something useful for the world by telling themselves that they know better than the ones actually doing, the ones actually developing; what you see in these cases is the ones who *can't* telling themselves that they are superior to the ones who *can*, and do. But fortunately no one's fooled except the fool himself. Just as most people can feel in their gut the truth that *some* teachers are only pedants who couldn't make it out there in the 'doing' sphere—that is, they aren't fooled by the teacher's spin on his own character—they aren't fooled by deletionists on Wikipedia, either. What do most people come here for? That is, what do most people (that is, readers of Wikipedia who aren't writers of it) come here (to Wikipedia) for? People come here for information, sometimes just the basics and other times also the interesting-if-academic. They *don't* come here to find out which censored, incomplete dregs are left after deletionists get done being pedantic and obtuse in their passing of judgment on the content that nondeletionists created.

I guess I can let it go and relax. All I'm doing here is railing against the incompetence of human neurology in its aggregate state. That's not necessary. Moreover, it's, of course, futile. We already know that many humans are incompetent, unpleasant pricks. We don't need to rehash it anew as if it's fresh news, or as if we're going to find an implementable solution to it if we sit here agitatin on it tonight for a few more hours. I get up early these days. Time for bed.

PS: Yes, time for bed, true, but let me just, real quick, summarize the themes:

  • Deleting is not editing, and editing is not deleting; deleting is only a subset of editing.
    • Deleting can be a part of editing, but you know what they say about people whose only tool is a hammer ... Everything looks like a nail to them. Regarding people whose only cognitive tool for editing is the deletion knife: they mistakenly believe that editing is all about deleting, that deleting is editing (like A=B or X=Y), and that because they do a lot of deleting, they are therefore smart editors, or good at editing.
  • Removing entire chunks of content because they're over your head is not copyediting, so please don't write edit summaries that say "copyediting" or "c/e" when you make these misguided changes. Instead, write your real reason for deleting, that is, "I'm deleting this out of ignorance." Ask engineers and surgeons if they appreciate their writing being "edited" by people outside the specialty who think that whatever's over their own head must be superfluous, unneeded, or wrong.
  • OK, now I'm back to stewing and venting. Once again, bedtime.

2011-12-24: On the incidence and prevalence of human understanding of various spheres of life

This one is extremely vague, preliminary, and half-baked, but here goes.

When you look at people who became unusually well-known (or even famous) in the capacity of leading manufacturing businesses (not just "managing" them, not just "staffing" them, but rather providing true leadership in all of its facets)—when you look at case-study examples of such people, it seems to me (preliminary hypothesis) that you see a theme, a set of characteristics:

  • Not only
    • Talent and facility with both the social and technical aspects of the business—that is, talent and facility with both the people side and the technique/equipment side
  • But also
    • A willingness and ability to step back and forth across human-mind-imposed social "barriers"

What I mean by the latter factor is this:

Employee side

In my career I have seen people with excellent technical skill, knowledge, and ability, on the employee side, who are probably smart enough, in terms of intelligence alone, to be managers and leaders. But some of them are way too hung up on securing a certain niche in the social matrix (or "pecking order"—that's a good word for it, although it sounds too flat/2D, whereas a social network of people in a corporation is more like a multidimensional pecking order). These are people who place limits around themselves because, as far as I can tell, behind a false front of bravado, cynicism, egoism, haughty disdain, and disgust for others, they secretly (or subconsciously) are exactly the opposite—way too hung up on what other people think about their position as a node in the social network. They won't venture too far into the territory of being a good leader, or a leader at all, because it opens them up to the risk of being exposed for being no better than anyone else. Whereas in their little niche—which they defend with all the preemptive viciousness of a dog defending a good hidey-hole from newcomers—they get to pretend, with plausible deniability, that they are smarter than everyone else. They get to tell themselves this, and not risk having to be proven wrong.

Meanwhile, on the

Employer/manager side

In my career I have seen people who were born with spoons more or less silver in their mouths—certainly a spoon of some metallic content, even if not pure silver—who came into a business either with a college® degree™ in hand, or as the son/grandson/nephew of someone, some big thing—as the Spanish put it long ago, an hidalgo (son of something)—and decided to fill what they perceived a manager's role to be, which includes, in their mind, a hefty dose of not stooping below one's station, which is surprisingly similar to the attitude of the skilled worker mentioned above, who also is very careful not to do anything that he believes is "stooping" from his "vaunted" (in his mind) position.

I don't really have time to finish developing this essay right now. But to cut prematurely to the hook, here goes: all the truly best leaders are those few people who are secure enough in "who they are" that they are willing to take the risk of stepping outside their little social network niche long enough to go out and do the tasks that a leader is needed to do. This means that when they are having a conversation with the janitor, they will pick up the mop themselves and discuss the fine points of mopping technique. (And if you don't think there are, or could be, any of those, then you are just another idiot who has never mopped floors in the course of their work.) And they will listen to the janitor's thoughts on such, and duly acknowledge the parts where he's right, while also suggesting the parts that he hasn't thought about yet.

The best leaders are renaissance men of a sort—decently talented at the technical aspects of the work (even if they can never be the very best at them), while also being talented at schmoozing those bigwigs who need schmoozing to the extent necessary (even if they never really enjoy it); who are not just willing to get dirty, but happy to jump down in the hole and crawl in the mud, as long as something constructive will come out of it. This is why sports coaches and military officers supply some of the best case examples of truly good leaders. They've been the one on the field playing the ball, or shouldering the rifle—they can discuss the technique with you anytime—but they also aren't afraid to take risks, such as putting themselves in harm's way (military officers) or risking "social network node downgrade" when the small-minded people who think pecking-order rank is the only thing that matters in life see them "stooping" to the janitor's station.

The best leaders are tough, kind, technically competent people who don't give a shit who sees them swinging a mop or a shovel, because they aren't insecure about their "rank" among humans, and (because) they know that such rank is mostly just a bunch of shit anyway, that only shit-throwing monkeys think is desperately important.

Many bad leaders are selfish pricks; but I'm sensing that most of the good leaders have a certain freedom from selfishness. They're willing to put their own social standing at risk in order to make the system of people and equipment do what needs to be done, which requires, sometimes, leading people by the noses (the less valiant ones), or at least nudging them gently in the needed direction (the more valiant ones), while simultaneously not disrespecting them—one must show them respect in order to get any respect from them in return; and unless one is an unusually good actor and liar, the only way to show them respect that won't backfire is if you can find it in yourself to have some actual respect for them. And only some people are "man enough" (or "woman enough") to be capable of doing that. All of the above requires, if not "selflessness"—I don't think there's any complete self-less-ness involved in most real-world cases—if not "selflessness", then at least "freedom from selfishness". Which requires (1) respect for others, which requires empathy; and (2) a certain amount of courage and risk-taking ability (for calculated, necessary risks, at least—whereas mere dare-devilry is just for fools).

The writing above did not do justice to this topic, but it's a good first draft to bang out when I haven't got time to spend hours refining it—I've got better things to do.

2012-01-09: A short-circuit of sorts. Shorting out while exploring logic, cutting out the power, cutting out the melted spots, getting back in the chair, trying again.

I've heard many people repeat an aphorism whose origin I don't know, and won't bother to google at the moment since it's irrelevant, but anyway: "The definition [or a definition] of insanity is doing the same thing over and over again and expecting a different result."

Sounds great, and I do in fact agree that it's a pretty good, pithy, accurate maxim. But it presents a real bitch of a corollary:

If one lives one's life as if one expects humans to treat each other properly or decently, with few enough exceptions that they qualify as exceptional, then one has thereby met the definition of insanity. Because living each day, week, month, and year among humans, and expecting them to act right or being surprised when they don't, is an instance of doing the same thing over and over again and expecting a different result. With the "thing being done" being the living of life on Earth among humans.

It's a bit of a short circuit, because if you think about it too hard, it seems to encourage you to give up on life. Why that strikes me as so aptly analogous to a short circuit is maybe not so clear outside my own head, so here's a stab at explaining. Electrical circuits are useful things, and they usually work as intended. Same thing with pondering logic and philosophy and trying to think of ways to make the world a better place. Useful activity. But if you're tinkering around with building one (whether a circuit, or a piece of logic-infused world-improving thought) and you accidentally short it out, it sort of pulls the plug on everything all at once. It takes away the meaning of having carefully calculated just exactly which diode or resistor to solder into location X (or just exactly which behaviors to encourage and rules to enforce), because if all the current's dumping to ground, all those other regions of circuitry suddenly are completely moot—they lose all significance because without any current reaching them, they may as well be nothing but dust, for all that they matter, that is, for all that they affect outcomes. If you're sitting around thinking of how to improve the world, or life within it, and you suddenly come to a conclusion that the world (full of people) can't be made right, no matter how hard anyone tries, well, it's like a short—it renders the whole circuit, and the whole circuit-building exercise, pointless.

I think the key to unshorting, and getting back into the soldering seat with diode bin at the ready, is to realize that the improvement effort is not about making the world perfect. It is, rather, merely about making the world suck less than it currently does. Traveling downward on the suckage gradient. Not to zero—which, we've established, is like a limit, in the mathematical sense of that term—a point possibly approached but definitely never reached. No, not to zero on the suckage scale, but rather, instead, merely lower on the scale, closer to zero. One of the interesting things about approaching limits is that there is always room to get closer, no matter how close you've already come. Which is interesting as a motivation for people interested in doing something worthwhile or meaningful in life. "Well, at least we'll never run out of opportunity to improve." Thus, as corollary, "we have the potential to never get bored, and to never run out of life-purpose, i.e., reason for living." Of course, it must also be acknowledged that another trait of "getting warm" in the approach toward a limit is that you dwell in the region beyond the point of diminishing returns. Which is to say, in equivalent terms, that the marginal cost of each quantum of improvement proves ever steeper than the last. But then, if you've gotten close enough, this may not bother you so much, because, well, for example, in the case of suckage reduction, once you've engineered and worked your way to a low level of suckage, you don't feel so injured by the steepness of price in attaining more, because, well, it's not like life isn't fairly good enough already. It's not like we're dyin' over here for even the slightest modicum of anti-suck. So we'll take whatever else we can get (with the getting requiring a lot of effort), but we won't sit around bitching that what we've already got isn't good enough. Because, let's face it, it ain't half bad compared to what else also might be.
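
To make the limit talk concrete, here is one small worked sketch (my own illustrative curve; the symbols S₀, h, and e, for starting suckage, "halving effort," and effort spent, are made up for the illustration, not anything established above):

$$ S(e) = S_0 \, 2^{-e/h}, \qquad \lim_{e \to \infty} S(e) = 0 $$

Every additional h units of effort halve whatever suckage remains, so there is always room to get closer to zero; but each successive halving buys a smaller absolute gain (S₀/2, then S₀/4, then S₀/8, ...), and the effort cost per unit of improvement, de/d(−S) = h/(S ln 2), grows without bound as S approaches zero. That is the ever-steeper marginal cost, in one formula.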

Now, once more with regard to specific cases, regarding then the house-training of humans—getting them to stop biting each other and shitting in each other's water dishes as well as on the carpet and so forth—please understand that I really don't think we've gotten that far yet. Which is a kick in the teeth, being that the corollary is that the world of today bears quite a heap of suckage. However, it is also an opportunity, if one tries speaking optimistically, because it means that there is still room for the picking of low-hanging fruit. Which has its own satisfaction. One easy reach and bam, this whole fuckin bushel fell down. Speaking of harvesting and bushels and other containers, what's that aphorism about shooting fish in a barrel? The nasty and malicious and unfair behaviors of humans are yet but fish in a fuckin barrel. Get your gun! The era when the prey is scarce and the hunt thus hard has yet to dawn! It will be OK living in that era, too, given what was hashed out above, but yet it's also true that this era can be worthwhile, too.

So let me just sew this up tonight by ending with the following: Please understand, when you open your mouth to bitch about something, that you live, as Biggie had it, surrounded by criminals. Perhaps, like the player rapper ("if robbery's a class then I'll pass it"), you yourself are one, too, simultaneously. Inmates running the asylum. So don't act too horribly indignant that the world contains suckage and that people are sometimes assholes. If you do, you're meeting the definition of insanity. Better to chill and take it one day at a time. Swim against the current, travel against the suckage gradient, travel to a lower point on it. Make one small positive difference and move on. World-improvement as membrane transport against concentration gradients. Or maybe as pumping of entropy to outside the system? Scary thought because externalities are known to come back to bite on the ass, a few decades or centuries after they were oozed out among the effluent; but maybe a universe as one total system can only increase entropy, never decrease it, in which case a bordered subsystem within it is the most that one could ever hope for, and hell, maybe it's enough ... spaceship Earth and shit ... I don't know, because my brain is fried for the night, and I'm all out of half-baked physics and chemistry metaphors for the night. Electric circuits, barrels of eels, electric no doubt, conductivity in and out, megaohms of resistance against your shit, resistin like a fuckin resistor / bet you didn't know I would rap at you tonight, sister / I'm out

2012-01-15: On pass-through

One of the concepts relevant in incorporation, at least in the U.S., and at least to the extent that I as a non-lawyer and non-accountant understand it, is pass-through, which is the state in which corporate profits pass through the corporate entity to end up in the hands of the owners, having only been taxed at the final level (the owner's tax return) rather than double-taxed (once at the corporation's tax return, and again at the owner's).
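
A toy example with made-up numbers (purely illustrative, not tax advice; the 21% and 24% rates and the $100,000 profit are assumptions of mine, not figures from anywhere above) makes the single-versus-double taxation difference concrete:

```python
# Toy comparison of double taxation vs. pass-through, using assumed rates.
corporate_rate = 0.21   # hypothetical corporate income tax rate
personal_rate = 0.24    # hypothetical owner's personal income tax rate
profit = 100_000.00     # pre-tax profit earned by the business

# Double taxation: taxed once at the entity level, then the remainder
# is taxed again on the owner's personal return.
double_taxed_net = profit * (1 - corporate_rate) * (1 - personal_rate)

# Pass-through: taxed only once, on the owner's personal return.
pass_through_net = profit * (1 - personal_rate)

print(f"double-taxed net:  {double_taxed_net:,.2f}")   # 60,040.00
print(f"pass-through net:  {pass_through_net:,.2f}")   # 76,000.00
```

Same pre-tax profit, but the owner keeps $76,000 instead of $60,040 under these assumed rates; that gap is the whole point of the pass-through treatment.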

In recent days I've been thinking about how value passes through certain passage points on its way to other places. The term "value stream" has a certain aptness, aside from any buzzword quality that it also brings along as baggage. For a century or two, capitalism has depended mostly on the one particular mechanism, in which governments collect value from corporations via taxation and then distribute it via government services to the whole gamut of public goods and services, from defense to infrastructure to law enforcement and regulatory enforcement to entitlement programs (retirement income, disability income, health care). Now we're entering the era in which alternate mechanisms will be explored (although many people don't realize that yet), such as possible futures in which each corporation will have some say in which public good or service the value is applied to (although not the authority to decline to pass the value on at all). This will scare a lot of people because they know the ridiculously abysmal track record that corporations have had thus far when it comes to looking out for anyone's benefit other than that of the wallets of executives and shareholders. But we have to consider that doing nothing doesn't look to be a good option at this point, and that maybe the DNA of corporations can be reengineered to some extent. After all, the one, old mechanism has its own abysmal track record of sorts, in the sense that if we try to extend it out into the future of extensive automation and aging populations (where many young or middle-aged adults can't find [traditionally defined labor] work, and many retirement-aged adults aren't able to work), government in this model becomes something rather confiscatory, broke-ass, uncreditworthy, overly powerful (because holding too many of the reins and purse strings of human life), and finally thus overly corrupt (because humans are a touch incompetent in aggregate)—or some combination of those. Heck, one can easily argue that we already arrived into that doorway in the past few years, and now we just have to decide whether to keep walking down that particular hallway, or to seek passages into other ones.

Returning to the theme of passing through. Value in a value stream conventionally (under today's convention) passes through companies and then through government to end up as public goods and services. The trouble with this is the extent to which the value "leaks" from this vessel, "sticks" to the walls of the pipes, "evaporates" through the holes and past the gaskets, and gets "diverted" into corrupt side channels ending in the blind cul-de-sac of somebody's own pocket. Meanwhile, what are the hopes and dangers of a value stream in which value passes through companies straight into the public goods and services, without an intermediate stop in the bowels of government? The catch is that the companies would be required to employ people. But that's not so bad. Government is already familiar with the notion of carrying people on a payroll for employment's own sake. But the big difference is that governments end up doing it in some of the most inefficient, least useful ways. Because there's a lack of competition, usually. Whereas organizations competing against each other are likelier to keep things useful and efficient. Not in any ideologically pure way, mind you. Merely in the soiled, simpleton, if-you-aren't-the-best-or-the-cheapest-then-we'll-just-switch-to-your-competitor way. Reflecting the truly brutal selfishness of the customers. But yet still serving a higher function nonetheless.

In the era where technology makes possible countless Little Brothers, maybe we're better off playing them off against each other (given that they're all irredeemably morally incompetent) rather than allowing one Big (Bad) Brother to arise to fill the distribution void that they may be capable of filling. The model of a thousand thousand little interests competing against each other makes for a very complex and cacophonic environment, to be sure; but one thing that it does do, as well, is render inoperable an alternate model in which a few Big, Bad interests enslave everyone else. And in the many-Little-Brothers model, you've always got public ridicule to socially censure the cases that arise where egregious abuses crop up. In a world where expecting everyone to behave nicely is only just insanity, I guess you've got a choice: extensive complexity or extensive corruption. I think most people, in the absence of any playing-nicely option, might prefer the complexity in the end. Therein, at least the average peon has a shot at not being screwed. Maybe not a guarantee, but at least a shot. For those weary of being screwed, that has a certain appeal.

2013-04-06: Writing for people and robots

This has occurred to me before, and I've probably talked about it before elsewhere on my "thoughts" and "ponderings" subpages, but today it again reasserted itself at the fore of my thinking. It is the realization that as I build Wikipedia's encyclopedic coverage, I am doing it for the good of people (as a high-level goal), but also in fact, on the operational level below that, the audience I am writing for is both people and machines. Today it is perhaps something like 95% for audience=people and 5% for audience=machines, but in future decades that ratio is going to shift, and perhaps 50% or more (perhaps 80%?) of the audience for the content I built (wrote/edited/marked up) is going to be machines—machines who are told by people, as it were, "Go read Wikipedia (and Google Books, and the Library of Congress) and then come back and perform tasks for me, along the lines of expert system tasks and augmented reality tasks." And robots will read the content I helped create, and then they will use the knowledge/information therein to produce economic value, and thus to raise living standards for humans. Of course, we have to hope, and work toward the goal, that this process will not just be used for downside purposes—such as all-guns-no-butter, or to deepen the rich/poor divide, or to help machines to conquer/enslave humans, or other nefarious purposes. But one must realize that the technology (near-AI, weak AI, and later strong AI) is coming either way, regardless. So now the task is to make sure it gets used for good (there is no task of preventing the technology from happening, as far as I can see in any real-world model).
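
For what it's worth, the machine audience already has a front door today. Here is a minimal sketch (my own illustration of the general idea, not anything Wikipedia or I prescribe) of a program "reading" an article's introduction through the real MediaWiki Action API; the article title passed in is just an arbitrary example:

```python
# Minimal sketch: a machine "reads" the plain-text introduction of an
# English Wikipedia article via the MediaWiki Action API (TextExtracts).
import requests

def read_wikipedia_intro(title: str) -> str:
    """Return the plain-text intro section of the given article."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extracts",
            "exintro": 1,        # introduction only
            "explaintext": 1,    # plain text, no HTML
            "titles": title,
            "format": "json",
            "formatversion": 2,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["query"]["pages"][0].get("extract", "")

if __name__ == "__main__":
    # Arbitrary example title; any article works the same way.
    print(read_wikipedia_intro("Encyclopedia")[:300])
```

Scale that loop up across millions of titles and you have, in embryo, the "go read Wikipedia, then come back and perform tasks" pattern described above.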

2013-05-09: Playing the wrong game; changing to a new one: or, if you want people to work, OK, but then there have to be jobs

Funny, this link on Google News, MarketWatch > Commentary > Capitalism is killing our morals, our future: In a Market Society, everything is for sale, by Paul B. Farrell, April 29, 2013, was just so strange for something under the Wall Street Journal and MarketWatch names. I had to click just to see what the angle was. Well, I still don't know for sure, and I didn't read the whole thing (I stopped after a few pages). But it plugged into something I've been thinking about in recent weeks. I have some people in my life who are permanently dyed from the Reagan/Thatcher Kool-Aid, and it seems clear that they will always keep laissez-faire in their hearts as their religion till the day they die. Because it is just that, for them—their religion, their faith, their meaning and purpose and certainty in life, the thing they can believe in with blind faith for all eternity. Now, I'm not against markets; I think they're an integral part of the solution—a quite necessary part, without which nothing. But I still am under the operative hypothesis that people such as these friends have mistaken a mere algorithm for God—mistaken a mere iteration, a mere particular build of a particular app, for God. The thing I've been thinking about lately is how deeply they hate The Dole—whether the real and actual dole as it exists in my country, or their vision of how they imagine it works—but yet they also aren't dealing adequately with the reality of underemployment—the lack of non-poverty-wage jobs that non-rocket-scientists are capable of doing. At any rate, there are visions of losers lining up for free and undeserved handouts while the real hardworking folk work for their living. Now, this contains a grain of truth—I actually agree, up-to-a-point and regarding the-thing-in-and-of-itself, and all that. I've seen some true overly entitled loser jerks in my life, and I hate to think of them getting any handouts. However, jobs in manufacturing—which is where regular schmoes could traditionally find work for many decades—are clearly moving in a certain direction in this era—fewer, and more demanding of knowledge, aptitude, work ethic, and experience. I'm not alone in noting this—see for example Time, Careers & Workplace > The Curious Capitalist > How 'Made in the USA' is Making a Comeback, by Rana Foroohar, April 11, 2013. The problem is that at the same time that my religious-enthusiast friends are insisting that regular schmoes should work jobs for their living, they're also enthusiastically embracing this paradigm where a regular schmo is no longer good enough to hold today's jobs. The conventional wisdom today, which hopefully will (belatedly) change in coming years, is that education and worker training are the solution to this challenge. If jobs are becoming scarcer and more demanding of skills and aptitudes, then the answer is to embrace ever more education and ever more training (and ever more re-training). One can be forgiven for thinking that way, but it's childish logic in the end. The solution to a process parameter trending out onto a ledge is—what, to learn how to walk ledges better? To learn to live on ledges? (and eventually to take crash-falls off of them?). Well, yes, in the short term, boning up on ledge-walking is a thing to get cracking on. But to grown-ups it should be painfully obvious that the long-term solution is to stop playing that game, and to start playing a new hand, essentially the same but slightly different in the details. 
I'm short of time to write—it's already past midnight—so I'll just spit out my point: the wrong game is being played here. It's like Puritans chucking women in the water, and if they sink and drown, they were goodly people, and if they float they're a witch. "Regular people should work for their living—if they don't they're evil socialist usurpers—but yet we have no non-poverty jobs to offer to regular people." It's a highly dysfunctional game. It's illogical. It's a definition of a system that can't and won't work. "Regular schmoes (people who aren't geniuses, and weren't the top of their class) should work for their living or they're evil scum", but also, at the same time, "there are no longer any jobs for anyone except genius/top-of-the-class types". As I said, it's already past midnight, and I lack time to develop the path toward my point, so I'll just spit it out: the way out of this mess, this illogic, seems to flow through this. It involves a world where regular schmoes (non-geniuses) can have meaningful, middle-class (non-poverty-wage) work. You have to create some system where regular people (that is, not-good-enough people—and for the proof of the regular=not-good-enough equivalence, see here and here)—where regular, not-good-enough people can actually have non-poverty jobs. But you can't do it with The Dole. There has to be true productivity infused throughout the system, even if machines do all the power-lifting in that regard. Quite different from productivity being absent from the system (which can't work). This is where the new-market engineering comes in. A system where you can pay people to perform services along the lines of (what is today) nursing aid/hospital volunteer, but actually afford to pay them good wages for it—not just poverty wages, and not just volunteer work (wageless). These are the things that people could someday do, the litmus test of "you have to be anti-lazy enough to do something if you want a living—you can't have it handed to you for free." OK, fine—sounds great, have to work, no free rides or bumming—but then there has to be a job available (a non-poverty-level job). And in the end, yes, you must have markets to get discipline. Yes, you need markets, and capitalism. But you simply need the next iteration. The next build of the software. That's all. It's not rocket science. And—you will note—it's not religion, either. It's just logic.

  • 2015-02-21: Some links along the lines of "it's not just me", which is to say, "you don't have to take my word for it":
    • Regarding "There has to be true productivity infused throughout the system, even if machines do all the power-lifting in that regard", there is this.
    • Regarding how you share (or don't share) the value or monetary gains from the productivity gains, there is this.
      • I also remember some spark or echo of the same theme in The Wal-Mart Effect, although without rereading the whole book, I can't find it to quote (index, skim, and search didn't dig it up).
    • Regarding "mistaken a mere algorithm for God", that is, "mistaken a mere iteration, a mere particular build of a particular app, for God", there is this.

2013-09-13: The cleverness of the Gore-Tex name works on several levels

The fact that the name Gore-Tex is phonologically similar to the word cortex is one of those observations that in hindsight makes you wonder why you didn't realize it earlier, because it seems so obvious, like it was pretty much begging to be noticed. And yet I didn't realize it earlier, and I'd bet that >90% of other people don't realize it, either. I think the reason for the lack of recognition is simply that "cortex" doesn't make it out of most people's reading vocabulary and into their speaking vocabulary; in fact, it's one of those words that most people recognize as familiar (having seen it before) but probably couldn't define unprompted.

The first level of cleverness going on with the trade name is that Gore-Tex is an excellent fabric with which to form a cortex around oneself, for the purpose of weather-resistance. The second is that /g/ is but the voiced version of /k/, so Gore-Tex doesn't merely rhyme with cortex (which would be clever enough, trade-name-coining-wise, on a garden-variety level of cleverness) but is almost the same word—as close as it can get to being completely homophonous without being so; in fact, so close that some ESL speakers, those with certain native tongues, would have trouble differentiating them. The third level of cleverness is that the name manages to incorporate both the eponyms (Wilbert L. Gore, Robert W. Gore, et al.) and a syllable that connotes both textiles and technology, which are both highly appropriate connotations for the context (technical textiles). Although the first application of Gore-Tex, according to the W. L. Gore and Associates article, was as a cable insulator rather than in textiles, textiles are the application that most people are familiar with—specifically, high-tech textiles.

2014-03-08: Release 2014-03-08

Prose normally,

but sometimes poetry because one can;

What's that they say? Launch fast and iterate?

Release early, release often?

Sometimes the point of releasing is just to relieve the pain—

to find release, and to relieve pain points—

although along the way, over time,

one may relive the painful points in time,

like, for example, "I just, you know—"

"—No, I don't know [and that's the point]."


You see, they found me sitting on the dock of the bay (as it were),

stopping time (as it were);

Dude, (I might as well have said,) I can watch paint dry (for cryin out loud)—

You don't even know

("—No, I don't know")

Well, OK, no—yeah, I know—and that's OK;

for cryin out loud (I might as well have said), and well I might,

as one might say;

but that was it for that day

and for today.

2014-06-03: June again

This one materialized on 2014-06-03 but wasn't wholly formed.

On 2014-08-21, some recalcitrant pieces fell into place.

On 2014-08-26, I asked myself whether it was done or not, and I decided that it was as done as the thing need be. (Is that anything like when "it struck me that I was as wide awake as a man need be"? Eh, ha—if you will excuse.) The point, then, is that spending time—expending time—is nice, but sometimes, especially if it's eventually, well, it's like I said—you may need to power through.

I took a map and the early morning sun on a trip in my mind, with the hope to make a real one soon,

and I remember it as June—

it wasn't like some years later when I could steer the wheel in my own hand,

but the feeling was of a piece, and anyway, I've sketched the theme.

Like other episodes that would follow, it was all about sun-shiny clock-stopping, I now think—

things like stopping for some food.

Not that such moments never happen, because they do—

but they're so short and so few.


I took a nap in the early morning sun with a book on my face and the hope to connect soon,

and I remember it as June—

it wasn't like the earlier years when I thought that the world was as planned

but the feeling was of a piece, though of course I'd learned some things.

And again—it was June; and now again it was June—

and I could still make an apparent hour of an afternoon,

like a ghost stuck in a moment—a blessing and a curse, a bane and a boon.

Not that those moments must never happen, but when they do,

one may need to power through.

2015-08-03: Illusions and mandates of consciousness

Yin: Four illusions of consciousness

Under the illusion of cognizance, we believe that we are adequately aware of what exists, what matters, and what is irrelevant.

Under the illusion of comprehension, we believe that we adequately understand those things, their interactions, and their significance.

Under the illusion of competence, we believe that we are adequately competent to judge, to act, and to critique.

Under the illusion of control, we believe that we can adequately manage choices, risks, and outcomes.

All of this is true. And the more we dispel the illusions, the more accurate is our consciousness. But just as much, by the same token, the more deflated is the childish part of our enthusiasm. The part that once had been overly confident. Circumspection kills much of the bliss of ignorance and much of the proselytizing fire in the belly. And conversely, the observation of those things in others who are old enough to know better should often tell us that they most likely don't, even if they ought to.

The dispelling belongs to the essence of education.

Yang: Two mandates of consciousness

All of this is true, and yet nonetheless, life goes on. This world is real (as far as we can know), and we live in it (as long as we can manage to). A lesson of the fates of the revolutions of February 1917 and November 1918 (the one fate sealed within 8 months, the other within 15 years, but both fatal in the end) is that even if you aren't stupid or ignorant enough to imagine that you have all the answers, you still need to know that some answers, ones grounded in ethics, are better than others, and that they must be adequately defended against facile ones that get their facileness, both the ease and its invalidity, from a lack of ethical soundness or from a specious counterfeit of it. These mandates conscript you.

The conscription belongs to the essence of dharma.

Not that I know as an expert knows—just that I'm finding out as a conscript finds out.


2019-01-25: Some things that've snapped

You thought you'd learned to milk things just a bit when you could take a walk before punching out—

Just a quick inspection, you understand—

It's in the company's interest, after all, you know


You know you've snapped some things when you can take a walk after punching out

just to prove that you can

just to prove to yourself that you can,

and that it doesn't matter;

Seven days a week, what's another twenty minutes?

It doesn't matter, and that's why you can do it

and feel just as fine about it as not.

You know you've snapped some things—

some chains—

You know you've snapped—

just some, some things.

What was I about?

It doesn't matter, and that's why you can do it

Seven days a week, what's another twenty minutes?

Some things that've snapped—

that you snapped—

Something's snapped—

You know it—

But it doesn't matter, and that's why you can do it—

You know it


Out the farm lane, under the trees along the way, the lane under the autumn leaves,

to the far back, the setting sun and the tall grass—

to the trees, then turn back—

You know it

a pause



crack



Or snap, and turn back