Saturday, September 22, 2012


Doing Anti-Social Engineering: Part Two - Typification of Hyper-Manipulation

Recommended Reading: http://www.anti-socialengineering.com/2012/07/the-logic-of-philosophy-generator.html
 

It goes without saying that, short of becoming experts on everything, we must take the word of others in many, many cases. This brand of social engineering, a sort of universal or general movement that courses through a society, is as old as society itself. It is found in things like jurisprudence or morality. Accepted, instilled ideas, such as that “murder had best be judged upon,” or that “faith must be directed to a particular set of options,” are rules of old. It is certainly not the case that these considerations are less powerful or dangerous than what we fear from modern methods of social engineering. It is just that these methods are blatant. They insist upon themselves: “You will do this. You will act thus.” Some types of social engineering, regardless of source, simply tell you what to do or not do, what to think or not think. “Law A exists to control result B.” These kinds of obvious controls are widespread throughout society, right down to your family organization. They are, for the most part, a necessary component of any healthy social organization. We shall call this type Transparent Social Engineering, because we are told what the “rules” are and they are what they are. (They are not a secret, nor a lie, nor a trick.)


This does not mean that there aren't hidden motivations behind what appears to be transparent social engineering, regardless of its age, type or dispersal methods. The example of human-caused climate change, although somewhat controversial, is only controversial because of the distance between the empirical science of it and the layman. Consider the Catholic prohibition of birth control and the “suggestion” that people go forth, be fruitful and multiply. The prohibition is subject to punishment, the suggestion is not, although it is implied that being “fruitful” will be rewarded. The suggestion, taken in context, could easily be read as the idea that our existence presents both a need and an ability to multiply. The prohibition, disguised as the will of God, implicating “his” desires and eliminating the option by creating a rule, carries with it an intention. Some may argue that this intention is simply in keeping with nature, but intention is only half of social engineering; the other half is made up of results. For if there are no results, the engineering will simply change to whatever does produce results. (These engineers don't give up easily.) The result of contraception being banned for Catholics is, of course, a whole lot more Catholics. I am not commenting on the “rightness” or “wrongness” of this intention, nor am I even saying that this product is the intention of the rule. This is just the result of the engineering, and no one can deny that. I also feel it safe to assume that the results, if placed in obvious intentional statements, would put the proof in the pudding, so to speak. However, nowhere in the Bible does it say, “We need to have a bunch of Christian babies so that we can outnumber those 'other' babies.” This is Semi-Transparent Social Engineering. We are clear on the engineering, not on the “why.” We must make a decision, based on a decision. We are somewhere in between assignee's prerogative and hyper-manipulation.
 

Think back to when you were a little child, learning the lessons we all do. Perhaps you had a parent or teacher who didn't always bother to explain the “why” to you. They might just say, “Because I said so.” (An appeal to authority.) This is a short-term solution. It's short not because the statement is brief, but because you are present. The child can easily ask, “What do you mean? Why can't I have another cookie?” (Or, if he or she is really clever, “Why do you say so?”) What does “I say so” mean? It must mean, “My saying so is enough to answer your 'why,' because there really is no answer, or I don't want you to know it.” But what if your parent lied and said, “The cookies are all gone”? Here we would have a case where the parent said something, rather than nothing, yet it is just as valueless to you as “because I said so.” You still have no cookie. The only difference is that now we're not bugging Mom for another cookie; we believe her when she says there are no more, and we understand what “no more” means. Remember, any excuse and a good excuse only differ in results by two percent. This is a small example of Opaque Social Engineering. (Opaque is the opposite of transparent.) Here we enter the domain of the strict social norm, the place where we don't know what we're basing our paradigm building on. It could be a lie; it could be nothing, unknown or arbitrary. Let's not forget: social engineering is goal oriented.
 

The “transparency” of social engineering is how I describe the difference between persuasion and manipulation. When one is persuaded, one has been given an intention and can make a choice. However, if one is misinformed and makes a choice based on that information, thereby making an inauthentic decision, or is unaware of the choice being made, then one has been manipulated. If what is being manipulated (a paradigm, or an association of paradigms) is an idea that was inherent in the first place, hyper-manipulation has taken place. The engineer, in the case of hyper-manipulation, is working a programme of a programme.


Doing Anti-Social Engineering: Part One - Seek and ye shall find.

Anti-Social Engineering means different things to different people, depending upon their intention. Anti-Social Engineering, which I may periodically abbreviate to ASE, has three possible definitions and specifically not four. I dispute the idea of ASE being the eking out of information by way of trickery, such as when a hacker tries to get your password, or a salesman tries to get the boss's extension from his secretary via conversation. To me this is a silly definition on every possible level. Another, more plausible definition could be: ASE is the phenomenon of being engineered to be anti-social. It could even be argued that this is currently manifest in most human cultures, perhaps with internet anonymity at one end and the continued exemplification of violent desire at the other.
 
The key definition and, I would argue, the right interpretation, is that ASE is a response to social engineering. Social engineering is a forced intentional stance toward an idea. Anti-social engineering develops a response to the phenomenon. Anti-social engineering isn't just an idea, it's an action. Like philosophy, we can “do” anti-social engineering. By developing the habit of doing ASE we become able to do the best possible choosing, whatever the intention might be. This is a very reasonable goal: just to have the opportunity to do our best possible thinking. Isn't it likely that being able to do so would lead to the best possible living?



Before one can socially engineer anything, one must know about who is being engineered. In modern times this information has become quite easily discovered through polling on attitudes and trends. Broadcast media makes easy the large-scale dissemination of any idea you might care to spread. It is possible for someone or some group to be convinced of some idea, or to have their ideas changed about any given subject, without knowing what their original ideas were, or even whether they had any on that subject. If we wish to influence people toward a specific goal, we have to find a way to make that goal desirable to the target audience. By engaging in this type of engineering, one confronts or confirms the subjects' ideas directly, with or without the subjects' knowledge. Herein lie the ultimate possible dangers that hidden social engineering represents:

    1. You don't know you're being programmed. If this is the case, you may never know. You are a robot with push buttons. You may as well not be real.
    2. You don't know what the programme is. If this is the case, even if you are aware that someone is attempting to change your mind, you are unable to determine to what end.
    3. You don't know if the programme worked. If this is the case, even if you know #1 and/or #2, you don't notice the effected change.
    4. You don't know who the programmer is. This may or may not matter at all, but knowing could prove to be a useful shortcut.
    5. You don't know the programmer's intention, even if you're told what it is.
    6. You don't know if the intention is leading you to a byproduct of itself. To you it's a crazy story about a gaggle of flappers lighting up in protest during a high profile parade. To them it's doubling their potential cigarette market in the 1920s.
    7. You don't know if the intention or the programme is worthy of “running” or not.
    8. You don't know if you're spreading the programme.

This list could go on, but these are some key reasons to contemplate the hidden social engineering in your life. The engineering has an intention, but the dangers bred by ignorance can backfire: it is not just the engineered who might suffer from faulty social engineering. Suppose, for instance, that the engineering is meant to be global, species-wide. You may or may not agree that humans are the major contributing factor in climate change. You might not believe in the existence or evidence of global warming at all. If we fail to take your paradigms on the subject(s) into consideration before applying any engineering, we greatly limit our chances of success and waste our resources on ineffectual targeting. What this means is: “If we don't know what we're thinking about, how can we know what we think about what we're thinking about?”