Against Interpretation 2.1

Notes written during the compilation of the first sample of an Open Intelligence Mini-Report.

Compiling

I am currently compiling the Clips on Amplify (www.openintelligence.amplify.com) classified as Economy / Risks into a comparative analysis of recorded “risks” and recorded “opportunities”, where the self-signifying numbers and juxtapositions tell their own story. It is only a tiny sample, but it could still encapsulate our coverage in a significant way. The “disintermediation” is that the story is built directly out of the juxtapositions of the pertinent statements and the stats. After that, of course, one can interpret as much as one likes.

The Economy / Risks Mini-Report is based on a *very small* compilation of the statements in our Amplify Clips classified as pertinent to the question, e.g. in this case, Economy / Regulators – Risks. At this point, the goal is still to compile the pertinent statements, not to interpret their meaning or significance. The findings are already of some value (they surprised me), even though they are based on such a tiny sampling. I look forward to the time when we have much bigger samples coming from many clipper-classifiers (cc’s).

For example, one of the statements in the forthcoming Mini-Report is “Previous recessions were caused in the business sector. This one was caused by consumers which will make it harder to correct.” The concept or question of Risk pointed towards that explanation of the genesis of the recession and an associated Risk. The material is saying that. It is simply one of the Economic Risks that people are talking about. No more, no less. As such, it is neither right nor wrong, but is an indicator of what concerns the sources and a characterisation of the nature of that concern.

Prejudice and the need to be right have to be kept to a minimum at this early stage, to help in remaining open to what the material is signifying. There are no points to prove right or wrong, only indicators of what is significant. People can then do with them what they like afterwards, but at this stage the duty of the compiler is to quote the pertinent statement. The job of the compiler is simply to cite what is being said in a way that makes narrative sense. The objective is to enable the content to express itself as directly as possible in the channel opened by the question inherent in the classification. The story is what is being said under Risks. Claiming the statement as an interpretation by the compiler is tantamount to plagiarism in this context. Indeed, far too much academic interpretation commits this kind of plagiarism, i.e. claiming others’ ideas (if not their direct words) as those of the interpreter.

The Clipper-Classifiers have no idea in advance what the compiled synthesis will be. Each item is treated as an indicator, not as a content receptacle. Of course, somebody with devious intent might purposefully skew his classification so that at some point in the future he could control the compiled narrative. “We don’t do that, honest Guv!” Dare I say Clipper-Classifiers (cc’s) must be professional?

This “voice problem” is one of the greatest pitfalls in the game of self-signification. Can a compiler synthesise what is being said under a topic without interpreting? Nobody can claim to be totally objective when classifying, but people can try to be as disinterested as possible. With a social network of people classifying, the protection against individual bias increases. Indeed, science would be impossible without degrees of abstraction, classification and operating directly on experimental data. This is exactly what compilers should be doing.

Inferring

The next stage in the process is to use the compilations, and the counts of how statements are classified, as indicators pointing to significant topics (e.g. government incompetence) or significant relationships between topics (e.g. consumer spending and oil demand). It then becomes possible to monitor changes in the counts, enabling analysts to make inferences about trends and turning points in discourse, often before they become apparent to the participants in the discourse. Note the difference between inference, which uses indicators to point to another level of rational, reflective consciousness (beyond right or wrong), and interpretation, which looks backwards to judge whether the content is accurate, to understand it as right or wrong, or simply to put it into other words.
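The counting-and-monitoring step described above can be sketched in a few lines of Python. Everything here is hypothetical: the clip tuples, the period labels, and the `shift` indicator are illustrative assumptions, not the actual Amplify data model. The point is only to show the mechanics of tallying classifications per period and watching how a topic’s share of the discourse moves.

```python
from collections import Counter

# Hypothetical clips: (period, classification) pairs, i.e. statements
# as classified by clipper-classifiers under topics such as "Economy / Risks".
clips = [
    ("2009-Q1", "Economy / Risks"),
    ("2009-Q1", "Economy / Risks"),
    ("2009-Q1", "Economy / Opportunities"),
    ("2009-Q2", "Economy / Risks"),
    ("2009-Q2", "Economy / Opportunities"),
    ("2009-Q2", "Economy / Opportunities"),
]

def counts_by_period(clips):
    """Tally how many statements fall under each classification, per period."""
    tally = {}
    for period, topic in clips:
        tally.setdefault(period, Counter())[topic] += 1
    return tally

def shift(tally, topic, earlier, later):
    """Change in a topic's share of statements between two periods:
    a crude indicator of a trend or turning point in the discourse."""
    def share(period):
        total = sum(tally[period].values())
        return tally[period][topic] / total if total else 0.0
    return share(later) - share(earlier)

tally = counts_by_period(clips)
# Risks fall from 2/3 of Q1 statements to 1/3 of Q2 statements.
print(shift(tally, "Economy / Risks", "2009-Q1", "2009-Q2"))
```

In a real deployment the clips would come from a database of classified statements, and the indicator could be anything from a simple share to a moving average; the design choice that matters is that the analyst operates on counts of classifications, never on reinterpreted content.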

In the currently dominant content-centred universe, compilations can also serve as high-quality summaries (apparent interpretations) of content, and have value as a useful form of evidence in the conventional judgemental debates of academic discourse.

Nevertheless, when we get enough material (and statistics) together, their most potent purpose and highest value will be to use them as evidence for making useful inferences about changes – trends, turning points, emergence, and even the future.

More interesting than debating whether the content of articles is right or wrong is using this perspective as a way of looking at how groups of people are thinking, and, more “significantly”, how that thinking changes over time.

Indeed, I would say that the process which I am advocating is a way of escaping “entrained” ideologies. The attempt to be “disinterested” during the classification process (awareness of the ‘voice problem’ helps here) is simply to enable better “critical responsibility” at a later stage in the process. It could be argued that the classification channel is itself a kind of “entrainment”. The response is that it opens up a direct window on events and ideas which did not exist before … indeed, as many windows as there are classifications and the questions which they ask.

How the sources are classified is the primary data from which it is possible to infer significance. The findings are a different kind of knowledge from the right / wrong knowledge people are used to. The process which brings it into being is open. The classification simply enables questions to be asked of the sources in advance.

If the compilation process has worked properly, the inquirer is not interacting with the analysts or their opinions, but with what is being said and its character, opening up new kinds of questions, such as why it is being said that way. Once enough time has passed, it becomes possible to compare “what is being said” during different periods; then we are in a position to “sense” turning points and trends in the data, and it all becomes much more interesting…

And if statements are found which cannot be classified, it indicates that new questions must be formulated and asked.
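That signal can be made concrete with a minimal sketch. The taxonomy, the keyword matcher, and the example statements below are all invented for illustration; in practice the clipper-classifiers, not a string match, do the classifying. The sketch only shows the shape of the feedback loop: anything the current questions cannot absorb is surfaced as a prompt to formulate a new question.

```python
# Hypothetical taxonomy of classification "questions" currently in use.
TAXONOMY = {"Economy / Risks", "Economy / Opportunities"}

def classify(statement, taxonomy):
    """Placeholder matcher: returns the first topic whose keyword appears
    in the statement, or None when no existing question fits."""
    for topic in taxonomy:
        # Derive a crude keyword from the topic label, e.g. "risk".
        keyword = topic.split("/")[-1].strip().lower().rstrip("s")
        if keyword in statement.lower():
            return topic
    return None  # unclassifiable: signals that a new question is needed

statements = [
    "Consumer debt is the biggest risk to recovery.",
    "New export opportunities are opening in Asia.",
    "Trust in institutions is collapsing.",  # fits no current question
]

# Statements the present taxonomy cannot absorb.
unclassified = [s for s in statements if classify(s, TAXONOMY) is None]
print(unclassified)
```

The residue in `unclassified` is exactly the material that, on this view, should drive the formulation of new classifications rather than be forced into existing ones.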

Of course, it is impossible to have this meta-level of direct enquiry into information flows without the underlying content. At the moment, the problem is that there is far too much content and far too little inquiry into its significance. Open Intelligence is designed to help redress the balance.

Could this be the basis for a real disintermediated “information science” which does not depend on competing theories, such as mental models, or social construction, but inquires into the significance of actual information flows and their patterns as the object of study, rather than a way of trying to determine the “rightness” of the underlying content?
