Against Interpretation 2.1

Notes written during the compilation of the first sample of an Open Intelligence Mini-Report.

Compiling

I am working at the moment on compiling the Clips on Amplify (www.openintelligence.amplify.com) classified as Economy / Risks into a comparative analysis of recorded “risks” and recorded “opportunities”, where the self-signifying numbers and juxtapositions tell their own story. It is only a tiny sample, but it could still encapsulate our coverage in a significant way. The “disintermediation” is that the story is built directly out of the juxtapositions of the pertinent statements and the stats. After that, of course, one can interpret as much as one likes.

The Economy / Risks Mini-Report is based on a *very small* compilation of the statements in our Amplify Clips classified as pertinent to the question, in this case Economy / Regulators – Risks. At this point, the goal is still to compile the pertinent statements, not to interpret their meaning or significance. The findings are already of some value (they surprised me), even though they are based on such a tiny sampling. I look forward to the time when we have much bigger samples coming from many clipper-classifiers (cc’s).

For example, one of the statements in the forthcoming Mini-Report is “Previous recessions were caused in the business sector. This one was caused by consumers which will make it harder to correct.” The concept or question of Risk pointed towards that explanation of the genesis of the recession and an associated Risk. The material is saying that. It is simply one of the Economic Risks that people are talking about. No more, no less. As such, it is neither right nor wrong, but is an indicator of what concerns the sources and a characterisation of the nature of that concern.

Prejudice and the need to be right have to be kept to a minimum at this early stage, to help the compiler stay open to what the material is signifying. There are no points to prove right or wrong, only indicators of what is significant. People can then do with them what they like afterwards, but at this stage the duty of the compiler is simply to cite the pertinent statements in a way that makes narrative sense. The objective is to enable the content to express itself as directly as possible in the channel opened by the question inherent in the classification. The story is what is being said under Risks. Claiming the statement as an interpretation by the compiler is tantamount to plagiarism in this context. Indeed, far too much academic interpretation commits this kind of plagiarism, i.e. claiming others’ ideas (if not their direct words) as those of the interpreter.

The Clipper-Classifiers have no idea of what the compiled synthesis will be in advance. Each item is treated as an indicator, not as a content receptacle. Of course, somebody with devious intent might purposefully skew his classification so that at some time in the future he could control the compiled narrative. “We don’t do that, honest Guv!” Dare I say Clipper-Classifiers (cc’s) must be professional?

This “voice problem” is one of the greatest pitfalls in the game of self-signification. Can a compiler synthesise what is being said under a topic without interpreting it? Nobody can claim to be totally objective when classifying, but people can try to be as disinterested as possible. With a social network of people classifying, the protection against individual bias would increase. Indeed, science would be impossible without degrees of abstraction, classification and operating directly on experimental data. This is exactly what compilers should be doing.

Inferring

The next stage in the process is to use the compilations, and the counts of how statements are classified, as indicators pointing to significant topics (e.g. government incompetence) or significant relationships between topics (e.g. consumer spending and oil demand). It then becomes possible to monitor changes in the counts, enabling analysts to make inferences about trends and turning points in discourse, often before they become apparent to the participants in the discourse. Note the difference between inference, which uses indicators to point to another level of rational, reflective consciousness (beyond right or wrong), and interpretation, which looks backwards to judge whether the content is accurate, to understand it as right or wrong, or simply to put it into other words.
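The counting step behind this kind of inference can be sketched in a few lines. The labels and figures below are invented purely for illustration; the real classification scheme and clip volumes would come from the Amplify Clips themselves:

```python
from collections import Counter

# Hypothetical sample: each clip carries the classification label
# assigned to it by a clipper-classifier (labels invented here).
period_1 = ["Economy/Risks", "Economy/Risks", "Economy/Opportunities",
            "Economy/Regulators-Risks"]
period_2 = ["Economy/Risks", "Economy/Opportunities",
            "Economy/Opportunities", "Economy/Opportunities"]

counts_1 = Counter(period_1)
counts_2 = Counter(period_2)

# The change in counts between periods is the raw indicator: a swing
# from "risks" talk to "opportunities" talk may signal a turning point.
for topic in sorted(set(counts_1) | set(counts_2)):
    delta = counts_2[topic] - counts_1[topic]
    print(f"{topic}: {counts_1[topic]} -> {counts_2[topic]} ({delta:+d})")
```

The point of the sketch is that no interpretation of the clipped content is needed to produce the deltas; the classification counts alone carry the signal.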

In the currently dominant content-centred universe, compilations can also serve as high-quality summaries (apparent interpretations) of content, and have value as a useful form of evidence in the conventional judgemental debates of academic discourse.

Nevertheless, when we get enough material (and statistics) together, their most potent purpose and highest value will be to use them as evidence for making useful inferences about changes – trends, turning points, emergence, and even the future.

More interesting than debating whether the content of articles is right or wrong is using this perspective as a way of looking at how groups of people are thinking and, more “significantly”, how that thinking changes over time.

Indeed, I would say that the process which I am advocating is a way of escaping “entrained” ideologies. The attempt to be “disinterested” during the classification process (awareness of the ‘voice problem’ helps here) simply enables better “critical responsibility” at a later stage in the process. It could be argued that the classification channel is itself a kind of “entrainment”. The response is that it opens up a direct window on events and ideas which did not exist before … indeed, as many windows as there are classifications and the questions which they ask.

How the sources are classified is the primary data from which it is possible to infer significance. The findings are a different kind of knowledge from the right / wrong knowledge people are used to. The process which brings it into being is open. The classification simply enables questions to be asked of the sources in advance.

If the compilation process has worked properly, the inquirer is not interacting with the analysts or their opinions, but with what is being said and its character, which opens up new kinds of questions, such as why it is being said that way. Once enough time has passed to compare “what is being said” during different periods, we are in a position to “sense” turning points and trends in the data, and it all becomes much more interesting…

And if statements are found which cannot be classified, it indicates that new questions must be formulated and asked.

Of course, it is impossible to have this meta-level of direct enquiry into information flows without the underlying content. At the moment, the problem is that there is far too much content and far too little inquiry into its significance. Open Intelligence is designed to help redress the balance.

Could this be the basis for a real disintermediated “information science” which does not depend on competing theories, such as mental models, or social construction, but inquires into the significance of actual information flows and their patterns as the object of study, rather than a way of trying to determine the “rightness” of the underlying content?

Twitter is changing collaborative consciousness

Trend – A new level of human consciousness is being opened up as Twitter brings real time “self-signifying” knowledge communities into the mainstream.

‘Self-signifying’ means that trends in what a group thinks can be monitored directly, without first having to interpret the content. ‘Real time’ means intelligence statistics are updated without intervention, as soon as people contribute.

‘Self-signifying’ knowledge systems have existed since the early 1960s in the form of the Science Citation Index, pioneered by Eugene Garfield at the Institute for Scientific Information. More recently, companies such as Cognitive Edge have been leading the way.

Since Twitter became a Web sensation a few months ago, the self-signifying trend has been accelerating. On April 30, New Scientist reported:

“Real-time web search – which scours only the latest updates to services like Twitter – is currently generating quite a buzz because it can provide a glimpse of what people around the world are thinking or doing at any given moment.”

Now, only days later, new Web and Twitter monitoring services are being launched almost by the hour, using APIs to collect statistical intelligence on usage and to render the resulting data as simple graphical representations.

Sources: Amplified Intelligence

Innovation: How your search queries can predict the future

Crawling the Web to Foretell Ecosystem Collapse

Terrific Twitter Research Tools

TweetStats

Against Interpretation 2.0

Swine Flu on Twitter: How To Filter Out the Noise

HOW TO: Build Your Thought Capital on Twitter