Knowledge-on-demand
Can knowledge be switched on and off - and can it be stored outside of our heads?
By Karl-Erik Tallmo |
Control and personalization seem to be two of the strongest effects of the use of computers - and perhaps especially of the use of networks.
In a working process, for instance, people want to control more and more of the procedure themselves - operations that earlier were split up between several individuals. Perhaps this was most obvious in early desktop publishing, where all of a sudden the same person could write, proofread, photograph, do design, process images and colors, and print out camera-ready copy. At the same time, you want to put your personal stamp on both the content and the tool. You never saw a typewriter decorated like a computer screen, with all of the special desktop images, icons, colored menus and trash cans you may install.
If this observation is correct, then what will happen to knowledge and our attitude to it? Will we want to keep it inside our heads or outside? Will our control of it be optimized when it is stored on servers and hard drives all over the world, when it can be switched on and off through our portable miniature computers?
Or, will personalization be the most important aspect? Will we individualize knowledge by making our own selections, our own judgements, our own interpretations? If so, what does it take to be able to do that?
***
New techniques apparently give us access to the collected knowledge of the world; we only need to push a few buttons. Personally, we need only possess information about where to find other pieces of information - metainformation, that is. But what does it really mean, this diffuse phrase "access to information"? Downloading, copying, or printouts? Or agile application and imaginative practice, maybe even a creative kind of understanding that might lead to extended knowledge?

Furthermore, metainformation tends to be a determining factor in many areas of society. When employees, consultants, and executives represent the most important capital in a corporation, and they keep it in their own brains, then metainformation becomes essential for an organisation's survival. Strategies, decisions, historical evaluations, constructions and work-flows must be documented, or else the whole business stands or falls with a certain person's leaving or staying.

I also believe - and the most eager advocates of AI and intelligent decision-making robots might not agree with me on this point - that the opposite might be the case. Everything will become metainformation. All kinds of traditional knowledge - especially good old basics from elementary school - will become checkpoints, beacons that guide us to other kinds of knowledge. This knowledge will help us find a context for what we retrieve from different computer networks; it will help us evaluate veracity and relevance.

***

It is said that we live in the Information Age, but many of us feel surprisingly uninformed. Information technology has long had the tendency, so to speak, of being more about technology than about information. This paradox is somewhat akin to a physicist discovering that increased particle activity at the molecular level does not result in increased heat. Or rather: heat is generated, but nothing gets warm.
***
Data are basic measurements, instructions or observations of some sort - 32 km, three seconds, last year, to the right - that can be building blocks of both information and knowledge. Information is acquired only when these pieces of data are interpreted in some way, by being put into a context. If a deeper understanding of that information is then acquired, enabling it to be used again, one has reached the level of knowledge. Having the correct direction pointed out to you is information; knowledge is being able to find your own way.

In other words, being well informed, as it is so often called, does not necessarily mean that one knows anything. An agent who is forced to reveal secrets under torture may be well informed, bursting with strange information he does not himself understand at all.

Of course, the accuracy of the information is enormously important. Data, unlike facts, do not have to be true. That is why information must also be evaluated. We usually, instinctively, associate knowledge with some form of truth, or at least a correlation with reality and its demands. In other words, information can be false, but knowledge should have some element of truth in it. This is where we find many of the shortcomings of the new information technology as it is applied today. For information to truly be informative - that is, to be usable as knowledge - it must be true, relevant, understandable and, most of all, accessible.

Access to information has, of course, both positive and negative aspects. Incredible amounts of previously unreachable information have been made globally accessible through the Internet, for example, and all of a sudden it seems as though the amount of information in the world has increased dramatically. To some extent this is an illusion. It is mostly the actual flow that has increased.
But if one has too much access, without the proper tools for selecting and evaluating the material, then one really doesn't have access to anything at all.

In the Middle Ages, before the invention of printing, when texts were still written by hand, the individual copy was the constant in the process. This does not imply that the copy was always true, but at least it was fairly invariable. With the advent of printing, the possibility of fixing a text extended from the copy to the edition - an incredible advancement. At the moment of printing, the text was frozen, and each and every copy came out the same as all the others.

In that respect, however, we have now taken a couple of steps backward: electronic texts today do not actually have any fixed format at all. When we deal with the electronic word, the problem of authenticity and preservation becomes apparent as soon as we type a few characters on the keyboard. We have no guarantee that they will be saved, or saved correctly, onto the hard disk. And when texts are subsequently distributed via networks or on diskettes - or, for that matter, on CD-ROM disks, which now exist in recordable form - then we can no longer be sure that the texts will look the same as they did when they left us.

For a long time, people have talked about the problem of safely transferring account numbers over the Net in order to facilitate electronic commerce. That is of course a very important question. But people are just now starting to realize that transferring and storing texts on different servers, in a way that ensures they cannot be falsified, is also a very big problem. Why is that so important, and who would get the idea of falsifying something? It is important because we will soon be living in a society where a large portion of education and decision making will be based on electronic documents. Why, then, would anybody want to falsify information? The most obvious motive is probably political.
Revisionist historians may want to rewrite history books and delete the atrocities of Nazism, for example. Of course, it doesn't have to involve such spectacular issues. It could also be a matter of withholding information which one believes should not be distributed at a given point in time, because political opinion is leaning a certain way. Researchers who want to advance their careers could alter the electronic texts they refer to in such a way as to support their hypotheses. Supposedly, this already happened in the United States a couple of years ago. And when decision making is automated, it is important that the computer programs and the legislation such decisions are based upon have not been manipulated by someone who could profit from a certain outcome.

The question then becomes: is there a way, in the electronic realm, to restore the confidence in and prestige of the source? With the aid of electronic signatures and other tricks, one can at least try. Every publisher could, perhaps, reserve a special server that he or she has control over, and which could be constantly monitored. Otherwise the problem is the endless number of backup copies and mirror sites circulating the same information. Such a special server would be the officially authorized source for researchers and academics to turn to when they need reliable information.

This issue of several copies of the same file existing out on the Net is both a problem and an advantage. They say one should not put all one's eggs in one basket, and the more copies of a document there are, the safer it is, of course. But at the same time, this creates a problem with authenticity. Some of the copies could be falsified. Or all of them could be false, creating, for example, a big problem for journalists who use the classic method of checking two independent sources before writing an article.
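The kind of verification an "officially authorized source" could offer can be sketched in present-day terms. The following is a minimal illustration, not a real signing scheme: it uses a cryptographic hash (one building block of the electronic signatures mentioned above), and the sample texts and the notion of a "published digest" are invented for the example.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return the SHA-256 digest of a document's bytes."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# The hypothetical authorized server publishes the digest of the canonical copy.
canonical = "In the Middle Ages the individual copy was the constant."
published_digest = fingerprint(canonical)

# A reader downloads copies from mirrors and checks them against the digest.
mirror_copy = "In the Middle Ages the individual copy was the constant."
tampered_copy = "In the Middle Ages the individual copy was a variable."

print(fingerprint(mirror_copy) == published_digest)    # True: unaltered
print(fingerprint(tampered_copy) == published_digest)  # False: altered
```

A bare hash only detects alteration; it does not prove who wrote the text. For that, the digest would itself have to be signed with the publisher's private key, which is what an actual electronic signature adds.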
How does one know that information found at two different locations on the Internet is independent? The two could be direct copies of each other. So, perhaps the most important thing is to foster a level of healthy skepticism. One must learn to treat information taken from the Net as equal to something heard from an acquaintance, and one must continually take into consideration the reliability of that acquaintance. Is this a person who without hesitation passes on legends, or who usually embellishes his or her stories? Or is this a reliable and truthful person who does not usually comment on things he or she has no personal knowledge about? Is the person speaking in the capacity of a professional, as a private individual, or merely out of general interest?

Typical of the Internet is that so many non-professional publicists have a voice there. This is both the Net's strength and its weakness. All types of information may co-exist here: commercial and non-commercial, authorized and unauthorized. My understanding is, however, that these formats complement each other. Almost daily, I use information from the Encyclopedia Britannica, but I also frequently glean information about obscure subjects from enthusiastic amateurs - things which simply are not available from traditional sources.

At the same time, I am glad that I am somewhat well-rounded. For the same reasons that I would not dare trust a calculator without having some idea of the multiplication tables, I would not trust the Britannica without having at least a cursory overview of history, geography and other basic facts. That is why I think it is incredibly important that our schools do not make it their primary goal to turn our children into full-blown multimedia producers, but rather to teach basic subjects. Multimedia technology can easily be learned on the job, but few companies teach, for instance, the rivers of Africa or the capitals of South America.
***

Three years ago, I wrote an article in the Swedish daily Svenska Dagbladet about this problem, and I concluded then that we could leave the technology part of information technology to the technicians. The information part, however, we would have to guard jealously ourselves. Today, I would like to revise my position a bit. I am afraid that we must also monitor the technology part with a degree of suspicion. Many systems are now being created that handle information and knowledge in a way that may change the entire human knowledge process.

First of all, systems and programs are being created to assist decision makers by automating certain decisions. As I have already suggested, not only must the legislation that the decisions are based on be protected from falsification; the very selection mechanisms and other criteria that the programs utilize must also be protected from unauthorized access. It should also be noted that the means by which the government provides itself with information - committees, reports and investigative operations - will surely change. In Sweden, for instance, the forms for this are being discussed: one-man investigations versus parliamentary investigations. More than likely, entirely new forms of contact between experts, representatives in parliament, and the grassroots will be developed.

Secondly: for some time now we have heard about so-called data mining - the extraction and refining of connections and relationships from databases, a sort of intellectual, mathematically defined processing of raw materials. Now there is even talk of text mining, which involves exploring grammatical relationships with the help of artificial intelligence-like procedures that can extract new and unexpected facts and correlations between and within texts. I suspect that this idea of viewing information and text as a sort of mineral or raw material will, in many respects, permeate an increasing number of fields.
Even within the private realm, people will probably tinker with some form of text mining.
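At its very simplest, such private tinkering with text mining might look like the sketch below: counting which words co-occur within the same sentence, a deliberately crude stand-in for the AI-like procedures described above. The sample text and stopword list are invented for the illustration.

```python
import re
from collections import Counter
from itertools import combinations

def cooccurrences(text: str, stopwords: set) -> Counter:
    """Count how often pairs of distinct words appear in the same sentence."""
    pairs = Counter()
    for sentence in re.split(r"[.!?]", text.lower()):
        words = sorted(set(re.findall(r"[a-z]+", sentence)) - stopwords)
        pairs.update(combinations(words, 2))  # every unordered word pair
    return pairs

text = ("Knowledge lives in networks. Networks carry information. "
        "Information becomes knowledge through interpretation.")
stop = {"in", "the", "through"}

# The most frequent pairs hint at relationships across the text.
for pair, count in cooccurrences(text, stop).most_common(3):
    print(pair, count)
```

Real text-mining systems would add grammatical analysis, statistics over large corpora, and ways of ranking which correlations are genuinely surprising; the principle of treating text as raw material to be refined is the same.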
***
Until now, when we have referred to a person's own reading of a certain work, we have meant his or her inner staging, version or mental interpretation of it. In the future, a more tangible, personal reading style will totally reformulate what a work is. Works will change and become something else with every new reader or user, depending on which reading tool he or she chooses. Soon, I believe, one will simply purchase a raw body of text or other information, which one will then read using one's own tools of choice, retrieving information and finding connections, structures and other relationships within the texts.

Man is the measure of all things, and perhaps one of the results of the new technologies, especially within artificial intelligence, will be that after centuries of dreaming, we will finally have a fleeting sense of what it is like to view humankind from the outside. Perhaps we will be able to let a non-human subject give us momentary insights from another perspective.

To summarize, I believe that many trends are in conflict with one another, and that the outcome is uncertain. As I have already mentioned, one of the issues before us is the issue of democracy. As recently as yesterday, Bo Södersten (in an article in Dagens Nyheter, December 14th, 1997) discussed the special refuge in society reserved for monetary policy, and it is striking how many of his criteria may also be applied to information. The question, in other words, is: will information become yet another isolated and protected area, or will IT lead to greater influence through direct democratic methods and increased public access to official sources?

In a similar manner, there is a conflict between freedom of information and the greedier guarding of every little thought as a potential gold nugget. There is a lot of talk, in the business world, about knowledge sharing. According to a recent study by the Delphi Group in Boston, this is taking place very slowly.
More than half of the 650 IT managers interviewed saw current business culture as a hindrance to knowledge management. Jeff Held, of Ernst & Young's technology center in the United States, was quoted in Computer Sweden on Friday (no. 82, December 12th, 1997) as saying: "One can talk about knowledge management until one is blue in the face, but nobody shares what they know before finding out what sort of advantage they will get themselves."

Thomas Jefferson is often quoted in discussions of freedom of information. He said that one can share information without losing anything, in the same way that someone who lights his taper at mine receives light without darkening me. It is, of course, a very appealing thought. Still, one has to wonder if that was not an attitude more easily taken, and more affordable, at a time when the material world was still the focal point of commerce and trade.

Freedom of information versus information protectionism: this is where the current battle over copyright plays a big role. Usually, copyright is viewed as having two dimensions, the economic aspect and the moral one, the latter dealing with the integrity of the work - how it is presented and distributed, so that it does not appear in a form or in a context that the originator has not envisioned. The more we become an information-based society, with free-flowing data unconstrained by fixed editions, the more important the moral aspect of copyright will probably become. Authenticity and moral right seem, under the current circumstances, to be a marriage made in heaven. In the long run, I think, other means will be required. Who knows, perhaps we will return to a medieval system where not even the author is always a constant. Maybe our new information systems will throw the old ones out of gear. By the old ones I mean information systems in a broad sense, e.g. money (stored labor) or laws (stored ethics or politics).
Regardless of the outcome of such utopian or dystopian visions, the question is what happens to our knowledge, our thinking, and our view of ourselves in this world. Shall we limit ourselves to becoming mere vessels for metaknowledge? If so, will we not basically become mere appendices to the machines - will we not be the ones serving the search engines and the artificially intelligent evaluating processors, instead of the other way around?
***
I think information technology in the classroom might accentuate strengths as well as weaknesses, precisely as the computer in general seems to amplify all sorts of human idiosyncrasies. If you have problems in school, these might get worse if you just smear some computer polish over it all. But if you have a good climate for learning and use the new tools not as miracle remedies but as enhancements of what you want to achieve, I think you can get very far indeed. Excitement over technology at the expense of content has become more and more obvious in several areas, and I believe the insight that we also need content is about to reach debaters within the education sector too. I don't believe knowledge is something that can be switched on and off. It requires a durable, long-term relationship between the world and your own judgement.