In the past year, I have been thinking and reading, and trying to find out what other people have said on the subject that I wish to expand on. I think I am now embarking on a different phase: pulling out all the ideas – like weeds 🙂 – and drawing them together into a narrative.
I have started on my dissertation “proper”. That is, I have set up the outline and am now in the process of filling in the blanks. You can check my progress if you like, but this comes with a warning. Some of the ideas should not be spread to security officers before I interview them, because it would render the interview useless. So, if you think you might be a candidate for interview on the meaning and implication of ISO norms on security, do not read my dissertation-in-creation. I trust you. Really. My dissertation-in-progress is on https://publish.obsidian.md/theartofmisunderstanding. It will ask you for a one-time password. If you want it, write to me, although you might guess it :-). The landing page will refer you to my progress page, and there you can see what I am working on at any particular moment. There is quite a bit of tooling behind the dissertation-writing process. I use Obsidian in combination with Zotero. If you are interested, ask me, or check out a showcase like this one. Also, videos by Brian Jenks are worth checking out. He uses Obsidian as an extended memory to manage his ADHD and has excellent structure and tips.
Meanwhile, my general progress report: I had a lot of fun finding out that the theory of speech acts, which is at the basis of my research, has somehow escaped into the real world without language philosophers knowing about it. In fact, the ideas that have “got out there” are not just about language but about cognition and interaction in general—again, without philosophers being aware of it. I find this very amusing. On the one hand, you have these philosophers who are very seriously trying to work out what it all means, and then, on the other hand, normal, practical people borrow these ideas and put them into practice, sometimes with astonishing success, but without any theoretical foundation whatsoever. And never shall the two meet…
What kind of ideas? Well, the ideas about feedback being necessary to interaction. That particular idea originally came from biology, from two biologists called Maturana and Varela, giving rise to such concepts as the PDCA cycle (plan–do–check–act). The blanket term is “cybernetics”, and the underlying popular theory is known as “systems theory”, which derives from autopoiesis. I was first introduced to this theory during my research master’s, and blogged about it here. The idea is that any system, whether a cell, an animal, or an organisation, will try to keep itself alive whilst engaging with the environment to increase its chances of survival. I am sure you find this idea, well, simple. This is how we commonly think. But we did not always think that way. It is something we learned to do, as a society, in the past 100 years, perhaps even only the past 50.
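For the IT people: the feedback idea can be made concrete in a few lines of code. This is purely my own toy illustration (a thermostat-style control loop, not anything from cybernetics literature or from my dissertation), showing how a system that repeatedly compares its state to a goal and corrects course converges, without anyone planning the end state in advance.

```python
# Toy sketch of the cybernetic feedback idea as a plan-do-check-act loop.
# All names and numbers here are invented for illustration.

def pdca_step(target: float, actual: float) -> tuple[float, float]:
    """One plan-do-check-act cycle: compare outcome to goal, act, re-measure."""
    plan = target - actual       # plan: how far are we from the goal?
    done = actual + 0.5 * plan   # do: act on the plan, imperfectly
    error = target - done        # check: measure the remaining gap
    return done, error           # act: feed the result into the next cycle

temperature, goal = 15.0, 20.0
for _ in range(10):
    temperature, gap = pdca_step(goal, temperature)

print(round(temperature, 2))  # the gap halves each cycle, converging on 20.0
```

The point of the sketch is that the "intelligence" lies entirely in the loop, not in any single step: remove the check and the system drifts; keep it and the system self-corrects.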
The point is, this feedback notion is an important philosophical idea which is rooted in a specific theory. Would you believe—this is for the IT people in my audience—that the agile and lean ways of thinking are derived from this, without any further theoretical grounding? I checked this extensively, and also talked about it with the Dutch cybernetics society. So, it appears that the philosophical ideas from the first half of the 20th century just went viral, and people tried them for practical purposes, adopting them if they seemed to work, and then of course went on from there to create more practices. Without there being any proper feedback loop to improve the initial theories. Incredible. How does anyone, philosopher or practical person, imagine we can learn from this?
Did I intuit this before? Well, I did notice there was something going on, but I was not sure what exactly. I had previously noticed that frameworks like ArchiMate – a language for describing organisational and IT structure – are, in spite of their claims, not derived from philosophy of language, just from practical application. Which is OK, but I do find the pretense of “scientific evidence” quite amazing in the absence of a proper theory and proper validation. And yet, the authors of ArchiMate tried – the same cannot be said of business concepts like “lean” and “agile”, which are totally devoid of any theoretical basis whatsoever and derive their strength from our natural tendency to work in small tribes, which we share with our chimp and bonobo cousins. Which – for those of you who are agile believers – does not necessarily mean the agile approach is wrong. Just that it caters to our cognitive limitations as animals and apes rather than to any grand business theory…
I had also noticed, in the work of Jan Dietz and his DEMO method for analysing business IT, that Dietz incorporated a notion of speech acts and commitment which went way beyond what was understood at the time and certainly had not found its way into other IT, business, or architecture thinking. That fascinated me. How did he manage this? So I interviewed Jan Dietz (a pleasant occasion at a café in Leiden) and found that indeed he had understood something that philosophers had not understood at the time. He told me wonderful stories about how his method saw the light of day, and what happened on his first projects. With respect to my search into the notion of “commitment” in his work, he referred me to the work of Winograd and Flores (1986). Both men are now nearing 80, but still very much part of the world, and, important to me, turned out to be willing to answer my questions.
Terry Winograd was the PhD advisor of Google co-founder Larry Page and one of the early pioneers of artificial intelligence research. I wrote to him, but, as I expected, his particular expertise was not language and normativity but IT. That language bit, he said in a private message to me, had come from his co-author, Fernando Flores, whom he was still seeing regularly.
Fernando Flores is a person in his own right: a former minister of Finance in Chile, a philosopher, and a successful businessman; his communication advice reportedly costs about a million per piece. However, there was nothing in Flores’ writings that explained where he got his ideas from, so I wrote to him and managed to obtain an interview via Zoom. Amazing! We spoke for almost two hours. The idea of “speaking = acting = incurring commitment” was at the basis of the new Chilean planned economy in the early 1970s, at the instigation of Flores himself. His communication-with-commitment ideas were shaped by Stafford Beer, the father of management cybernetics, who took Maturana’s original theory and applied it – without Maturana’s consent – not to biological cells but to organisations, i.e., treating businesses as living organisms. Flores himself spent years in jail after the right-wing coup – being visited by, yes, Maturana, amongst others. Cybernetics became associated with communism, which did not help its academic stature much. I recall being at school, as a teenager, at the time all of this happened, and not understanding much. My parents were very right-wing, and I was told that anything to do with communism and socialism was bad. I remember getting a geography essay on Chile back with my teacher remarking that “I might have invested more effort”. Quite! Well, here is my karma, after all. There is a great book on this; see Medina, E. (2011). Cybernetic revolutionaries: Technology and politics in Allende’s Chile. Cambridge, MA: MIT Press.
I also found that psychologist Watzlawick’s ideas about communication, which I blogged on before and liked very much, stem from the very same cybernetic tradition: the idea that our social connections form a system within which we interact – much like a living system that strives for survival, expansion and interaction. The idea is: do not just look at the message. Look at the whole context. By the time I had found all of these cybernetic applications of a single under-developed philosophical idea about the connection between speech acts and commitment, I was getting quite excited. I wondered what had gone wrong with cybernetics, and why the connection to philosophy had been lost. So I searched high and low and eventually came across a book by Novikov, which maps out the entire area of application of cybernetic theory.
If, from the picture above, you get the idea that artificial intelligence is also not rooted in philosophical theory, you would be quite right. AI is all practice and practical application without any theoretical basis in language or philosophy whatsoever. But don’t tell anyone, because no one wants to know. Everyone wants funding and an audience.
Right. So now the only thing I have to do is to connect cybernetics back to its philosophical roots 🙂 and then, for language understanding, reconnect it to philosophical moral theory and cognition. Not exactly simple to do, but needs must, as they say. The problem is that there is no system or consistency to the application of cybernetics to its operational fields. It literally escaped from the philosophical laboratory. So I either have to start from the original theory of cybernetics, as conceived by mathematician Norbert Wiener (1948), and work my way forward, or posit my own theory and try to fit practical results from real-life applications of the theory into it. It will have to be the latter, as I am not a historian, but it is daunting nevertheless.
I am leaving you to ponder over my new insights into cybernetics, but also with a book suggestion for which I thank both Husband and my friend Teja. Two female philosophers have written a book about four courageous women philosophers at Oxford around the Second World War: Elizabeth Anscombe, Iris Murdoch, Philippa Foot and Mary Midgley. See the review in the Guardian. (Dutch people: do not read the NRC review because it is pretentious claptrap, sorry to say; just order it from here.) Anyway, such a great book! Finally, I understand why my many questions were not answered when I was there (at the time I blamed myself). The main pleasure comes from knowing why and under what conditions the important analytical theories were produced, as well as how the great philosophers interacted and how other ideas were excluded. Lots of insightful gossip about horrible men and passionate quests for understanding. I love it. I’m halfway through the audiobook now. I am going to write to the authors when I finish, to thank them for putting analytical philosophy and my own Oxford experience into perspective for me. It is a delightful read, highly recommended.
My next journey is into agency, the connection between roles and norms and understanding. I will keep you posted.
PS. Did you see my post on LinkedIn about my son graduating in Law and moving to philosophy? For those of you that know us personally, you know we went to hell and back. So grateful it all ended well. I thank all of you that liked and/or commented. The young need our encouragement, as we left them so many problems to solve that we could not.
I promised (so many times before) to tell you in more detail what my PhD is about, what puzzle I want to solve, and how. Today I will start. I’ll have to do it in instalments since I’ll have to gather my thoughts as I write. Also, I have to find a way not to bore you to death with philosophical jargon and references. You will tell me, I am sure 🙂
Have you ever witnessed a discussion between security experts? I have, many times. To their credit, they don’t argue when there is an emergency. In a crisis, all security professionals transform into firefighters, rolling out their hoses and putting everything on the line to save whatever must be saved. But other times, you’ll hear them argue about how important it is (not) to put a certain policy or measure in place, or how exactly it should be done, and they’ll practically behead each other. Often, the discussion is about the meaning of a particular regulation. And that surprised me. Because those regulations are supposed to be written by experts for experts.
Aside: forgive me for saying "they". The discussions I was privy to involved me as a security expert too. I was part of what I witnessed. If there is any guilt to be shared, then I am equally at fault. However, I don't think there is any guilt. As my dissertation will eventually show, there is an explanation which does not have anything to do with being argumentative, or lacking knowledge, or, heaven forbid, any psychological cause. In fact, these men (there are lamentably few women in this profession) are quite right to argue. But that will take me years to prove, so let's start.
I will cast the puzzle in terms of a dialogue. The reason for that will become clear later (and not in this post). Illustrated with a picture:
What you see here are regulators, such as an ISO committee, sitting together high up somewhere, working on the text of a new standard or regulation. This is a highly structured and monitored process, with many rounds of review and voting. The experts themselves are selected from the member countries.
When they are finished, the standard or regulation gets published, i.e., becomes a book or a paper. These are then made available to the experts below. The experts need these standards, either to conform to (the standard is mandatory) or as a best practice. As you can see in the picture, there is not much interaction between the regulator (the “speaker” in this dialogue) and the security experts (the “hearers” or “listeners”). The conversation is a bit one-sided, so to speak. Note that this “broadcasting” model is like the Herald proclaiming the King’s wishes – more of that below.
Two different readings of the same text
Suppose we open up one of these standards and we read the text, as shown below. It is about utility programs. These are intended for the exclusive use of IT personnel, because you can do a lot of damage with them. However, such programs are sometimes given over to or appropriated by end-users.
Let’s now assume that this text can be read in two different ways. These are polar opposites, but they also serve as a good illustration of the types of discussions security experts might have. The first reading is by a security expert who has a lot of expertise:
The second reading is by a security expert who has much experience:
Why multiple interpretations?
I’m guessing that you don’t have to know much about security to see that these two ways of looking at it will lead to very different actions. And that’s the whole point. Standards and rules are put in place to make sure that the right safety steps are taken, to take the guesswork out of security implementation.
So what are we to make of multiple interpretations? Is the text unclear? Are those readers, the security experts, unable to comprehend the text through a lack of knowledge or experience? This is highly unlikely, given that we are dealing with experts on both sides of the dialogue. But I admit, in the beginning, I thought so too. So much so that I stepped up my training and qualified for CISM. But it made no difference.
Unfortunately, this is the standard picture with a standard solution to go with it: knowledge and training. The regulators should learn to write more clearly, concisely, and engagingly. The security experts should be better qualified and better trained. Needless to say, neither intervention helps. Regulators can write perfectly well, and security experts are, well, experts 🙂
Other interpretations: the security experts’ worry
One possibility is that the security expert in the second reading is concerned about what will happen when he informs users that they can no longer use their utility programs. The interesting thing is (also something I will come back to in the future) that this worry seems to influence the way he or she reads the text almost before reading it – in a split second.
Other interpretations: the regulators’ worry
On the side of the regulators, there is also something going on. Regulators work to specifications. If no one asks them to create a standard, they are out of a job. So it is not in their interest to produce a standard that cannot be lived up to in practice. Also, if they disagree amongst themselves, there is a tendency to use abstract language to circumvent the disagreement. They cannot be seen to disagree amongst themselves, because this would inevitably erode their credibility. I don’t know this for certain, of course, but there is quite a bit of research on how such consensus processes work. So here we have the regulators’ worry.
If this regulator’s worry affects text clarity, I wonder how I might detect this in text. As part of my research master, I took a course in computational linguistics. I suppose I could ask the professor if she could point me in the right direction.
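To give a feel for the kind of thing computational linguistics could do here, a toy sketch follows. Everything in it is my own invention – the word list, the metric, and the sample clause are purely illustrative stand-ins, not a real method and not a quote from any actual standard.

```python
# Toy sketch: a naive measure of "abstract/hedging" language in a clause.
# The hedge list and the sample sentence are invented for illustration only.

HEDGES = {"appropriate", "adequate", "reasonable", "may", "should",
          "periodically", "where", "necessary", "relevant"}

def hedge_density(text: str) -> float:
    """Fraction of words in the text that come from the hedge list."""
    words = [w.strip(".,;:()").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in HEDGES for w in words) / len(words)

clause = "The use of utility programs should be restricted where appropriate."
print(f"{hedge_density(clause):.2f}")  # 3 hedge words out of 10 -> 0.30
```

A real analysis would of course need a validated lexicon and proper statistical controls; the sketch only shows that "consensus vagueness" is, in principle, something one can try to measure rather than merely suspect.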
Lack of feedback
Another point of interest is how standards and regulations are communicated. We still use the old way of the Herald proclaiming what the King wants. No answer is needed or even welcome. The Herald leaves after the message is sent out. There isn’t much difference between this and putting standards and regulations in a book, newspaper, or online. The problem with not being able to react is that misunderstandings cannot be cleared up. The problem with not being able to check reactions is that the effect of the proclamation is uncertain. I wonder why we modern humans still use this model, as it is so obviously defective, but we do.
You’ve probably noticed that so far I’ve only told you what the problem is and what some of its features are. In the next post, I’ll try to explain how I think the puzzle can be solved. In several steps.
I had intended to dive straight into my PhD, follow my research proposal, complete it, etc. But for some reason my professor insisted on delivering cautionary tales. About how a PhD never turns out according to its initial proposal (even if he thought it was very good, on another occasion). About how, in my particular case and quest, I would have to look at other disciplines besides philosophy. About how I might as well take six years and never publish anything as long as I enjoyed myself. Well … I think I understand what he meant – I rephrased his sentences several times and rechecked – but I am not at all sure why he said these things, other than that he appeared to want me to slow down. So I thought I’d think about it a little bit first.
Without throwing my research proposal at you (although it is here if you want to read it), I must admit that it is far more complex to explain why something does not happen (understanding) than why it does happen. Because, in order to explain what does not happen, you have to understand exactly what it is that does not happen that could have happened. A bit like why it takes much longer not to find something in your pocket. So, I embarked on some thinking-by-myself, in the wild, so to speak.
I had been worrying about two publications (well, books-n-papers) in my day-job field (information technology), and it would not leave me alone. So I decided to investigate further.
One is on ArchiMate. That is an open-source modelling language used by digital architects to make blueprints of business environments. It was developed primarily by Marc Lankhorst, but there were some other researchers involved in connecting up ArchiMate with the theoretical background, including philosophy of language, in this paper: Arbab et al. (2015). Actually, the claim they make is not so terribly large, it seems. They refer back to the meaning triangle attributed to Morris (1946), which really is not by Morris at all, but goes back to Ogden & Richards (1923). Basically, it says that there is a relationship between things in the outside world and our thoughts, and that we connect the two using symbols. They then use symbols (in a modelling language) and thought/reference (the meaning of those symbols, presumably as expressed or understood by the modellers). Presto: meaningful diagrams. To be honest, there is not really very much philosophy of language in this theory. I have written to one of the researchers to ask if this claim should not simply be taken out – as it eats no bread, as they say.
The other theory is by Jan Dietz and colleagues, called DEMO. It is a modelling and design language, which was conceived in the early 90s and is still going strong. It claims (Ettema & Dietz, 2009) to be firmly rooted in philosophy of language, as opposed to ArchiMate, because it is based on a more-modern-than-Searle conception of speech acts, as advocated by Habermas, which envisages speech acts not just as information carriers but as coordination devices. Sounds much like Brandom’s normative inferentialism. Habermas’ insight into speech acts was not so different from the currently mainstream idea: speech acts are not just for passing on information. Speech acts also have a social component related to the speaker/hearer’s role – as a human being, as a member of one or more groups. It turns out, Brandom and Habermas met and agreed on much but also disagreed vehemently. I collected papers on the Brandom-Habermas debate, but there was no quick way in – and quite a few philosophers professed not to understand it either. Must be some fine point of philosophy, which I will return to if I must. But for the moment, I cannot imagine this controversy would have any impact on the conception of DEMO.
I must admit that I spent a happy evening tracing back all the theoretical components that are supposed to make up DEMO, and were fitted with big Greek letters accordingly. It had a distinct shopping spree feel to it – a stack of theories from everywhere, incorporated because they seemed to fit, a sort of build-your-own-theoretical-foundations toolkit. Not in a million years would I be allowed to construct a theory on such a basis. But I find DEMO interesting because it seems to explain how an organisation can help itself to new facts, truths, whatever you want to call them. Which is exactly what we do in speech acts, when we talk to each other. So I have written to ask what DEMO’s attachment is to Habermas. I personally think there is none. Brandom would do just as well. Or even Grice, as nothing seems to be said about the motivation to coordinate actions. The point is, I think that in creating DEMO its authors may have understood specific felicity conditions for speech acts, and I want to find out how and what and where, because such notions may point to conditions for avoiding misunderstanding, even in a highly stylised environment such as a business. Also, the fact that DEMO thinks of language in terms of coordination and collaboration rather than information exchange is still quite revolutionary, even though it was developed nearly 30 years ago. I want to know where the ideas came from.
I was happy to receive a reply on both counts – invitations to talk further. Great. Meanwhile, an idea has been brewing in my head. Might it be the case that modelling languages like ArchiMate and DEMO are in fact natural-language extensions? They are not mathematical languages, I am quite sure of that. They are not natural languages either; they are made with a specific purpose in mind. Their elementary concepts constitute an elementary grammar, plus the idea that whatever we want to happen (be it a process, a decision, an action or whatever) can be expressed in that grammar – i.e., stripping additional meanings, context, etc. down to a bare, model-able minimum.
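The "elementary grammar" idea can be sketched in code. The vocabulary and relations below are entirely my own toy invention (deliberately not ArchiMate's or DEMO's actual metamodels); the sketch only shows the mechanism: a closed set of element types and relation types, and a rule that anything outside them is simply not sayable in the language.

```python
# Toy sketch of a modelling language as a stripped-down grammar.
# Element and relation names are invented, not taken from ArchiMate or DEMO.

ELEMENTS = {"actor", "process", "service"}   # the entire vocabulary
RELATIONS = {"performs", "uses"}             # the entire grammar

def well_formed(subject_type: str, relation: str, object_type: str) -> bool:
    """A 'sentence' in the model language is valid only if every part
    comes from the fixed vocabulary; everything else is inexpressible."""
    return (subject_type in ELEMENTS
            and relation in RELATIONS
            and object_type in ELEMENTS)

print(well_formed("actor", "performs", "process"))  # True
print(well_formed("actor", "loves", "process"))     # False: stripped away
```

This is what "stripping down to a model-able minimum" means operationally: the language gains precision exactly by refusing to express most of what natural language can.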
The other side of the problem is the relationship between computer commands and “truth”. I need to find the right academic sources, but I am pretty sure none exist. I cross-checked with Husband because he has actually written machine code, where I hovered just above it in my RPG II. Code simply instructs the processor to load two values, compare them, and then take some action defined by you. There is no truth to it in any philosophical sense, other than whatever comes out of the comparison and is taken as a starting point for action.
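To make the point concrete, here is the load-compare-branch pattern mirrored in Python (the mnemonics in the comments are generic assembly-style labels, not any specific instruction set). The comparison produces nothing but a flag that selects the next action; calling that flag "truth" in a philosophical sense is exactly the leap the paragraph above questions.

```python
# Illustrative sketch: what a processor-level comparison amounts to,
# mirrored in high-level code. Comment mnemonics are generic, not a real ISA.

def compare_and_branch(a: int, b: int) -> str:
    reg1, reg2 = a, b        # LOAD: move the two values into "registers"
    flag = reg1 == reg2      # CMP:  the comparison merely sets a condition flag
    if flag:                 # BEQ:  branch on the flag
        return "equal-branch action"
    return "not-equal-branch action"

print(compare_and_branch(2, 2))
print(compare_and_branch(2, 3))
```

The flag is just a starting point for action, chosen by whoever wrote the branch; the "meaning" of equality lives entirely outside the machine.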
Just to make sure I don’t miss anything obvious, I have also been reading up on philosophical truth in all its variations. I found that Habermas was a staunch supporter of the consensus theory of truth: whatever a specified group believes to be true, is true. There are other theories – correspondence, coherence, constructivist, pragmatic; and then there are the so-called deflationary theories, which say that truth is not a property of statements. I was surprised to learn that Strawson (1949) had proposed a performative theory of truth, which characterised truth as a property of the speaker’s intentions, in response to Tarski, who invented the concept of an object language to solve the liar paradox. Sounds like an early beginning of speech act theory to me!