-
Truth and meaning
I had intended to dive straight into my PhD, follow my research proposal, complete it, etc. But for some reason my professor insisted on delivering cautionary tales. About how a PhD never turns out according to its initial proposal (even though, on another occasion, he had called mine very good). About how in my particular case and quest, I would have to look at other disciplines besides philosophy. About how I might as well take six years and never publish anything, as long as I enjoyed myself. Well … I think I understand what he meant – I rephrased his sentences several times and rechecked – but I am not at all sure why he said these things, other than that he appeared to want me to slow down. So I thought I’d think about it a little bit first.
Without throwing my research proposal at you (although it is here if you want to read it), I must admit that it is far more complex to explain why something does not happen (in my case: understanding) than why it does happen. Because, in order to explain what does not happen, you have to understand exactly what it is that could have happened but did not. A bit like why it takes much longer not to find something in your pocket. So I embarked on some thinking-by-myself, in the wild, so to speak.
I had been worrying about two publications (well, books-n-papers) in my day-job field (information technology), and they would not leave me alone. So I decided to investigate further.
One is on Archimate. That is an open modelling language standard used by digital architects to make blueprints of business environments. It was developed primarily by Marc Lankhorst, but other researchers were involved in connecting Archimate with its theoretical background, including philosophy of language, in this paper: Arbab et al (2015). Actually, the claim they make is not so terribly large, it seems. They refer back to the meaning triangle proposed by Morris (1946), which really is not by Morris at all but goes back to Ogden & Richards (1923). Basically, it says that there is a relationship between things in the outside world and our thoughts, and that we connect the two using symbols. They then use symbols (in a modelling language) and thought/reference (the meaning of those symbols, presumably as expressed or understood by the modellers). Presto: meaningful diagrams. To be honest, there is not really very much philosophy of language in this theory. I have written to one of the researchers to ask if this claim should not simply be taken out – as it eats no bread, as they say.
The other theory is by Jan Dietz and colleagues, called DEMO. It is a modelling and design language which was conceived in the early 90s and is still going strong. It claims (Ettema & Dietz, 2009) to be firmly rooted in philosophy of language, as opposed to Archimate, because it is based on a more-modern-than-Searle conception of speech acts, as advocated by Habermas, which envisages speech acts not just as information carriers but as coordination devices. Sounds much like Brandom’s normative inferentialism. Habermas’ insight into speech acts was not so different from the currently mainstream idea: speech acts are not just for passing on information. They also have a social component related to the speaker’s and hearer’s roles – as human beings, as members of one or more groups. It turns out Brandom and Habermas met and agreed on much, but also disagreed vehemently. I collected papers on the Brandom-Habermas debate, but there was no quick way in – and quite a few philosophers professed not to understand it either. Must be some fine point of philosophy, which I will return to if I must. But for the moment, I cannot imagine this controversy would have any impact on the conception of DEMO.
I must admit that I spent a happy evening tracing back all the theoretical components that are supposed to make up DEMO, each fitted with big Greek letters accordingly. It had a distinct shopping-spree feel to it – a stack of theories from everywhere, incorporated because they seemed to fit, a sort of build-your-own-theoretical-foundations toolkit. Not in a million years would I be allowed to construct a theory on such a basis. But I find DEMO interesting because it seems to explain how an organisation can help itself to new facts, truths, whatever you want to call them. Which is exactly what we do in speech acts, when we talk to each other. So I have written to ask what DEMO’s attachment to Habermas is. I personally think there is none. Brandom would do just as well. Or even Grice, as nothing seems to be said about the motivation to coordinate actions. The point is, I think that in creating DEMO its authors may have understood specific felicity conditions for speech acts, and I want to find out how and what and where, because such notions may point to conditions for avoiding misunderstanding, even in a highly stylised environment such as a business. Also, the fact that DEMO thinks of language in terms of coordination and collaboration rather than information exchange is still quite revolutionary, even though it was developed nearly 30 years ago. I want to know where the ideas came from.
I was happy to receive a reply on both counts – invitations to talk further. Great. Meanwhile an idea has been brewing in my head. Might it be the case that modelling languages like Archimate and DEMO are in fact natural-language extensions? They are not mathematical languages, I am quite sure of that. They are not natural languages either; they are made with a specific purpose in mind. Their elementary concepts constitute an elementary grammar, plus the idea that whatever we want to happen (be it a process, a decision, an action or whatever) can be expressed in that grammar – i.e. stripping additional meanings, context etc. down to a bare, model-able minimum.
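To make the idea concrete to myself, here is a toy sketch – entirely my own invention, not taken from Archimate or DEMO – of what such an elementary grammar might amount to: a tiny fixed vocabulary of element types and relation types, and the rule that every statement must be built from them, with everything else stripped away.

```python
# Toy sketch of an "elementary grammar" for a modelling language.
# The element types, relation types and example names are invented for illustration only.
from dataclasses import dataclass

ELEMENT_TYPES = {"Actor", "Process", "Object"}      # the entire vocabulary
RELATION_TYPES = {"performs", "produces", "uses"}   # the entire grammar


@dataclass
class Statement:
    subject: str       # e.g. "Clerk"
    subject_type: str  # must be one of ELEMENT_TYPES
    relation: str      # must be one of RELATION_TYPES
    target: str        # e.g. "Intake"
    target_type: str   # must be one of ELEMENT_TYPES

    def is_well_formed(self) -> bool:
        return (self.subject_type in ELEMENT_TYPES
                and self.target_type in ELEMENT_TYPES
                and self.relation in RELATION_TYPES)


# "The clerk performs the intake process", reduced to a model-able minimum;
# context, nuance and intent are gone, only the bare grammatical skeleton remains.
statement = Statement("Clerk", "Actor", "performs", "Intake", "Process")
print(statement.is_well_formed())  # True
```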
The other side of the problem is the relationship between computer commands and “truth”. I need to find the right academic sources, but I am pretty sure none exist. I cross-checked with Husband, coz he has actually written machine code where I hovered just above it in my RPG II. Code simply instructs the processor to load two values, compare them, and then take some action defined by you. There is no truth to it in any philosophical sense, other than whatever comes out of the comparison and is taken as a starting point for action.
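Just to illustrate for myself what I mean – in Python rather than machine code or RPG II, and with made-up values – this is all the “truth” there is at that level: two values are loaded, compared, and a branch is taken; the outcome of the comparison is merely a starting point for action.

```python
# Minimal illustration: load two values, compare them, act on the result.
# The values and the actions are invented for illustration only.
threshold = 10   # value loaded into one "register"
reading = 12     # value loaded into another "register"

if reading > threshold:      # the comparison the processor is told to make
    action = "raise alarm"   # branch taken when the comparison succeeds
else:
    action = "do nothing"    # branch taken otherwise

print(action)
```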
Just to make sure I don’t miss anything obvious, I have also been reading up on philosophical truth in all its variations. I found that that Habermas was a staunch supporter of the consensus theory of truth: whatever a specified group believes to be true, is true. There are other theories – correspondence, coherence, constructivist, pragmatic; and then there are the so-deflationary theories which say that truth is not a property of statements. I was surprised to learn that Strawson (1949) had proposed a performative theory of truth which characterised truth as a property of the speaker’s intentions, in response to Tarski who invented the concept of a object language to solve the liar’s paradox. Sounds like an early beginning of speech act theory to me!
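As a note to self, the Tarskian move alluded to above can be written out as a schema (my rough rendering, not a quotation from Tarski): for any sentence p of the object language,

\[ \text{“}p\text{” is true} \iff p \]

where the predicate “is true” belongs only to the metalanguage, so no sentence of the object language can say of itself that it is false – which is what blocks the liar’s paradox.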
-
The art of misunderstanding
I have been talking in this blog about my journey back into academia. But I have said little about why. There is a reason for this reticence. Well, several. First, I am not at all sure I will complete this quest, so the less said, the better. Second, I might change my mind. Seriously, after just 3 weeks I am already so filled up with new thoughts that anything might happen. And finally, well, you might laugh. But never mind all that. I will explain.
In my day job I am a security architect. That is someone who thinks out a web of strategic and practical security measures which will protect a company from bad people or natural disaster. There is a lot of IT involved.
One might ask how a philosophy & psychology graduate ever ended up as a security architect. Well, I am not sure. It happened. And it involves being in a world of very serious, conscientious people who argue about … words. It is almost impossible to get any work done because of these arguments.
It is not about ordinary words. It is about words in regulations and contracts, even laws. Anyway, you can read it all in the paper below. It is the one I wrote for “my” professor during the university acceptance process. I have also included the mind map I created before writing the actual paper. I was nervous; I had written nothing academic in 30+ years. Mind mapping is always a good idea. This one is colourful.
Meeting expectations: the language of governance and compliance
Introduction
Organisations are expected to take care of their assets. This is especially true when damage or misuse has negative consequences for the public or the state. In this digital age, information is widely regarded as a major asset. It needs protection against many threats. Threats may range from common theft to a disgruntled employee bent on revenge; from industrial espionage to natural disaster; from human error to terrorist attack. In general terms, protecting information means ensuring its availability, integrity and confidentiality up to a pre-agreed level.
On the subject of information security, in the past 20 years a multitude of (inter)national regulations and standards have emerged, and more appear every day. These regulations and standards guide, direct or impel companies to institute good information security governance and to report on the level of compliance achieved. Failing to comply may be punished in various ways: a formal warning, a fine, a revoked licence, or public shaming; and may result in the loss of a job, bankruptcy or even a prison sentence.
Because of the value of information assets, the many threats to them, and the consequences of failing to institute proper protection, governmental and business organisations genuinely want to comply with regulations and standards.
However, there is a problem. These texts are hard to understand, and their meaning is often open to different interpretations. This negatively influences the quality of information security that can be achieved.
Regulations and standards on information security
Let us first identify common characteristics of relevant regulations and standards. As we will see later, some of these characteristics may be tied to interpretation problems within the texts themselves.
Regulations and standards on information security are always:
- in written form only, typically containing a mix of persuasive, informative, descriptive and instructive texts.
- intended for a specific purpose (a topic within the field of information security)
- intended to regulate behaviour (should, could, must)
- issued by a high-level body, such as a government or the board of directors of an (inter)national organisation
- produced as a group effort, usually involving stakeholders, experts and policy makers. Typically, there is no mention of the author(s) in the regulation or standard.
- created and maintained through a formal process
- available to a large audience, usually the public, but may require payment
- authoritative, either as an official directive or regarded as a de facto standard
Examples of such regulations and standards are:
- Beveiligingsvoorschrift Rijksdienst, Voorschrift Informatiebeveiliging Rijksdienst, Voorschrift Informatiebeveiliging Rijksdienst – Bijzondere Informatie, and Baseline Informatiebeveiliging Rijksdienst; all published by the Dutch Government
- General Data Protection Regulation (published by the European Parliament) and its Dutch implementation act, the Uitvoeringswet Algemene verordening gegevensbescherming
- ISO/IEC 27000 family of standards on information security, published by ISO/IEC Joint Technical Committee 1, Subcommittee 27, particularly ISO 27001 and ISO 27002, both also adopted as European (EN) standards.
Organisations tend to treat regulations and standards as a single point of truth, taking texts as literally as possible. This is because of the need to demonstrate compliance. For the same reason, implementation is usually achieved through a top-down chain of command.
Texts and meanings
The texts of these regulations and standards are riddled with meaning problems. Why should that be a problem? Conventional wisdom dictates that if you don’t understand something, you should go and ask. Why does that not work here?
- One reason is that there is no one to ask. There is no author to ask for clarification, nor is there an easily accessible expert group. An additional problem is that reaching out to the publisher of the regulation or standard in question must be done through proper channels, i.e. it is not something just any employee can do. Usually, the best that may be achieved is to send in a formal request for clarification – which may or may not be processed during a future maintenance cycle.
- Another reason is that readers tend not to be aware of the different meanings of a particular bit of text, because they assume that there is only one meaning, namely the meaning they have assigned themselves. Only when they happen to be confronted with someone else’s different interpretation is there cause to wonder.
- Yet another reason is specific to the field of regulations and standards: no one likes to admit to a lack of understanding or knowledge. It is associated with losing face, particularly when the regulation or standard in question is implemented top-down. Power and knowledge of important matters are supposed to live at the top, rather than in the workplace.
The net result is that texts get interpreted in different ways by different people, who all believe they are right even when they are working at cross purposes. This generally results in a confused implementation of the regulation or standard and, ultimately, in compliance failure.
The art of misunderstanding
There are many causes which contribute to interpretation problems in these texts. However, let us begin with what, contrary to popular opinion, is not a cause. It is not the case that the authors of these texts are unable or unwilling to use plain language. Rather, they arrive at the final wording through a group effort[1]. Achieving consensus – the outcome of a negotiation process – is much more important than achieving clarity. Meaning problems which arise from this cause take the form of obfuscation and generally over-complicated text containing (too) many qualifiers.
The same effect may be produced deliberately. Organisations that issue regulations and standards are usually funded by public money and derive their status at least in part from being accepted as authoritative by all parties involved. To keep that status and funding, they try to avoid any big confrontation with the intended audience. For that reason, expectations on compliance tend to be worded softly, so they won’t chafe too much, allowing for an escape. One way to do this is by introducing intentional vagueness into the text, for instance by not being specific on whether something must, should or could be done.
Context is another issue. The same words will mean different things in different contexts, or to different people, and these meanings may even be contradictory. Some examples:
- the term special data (“bijzondere gegevens”) might be taken to mean data that need special care, or data that are for some reason special. Yet the term also refers to data which it is the special duty of the government to secure[2]. Within the context of the GDPR[3] it means something completely different again, namely data describing very particular human characteristics such as DNA, creed, race or political inclination.
- the use of the word value (“belang”). In Dutch governmental regulations the term refers to anything which, when compromised, will negatively affect the Dutch state or its partners[4]. To security professionals, the term signifies the value of a company asset[5], expressed in either quantitative (money) or qualitative terms. In a business context the term usually refers to the interest of an important stakeholder[6]. In everyday speech, the term just means that the issue is deemed to be of some importance.
Last but not least, there are knowledge problems. These take various forms.
- There may be a lack of knowledge at the level of the intended audience. The committee or group composing the regulation or standard may also have knowledge gaps. A knowledge gap may have an underlying cause, such as a belief about the extent to which it is possible or desirable to regulate behaviour, or an opinion about whether information security threats are real or may be countered.
- Another area is the representation of knowledge itself. Within the field of information processing various modelling languages have been developed, ranging from formal, mathematical models to more descriptive languages such as UML, BPMN and Archimate, which have the added advantage of being designed to produce strong visualisations that can be shared with a less specialised audience. The problem with these ‘descriptive’ languages, popular though they are, is that the concepts they are built on have been arrived at through trial and error and common sense. Inevitably, concepts overlap, leave gaps, are overloaded, or are simply not clear enough for the purpose of capturing knowledge[7].
- Within the field of computing, much interest has centred on the possibility of capturing information within an ontology, in a formal language (such as OWL or WSDL) that can be processed by a standardised computer program or interface (semantic web service)[8]. In principle, this idea works for all kinds of information, including security, and may be used to construct theories, harmonise concepts or create computer-based applications. Some real progress has been made in highly specialised sub-topics such as automatic threat detection in cyberspace. Yet that progress seems to have been possible only because there is a straightforward cause-and-effect relation between a cyberthreat and the way to respond to it. Overall, security ontologies for sub-topics are developed independently of each other. In a recent survey[9] eight different families of security ontologies were identified. Despite considerable work, these efforts do not converge. There is general agreement on the lack of a common body of knowledge, but this conclusion tends to be presented both as a cause and as a solution. (A minimal sketch of such an ontology follows below.)
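The following sketch is a minimal illustration only, written in Python with the rdflib library and an invented namespace; it is not taken from the surveys cited above. It declares two toy classes, a property relating them, and one individual of each, and serialises the result in a machine-processable form.

```python
# Toy security ontology: two classes, one property, one instance of each.
# The namespace and all names are invented for illustration only.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

SEC = Namespace("http://example.org/security#")  # hypothetical namespace

g = Graph()
g.bind("sec", SEC)

# Class and property declarations
g.add((SEC.Threat, RDF.type, OWL.Class))
g.add((SEC.Countermeasure, RDF.type, OWL.Class))
g.add((SEC.mitigates, RDF.type, OWL.ObjectProperty))
g.add((SEC.mitigates, RDFS.domain, SEC.Countermeasure))
g.add((SEC.mitigates, RDFS.range, SEC.Threat))

# One individual of each class, linked by the property
g.add((SEC.Phishing, RDF.type, SEC.Threat))
g.add((SEC.AwarenessTraining, RDF.type, SEC.Countermeasure))
g.add((SEC.AwarenessTraining, SEC.mitigates, SEC.Phishing))

print(g.serialize(format="turtle"))  # shareable, machine-processable form
```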
Next steps
The above presents a general overview of the problems encountered when interpreting regulations and standards on information security, and points to some possible causes. These causes may exist simultaneously and may interact. Much more work is needed to properly identify the relevant causes and underlying factors. It might be possible to construct a diagnostic framework which may be used to identify specific semantic problems in regulations and standards on information security, such that agreement may emerge on how to avoid current interpretation problems. At the very least, a deeper insight into the art of misunderstanding may be achieved.
Bibliography
Europees Parlement, Algemene Verordening Gegevensbescherming (AVG). (2016, 04 27). https://autoriteitpersoonsgegevens.nl/nl/onderwerpen/avg-nieuwe-europese-privacywetgeving. Retrieved from Autoriteit Persoonsgegevens: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/verordening_2016_-_679_definitief.pdf
Figay, N. (2017, 8 8). Linked Enterprises: from ArchiMate language to ArchiMate Web Ontology? Retrieved from https://www.linkedin.com/pulse/from-archimate-language-web-ontology-dr-nicolas-figay/
Gomes, H., Zúquete, A., & Dias, G. P. (2009). An overview of security ontologies. 9ª Conferência da Associação Portuguesa de Sistemas de Informação. Viseu, Portugal. Retrieved from https://www.researchgate.net/publication/228692638_An_Overview_of_Security_Ontologies/references
Mast, N. v. (2006). De zin van ambtelijk taalgebruik. In Rijksvoorlichtingsdienst, De taal van de overheid (Vol. 5). Den Haag, Netherlands: SDU uitgeverij. Retrieved from https://www.communicatierijk.nl/documenten/publicaties/2006/04/01/platform-5
Minister van Algemene Zaken, BVR-2013. (2013, 06 01). Beveiligingsvoorschrift Rijksdienst 2013. Rijksoverheid. Retrieved from http://wetten.overheid.nl/BWBR0033512/2013-06-01
NEN, NEN-EN-ISO/IEC 27001:2017. (2017, 03 1). NEN. Retrieved from https://www.nen.nl/NEN-Shop/Norm/NENENISOIEC-270012017-en.htm
Souag, A., Salinesi, C., & Comyn-Wattiau, I. (2012). Ontologies for Security Requirements: A Literature Survey and Classification. In E. Bayro-Corrochano, & E. Hancock (Eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (Vol. 8827, pp. 61-69). Cham: Springer International Publishing. doi:10.1007/978-3-642-31069-0_5
The Open Group. (2012). TOGAF 9.1. Zaltbommel, Netherlands: Van Haren Publishing. ISBN 978-90-8753-679-4
[1] (Mast, 2006)
[2] (Minister van Algemene Zaken, BVR-2013, 2013)
[3] (Europees Parlement, Algemene Verordening Gegevensbescherming (AVG), 2016)
[4] (Minister van Algemene Zaken, BVR-2013, 2013)
[5] (NEN, NEN-EN-ISO/IEC 27001:2017, 2017)
[6] (The Open Group, 2012)
[7] (Figay, 2017)
[8] (Gomes, Zúquete, & Dias, 2009)
[9] (Souag, Salinesi, & Comyn-Wattiau, 2012)