I had intended to dive straight into my PhD, follow my research proposal, complete it, etc. But for some reason my professor insisted on delivering cautionary tales. About how a PhD never turns out according to its initial proposal (even if he thought it was very good on another occasion). About how in my particular case and quest, I would have to look at other disciplines besides philosophy. About how I might as well take six years and never publish anything, as long as I enjoyed myself. Well … I think I understand what he meant – I rephrased his sentences several times and rechecked – but I am not at all sure why he said these things, other than that he appeared to want me to slow down. So I thought I’d think about it a little bit first.
Without throwing my research proposal at you (although it is here if you want to read it), I must admit that it is far more complex to explain why something does not happen (understanding) than why it does happen. Because, in order to explain what does not happen, you have to understand exactly what it is that does not happen that could have happened. A bit like how it takes much longer not to find something in your pocket than to find it. So, I embarked on some thinking-by-myself, in the wild, so to speak.
I had been worrying about two publications (well, books-n-papers) in my day-job field (information technology), and the worry would not leave me alone. So I decided to investigate further.
One is on Archimate. That is an open-source modelling language used by digital architects to make blueprints of business environments. It was developed primarily by Marc Lankhorst, but some other researchers were involved in connecting Archimate with its theoretical background, including philosophy of language, in this paper: Arbab et al (2015). Actually, the claim they make is not so terribly large, it seems. They refer back to the meaning triangle attributed to Morris (1946), which really is not by Morris at all, but goes back to Ogden & Richards (1923). Basically, it says that there is a relationship between things in the outside world and our thoughts, and that we connect the two using symbols. They then use symbols (in a modelling language) and thought/reference (the meaning of those symbols, presumably as expressed or understood by the modellers). Presto: meaningful diagrams. To be honest, there is not really very much philosophy of language in this theory. I have written to one of the researchers to ask if this claim should not simply be taken out – it eats no bread, as they say.
The other theory is by Jan Dietz and colleagues, called DEMO. It is a modelling and design language, conceived in the early 90s and still going strong. It claims (Ettema & Dietz, 2009) to be firmly rooted in philosophy of language, as opposed to Archimate, because it is based on a more-modern-than-Searle conception of speech acts, as advocated by Habermas, which envisages speech acts not just as information carriers but as coordination devices. Sounds much like Brandom’s normative inferentialism. Habermas’ insight into speech acts was not so different from the currently mainstream idea: speech acts are not just for passing on information. Speech acts also have a social component related to the speaker’s or hearer’s role – as a human being, as a member of one or more groups. It turns out Brandom and Habermas met and agreed on much, but also disagreed vehemently. I collected papers on the Brandom-Habermas debate, but there was no quick way in – and quite a few philosophers professed not to understand it either. Must be some fine point of philosophy, which I will return to if I must. But for the moment, I cannot imagine this controversy would have any impact on the conception of DEMO.
I must admit that I spent a happy evening tracing back all the theoretical components that are supposed to make up DEMO, and were fitted with big Greek letters accordingly. It had a distinct shopping-spree feel to it – a stack of theories from everywhere, incorporated because they seemed to fit, a sort of build-your-own-theoretical-foundations toolkit. Not in a million years would I be allowed to construct a theory on such a basis. But I find DEMO interesting because it seems to explain how an organisation can help itself to new facts, truths, whatever you want to call them. Which is exactly what we do in speech acts, when we talk to each other. So I have written to ask what DEMO’s attachment to Habermas is. I personally think there is none. Brandom would do just as well. Or even Grice, as nothing seems to be said about the motivation to coordinate actions. The point is, I think that in creating DEMO its authors may have understood specific felicity conditions for speech acts, and I want to find out how and what and where, because such notions may point to conditions for avoiding misunderstanding, even in a highly stylised environment such as a business. Also, the fact that DEMO thinks of language in terms of coordination and collaboration rather than information exchange is still quite revolutionary, even though it was developed nearly 30 years ago. I want to know where the ideas came from.
I was happy to receive a reply on both counts – invitations to talk further. Great. Meanwhile an idea has been brewing in my head. Might it be the case that modelling languages like Archimate and DEMO are in fact natural-language extensions? They are not mathematical languages, I am quite sure of that. They are not natural languages either; they are made with a specific purpose in mind. Their elementary concepts constitute an elementary grammar, plus the idea that whatever we want to happen (be it a process, a decision, an action or whatever) can be expressed in that grammar – i.e. stripping additional meanings, context etc. down to a bare, modellable minimum.
The other side of the problem is the relationship between computer commands and “truth”. I need to find the right academic sources, but I am pretty sure none exist. I cross-checked with Husband coz he has actually written machine code, where I hovered just above it in my RPG II. Code simply instructs the processor to load two values, compare them and then take some action defined by you. There is no truth to it in any philosophical sense, other than whatever comes out of the comparison and is taken as a starting point for action.
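The point can be made concrete with a toy example. This is my own sketch, not real machine code: it just mimics the load-compare-branch pattern that assembler or RPG II would express, to show that the “truth” involved is nothing more than a flag steering the next instruction.

```python
# A toy imitation of load-compare-branch (my own sketch, not real machine code).
# The processor knows nothing of truth: a comparison merely sets a flag,
# and the flag selects which instruction runs next.

def compare_and_branch(a: int, b: int) -> str:
    flag = (a == b)   # "compare": the result is a single bit
    if flag:          # "branch on equal"
        return "branch taken"
    return "fall through"

print(compare_and_branch(2, 2))  # branch taken
print(compare_and_branch(2, 3))  # fall through
```

Whatever philosophical weight we attach to the outcome is supplied by us, outside the machine.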
Just to make sure I don’t miss anything obvious, I have also been reading up on philosophical truth in all its variations. I found that Habermas was a staunch supporter of the consensus theory of truth: whatever a specified group believes to be true, is true. There are other theories – correspondence, coherence, constructivist, pragmatic; and then there are the so-called deflationary theories, which say that truth is not a property of statements. I was surprised to learn that Strawson (1949) had proposed a performative theory of truth, which characterised truth as a property of the speaker’s intentions, in response to Tarski, who introduced the concept of an object language to solve the liar paradox. Sounds like an early beginning of speech act theory to me!
An open door, yes: always a good idea, to think first. And yes, thinking is what philosophers do. Or they think about what others have thought. Or both. But that is not the kind of thinking that I mean.
I am in search of a documentation and retrieval system for my thoughts and notes and everything that goes into future publications, PhD, whatever. I suppose having spent most of my working life in information science, I have a penchant for organising (although this does not extend to my very large, ever messy desk).
The problem is, Philosophia must have the largest number of digital illiterates of any discipline. The modus operandi does not seem to have changed in thousands of years: reading books and holding forth. Ok, so the reading may be on a computer and the holding forth via Zoom – but that is as far as innovation has touched philosophy. In fact, philosophers seem to think that because their thinking led to the Computer (see Aristotle introducing it below), they are under no obligation to do anything with it. That is for the people. Somewhat like how the policy makers in The Hague look upon us civil servants to do the actual work (sorry, the day job crept in again).
I started off making mind maps. This is something I really like doing, and I found it worked well with short essays of the type that we were set during the year-long skills seminar. One of my professors once saw one of my maps and professed total admiration – how had I done that? I did not have the heart to tell him that children get taught this at school nowadays, and that there are many, many people who are much better at creative mind mapping than I am. Which I love, coz I love infographics – see here. But the problem with mind maps is that they become incomprehensible when they get larger. So much so that I would find a really nice mind map of some complicated problem on my hard drive and think: “now where did I get this from?” Only to remember a while later that I had made it myself – some weeks before. Ok, so not a good memory aid, that much is clear.
Zotero and Calibre
I had a look at my citation manager, Zotero. Which is a nifty program, and getting better all the time. It can do a lot (see here), but beyond storing articles in different collections, it cannot help me much. This is because all of its features, like tagging and grouping and linking, operate at the level of the individual paper, whereas I want to organise the text content. Calibre is not an option either – it is wonderful for organising ebooks, but it cannot even handle papers properly.
Next I started to create my very own Wikipedia. The software behind Wikipedia is called mediawiki, and I managed to install it in a subdomain of this blog. I then spent an inordinate amount of time tweaking the installation and teaching myself how to use it, how to create boxes, use colours, deploy pre-programmed templates and generally make my DIY wiki look pretty and interesting. Then I set about filling it. I spent a long time thinking about the categories (mediawiki-speak) under which I would organise my information, and eventually came up with this: Philosophers – Positions – Arguments – Topics – Definitions. Two features of mediawiki really helped: hotlinking to another page and relating any page to one or more categories. I started pouring stuff into my wiki, but gradually slowed down. The problem was that wiki pages are great to make – once you have finished with the material you want to organise and you know exactly how. By the time I have worked all of that out, it is time to move on to the next paper. I concluded I needed something that would help me create an organising system on the fly.
Next came my love affair with Atlas.TI. I got the idea from architectural modelling in my day job. I even used Archimate once to model autopoietic enactivism. Atlas.TI is much more flexible than Archimate; it is really quite wonderful. I got it for next to nothing in the student webshop, and later found out that the university distributes it for free. I also bought a competing program, MAXQDA Plus. Both programs help you to annotate texts and then organise the labels into a scheme for easy retrieval and analysis. The philosophy behind them is different – there is a good article here explaining how MAXQDA is based on qualitative content analysis, whereas Atlas.TI allows you to find patterns in a text using your own coding system – more akin to grounded theory. Atlas.TI seemed to fulfill all my needs – for a while. I was so happy with it, I even bought an upgrade to version 9 because I could not wait for the university to supply one for free (which of course by now they have done). Below you can see how it works. You annotate a paper through codes. Codes can be reused across a project – this project contains 135 papers.
You can then use the codes to create network diagrams, like so:
I have done some really nice analyses with this tool – for a presentation in my ReMa course in Amsterdam on advanced language and logic, I worked out how my professor’s articles are related to various concepts he has investigated, and also for the last Philosophy of Mind seminar, when I investigated how various philosophers used different words for common ground. I also used the Atlas.TI network diagrams in my research logbook, which really looked wonderful. The only problem was that after a few months, I could no longer read the complicated diagrams I had made myself – well, not without rereading the entire paper, which sort of defeats the point. The other problem was that Atlas.TI is not really geared toward the kind of use I make of it, nor do its makers plan to change that. The autocoding feature (which allows for automatic coding throughout a large set of documents) does not work in reverse. That means that your coding system has to be ready before you start reading the papers. Ugh. Same problem as with mediawiki. Yet another problem is that it cannot handle more than, say, 50 documents or books in any one project at a time, and you cannot interrelate projects or their coding systems. So alas. I wrote to Atlas.TI a couple of times, hoping to hear that they would build in the features I need, but they won’t – text annotation is not their core business. Pity.
Yet another search for philosophers’ tooling yielded a surprising result: one philosopher actually created his own tooling for dealing with philosophical research: organising and retrieving philosophical statements, knowledge, insight. Quite impressive, a philosopher-cum-programmer. The software impressed me as well. Until I tried to use it. I watched all of the instruction videos several times (there is no manual), but was not able to distill a workflow that was right for me, and the look-and-feel of the program felt awkward. Also, I did not like the manual Zotero integration much. But my main worry was with becoming dependent on the author. Yes yes, the database is all readable XML, but I am not a modern programmer – I can just about manage procedural programming (only in my sleep, as it is a long time ago that I actually did any programming), and I positively loathe the object-oriented stuff. So what would I do with a bunch of XML files? Too many worries. I needed something else.
Obsidian, my second brain
I had seen references to Obsidian before, but ignored them. Mainly because I did not know what markdown was, so I could not imagine why anyone would be interested in organising a bunch of markdown files, however prettily. But as it turns out, markdown is just plain text plus. Since its inception, many different versions have appeared, but they are all HTML-convertible and will be readable as txt files forever. Obsidian gives me all of the advantages of mediawiki without the disadvantages. It is fast and flexible. It integrates with Zotero. I can link and tag notes and files. I can edit files on my PC and on my mobile devices, using iCloud. The only thing that is missing is being able to publish to a website (other than via the paid version). But that will come, I am sure.
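To illustrate what “plain text plus” buys you: Obsidian connects notes through double-bracket wiki links, and because the files are ordinary text, any little script can read the link structure. A minimal sketch – the note text and the regex are my own illustration, a simplification of what Obsidian actually parses:

```python
import re

# Obsidian-style [[wiki links]], optionally with an alias after a pipe.
# This regex is my own simplification; Obsidian's real parser handles more cases.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

note = "Compare [[Habermas]] on speech acts with [[Brandom|normative inferentialism]]."
print(WIKILINK.findall(note))  # ['Habermas', 'Brandom']
```

Because the notes stay plain text, this kind of machine-readability will survive any particular tool.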
I started out with a work problem – a huge text that needed cutting up, the ISO27002 guidelines. This I needed to do anyway, so it seemed a good place to start. And yes, I was able to deconstruct the document and then put it back together again, although the learning curve was a bit steep – as it always is with these things. I will write up a post on the configuration(s) I arrived at and publish it on the thinking tools page, at some point. See below for a snapshot of my folder structure, and a graph based on the word enacted.
Obsidian has a great online support community and extensive documentation. There are many plugins. It also integrates with Zotero. Cannot wait for the new Zotero release! Anyway, I think Obsidian may be it for me. The hierarchical tagging system is particularly helpful, because of another problem which I will describe next.
Towards a metamodel or a taxonomy of philosophy
Actually, there are not that many who have tried. There are a few lone papers. This one is the best I came across: Grenon, P., Smith, B. Foundations of an ontology of philosophy. Synthese 182, 185–204 (2011). https://doi.org/10.1007/s11229-009-9658-x. It did not receive much attention. But I thought I’d try and model the ontology component they recommended in UML (a nice reason to update my Visio license), as a data model.
The problem is in the centre part. Grenon and Smith assume two things:
a) concept, proposition, argument, theory and method are disjoint
b) the philosopher’s workflow goes like this: think up a concept, propose whether it is true or not, supply argumentation, and then develop a theory using a method.
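Just to see what that assumed workflow looks like as a data model, here is a minimal sketch in Python dataclasses. The class names follow Grenon and Smith’s five categories, but the attributes and the nesting are my own guesses, not their ontology:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Concept:
    name: str

@dataclass
class Proposition:
    text: str
    concepts: List[Concept] = field(default_factory=list)

@dataclass
class Argument:
    # A premise may itself be the conclusion of another argument:
    # this is exactly where the nesting can go on ad infinitum.
    premises: List[Union[Proposition, "Argument"]] = field(default_factory=list)
    conclusion: Optional[Proposition] = None

@dataclass
class Theory:
    name: str
    arguments: List[Argument] = field(default_factory=list)
    method: str = ""

# the assumed workflow: concept -> proposition -> argument -> theory + method
c = Concept("speech act")
p1 = Proposition("Speech acts carry information.", [c])
p2 = Proposition("Speech acts coordinate action.", [c])
a = Argument(premises=[p1], conclusion=p2)
t = Theory("DEMO-ish", arguments=[a], method="modelling")
```

The comment in `Argument` flags the catch: nothing stops a premise from being a whole argument in its own right.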
Unfortunately, philosophers agree on neither. They are not even agreed on the meaning of terms like concept or theory. And there is the additional problem of nesting – ad infinitum. I was miserable when I saw this. And then I thought: who cares that philosophers don’t agree on their definitions or way of working? This is about my work, and I can define terms and workflows in whatever way I want. And I do want, because I need to store and retrieve.
So I decided to organise philosophers into single authors, groups and main fields (branches). I also have terms, topics, theories and approaches. See the image below. For my folders, I use the Johnny.decimal numbering system (which I have also started using at home and at the office). Folders and some notes are displayed on the left. I use aliases for my notes (at the top, with the metadata) so I can refer to them in different ways (with or without capitals, etc). I use a hierarchical tagging system, shown on the right.
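For anyone curious how the Johnny.decimal numbering works: an ID like 12.04 encodes area 10-19, category 12, item 04, so the number alone tells you where a note lives. A little parser as a sketch – my own code, based on the publicly described convention:

```python
import re

# Johnny.decimal: AC.ID, where A = area digit (area 10-19, 20-29, ...),
# AC = two-digit category, ID = two-digit item. My own sketch of the convention.
JD = re.compile(r"^([1-9])(\d)\.(\d{2})$")

def locate(jd_id: str):
    """Split a Johnny.decimal ID into (area, category, item)."""
    m = JD.match(jd_id)
    if not m:
        raise ValueError(f"not a Johnny.decimal id: {jd_id}")
    a, c, item = m.groups()
    return f"{a}0-{a}9", f"{a}{c}", item

print(locate("12.04"))  # ('10-19', '12', '04')
```

The nice side effect is that folders sort themselves: the numeric prefix fixes the order in any file browser, on PC and mobile alike.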
I have already harvested my best essays into this system and am now in the process of harvesting whatever may be useful from my DIY wikipedia before I close it down forever. Let’s hope Obsidian will support me through the next few years – I am hopeful. I also enjoy watching the videos Tall guy Jenks makes. He is a self-proclaimed ADHD sufferer, and he says the only way he can live and work is by outsourcing his information management as much as possible. Wow. I suppose the same is true of me, but in my case coz of a not-so-young-anymore memory and too many things to do in a day.
I started this series of posts in November 2018. Quoting Robert Jordan: there are neither beginnings nor endings to the turning of the Wheel of Time. It was a beginning. This is an ending.
On September 8th, I officially received my Master’s diploma in Philosophy of Language and Logic. Here it is.
I was persuaded to come and get my diploma in person because my supervisor would say a few words, which, according to him, would be fun. I was actually quite curious about what he was going to say, and reckoned I should be able to listen to those few words without blushing too much. No, I do not like being on stage. I always do these things by post. But this time – well, I went. And it was great.
Obviously, in Corona time there was no big meeting at the university – the graduation ceremony was arranged in a Nijmegen café. The university had really tried to make things festive – with drinks and snacks afterwards.
It was a small party – there were only three candidates, including me, and we each got a speech from our own professor. Husband videoed mine. I won’t post the video here, as I have a feeling my supervisor perhaps does not like his pictures all over the internet – I can find very few good photos of him. But it was a great story, which went on for well over 10 minutes. Basically, he had built a story from when I first appeared at his kitchen table, haltingly trying to explain what I wanted, up to my recent research proposal. Apparently – so he said – I am a joy to teach and amazingly hard working. And very determined to achieve my goal – he had imagined that I had got sidetracked during my many essays (I really got into philosophy of mind and evolution of language), but in my research proposal I returned to base. I was touched that he had put so much effort into this speech, and found so many nice things to say about me. Compliments are rare with him. This will probably never happen to me again, so I will treasure the memory. What I will also treasure is the look on Husband’s face. He was so obviously proud of me. Sigh. Such an occasion.
Afterwards we decided to go and try a new restaurant – very modern: no menu, expensive and beautiful. We ate outside. It was a lovely meal on a lovely evening at the end of a lovely day.
Since then, I have been formally enrolled at the Radboud as a PhD candidate. I am an external candidate, which means that I am formally employed by the university with an annual leave of zero days and a salary of zero euros. On the bright side, if I ever complete my PhD, there will be a contribution to the printing costs. Plus I don’t need to teach. I have never been so happy with a job that pays nothing 🙂 Because it is not nothing. It means full access to books, papers, my supervisor, the research group, and obviously the PhD program itself, which I will have to get to know.
The research master is completed. Done. And dusted. But my, what a lot of dust!
Before I embark on that “dust”, let me express my profound gratitude to the universe, teachers, friends and colleagues and, above all, Husband, for guiding, supporting and generally putting up with me during these years. Can’t have been easy ❤️ It also really took it out of me: this was a full-time 2-year degree which I did whilst being employed full-time, and in less than ideal personal and work circumstances. Which I only managed coz I absolutely loved every minute of it, even the first 6 months when I was scared stiff my brains were no longer up for this kind of battering. Or that the real (young) students would laugh at the old bat 🙂
Now why do I lay this on so thickly? Coz I don’t need to convince you; most of you know this. Well, it is the “dust”. Let me explain. My supervisor had expected the rounding off of my thesis to go smoothly, and hence, so had I. But something entirely different happened. During my thesis “defense” I ran into a breed of academic that I had not encountered before. A well-respected researcher specialising in at least half of the stuff my thesis was about. Seriously, I have read quite a few of his papers, and he is good. He was brought in as my second examiner. During the 50-odd minutes I had to defend my thesis, this second examiner held the floor for well over half an hour, attacking me on points of form and method. He seemed to think that a particular method – a mechanism I had employed merely as a source of inspiration to construct a research paradigm – should have been used to “prove” a specific phenomenon. A parallel with the outside world: that is like the difference between devising a business strategy versus writing out the technical specifications of an IT system: totally different things. He must have understood some of that, because he branded me a “generalist” and himself a “specialist”. As if one excludes the other. Now I think about it, it did feel a bit like the day job: explaining security policy to an infrastructure whizzkid, just as, I suppose, it was once explained to me. Anyway, this guy did not ask me one genuine question. Not one. Instead he seemed to be arguing a point, saying things like “just as I thought”, “what do you actually think philosophy is” and “just to prove my point”. I was flabbergasted. Was this an examination? I pointed out that I had adopted a problem-solving approach to a difficult issue and that it had yielded results. According to him, the reason the problem had not been addressed was that it could not be solved – otherwise it would have been solved, by someone else, was the implication. Wow. A street fighter.
It took me a while, at least 15 minutes, to understand what was happening, and even then I could not believe it. I have been an examiner myself at some points in my life. To my mind, what this guy did was unprofessional, nothing to do with a neutral assessment of someone’s knowledge and work. But I was too taken aback to say so. Thankfully my supervisor interrupted a couple of times with questions of his own, allowing me some time to regroup. He also made remarks about a third examiner, and how my end grade would be a weighted average, and that he himself had hoped for a different outcome – even that my work had given him some new insights. It took me until the next day to deduce that there must have been a disagreement between examiners, which by procedure is the only reason a third examiner is ever called upon.
So how did this end? Well, the damage is not too bad. My final score drops by .25 point to an 8.2, coz my thesis got a 7.5. So no big deal. I still get cum laude, says my diploma supplement. But I am annoyed. No, not annoyed – upset. Sad. It is not about the grade – if I had been asked why I had used that particular approach in my thesis and they had not agreed with the answer, that would have been fine. But this is not what happened. This little man, whom I do not even know, single-handedly spoiled the very last event of my research master by embarking on some kind of personal war, without clear reason, without regard, completely out of the blue. What on earth could have made him so angry? It cannot have been the idea of a mechanism for enactive cognition, coz I once wrote a paper on that which was marked by his co-researcher and got a high grade. I suppose I did argue that, in philosophy of language, a broader view should be taken than is generally taken by individual, specialist researchers. I did propose, from my model, new issues to research, or to research differently. But that is not something to get angry about, is it? I suppose the real reason will have to remain a mystery. I have decided that I will not lodge a complaint, because I can see that university procedure was followed meticulously. I will also not bother my supervisor with this, because I can tell he already did what he could. But I will, in my student evaluation, suggest that the university amend the Research Master thesis-defense procedure to allow for situations like this – to give the student a chance to be informed of the objections of a second examiner in time to defend or amend. It might not do any good, coz this may be a rare case, but at least I will have voiced the issue.
What have I learned? Well, as my friend Teja has told me a thousand times, a university is not always a safe place. And as Husband often reminds me, I tend to be too trusting. Right. Wake-up time, and let’s be grateful I learn it now and not years hence. This kind of situation certainly is not a risk I will run for my PhD. I will find some other medium to express interesting stuff, and stick to the well-worn approach for university work. I will also make sure that I connect with every professor judging my PhD research well in time, until I am sure that I have explained myself sufficiently and ironed out any creases. Which also means that I have to find another way and environment to express my more daring thoughts and interesting notions. Not at work, though, coz that runs into the same-but-different problems 🙂 I will think on this, maybe do a series of short papers or publish informally on some blog or medium or other. Do drop me a line if you have ideas. I do want to voice my newly found insights on the crossroads between language, security and IT – there are so many old, even obsolete, philosophical theories still believed in by the general educated public that I itch to dispel some of them and replace them with something more productive. Such as the idea that using language is no more than stringing together words that derive their meaning from referring to an entity in the world. That is an old idea, introduced by Frege, but even he gave it up towards the end, as I found to my surprise in my little Frege adventure. Yet everyone in IT seems to cling on to this idea as if it were a religion. The reverse also happens: Ashby, a psychiatrist who pioneered cybernetics in the previous century, invented the “homeostat”, and with that the idea of a double feedback loop in status tracking – the basis of every quality control and assurance programme in existence today. There must be millions.
In philosophy, this concept is not discussed often or not in connection with language, which I think is a pity. But my research will change that, I hope ;-), or at least throw a crumb in that direction.
So what is next? Well, here I post links to my thesis and to my research proposal. I am proud of both. There are summaries of both of them, so please don’t feel obliged to read them in full (I will never question you about them, honestly). But some of you seem to want to read these, so needs must.
If things go as planned, I will start my research in September. It should take 4 years full time, but who cares if it takes 7. I want to finish it before I get pensioned off, though. Through the day job, I have managed to secure positions both on the board of the international committee creating norms on information security and on the national governmental board of professionals directing compliance with said norms, so I am exactly where I want to be to run the research I want on both “sides” of the normative coin. Also, coz I am nearly 60, I now get a day off every Monday. Great employer, eh? The universe smiles, I suppose. Let’s go for it. The old bat is up for it 🙂
And so, dear diary – things are finally coming to an end. A temporary end, from which things progress further. As Robert Jordan wrote: there are neither beginnings nor endings to the Wheel of Time.
Yes, I am in contemplative mood. Or just exhausted. Same difference. You know that feeling, when you just keep going, that feeling of plodding on, through imaginary wind and snow and rain, until yesterday and tomorrow blur into each other?
I think these years I must have reached my very own Olympic peak in “compartmentalising” – the little trick I taught myself when I was very young, to separate out the things that needed living through in time, space and attention. It is a simple process: just fit every task, every emotion, every thought into its own little pigeonhole and set the alarm for when it must be opened. And shut again.
Occasionally my human mind will seize control, but nothing that cannot be crowded out by listening to a favourite audio book (yes, Scandinavian noir). Only just before I fall asleep do the pigeons fly out to where I cannot catch them, and I murmur about it to uncomprehending Husband before my power supply goes dark. A more sensible person than myself might think it time for a holiday.
I went a bit over the top with my pigeonholing. For the past couple of years I have forced myself to do one thing at a time, and one only. Every day and every moment. Well, admittedly there is that bit of working through my mail backlog whilst participating in some digital work conference that is beyond slow but that I am supposed to listen to patiently (I am not allowed to peel potatoes secretly anymore coz it has my colleague in stitches). But mostly, I stick to one task at a time, for multitasking is not a good idea if you want to do something well and you don’t have the time or the opportunity to re-do it. Alas, some tasks will not wait for the next pigeon to fly out. So I find myself setting my phone alarm for the oddest things. Check the rising of the bread rolls in 10 minutes. Call Son to remind him of something or other at 9:00. Complete the home-delivered shopping by midnight on Wednesdays. Stop studying by 22:00 and have a drink with Husband. I even have a winding-down alarm reminding me I should be preparing for bed in 45 minutes.
So, what has come to an end? Well, a couple of things. Some of them are a bit too personal to share here – you know, the sad stuff that happens to all of us eventually, and seems to happen a lot more frequently as we get older. Let’s say I won’t be sending this update to as many subscribers as I did before.
But one momentous thing I must share with you. I have officially ended middle age and am now an old crone. Officially, because as a civil servant I can trade in some of my holiday leave plus a tiny portion of my salary in what is called a PAS scheme (no one knows what the letters stand for) to get one full day off every week. Is that not incredible? I suppose they will soon throw it out or delay it, because the pension age used to be 65 but got extended, by nearly 2.5 years in my case. So, I still have another 8 years to go, but on a 4-day work week. Wonderful. That gets me three full days a week for uninterrupted studying. And all it took (because I could have applied for it last year already) was to swallow my pride and admit that I am, well, not so young anymore, perhaps. Maybe.
What else has happened? Well, yes, finally. I handed in my state-of-the-art paper, my thesis and my research proposal. It was a frightening thing to do, coz once you have handed it in, then what? Wait. Bite nails. Wait some more. Actually, I wrote three state-of-the-art papers. One on the topic that my supervisor (the loveable grump) suggested and I gave up on. One on what I wanted to research. And a final version, added as a glossary to my thesis, coz I decided to review three different theories and one cannot expect the examiners to be familiar with all of them. State-of-the-art papers don’t get graded, just pass or fail, and apparently I passed. Today my supervisor wrote to me and said he really liked my research proposal. Which is great, coz that will be my job description for my unpaid PhD-candidate job for the next few years (just as well I got a day off from work). A whole new road ahead (yes, mellow yellow).
Which leaves my thesis. It was already reviewed once, and deemed of sufficient academic quality, but I was advised to make it “easier to read”. And could I not plug in a few examples? I was surprised. This is the kind of comment I might get at the office, but surely not here, with all these super clever academics? My supervising professor laughed outright when I said that. But anyway, I got the point: academics want to be catered to even if they pretend they don’t need earthly comforts like summaries and the like. Also, it turned out I had not used the APA referencing system quite correctly, and I had to flesh out my research question a bit – in short, I have just handed in a second version. Hopefully it will be ok. And then the procedure starts – well, it has already started. The examiners (there are three, my supervisor included coz he happens to be chairman of the examination board) receive both the thesis and the research proposal by June 20th, and once they have read it all, I get a chance to defend it semi-publicly. When I received notice of this, I wondered if I would have to wear a gown and cap (I still have mine though they might be moth-eaten), but when I consulted another student, it turned out I did not. Just as well I asked, imagine how silly I would have looked 🙂
So what did I write about in the end? I will post the documents on this blog once they are approved. The thesis, or publishable article as it is officially called, is about 11,300 words excluding glossary and bibliography, so you might have better things to do considering Spring has just arrived and corona measures are finally being lifted. But basically, what I did is try and work out what philosophy of language is about these days, and how to further it in a sensible manner. It turns out that the field was invented some 150 years ago, and developed in roughly four phases which nobody bothered to document very clearly as they were too busy killing each other. My previous degree did not go beyond phase 1, which explains why I had to work so hard to understand what was going on and who was arguing what against whom. Here comes the abstract of the thesis, which is called “it is in the singing, not in the song” – a multidisciplinary approach to language use:
Language, as a social practice, involves abilities not specific to language, e.g. agency, attention, interaction, perception, memory and inferencing. Philosophical perspectives on language use can be enriched by integrating research from cognitive psychology and philosophy of biology. To show how this may work, I outline three ‘rebel’ theories – autopoietic enactivism, cultural evolutionary psychology, and normative inferentialism – against a general background of the evolution of language. These can be combined into one levelled framework if we assume cognition to be normative and embodied, and to be constructed out of old animal parts. Two central processes impact all levels of the new framework: normative regulation and identity-generation. Suggestions are made for further research based on predictive processing.
And the abstract for the PhD research proposal, which is called – yes: “the art of misunderstanding”. It is not just armchair philosophy either: I get to enjoy myself doing a nice bit of empirical research on my colleagues.
Speech act theory offers a central insight: utterances do not just convey meaning, they are actions that assert, request, warn, promise, invite, predict, offer, direct, etc. In conversation, we generally recognise speech acts automatically and correctly, and almost as soon as the other starts to speak. But in some situations there is a problem. With regulatory texts on specific subjects, even the experts frequently disagree about exactly what responsibility these texts confer on whom. I propose to show that in these situations, the misidentification of speech acts is a major source of confusion; that the author(s) and audience have different interpretations of what speech acts are contained in these texts, and what the normative dimensions of these speech acts are. These findings will be interpreted in the context of Brandom’s normative inferentialism, and against the background of the cognitive theory of predictive processing, both sharing a notion of common ground and scorekeeping. Together, these theories may be combined to provide a framework for normative agency and interaction, of which speech acts are an instance. From this combination of philosophical insights and experimental findings, I aim to provide recommendations to improve understanding of regulatory texts on information security.
I leave you with a picture of one of my favourite flowers: buttercups. I adore meadow flowers like daisies and poppies and buttercups. Find yourself a field full of them to roll around in – I certainly will.
Next time I will tell you about the thesis itself, and what fascinated me about it, and what I discovered and how all of that is related to shades of glorious yellow. And after that, I will be talking about the research project – which will take me four years, so plenty of time for that.
This is a line from Hamlet. Which I did not know about before I moved to England, coz I heard the Hair version first. Remember Hair? I suppose I am showing my age, but this was from a time when musicals were fresh and new and often provocative. I remember being mesmerised by the German version of “Hair”. No idea why. Anyway, the Hair musical also does “what a piece of work is man” as a song. It is here, if you want to listen to it. I now prefer the Hamlet version, which is here. No need to listen to all of it, just the bit after 50 seconds. This is the full quote:
This type of thinking has a name: anthropocentrism. It is when we place the human on a pedestal at the top of Creation and say: wow-wow-wow. We humans are clearly made in God’s image, more clever, more adept, than any other living creature. It is also when we look at the rest of nature and judge it by our own standards, look at animals through the glasses of our human understanding or try to understand the universe in terms of intentions and rationality and whatever else we thought up to provide a flattering backdrop to our performance as masters of the universe. Or, as the sheep end up chanting in George Orwell’s Animal Farm: “Four legs good, two legs better!”
I suppose you think I might be laying it on a bit thick. But I am not, really. In philosophy there has long been a sharp divide between creatures that think (us) and creatures that don’t (the rest), and it was assumed that the difference is somehow fundamental and significant. Humans think. Humans have language. Humans make tools. Humans create. Humans plan. These abilities are so fundamental to our self-image, we think they must be hard-wired, in our genes, transmitted through natural selection because these are the very characteristics that have put us in our position as overlords of creation. QED.
Right? Wrong. Or at the very least not self-evident. This is one of the issues that lies at the root of the so-called analytic-continental divide. You might remember an earlier post about this, when I explained I never knew about the so-called divide until I started this ReMa course. In England they firmly held to the belief that anything continental was, eh, continental. Like they had “intercontinental” phone booths, for calling from one continent (the UK) to another. That was 40 years ago, before anyone ever dreamt of Brexit. But I am digressing.
The man-as-superbeing story is a nice story, of course, comforting and safe. I was told it as a child by all the grown-ups, so it must be true. Probably, so were you. So why should it suddenly be all wrong? Well, think about it. The earth is about 4,500 million years old, with single-cell organisms starting to emerge around 3,500 million years ago. The first invertebrates appear around 700 million years ago, fishes at 500 million years ago, and plants even later. The first mammals arrived around 200 million years ago. From these, we get primates, then great apes and eventually humans. If you put these data on a 24-hour clock, you can see that we humans arrive in the last one and a quarter minutes.
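The arithmetic behind that 24-hour clock is easy to check for yourself. Here is a quick sketch, using the round figures from the paragraph above; the 4-million-year date for the hominin lineage is my own round assumption, picked only to illustrate the scale:

```python
# Map "millions of years ago" (mya) onto a 24-hour clock where the
# Earth forms at 00:00:00 and the present moment is midnight.
EARTH_AGE_MY = 4500  # age of the Earth, in millions of years

def seconds_before_midnight(mya):
    """How many clock-seconds before 'now' an event falls."""
    return mya / EARTH_AGE_MY * 24 * 3600

events = {
    "single-cell organisms": 3500,
    "first invertebrates": 700,
    "first fishes": 500,
    "first mammals": 200,
    "hominin lineage (assumed)": 4,  # assumption: ~4 million years ago
}

for name, mya in events.items():
    minutes = seconds_before_midnight(mya) / 60
    print(f"{name}: {minutes:.1f} minutes before midnight")
```

Mammals turn up just over an hour before midnight, and the hominin line in the final minute and a quarter – which is where the figure above comes from.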
So what? you may think. Well, it takes time for natural selection to work on genes. It has taken a very, very long time for complex life forms to evolve. It is therefore very unlikely that human abilities appeared out of the blue, by lightning or some other external event. It is much more likely that our special cognitive abilities are built up from old stuff, from building blocks that we share with other forms of life. This is what Cecilia Heyes says. She is a philosopher-psychologist or psychologist-philosopher and she explains it all very well in her book “Cognitive Gadgets”. I am a big fan. If you don’t want to read the whole book, read this article in Aeon.
For my thesis (yes, I am writing it), I am trying to connect up three theories on an evolutionary continuum:
- Enactivism, based on biological evolutionary theory, Di Paolo-style
- Cultural evolution, Cecilia Heyes style
- Pragmatist philosophy of language, Brandom-style
The idea is to recognise basic patterns in the emergence, use and development of cognitive abilities. Any pattern that exists as an evolutionary ground pattern need not be explained again at the human level. So let’s go down the evolutionary path and start at the bottom. With something really simple, like one-cell organisms.
The first problem that needs solving is how to decide when something is alive. Biological theory (well, the variety invented by Maturana and Varela back in the 1970s, which developed into enactivism) says that any living cell will keep itself alive in two opposite ways: by shutting itself away safely behind a membrane and by interacting with its environment (bacteria will move towards a source of sugar in the water). All living organisms switch between those two modes of isolation and interaction. This switching is regulated through active homeostasis.
Homeostasis involves having an extra process on top of the primary process, one which monitors the situation (compares it against a threshold) and responds when something needs doing. Your home thermostat is a good example. The idea of homeostasis is also hugely popular in the real world: quality systems are inevitably based on (at least) double-loop regulation. Little do they know that the concept was, well, I suppose not discovered, but formalised by Ashby in the 1940s. Now obviously thermostats and quality systems are not alive. Their homeostasis is not active, not self-regulating, not geared toward survival. Active homeostasis causes the organism to take steps to stay alive, i.e. to regulate its isolation and interaction in such a way that it will survive not only now and today, but also tomorrow. Is this scientific? Well, yes, in the sense that it is a very basic and clean formulation of what life is: striving to survive, to be free of the imminent danger of the internal system running down or the environment taking over, and taking steps to ensure that survival.
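If you like code better than prose, the double loop is easy to sketch. This is a toy of my own making – not Ashby’s formalism, and the 80% “duty” threshold and 0.5-degree adjustment are invented numbers, there only to show the two loops stacked on top of each other:

```python
class DoubleLoopThermostat:
    """Toy double-loop regulation: not alive, not active homeostasis!

    Inner loop: compare temperature against a threshold (the setpoint)
    and respond - plain first-order regulation, like a thermostat.
    Outer loop: monitor the inner loop itself and adjust the setpoint
    when the goal proves unrealistic - regulation of the regulation.
    """

    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint
        self.heating_on = False
        self.duty_history = []  # 1.0 when the heater ran, else 0.0

    def inner_loop(self, temperature):
        # First-order: compare against the threshold, respond.
        self.heating_on = temperature < self.setpoint
        self.duty_history.append(1.0 if self.heating_on else 0.0)

    def outer_loop(self, max_duty=0.8):
        # Second-order: if the heater ran more than 80% of the last
        # 10 ticks, the setpoint itself is the problem - lower it.
        if len(self.duty_history) >= 10:
            duty = sum(self.duty_history[-10:]) / 10
            if duty > max_duty:
                self.setpoint -= 0.5

thermostat = DoubleLoopThermostat(setpoint=25.0)
for _ in range(20):
    thermostat.inner_loop(temperature=10.0)  # a room too cold to reach 25
    thermostat.outer_loop()
# The outer loop has lowered the unrealistic goal from 25.0 to 19.5.
```

The point of the toy: the inner loop only ever asks “am I below the threshold?”, while the outer loop regulates the regulator itself. That second loop is what quality systems borrow; what it still lacks, of course, is any stake in its own survival.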
The next question is about how organisms get involved with their environment and each other, and become adapted to that interaction. General systems theory offers a basic pattern for this, with a little tweaking: the idea of operational closure. A terrible term for something rather interesting: the living system will try to stay alive by extending into the environment. This is done by moving its boundaries. Literally. The inside of the living organism consists of linked-up processes (the black circles). You may think of it as the inside of a cell or a bacterium. Notice how the internal processes interconnect to form a whole: the total of connections between the linked processes constitutes its border. Also note how operational closure does not require a structural barrier with the environment: it is not organisational closure. Nor is it interactional closure, because the interconnecting processes of the organism may simultaneously interact with the environment. Now the organism may further extend its borders by drawing in another process, by connecting its in- and output to the existing connections. It is not difficult to see how operational closure, in combination with active homeostasis, will allow a group of cells to work together and eventually specialise, and so on, into the evolution of that organism, simply by trial and error, and without recourse to genetic change. What I find interesting is that the identity of the living system is not fixed but fluid, indeed self-generated: by drawing in new processes (or abandoning them), the living system changes not just its structure but also its identity. What a wonderful modelling problem 🙂 I think the business architects in my day job would file for mass retirement if they knew.
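For any architects reading along, here is one toy way to model it – entirely my own sketch, not a formal definition from the literature. A system is a set of processes that consume and produce resources; it counts as operationally closed when every process both feeds and is fed by the network, while inputs from the environment (the sugar) remain perfectly allowed:

```python
def operationally_closed(processes):
    """True when every process is woven into the network: at least one
    of its inputs is produced within the set, and at least one of its
    outputs is consumed within the set. Inputs from outside (e.g. the
    sugar) are allowed - closure does not forbid interaction."""
    for p in processes:
        others = [q for q in processes if q is not p]
        fed_by_network = any(p["inputs"] & q["outputs"] for q in others)
        feeds_network = any(p["outputs"] & q["inputs"] for q in others)
        if not (fed_by_network and feeds_network):
            return False
    return True

# A minimal metabolic loop: A feeds B, B feeds A.
cell = [
    {"name": "A", "inputs": {"sugar", "y"}, "outputs": {"x"}},
    {"name": "B", "inputs": {"x"}, "outputs": {"y"}},
]
print(operationally_closed(cell))  # True

# "Drawing in" a third process whose in- and outputs hook into the
# existing connections keeps the system closed - but it is now a
# different system: its border, and so its identity, has changed.
extended = cell + [{"name": "C", "inputs": {"x"}, "outputs": {"y"}}]
print(operationally_closed(extended))  # True
```

A dangling process that connects to nothing in the set would break closure, which is exactly the fluid-identity point: the system just is its web of connections.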
The ideas of isolation/interaction, active homeostasis, operational closure and identity-generation are fundamental to what is called autopoietic enactivism, the first of three theories I am lining up. The important thing is that if a bacterium actively keeps itself alive through these basic biological patterns, there is no need to explain the why of this behaviour at every level of creation. Nor – and this is important – do we need to assume it requires consciousness or thought or intentions or any other specifically human machinery. We also do not need to ask how the bacterium knows how to respond to the outside world. It does not know. It merely tries to stay alive, i.e. keeps its processes below the robustness-threshold of its homeostasis by whatever means available to it. It is easy to see natural selection operating on the survival of the fittest bacterium.
With the emergence of more complex life forms – perhaps fitted with a nervous system – the history of successful interactions will start to become important, allowing for the formation of habits. This is the realm of ecological psychology, which says that organisms use their environment as scaffolding. With the recognition of other life-forms as agents, i.e. as organisms striving to survive through interaction with the shared environment, new types of interaction become possible. Several organisms may interact to produce a group interaction. And from that, group habits. Again, it is easy to see natural selection operating on the survival of the group with the most successful habits. No human cleverness needed for that, either.
Boring? I find this stuff fascinating. It allows for another type of evolution which is not based on genetics alone, but on successful behaviour which is arrived at and extended into the future in some other, non-genetic way. Which sort of fits the data: the difference between our genes and the genes of chimps or bonobos is only 1.2%. Interestingly, there is a lot more genetic variation within chimps and bonobos than within humans: only 0.1% within humans. I would not be surprised if we killed off most of the human genetic variants ourselves. A nasty piece of work, is Man. We probably threw the Neanderthals off the last cliff. Plus our other ‘siblings’. There is a bunch of new discoveries about our human predecessors. In the chart below you can see our brothers and sisters who did not make it.
This post is getting a bit long. I will write up the other two parts next time. I just want to tell you about my tiny adventure with Continental philosophy and a French philosopher who causes my regular professors indigestion. I still had to get some ECs (European credits) and I decided on a Philosophy of Evolution course. I thought it might extend my knowledge, and expected it to be a bit like the research seminar for philosophy of language, which was (mainly) on chimpanzees. Well, it was not. It was Continental philosophy, centred around Stiegler, a French philosopher who died very recently. My lecturer is a big authority on him, so the seminar was worthwhile if only for that (he also publishes philosophical papers on shamans and the use of psychedelics, so an old hippie, after my heart). Unfortunately “Stiegler and his Parisian friends”, as my supervising professor calls them, provoke impatience to the point of aggression in analytically-styled philosophers. Which I find amusing. I suppose that through my long years as a civil servant I have learned to stomach texts which are a lot more cumbersome than Stiegler’s. So I just smiled my way through (a lot of) them, and found some really useful insights. I also noticed that my lecturer was not up to date on the ideas in philosophy of mind and language; just as the other (analytical) side was not. So I wrote my essay on combining ideas from both sides, I suppose as a sort of prelude to my thesis. I used the picture of the evolving chimp (above) in my essay, and had the text checked by both sides. It was well received, so I must have got it right. It is here, if you want to read it. I also asked to get out of a “group essay” – oh horror, remember my previous adventures with group work? Even if it does work out, I end up doing most or all of the work, so I might as well do it on my own. I was let off (phew) and allowed to write an essay on my own, on the condition that I would stick to the word limit.
Which I did, with three words to spare 🙂 So the essay is not too long, but perhaps a bit technical. Judge for yourself if you are so inclined.
P.S. If you have any comments about the biological patterns I wrote up in this post, do let me know. I am wondering if explaining the ideas the way I do, is making any sense to normal people like yourselves. I have been trying it out on Husband, but he is finally beginning to object to the timing of my lengthy after-midnight brainwaves, so perhaps you want to help me out 🙂
Water, sea, waves, skies – I adore them. I stand in great awe of artists who manage to capture their light. Like Ivan Konstantinovich Aivazovsky, who painted the translucent waves of the picture at the top of this blog. He did many more (you can use the link to check him out). This particular one he did towards the end of his life, and it is special because there are no ships, people, or shoreline. Just water and light. So breathtakingly beautiful.
I am not quite sure why I wanted this particular picture for today’s post. Possibly I am missing the sea. In other years we usually take mini-holidays near the sea so I can walk barefoot along the sea line. I can do that for hours on end. If Husband did not suggest we’d better turn back, I believe I would never stop. I love wading between the little islands that form between tides. I sing to the waves and talk to the birds, but mostly I breathe and splash water with my feet. Yes, quite like a child. Adulthood is overrated 🙂
I suppose wild water signifies opposite things to me. Beauty, freedom, light, strength, life. But also force and danger and sudden change. A bit like living, perhaps. And there is my link. I wanted to tell you about trust. Trust is risky business. I have been thinking about it in connection to language and cognition, and I developed my own little theory. Which is probably wrong, but it is my first, so bear with me.
This is also the long-promised fourth and last instalment of my mini-series ‘studying in times of Corona’. I started it in the first wave (hmm); as it now transpires, we are heading into the third. I think the Netherlands must be the very last densely populated European country to go into lockdown. But we are. From tomorrow. Not before time, either. Husband has gone out, trying to get coffee beans. It appears he is not the only one. Even though coffee is ‘essential’, surely.
My last “big” seminar was on “folk psychology”. You might think that means people pretending to be psychologists, but it is a bit different. The idea is that we read each other’s minds. All the time. We do that, supposedly, to understand and predict each other. We know, or think we know, ‘what makes other people tick’. We think in terms of belief-desire: we are rational beings that believe and want things, and that is what makes us act. The idea is from Hume, and draws on Aristotle’s De Anima. So it has been around a long time, long enough for you and me to believe it firmly. Sounds plausible, eh? Flattering too: Homo sapiens really has it all sussed. No wonder we are at the top of the evolutionary ladder.
You probably saw it coming: perhaps not. This is a big debate in current Philosophy of Mind. I wrote a very, very long paper on it, far exceeding the number of words allowed for a paper, first reviewing the various positions on the issue, and then developing a bit of my own theory. If you want to read the paper, it is here. It got me a very good grade, but I suppose I was lucky the professor wanted to read it at all, as it did not conform to any of the usual requirements. He said it was majestic, but not a paper at all, more like the outline of a book or a dissertation. Well, yes, I suppose it was. I was so excited about the topic. Still, I felt lucky to get detailed feedback. I am not used to getting this much attention to my work. At the office no one is in the business of improving my mind, I suppose 🙂
I will try to tell you what the debate on folk psychology is about, because otherwise I cannot explain my own little theory. Let’s take how we normally talk about each other as a starting point. We talk about our mental states a lot. About what we think, believe, feel, and why and why not. We are also very much aware of other people having thoughts, beliefs, etc. Children learn to do this at an early age, it is thought as early as 15 months. It is a fundamental ability for social interaction because it allows us to cooperate and coordinate. At all levels, in a family, in a shop, at school, at work or in government. You can see this ability at work very easily. We explain ourselves constantly in terms of what we believe and feel. And we call each other out: Why did you do that? What is the point of this? Such behaviour is characteristic of humans, because there is little (or no, as some would have it) evidence of animals making each other justify their behaviour.
So what is the big debate about? Well, it is not about whether we display this behaviour or whether this is typically human. It is about whether this folk psychology is an innate, genetically inherited ability. The received opinion was, and mostly is, that this innate ability is what makes humans special, sets us apart. Philosophers who think that usually also think that this ability lives in the brain, as some kind of specialised module. That we read our own mental states and those of others because we have special equipment to do so, given to us by Evolution. At great cost, because large brains are expensive in terms of energy. But those with the best mindreading abilities survived, because clearly this provided a competitive advantage. This is called the Machiavellian Intelligence Hypothesis. It also explains our intelligence and our ability to plan ahead.
There are some big problems with this view. One is that our reactions to other people are much faster than would ever be possible if we consciously evaluated mental states. Another is that if you look upon other people in the third person, as agents with mental states that you can read, that leaves no room for true interpersonal experience, for experiencing together. Then there is the matter of the horse and the carriage – do our mental states explain our behaviour, do we act in accordance with our intentions? Or is it the other way around? Cecilia Heyes, a philosopher-psychologist, says that folk-psychologising is very clever, but that there is no innate neural basis for it. At all. It is an ability which we have discovered, fostered, taught to our children and hence transmitted across generations, through cultural learning. We teach our children from birth to respond and to learn; that is what makes us special. There are others who say that cognition is not individual and not brain-bound; that is just a fairly recent idea which came from our own invention of computers. And so on and so forth.
The main idea, from the non-traditional camp, is that social cognition, including our mind-reading ability, is extended by language. Language is required for cooperation, specialisation and coordination. And as a device for the enculturation of social memory. Not, as classic philosophy of language would have it, to express truths about the world – remember my post about Frege? No special genes, no special modules. Simply something we have learned to do well as a species. Much like our ability to drive or play games.
You may shrug and think this new approach not a big deal, but I can assure you it is, in my little Philosophia bubble. It turns human cognition into something that is shared with other primates, which opens up a whole new vista of research. We do have to redefine the word “cognition”, though, so that it does not refer just to humans, but that should not be too much of a problem. Philosophers of language have done much worse in the past 🙂
And now it is curtains up for my little theory. It struck me that in neither “camp” was there a true discussion or inquiry into “why”. Why do we mindread? Or pretend we do? Obviously the survival-of-the-fittest theory won’t wash, as this ability is not genetically inherited. So why? I learned from cases in psychiatry that what therapists do is provide consistent feedback when patients cannot do this for themselves. As if they temporarily take over the social mindreading function until the patient can do it for him- or herself again. Obviously that requires trust. If you look at this from the patient’s point of view, then what you see is a form of cognitive offloading – the patient outsources, as it were, mental work to the therapist. If you look at cognition in general, this is what we do all the time: we outsource, offload tasks to our environment. To other people, to artefacts like books, and recently to smart devices – anything to free up cognitive resources. Even accepting information from someone may be regarded as a form of cognitive offloading. And all of it requires trust. If you cannot rely on whatever you outsource your cognitive labour to, you are at risk.
In a nutshell, social cognition requires constant risk management. So there. I will come back to this idea at a later point, because it will be a theme in my PhD. I talked it over with my professor today, and he agreed. I will tell you about the full proposal once I have written it up, but there will be a relation between felicity conditions for speech acts, trust, and what we do – in language – to compensate when we are not sure what or who we are dealing with. Maybe I will find out something interesting. And if not, that is also of interest.
Next I will tell you about my tiny adventure with Continental philosophy and a French philosopher who causes my regular professors indigestion. And then it will be thesis time – these days called a “publishable article”. I was told today that I have already done all the preparation I need (which means my research log = state-of-the-art paper = 10 EC), so I will be writing the outline in the next few weeks. Exciting. But there is also Xmas, and Husband and Son and Xmas dinner to cook and films to watch.
I hope your Christmas will be pleasant.
This is supposed to be the last of four blogs to tell you about my academic exploits in times of Corona. But meanwhile life has been catching up, and there are some other things I would like to share with you. I am now preparing my master thesis. The first step takes the form of what is known as a “state of the art” paper. Yes, I have finally started. I think this paper is intended to be a place where you collect notes, insights, what-have-you about the topic you want to do your thesis on. It is not graded, just pass or fail. Mine will be on mindshaping. Yes, my professor suggested it; as I predicted, he seems to have an idea of where I am going even if I don’t.
So, what is mindshaping? Well, I don’t properly know yet. It is a new word, judging by what Google Ngram says about it. It seems to have been invented around 2009, and its use is gaining. It is a framework for social cognition: how we are biologically predisposed to create collective behaviours so that we may cooperate better. A bit vague? Yes. In fact I am going a little crazy trying to get to grips with the idea. It does not help that I only have a few hours at any one time – and that only rarely – to study. Ha, making excuses! I hear you think. Well, that may be so. But I am groping about in semi-darkness. I have started a logbook, just to keep track of things. It has already shown me that my ideas jump like fish, in and out of the bowl. My professor kindly sent me some additional papers to read, but I am struggling to connect these to the topic of mindshaping. I felt like a character out of a Murakami plot, specifically one I saw a movie of, called “Barn Burning”. It contains the following scene:
As I mentioned, when I first met her she told me she was studying mime. One night, we were out at a bar, and she showed me the Tangerine Peeling. As the name says, it involves peeling a tangerine. On her left was a bowl piled high with tangerines; on her right, a bowl for the peels. At least that was the idea. Actually, there wasn’t anything there at all. She’d take an imaginary tangerine in her hand, slowly peel it, put one section in her mouth, and spit out the seeds. When she’d finished one tangerine, she’d wrap up all the seeds in the peel and deposit it in the bowl to her right. She repeated these movements over and over again. When you try to put it in words it doesn’t sound like anything special. But if you see it with your own eyes for ten or twenty minutes (almost without thinking, she kept on performing it) gradually the sense of reality is sucked right out of everything around you. It’s a very strange feeling.
“You’re pretty talented,” I told her.
“This? It’s easy. It has nothing to do with talent. What you do isn’t make yourself believe that there are tangerines there. You forget that the tangerines are not there. That’s all.”
Right. Simply forget that the tangerine is not there.
It gives me a sense of real unreality or unreal reality that sort of suits me. As if I am floating in a sea of ideas. I have been trying to ground myself by listening to detective audiobooks. In fact, I have devoured piles of them in the last few months. Not necessarily of great literary value. I love intoxicating whodunnits, which I listen to whenever I have to do some chore that allows for listening: cooking, cleaning, shopping, cycling, whatever. To give you an idea:
- Arnaldur Indridason, an Icelandic writer: eight detective novels (all I could get). Iceland grows on you as you listen (except for the food, which is heavy and greasy and without a trace of vegetables). There is quite a lot in there about Iceland’s role in the Second World War, which was unfamiliar to me.
- Thomas Engström, a Swedish writer. I devoured his “Ludwig Licht” quartet, a political thriller about an ex-Stasi agent turned CIA operative. He tries to do the right thing in the wrong way, or the other way around. I sympathise.
- Nino Haratischvili is a Georgian author who wrote an epic about six generations of the Jashi family, originally from Tbilisi. Soviet history is definitely not Georgian history – nothing like it, in fact. It is a huge story – 900 pages, many audiobook episodes – but I recommend this Tolstoyan effort highly. Is it a detective story? Well, of sorts. This kind of historical writing is a bit detective-like, in the classic whodunnit sense.
- Eva García Sáenz de Urturi is a Spanish detective writer in the style of Carlos Ruiz Zafón. You can hear the magic swelling through the striking lyrical descriptions, which must originate from the Spanish (sadly I do not understand that language, unlike Son, who is actively studying it). Great prose, great stories. She has published three detective novels on audiobook. I am currently listening to the final part of the Trilogy of the White City, which is set in Vitoria, the capital of the Basque Country, with inspector Kraken as its main character. A kraken is actually a deep-sea monster with very long arms.
- Finally, Pieter Waterdrinker. He is Dutch. I am not sure how well known he is, but he is about my age and spent most of his life in Moscow. A prolific writer. He writes big books, epics, and has an ink-black view of society. I adore his writing. A while ago I read “Poubelle”, which is (partly) about the European Parliament and its politics. Now I have just finished “The Rat from Amsterdam”. It is about the charity industry, amongst other things. It is just layers and layers of images until you are completely wrapped up in them. Amazing. These are whodunnits in a very different sense, showing up society and all of us – in a bleak and compassionate manner.
Right, so this is what I do when I want to escape reality. Or when I want to escape my own foggy brain. You may have gathered that I love stories which span history and continents. Gives a sense of perspective, even if it is probably false. But then, Truth is overrated 🙂
This week has been particularly weird, with all the media coverage in parliament of my beloved employer, and Son at home preparing for exams at the same time. And me working full time and trying to study. Whatever helps to keep us sane, right? See you in the next post, which will be the concluding post on academic exploits in times of Corona – well, the first wave, as we now know it. After that, I will move on to the here and now. I have a little theory which I want to share with you.
Try, try, try again? Wrong! There are cases where you ought to consider simply giving up. For instance, when you are studying philosophy of language in the early 1980s at Oxford and you find yourself unable to understand what all the fuss is about. I remember one of the essay questions at Finals: “If Pegasus does not exist, how can he be a Winged Horse?”. Well, that is easy. Like this:
That, of course, is not the right answer. If you must know, dive into the Stanford Encyclopedia of Philosophy, here. Don’t even think of complaining about a headache afterwards. Been there, done that. It gets much worse, trust me.
How to explain? When I first started this Research Master I ran into the so-called divide between analytic and continental philosophy. I wrote a post on the topic at the time. It is here, if you would like to (re)read it. The general distinction is that the analytics think (according to themselves), whereas the continentals talk. This analytical thinking, back in the 1970s and early 1980s in the analytic nest that Oxford was, revolved around there being a connection between language and truth, or words and the world. The idea was – ‘is’, to some – that through logical analysis of language one can know what is true, and vice versa.
I can honestly say that I never believed this. It seemed a ridiculous idea to me at the time and still does. Why should there be a holy connection between Language and Truth? What does Logic have to do with Truth? Or Language? Not that I could not do Logic. I was very good at mathematics at the time (a straight A at A-level, although I can barely count now, well, use it or lose it, I suppose). Being in my early 20s, I assumed the problem had to be with me. It may even have contributed to my decision not to pursue an academic career at the time. Why continue to study something that does not resonate? I was bored. So I did not. I went into the wild world and did my thing out there.
Coming back to academic life and analytic philosophy this time around, I was very pleased to find that the kind of philosophy of language I want to study now actually exists. Which is why I picked my current university: for the professor who works there and is now my supervisor. I think I told you about that encounter in a very early post. Or perhaps I only told you the result. Never mind. By the way, Philosophy of Mind has also changed completely. There is no Logic in the curriculum whatsoever (or Language); it is all about Cognition and where the Mind lives. Rubbing shoulders with cognitive psychology, which was my other subject – a connection I could only have dreamt of back in the day. So for a while I reckoned I had found academic Heaven.
However, I felt a bit, well, chickenish, for not looking my old enemy in the face. I was also a tiny bit worried that I might need to reacquaint myself with the old school stuff. That was my professor’s doing, coz he said at some point that he did not want to do Logic to me “yet”. Ominous. So I took a course on Advanced Topics in the Philosophy of Language, which ran at the University of Amsterdam. This time around, I enjoyed it. I also found it very difficult. Again. The Amsterdam professor was nice and very knowledgeable, but not nice enough to make me like Logic. I diligently did all the reading though, plodded through, understood most of it, and amazed myself. At least I have an idea now, of that approach to Language and Philosophy, and what it attempted to accomplish.
This being a formal academic seminar, there was an essay to write, a presentation to give, an exam to take. I did my presentation on my professor’s work, which is very modern, with a little bit of myself thrown in, explaining how the new approach emerged from the old school. They liked that; apparently my supervising professor is a bit of a hero, which was interesting to find out. He is a most unlikely hero. I have not told him yet 🙂 Anyway, I felt that my essay had to be on the hard-core stuff. So I decided on Frege, the famous mathematician, who is the original cornerstone figure of the whole language = truth approach. Frege was called out – overturned – by Russell, who pointed out a fatal flaw in his reasoning, which left Frege in disarray just before retirement. Poor guy. The comic below explains what it was about. If you want further explanation, follow the link underneath the comic.
Actually, I feel a bit bad to leave you with Frege as Voldemort. He does not deserve that. Imagine, he received this devastating blow when the second volume of his masterpiece “Basic Laws of Arithmetic” (Grundgesetze der Arithmetik) was being printed. So what did he do, this most honourable man?
I did a close reading of one of Frege’s last papers, “Der Gedanke” (1918). I also read the original German version, and checked translations. To my amazement, I found that he had changed his mind on many important issues. In fact, by 1918 Frege seems to have abandoned his entire Logic project, describing the world in terms that, believe it or not, I understand and agree with. There was only one small problem. Almost every Frege expert in the world assumes that there is a continuous line of thought from his early to his later papers, even if there is no textual proof. I myself do not have anything resembling all-encompassing Frege knowledge, and I did not dare to take on these academic giants. So in my essay, I did not come out with what I really thought; I just sort of showed it. Which got me a final grade of 8, which I suppose was fine, but I felt that I had understood more and deserved a little better – but then, I should have been clearer. And more daring, I suppose. The essay is here, if you want to read it. It is called “Frege im Frage”. I regard it as a fitting, if belated, end to my dealings with the “old school” philosophy of language. I have paid my dues. At last. Bridged the gap between my old and new academic self.
Just to make sure I don’t incur any more karma, the wonderful picture at the top of this blog is by Rob Gonsalves. There are many more, take a look.
The story ends, this time, with the experience of a wonderful international workshop with hotshots from the academic field, on Delusion and Language, of all topics. I would never have known about it if I had not embarked on this course. It was great! Two days of constant lectures, packed full of ideas. What a diet. Only the definition of delusion itself remained elusive 🙂 More about that at some future date.
This was the third out of four posts about my academic exploits in times of Corona – or should I say, during the first wave, as we now know. The fourth post will follow soonish.
The second blog (out of four) in the series “academic exploits in times of Corona”. Let me tell you about a fascinating seminar I did in Amsterdam, at the UvA. My presence was virtual, which was a pity coz my dear friend Teja lives just around the corner – just think of those clinking glasses and gastronomic goodies we had to deny ourselves. Hopefully we can make up for it at some point in the near(er) future.
Anyway, the seminar was not what I had expected. At all. I thought I was going to learn something about the technical construction of language. Well, I suppose I did learn about a specific kind of construction. The kind that helps you to identify who really wrote a particular document. Or even, when there is more than one author, to identify who has written which part. For instance, some real detective work has been done on the writings of Hildegard of Bingen. Do you know about her? If not, look her up. She was an abbess, a famous composer of sacred music, mystic, philosopher, scientist, writer – and born in 1098! She must have been quite a woman. Anyway, she was a bit of a control freak. Her clerks were not allowed to change anything in her texts without her approval. But it is thought that she put a lot of trust in her last clerk, to the extent that he may have completed or even written some of her texts.
Curious? Have a look at this documentary. I thought it was wonderful, so exciting to find out what must have happened. Of course these researchers did not just look at the writings. They already knew a lot about her, her life, the general context, other authors, so they had an idea where to look. Still, a remarkable discovery (if you have not started the documentary by now, do it soon, or I will give up on you).
Great stuff, right? But interesting as it was, I was wondering how to relate it to my research topic. And then I thought about the authorship of rules and regulations in security (my daytime job, for those of you who do not know). I described the problem once to my supervising professor, at the start of my back-to-academia project. You will find it tucked away in my original problem definition, in this post. The problem with anonymous texts, or texts that have been written by a “body” of people, is that it is almost impossible to get textual clarification. As I put it in that other piece:
“There is no one to ask. There is no author to ask for clarification, nor is there an easily accessible expert group. An additional problem is that reaching out to the publisher of the regulation or standard in question, must be done through proper channels, i.e. not something just any employee can do. Usually, the best that may be achieved is to send in a formal request for clarification – which may or may not be processed during a future maintenance window”.
The discipline that does this kind of investigation is called stylometry. Basically, stylometry analyses measurable textual features: word and sentence length, various frequencies (of words, word lengths, word forms, etc.), vocabulary richness, use of punctuation, use of certain expressions, and preferences for certain spelling variants. You can imagine that the more texts you have by one particular author, the more you learn about his or her particular style. Such analyses also allow you to pick out texts that seem odd, i.e. do not have the characteristic features commonly found in texts by that particular author.
What I find fascinating is that these kinds of features are the ones we are not aware of: our use of little words like a/an/the, for instance. So disguising your handwriting or attempting to stay anonymous will not stop the literary detective from finding out who you are!
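To give a flavour of how simple these measurements can be, here is a minimal sketch in Python of the kind of features mentioned above. It is only an illustration of the idea – the function-word list, the sample text, and the feature names are my own invention, and real stylometric studies use far richer feature sets and proper linguistic tooling.

```python
import re
from collections import Counter
from statistics import mean

# A tiny, illustrative set of English function words ("little words"
# like a/an/the) -- real studies use lists of hundreds of them.
FUNCTION_WORDS = {"a", "an", "the", "of", "and", "to", "in", "that", "it"}

def stylometric_features(text):
    """Compute a handful of classic stylometric features for a text."""
    # Crude sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Crude tokenisation: lowercase alphabetic tokens.
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(words)
    return {
        # average word length in characters
        "avg_word_len": mean(len(w) for w in words),
        # average sentence length in words
        "avg_sent_len": len(words) / len(sentences),
        # vocabulary richness: distinct words / total words (type-token ratio)
        "type_token_ratio": len(counts) / len(words),
        # share of tokens that are function words
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / len(words),
    }

sample = "The cat sat on the mat. It was an old mat, and the cat knew it."
print(stylometric_features(sample))
```

Compute these numbers over many texts by a known author and you get a profile; a text whose numbers sit far from that profile is a candidate for a different hand.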
So I thought I’d study official, parliamentary publications by and about the Dutch Tax office. Which turned out to be a lot more work than I thought, because it is not possible to get the documents from one particular dossier in one go. But since I only wanted about 50, that was doable, so I started collecting. I found some really interesting things. For instance, the official “functional” authorship stated on the documents (minister of finance, secretary of state, audit chamber, Dutch tax office, ministry of finance) rarely matched the author or group of authors that – according to tools and theory – actually wrote those documents. The most amazing was a set of two letters, one by the prime minister and one by the head of the audit chamber, which appear to have been written by the same person. Which is weird, considering the audit chamber is supposed to check the government.
At this point the seminar’s professor said that I must be very, very careful interpreting these results, and perhaps I would like to do a further study and involve a data scientist. 🙂 Yes yes, I understand. This must be the n-th time I have written something which might be a little explosive to publish. Like my paper on “naive normativity in animals”, or the piece about “artificial intelligence and profiling”. I suppose in this case – unlike the other two – there really is more work to do. After all, it was the very first time I played with these tools, and I am still not sure about their limitations.
If you want to read my paper, it is here. It is a bit dry, because it is basically analysis, but you will get the idea. Some pretty graphs included. I also did an analysis on trustworthiness. This was of particular interest because some of the documents in recent debates were said to contain falsehoods. I ran tests to find out if any signs of untruthfulness could be found. And what did I find? The opposite. All of these texts breathed 1000% “you can trust me”. Which probably means that in texts trustworthiness cannot be measured, or maybe that trustworthiness is a style which can easily be faked. Or perhaps our society is not so interested in truth anymore.
My next blog will be about Frege. Yes, the one that sort of incidentally provided the mathematical foundation for the whole analytical philosophy of language approach which I found so very boring when I was first at university. I dared to go back into hell, and I will tell you the story. Next.