

    Spring is here, and so are all of its delights. My small garden is full of roses I have planted over the years. I used to order one every year from a specialist rose farm. Some of them survived, others did not. The ones that survived are spectacular. I feed them, Husband prunes them, and they are just everywhere around the house. Even our grocery delivery person comments on them. My biggest success and my biggest failure is no longer with us. Husband had to kill it, alas. It was a Kiftsgate Rambling White Rose. Have a look at the original here. It is gigantic, engulfing three trees.

    Kiftsgate RIP

    “Kiftsgate” quickly covered our small patio in the back garden and attracted a million bumble bees. For a couple of years. But it just kept growing. And growing. I suppose the worst thing about this rose was its thorns and its stems turning to rock-hard wood. Of course, the disclaimer said “not for small gardens”. Which is sound advice if your back garden is only about 5 x 5m like mine. But I sort of overlooked that in my enthusiasm. Well, it is gone. Somehow “Kiftsgate” inspired our honeysuckle into a growing spurt, covering the patio, so all is well.

    Another delight that really is not a delight and hence must be resisted, is hay fever. It was staved off by the rainy weather but now all grasses seem to have bloomed at once. It is terrible. The Dutch hay-fever-radar sites colour a deep red, which is the worst there is. I have good hay fever pills now, keeping me from the worst, but this year’s is really quite extraordinary. I have to cover my eyes in vaseline and preferably stay indoors with all of my air-cleaning equipment turned on. And the tiredness! Even Husband complains, although he does not have a sneeze in his non-allergic body. Son, however, is still in denial, buying hay fever tablets over the counter, just this once … Well, hay fever is a predictable reaction. Once you pass a certain threshold of pollen, and start reacting, that is it. For years. But I suppose what amuses me most is his denial. He is so like me :-). Like: if I don’t wear my glasses, no one can see me (that was my favourite at his age). Anyway, the hay fever will be gone by July.

    We live quite near Paleis Het Loo, which is the palace where Queens Emma and Wilhelmina and young Juliana lived before the war. It is being restyled – a many-years project which is already overextended. We walk the outer grounds every night. The coming week I will visit it properly, on the inside, with Husband, my good once-red-haired friend who has just come out of chemotherapy, and another friend. Anyway, as they were also restructuring the landscape around the palace, putting in what looks like wild-flower areas, I just could not resist. I adore poppies. They are wild and beautiful and resilient. So I ordered a pile of seeds (a few thousand) and threw them about on the newly prepared fields. Son did his bit as well. So far, only a few have come up. So then we planted about 40 poppy plants which were sent to me by the nursery I bought my seeds from (thank you!). And I have ordered more seeds. And more. In a few years, the whole place should be incandescent with poppies, I have decided. It will be my legacy. Much nicer than money or material things.

    closeup look at daisies and poppies in summer meadow
    Strictly forbidden to mow the lawn

    There are lots of other wild flowers on those fields surrounding Paleis Het Loo. I know because we have been using this app called “obsidentify” to get to know flora and fauna. It is quite remarkable. Like playing Pokemon but for the elderly. Much easier to use than my old Flora. At some point, I will use it to find edible greens and mushrooms. But already I am enjoying being able to identify plants and flowers and trees. Amazing what I remember from early childhood – I must have had a great interest in nature, because I can still identify so many plants off the top of my head; fortunately, with the obsidentify-app, I now have the means to check and find new ones. For instance, the yellow plants (weeds) growing in my front garden are called “stinkende gouwe”, i.e. “smelly gold”. I now have much fewer qualms about pulling them up.

    Another disruptive thing I did was to have my hair cut. It had grown long throughout the Corona years when I did not dare to go to the hairdresser, so I just left it – well, Husband cut it for me to one length once. It grew way down my back, getting heavier and hotter every day. But I had this idea that with long hair I would turn into this patient, wise woman, such as below (the third woman from the left). The wild wise woman. Something my sister Sigrid is becoming in her amazing priest training, which is nearly finished now.

    The original painting is by Walter Crane, a Masque for the Four Seasons, depicting the Maiden, the Mother, The Wise Woman and the Crone.

    But alas, it is not for me, the long hair or the temperament! So I went to my old hairdresser who was amazed to see me – she thought I had gone elsewhere until she saw the length of my hair. Very amusing. Even more amusing is that my curls have returned. After 10+ years! Really tight curls that won’t be tamed. Which makes a mockery of the sleek, stylish hairdo I had selected, but who cares? I regard those re-emerging curls as evidence of my inner rebel – yes, the theme of this blog.

    The decision to cut off my hair came at the end of a lovely holiday. We usually take one during the first or the second week of May. I don’t like to be home on my birthday because it is Remembrance day, and also because May is such a wonderful month. Spring, not yet hot, there is no pollen, everything a lush green and flowers everywhere. We rented a wonderful little house at the edge of a wood near the beach. Son came over on his bike because we were near Leiden where he lives, and we celebrated both our birthdays. We also did a lot of photography and walking along the shore. One day we went to the Keukenhof. I had always wanted to go, and now that I had turned 60 I felt I had a right 🙂 Husband was a true sport and came along without grumbling. Took a million pictures, but one will have to do for this post.

    The holiday marked the beginning of my return to health. If you remember my last post, things were pretty bleak on that front – my worst CFS flare-up in years. But I should have remembered, just when I start to cry that I just cannot go on, things always get better. I had ordered a pile of supplements which might counteract the CFS bad-fuel problem, the anaerobic metabolism that causes my muscles to behave like they have run a marathon. The science is all here, in a postscript. Anyway, I have been taking these supplements for two months now, and they are really making a difference. I don’t have more energy, but most of the aches and brain fog and stiffness have gone. Husband says I seem to get stronger. Great. I aim to be a super fit pensioner. That is still seven years away, so I might make the deadline :-).

    Whilst on holiday, I took some time to think about work. Normally I don’t, because there is not much I can change, but I was experiencing some kind of inner revolt which was bothering me.

    • Revolt against our political system, which I feel has been eroding the social fabric of our society for the past 20-odd years. In the civil service (the day job), such changes become more and more visible in the way we deal with the public and vice versa. I loathe neo-liberalism beyond anything I can put into words.
    • Revolt also against my day-job, which is about (information) security. I had been researching the threat landscape, both at work and for my PhD. It looks as if citizens are becoming squashed between criminal organisations and governments, neither of which can be defended against. So what is there for little me to do against all that?

    A bit bleak, eh? But there is nothing for it, other than to look the monsters squarely in the face, take a deep breath, and do what I can in my own little world. Or so I resolved. I must find some more colleagues to pass my experience and knowledge on to. That is a much better idea than running around, trying to save the world by myself. Meanwhile, I keep an eye on the lottery, but I never win. Well, as they say, lucky in love, unlucky in gambling, which is fine with me.

    Another thing that was worrying me is whether we should move house. We have very steep stairs, so if we become old and feeble, we won’t be able to make it upstairs. We had the stairs measured for one of these chair-lifts, and it will fit! Well, you would have to duck your head a little, but it fits. So that is one problem less. Time to revamp the place, coz it has been a while since we did any painting. We will need help this time around. So Husband and I have decided to save up a bit before we start. We will start on the study. Husband has already made a maquette to scale. Exciting.

    Catching snowflakes

    The PhD is also back on track. My professor advised me to stop reading and start thinking. Which turned out to be very hard advice to follow – whenever I think up something, I am inclined to check if someone else has thought of it, and what they said, and how they developed the thought, etc. And this habit was making me feel as if I was catching snowflakes. It is strange, somehow just thinking does not feel like work whereas reading does. It is the Protestant work ethic lurking inside me, I suppose. Anyway, I have been “just thinking” for over a month now (with a bit of reading on the side, I will admit), and things are progressing again. I have developed a mini theory which I am expanding on. I have also run into some interesting contacts.

    • A German guy who is introducing companies to autopoiesis. The thing is, it seems to work and they are ecstatic, but no one seems to worry about why it works. Autopoiesis is about living cells. Organisations are not.
    • Some interesting IT guys, external contractors, approached me. They have developed a new way of approaching problems, which is a bottom-up empowering style, rather than the traditional top down “blue” design thinking. I like them and their style very much, but for now I cannot make much sense of what they are doing: it seems to be a pot-pourri of original thoughts, sound scientific theory, well-thought out personal style and agile-style hypes. Must find out more. The conversation continues.
    • Then there is this interesting guy, a business architect like me, but much more the suave boardroom type. He is clever, well read, and a self-styled philosopher with fixed ideas about language, the type of ideas that many people in the IT business have – they think that either language has a fixed meaning (being composed of words) or that meanings come from intentions. If I can explain my ideas to him and get him to understand, that would mean that I have achieved sufficient clarity myself.
    • Also talked to two security professors now. Both want to help me. One is offering to co-author my literature research on security professionals. Not too much work for him, but it would validate the paper, as I am a philosophy researcher, not a security one, despite the day job. So sharing authorship seemed ok to me. My professor agreed. Better to be generous.

    I am a bit hesitant, but in my next post I will try to outline some of the ideas I have been working on. Must start somewhere, so I will start with you.


    I have signed up for the reading challenge on Goodreads. 100 books this year. I am heavily into “noir detectives”, and I don’t read, I listen. All the time. I have also been listening to some other stuff. This one: The Dawn of Everything, by David Graeber (anthropologist) and David Wengrow (archeologist). It is “a reimagining of the history of humanity, based on new discoveries in the worlds of anthropology and archeology. According to the authors, new findings challenge what we thought we knew about hierarchies, inequality, property, and the state”. David Graeber, who died unexpectedly last year, was actually kicked out of Yale for his anarchistic ideas, so that sparked my attention. A whopping 24 hours of listening, but a fascinating book! The book is very detailed, so I will repeat the experience at some point. Recommended. For my Dutch friends, there is a Dutch translation: “Het begin van alles”.

  • PhD

    One of those days

    Make that one of those months. Some things are not going the way I want them to. At all. So if you are in the mood for some complaining, do read on. It is all self-centred drivel, but that comes with the privilege of hosting your own blog. Where to start? Well, there are a couple of things in my life that are not moving forward the way I had hoped: The PhD. The job. The health. The future house.

    Let’s take that list backwards. The future house is a concept. It might be the house I am living in. The idea is that we need to look at making the house elderly-proof, if we want to stay here like forever, or even 20 years. Or we need to move. In either case, we need to do something. My husband is considerably older than I am, and my health (next paragraph) is not great in some respects. So we must not leave decisions until too late. Start some long overdue maintenance on the house and check if a stairlift can be fitted at some point. Or look seriously for another house. We leaned towards the latter (because a wonderful option came along), but recently, with the housing market and the uncertainty introduced by the war in Ukraine, it looks like we will stay put. But we have not definitely decided on anything, and I hate that. I want to know where I am going.

    Chronic fatigue syndrome is defined as six months or more of persistent fatigue that disrupts life and doesn’t get better with rest. Photograph: Dominic McKenzie/The Observer. Taken from the Guardian.

    Health – now I know where I am going with that. Nowhere. If I am lucky and careful and diligent, I might well keep my CFS at bay. I am getting more pains and more minor impediments, but slowly. Sometimes I hate the way I have to live – no room for adventure or even straying from my schedule. Only feeling energetic in the morning or after a glass of wine in the evening to get the blood flowing again (I get colder as the day progresses). Two weeks ago, my son came over, and we spent three evenings in the kitchen, just talking, and not even that late. It took me a week to feel ok again.

    Next I got corona. I am just getting out of that now. I know I should be grateful, only a mild case, coz of all the vaccines and boosters. Also, I feel bad about complaining because one of my dearest friends got diagnosed with something aggressive. I cannot stand to lose him and desperately hope I won’t. Meanwhile, I just want my old energy and flexibility back. From when I was 45. Ain’t going to happen. But sometimes I forget.

    Work was truly terrible the first few months of this year. So bad that I seriously considered changing jobs. So I opened up my LinkedIn Profile and wrote some emails. Since then, I have been bombarded with jobs, some very attractive and all very well paying. But I am no longer sure that I want to change jobs at all. I managed to change some things at work, my colleagues are helping, and I am wondering what the point is of changing just 7 years before I get pensioned off. Is that sensible or just cowardly? I don’t know. Mind you, I hardly have time to think about it, because it is a madhouse out there. The social and political climate is such that we are bombarded with questions about security – and just answering those questions seems to take more time than actually doing the work.

    The PhD got off to a false start. There, I have said it. There was some delay after I finished my research master’s, which I did not quite understand; I wrote about it a bit in a previous post, here. Then, after this winter, I felt I was not really making much progress. Also, I could not make sense of what my supervisor was asking me to do: to research general misunderstandings, which are only vaguely connected to my research topic. However, that caused me to do a lot of work which I now know will not go into my PhD research. Hence also the form for collecting misunderstandings on this site, which I will take down soon, because it is far too general for my purposes.

    Anyway, I finally figured out that my supervisor might not remember, or perhaps had never fully understood, the details of the problem I want to solve. So I created a presentation with lots of pictures and used that in the next meeting to talk him through it. It worked, which was a great relief. He even talked to another professor about it (who was doing his own second PhD in our faculty and turned out to be a great guy with lots of interesting ideas). At my supervisor’s suggestion, I wrote up my presentation as a problem description with a bit of context. It was difficult, because I had to digest a lot of academic articles on IT security and then summarise them to be understandable to a general public. The connection to cyberwar and the war in Ukraine did not help, as the gift from my stepfather (2nd generation camp syndrome) got in the way pretty badly until I decided to watch no more television to avoid the images. But I completed it. My supervisor said my text was perfectly understandable, so mission accomplished. I thought.

    Other than that, neither my supervisor nor I seem to have much idea yet about how I am going to find other academics to support my research. This is required, but as my research involves or touches several other disciplines, it also requires careful thinking. These other academics will want their pound of flesh, corresponding to their own academic interests, so inevitably they will interfere with my work, steering me in other directions, wanting more or less detail, etc. I must admit, I am worried about this. I have now experimented with telling my research story to people from various disciplines, and every time it takes a great deal of time and effort to explain what the problem is and how I want to tackle it.

    There are quite a few disciplines touching on my research question, but it is difficult to find someone to talk to. Philosophers tend not to be interested in real-world problems, that is not their job. In Psychology, there is so much useless research, it is extremely difficult to find what you are looking for, let alone find a kindred spirit. The IT world still thinks of words and data as components of a logical language, i.e. that everything can be programmed or otherwise made predictable. In IT security, there is virtually no academic tradition, nor much natural inclination to look beyond itself. I believe that “business” or “management” is an academic field nowadays, but from what I can see, it is merely an industry of hypes and market opportunities (I might have to eat my words, but this is how it looks to me at the moment). I am not familiar with linguistics or communication as academic disciplines, but I am touching upon those as well, and I have no academic connections there.

    I was not going to worry about this, thinking these problems would solve themselves as I went along, but then I had an unpleasant experience. I asked an IT security professor at my university to validate a few pages of text, in which I had tried to explain the context of my research – the very text I had created following the presentation to my supervisor. I just wanted him to check that I had not written nonsense, as I am knowledgeable but don’t have an academic degree in Information Science. Did not exist when I grew up 🙂 But for some reason – perhaps in haste – this professor read my text as if it were my research proposal, and then proceeded to hate it, even correcting the odd spelling mistake in the process. This was expressed by scribbling across my document, making remarks as they occurred to him, without even waiting to read the next sentence, as if I were a nine-year-old being graded for a school project. My husband shook his head upon hearing this, and said it should have been perfectly clear this was not a research proposal: it did not tick any of the boxes, and also I had said it was not. But it happened anyway. My supervisor, however, said it was my fault for doing this by mail. I should have arranged a face-to-face meeting, and this is how I should make contact in the future. Yes. Of course. I agree. But even if I made a communication mistake, there is a nagging feeling at the back of my mind that this is not ok. Academic professors probably think this is normal behaviour, but I hope I do not act in this condescending manner when, in my day job, I am asked for help or advice.

    Chimps grooming

    The trouble is, I might be making an implicit and possibly unfair comparison between two worlds: one I know well, where these things don’t go wrong because I know the rules so well; and the academic environment which I don’t know well enough, so I have to be extra careful. Which I will do in the future. It is just more good old stakeholder management, which in this bright and clear academic world I naively had hoped to do without – endless grooming for the sake of building beneficial relationships. I know how to do it, but I hate it. There are days where I don’t mind too much, thinking that this is how the world works. Other days, I get nauseated listening to these academics thanking each other profusely for their interesting talks, just before baring their teeth and going for the kill. I detest articles that seem to be written for the sole purpose of putting someone else down. The problem is, I am much too vulnerable myself. If anyone says something purposely scathing to me, particularly if it is about something I have done my very best for, it may take me days, even weeks, to get over it. Not a very efficient way to be, I concur. A character trait which renders me totally unsuited to an academic career. But an academic career is not what I want. I want to understand something more than I do now, and if possible, share that understanding to make things better. The rest is not so important. There. This thought helps. It quietens me down.

    I have just read back what I have written so far. I suppose I just wanted to say to myself that it is ok, sometimes, to lose heart. For a few minutes, that is all. It is allowed. And also, perhaps, that sometimes I might take a break. Watch a silly movie with Husband, which he lovingly selects for me from the 6+ or 12+ range because I cannot handle anything more adult. Bake a cake. Plant herbs. Slow down. Complain. Drink chai. Commiserate with my few remaining girlfriends and lovely, wise, nearly priest sister who I am proud of. Natter with my son about every topic under the sun. Fondly remember some people I have lost. Reconnect with some friends that seem to drift away. Listen to my favourite noir detectives. Take mini breaks. Enjoy the wood fire at night, again courtesy of Husband. Wait for my Sunday mails on blogs that I really like. “Life is like a box of chocolates”, says Forrest Gump in the movie that we are watching, “you never know what you’re gonna get”.

    I will leave you with this hopeful ending – the temporary end of my complaining. I wish you a box of chocolates too.

    PS (The next day) This is perhaps some weird reverse psychology I am subjecting myself to, but this morning I woke up thinking I should kick myself into action and take charge. Nobody is going to do it for me. So I checked out what other Dutch universities are researching in information security. Found two very interesting top professors who are interested in governance and behaviour, and who do or supervise actual research. I wrote to both of them, not bombarding them with information, but explaining I need advice on how to integrate my IT security literature study in a language philosophy dissertation, and would they be prepared to talk to me about this? Yes, I have taken my supervisor’s advice to go for a face-to-face meeting. It has to be a digital meeting, though. I cannot manage the travelling, well, not much. First, see if they respond. And if not, there is an entire world out there.

    PPS (a week later) I periodically check for new CFS research. A useful perk of my studies, because I can access all the medical journals too. I came across a whole new line of thinking that seems to fit my condition exactly. The hypothesis is that CFS patients cannot burn carbs (well) and therefore their starving cells switch to whatever fuel is at hand. Which is inefficient (tiredness), depletes some important amino acids (long recovery) and saddles the muscles with lactic acid (pain). Exactly, that is how it feels. The same is found in healthy people after intensive exercise. The medical articles are difficult to read, but there is a beautiful write-up here. Some comments make for an interesting read – people get angry because the article describes how they feel without providing a solution. But I think these researchers are well underway to solving the mystery – perhaps another 5 years? Here’s hoping …

  • PhD

    A quiet moment

    Quiet moments – I don’t get many of these. So I am savouring the moment. It won’t last – by tomorrow morning, or probably as I prepare to go to sleep, things will have moved again at great speed. But for now, things are quiet. After the weekend and after having completed another 2½-week stretch on my research project, and just before I go back to the day job. Bliss.

    Speaking of the day job, I came back to it in the second week of January from my gloriously long leave. There were hundreds of mails, and more pouring in. Most were either requests for knowledge or attempts to involve me in some project. Hmm. I worked late in order to get rid of them. Must find a way to prevent that from happening again. Or perhaps, as a colleague suggested, come back a day early and not tell anyone 🙂

    Source: dilbert.com

    In between all that, there is my own very secret project: trying to make the Dutch Tax office, or even better, the country, a safe(r) place. In terms of digital information, that is. You probably think that this would be an impossible task for me by my lonesome, and you would be right. But still I try. I will be pensioned off in 2742 days from now and I would love to look back at that point and think that I made a small contribution. Or at least have not made things much worse. So much work, so little time. I will return to it tomorrow. For today is my official old-crone-leave day which I spend on academic things.

    Today was my 3-weekly meeting with my supervisor. I send him status reports a few days beforehand, but I may be sending them in too late, because I think he had not been able to read it all. Mind you, I speak to him more frequently than I ever did before, so perhaps it does not matter. Or perhaps the way I present my information does not resonate (lots of diagrams). Or he is simply too busy. Anyway, most of our talk today was spent repeating and explaining what I had been doing, and when the ideas are new, I always find it difficult to express myself. I think he wants me to start collecting hard evidence, and so do I, but I need, must have, the general picture in my head before I can start. We are agreed on the kind of work that needs to be done: locate those bits – traces, I suppose – that people put in our conversations which are not about information, but about the relationship with the audience. It is just that I need a framework to start investigating – just noting that a conversation is “face-to-face” simply is not enough.

    So what did I find by the way of theoretical framework? Well, I ran into this guy called Watzlawick who was inspired by Bateson (who invented the double-bind) and in his turn inspired Janet Bavelas who spent her life trying to prove that Watzlawick was right – with some tweaks and alterations. I spent nearly 3 weeks dissecting his 1967 book, and it taught me a lot, apart from what Bavelas is on about, which I still need to investigate further. The basic idea is this:

    Source: https://en.wikipedia.org/wiki/Four-sides_model

    What I like about it is that it preserves both the relationship side (what I or you must do, believe, think, avoid, whatever) and the factual information side (he calls that “digital” because of its on-off character). In fact, according to Watzlawick, many misunderstandings and disagreements come from misreading the most important side – the relationship side (he calls that “analog”) – or from confusing the two. Did you know that when reading out a simple list of words, we automatically check if they are appropriate to our audience? Hence, we are much slower in reading out words we think are offensive to whoever is listening. Not mind-blowing, just to show that both sides of the message count.

    Why is it important? Well, it had not occurred to me that conflict and misunderstanding may have a common basis: getting tangled up in, and confusing, the relationship side and the informative side of a conversation. Experienced negotiators know this, of course, but I did not :-). I seem to recall that such styles play a role in business communication (see: thinking in colours) and in autism. Must investigate.
    There is also a nice bit on paradoxical communication – not my subject, but still fun:

    “You sure write good!” (Watzlawick et al., 1967)

    There was one other thought that hit me: a misunderstanding is not something you can agree upon between you while it is happening – the misunderstanding would instantly disappear. You could agree in retrospect that there was a misunderstanding, but not at the time it is happening. Which is weird. It also means that you could be wrong about the nature of the misunderstanding you had with someone, even if you are both agreed on it. Which I suppose could also be true of disagreements.

    If you feel like it, start sending in misunderstandings. I created an English and a Dutch form on the opening screen of this site, under the heading “misunderstandings”. There is some stuff about not putting in private information, but that is obligatory these days – you understand, I am sure.

  • Communicate

    Truth and meaning

    I had intended to dive straight into my PhD, follow my research proposal, complete it, etc. But for some reason my professor insisted on delivering cautionary tales. About how a PhD never turns out according to its initial proposal (even if he thought it was very good on another occasion). About how in my particular case and quest, I would have to look at other disciplines beside philosophy. About how I might as well take six years and never publish anything as long as I enjoyed myself. Well … I think I understand what he meant – rephrased his sentences several times and rechecked – but I am not at all sure why he said these things, other than that he appeared to want me to slow down. So I thought I’d think about it a little bit first.

    Without throwing my research proposal at you (although it is here if you want to read it), I must admit that it is far more complex to explain why something does not happen (understanding) than why it does happen. Because, in order to explain what does not happen, you have to understand exactly what it is that does not happen that could have happened. A bit like why it takes much longer not to find something in your pocket. So, I embarked on some thinking-by-myself, in the wild, so to speak.

    I had been worrying about two publications (well, books-n-papers) in my day-job field (information technology), and they would not leave me alone. So I decided to investigate further.

    Ogden & Richards, meaning triangle

    One is on Archimate. That is an open-source modelling language used by digital architects to make blueprints of business environments. It was developed primarily by Mark Lankhorst, but there were some other researchers involved in connecting up Archimate with the theoretical background, including philosophy of language, in this paper: Arbab et al (2015). Actually, the claim they make is not so terribly large, it seems. They refer back to the meaning triangle proposed by Morris (1946), which really is not by Morris at all, but goes back to Ogden & Richards (1923). Basically, it says that there is a relationship between things in the outside world and our thoughts, and that we connect the two using symbols. They then use symbols (in a modelling language) and thought/reference (the meaning of those symbols, presumably as expressed or understood by the modellers). Presto: meaningful diagrams. To be honest, there is not really very much philosophy of language in this theory. I have written to one of the researchers to ask if this claim should not simply be taken out – as it eats no bread, as they say.

    The other theory is by Jan Dietz and colleagues, called DEMO. It is a modelling and design language, conceived in the early 90s and still going strong. It claims (Ettema & Dietz, 2009) to be firmly rooted in philosophy of language, as opposed to Archimate, because it is based on a more-modern-than-Searle conception of speech acts, as advocated by Habermas, which envisages speech acts not just as information carriers but as coordination devices. Sounds much like Brandom’s normative inferentialism. Habermas’ insight into speech acts was not so different from the currently mainstream idea: speech acts are not just for passing on information. Speech acts also have a social component related to the speaker’s or hearer’s role – as a human being, as a member of one or more groups. It turns out Brandom and Habermas met and agreed on much, but also disagreed vehemently. I collected papers on the Brandom–Habermas debate, but there was no quick way in – and quite a few philosophers professed not to understand it either. Must be some fine point of philosophy, which I will return to if I must. But for the moment, I cannot imagine this controversy would have any impact on the conception of DEMO.

    I must admit that I spent a happy evening tracing back all the theoretical components that are supposed to make up DEMO, and which were fitted with big Greek letters accordingly. It had a distinct shopping-spree feel to it – a stack of theories from everywhere, incorporated because they seemed to fit, a sort of build-your-own-theoretical-foundations toolkit. Not in a million years would I be allowed to construct a theory on such a basis. But I find DEMO interesting because it seems to explain how an organisation can help itself to new facts, truths, whatever you want to call them. Which is exactly what we do in speech acts, when we talk to each other. So I have written to ask what DEMO’s attachment to Habermas is. I personally think there is none. Brandom would do just as well. Or even Grice, as nothing seems to be said about the motivation to coordinate actions. The point is, I think that in creating DEMO its authors may have understood specific felicity conditions for speech acts, and I want to find out how and what and where, because such notions may point to conditions for avoiding misunderstanding, even in a highly stylised environment such as a business. Also, the fact that DEMO thinks of language in terms of coordination and collaboration rather than information exchange is still quite revolutionary, even though it was developed nearly 30 years ago. I want to know where the ideas came from.

    I was happy to receive a reply on both counts – invitations to talk further. Great. Meanwhile an idea has been brewing in my head. Might it be the case that modelling languages like Archimate and DEMO are in fact natural-language extensions? They are not mathematical languages, I am quite sure of that. They are not natural languages either; they are made with a specific purpose in mind. Their elementary concepts constitute an elementary grammar, plus the idea that whatever we want to happen (be it a process, a decision, an action or whatever) can be expressed in that grammar – i.e. stripping additional meanings, context etc. down to a bare, model-able minimum.

    The other side of the problem is the relationship between computer commands and “truth”. I need to find the right academic sources, but I am pretty sure none exist. I cross-checked with Husband coz he has actually written machine code, where I hovered just above it in my RPGII. Code simply instructs the processor to load two values, compare them and then take some action defined by you. There is no truth to it in any philosophical sense, other than whatever comes out of the comparison and is taken as a starting point for action.
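    Just to illustrate what I mean (a sketch in Python rather than real machine code, purely for illustration): the “truth” in a comparison is nothing more than a flag that decides which instruction runs next.

```python
# Load two values, compare, branch: the comparison yields only
# a yes/no flag that selects the next action. No philosophical
# "truth" is involved, just flow of control.

def dispatch(a: int, b: int) -> str:
    # load a, load b, compare ...
    if a > b:              # ... branch on the comparison result
        return "take the high road"
    return "take the low road"

print(dispatch(3, 2))      # the outcome drives action, nothing more
```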

    Just to make sure I don’t miss anything obvious, I have also been reading up on philosophical truth in all its variations. I found that Habermas was a staunch supporter of the consensus theory of truth: whatever a specified group believes to be true, is true. There are other theories – correspondence, coherence, constructivist, pragmatic; and then there are the so-called deflationary theories, which say that truth is not a property of statements. I was surprised to learn that Strawson (1949) had proposed a performative theory of truth which characterised truth as a property of the speaker’s intentions, in response to Tarski, who invented the concept of an object language to solve the liar’s paradox. Sounds like an early beginning of speech act theory to me!
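    For my own notes, here is Tarski’s idea as I understand it – truth for sentences of an object language is defined in a metalanguage, via his Convention T (my paraphrase, so treat it as a sketch):

```latex
% Convention T: for every sentence S of the object language L,
% the metalanguage must entail an instance of the schema
\text{``} S \text{'' is true in } L \iff p
% where p is the metalanguage translation of S, e.g.
% "snow is white" is true (in English) iff snow is white.
% Keeping "is true" in the metalanguage blocks the liar's paradox,
% because L cannot talk about the truth of its own sentences.
```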

  • PhD

    Think first

    An open door, yes: always a good idea, to think first. And yes, thinking is what philosophers do. Or they think about what others have thought. Or both. But that is not the kind of thinking that I mean.

    I am in search of a documentation and retrieval system for my thoughts and notes and everything that goes into future publications, PhD, whatever. I suppose, having spent most of my working life in information science, I have a penchant for organising (although this does not extend to my very large, ever messy desk).
    The problem is, Philosophia must have the largest number of digital illiterates of any discipline. The modus operandi does not seem to have changed in thousands of years: reading books and holding forth. Ok, so the reading may be on a computer and the holding forth via Zoom – but that is as far as innovation has touched philosophy. In fact, philosophers seem to think that because their thinking led to the Computer (see Aristotle introducing it below), they are under no obligation to do anything with it. That is for the people. Somewhat like how the policy makers in The Hague look upon us civil servants to do the actual work (sorry, the day job crept in again).

    source: https://www.theatlantic.com/technology/archive/2017/03/aristotle-computer/518697/

    During the ReMa journey, I tried various systems of organising what I learned. Some of these I have documented here, and I introduced the idea of using tools here.


    I started off making mind maps. This is something I really like doing, and I found it worked well with short essays of the type that we were set during the year-long skills seminar. One of my professors once saw one of my maps and professed total admiration – how had I done that? I did not have the heart to tell him that children get taught this at school nowadays, and that there are many, many people who are much better at creative mind mapping than I am. Which I love, coz I love infographics – see here. But the problem with mind maps is that they become incomprehensible when they get larger. So much so that I would find a really nice mind map of some complicated problem on my hard drive and think: “now where did I get this from?” Only to remember a while later that I had made it myself – some weeks before. Ok, so not a good memory aid, that much is clear.

    Zotero and Calibre

    I had a look at my citation manager, Zotero. Which is a nifty program, and getting better all the time. It can do a lot (see here), but beyond storing articles in different collections, it cannot help me much. This is because all of its features, like tagging and grouping and linking, work at the level of the individual paper, whereas I want to organise the text content. Calibre is not an option either – it is wonderful for organising ebooks, but it cannot even handle papers properly.


    Next I started to create my very own Wikipedia. The software that created Wikipedia is called mediawiki, and I managed to install it in a subdomain of this blog. I then spent an inordinate amount of time tweaking the installation and teaching myself how to use it, how to create boxes, use colours, deploy pre-programmed templates and generally make my DIY wiki look pretty and interesting. Then I went to fill it. I spent a long time thinking about the categories (mediawiki-speak) by which I would organise my information, and eventually came up with this: Philosophers – Positions – Arguments – Topics – Definitions. Two features of mediawiki really helped: hotlinking to another page and relating any page to one or more categories. I started pouring stuff into my wiki, but gradually slowed down. The problem was that wiki pages are great to make – once you have finished with the material you want to organise and you know exactly how. By the time I had worked all of that out, it was time to move to the next paper. I concluded I needed something that would help me create an organising system on the fly.


    Next, my love affair with Atlas.TI. I got the idea from architectural modelling in my day job. I even used archimate once to model autopoietic enactivism. Atlas.TI is much more flexible than archimate; it is really quite wonderful. I got it for next to nothing in the student webshop, and later found out that the university distributes it for free. I also bought a competing program, MAXQDA Plus. Both programs help you to annotate texts and then organise the labels into a scheme for easy retrieval and analysis. The philosophy behind them is different – there is a good article here explaining how MAXQDA is based on qualitative content analysis, whereas Atlas.TI allows you to find patterns in a text using your own coding system – more akin to grounded theory. Atlas.TI seemed to fulfill all my needs – for a while. I was so happy with it, I even bought an upgrade to version 9 because I could not wait for the university to supply one for free (which of course by now they have done). Below you can see how it works. You annotate a paper through codes. Codes can be reused across a project – this project contains 135 papers.

    You can then use the codes to create network diagrams, like so:

    I have done some really nice analyses with this tool – for a presentation in my ReMa course in Amsterdam on advanced language and logic I worked out how my professor’s articles are related to the various concepts he has investigated, and also for the last Philosophy of Mind seminar, when I investigated how various philosophers used different words for common ground. I also used the Atlas.TI network diagrams in my research logbook, which really looked wonderful. The only problem was that after a few months, I could no longer read the complicated diagrams I had made myself – well, not without rereading the entire paper, which sort of defeats the point. The other problem was that Atlas.TI is not really geared toward the kind of use I make of it, nor do its makers plan to change that. The autocoding feature (which allows for automatic coding throughout a large set of documents) does not work in reverse. That means that your coding system has to be ready before you start reading the papers. Ouch. Same problem as with mediawiki. Another problem is that it cannot handle more than, say, 50 documents or books in any one project at one time, and you cannot interrelate projects or their coding systems. So alas. I wrote to Atlas.TI a couple of times, hoping to hear that they would build in the features I need, but they won’t – text annotation is not their core business. Pity.


    Yet another search for philosophers’ tooling yielded a surprising result: one philosopher actually created his own tooling for dealing with philosophical research: organising and retrieving philosophical statements, knowledge, insight. Quite impressive, a philosopher-cum-programmer. The software impressed me as well. Until I tried to use it. I watched all of the instruction videos several times (there is no manual), but was not able to distill a workflow that was right for me, and the look-and-feel of the program felt awkward. Also, I did not like the manual Zotero integration much. But my main worry was becoming dependent on the author. Yes yes, the database is all readable XML, but I am not a modern programmer – I can just about manage procedural language programming (only in my sleep, as it is a long time ago that I actually did any programming), and I positively loathe the object-oriented stuff. So what would I do with a bunch of XML files? Too many worries. I needed something else.

    Obsidian, my second brain

    I had seen references to Obsidian before, but ignored them. Mainly because I did not know what markdown is, so I could not imagine why anyone would be interested in organising a bunch of markdown files, however prettily. But as it turns out, markdown is just plain text plus. Since its inception, many different versions have appeared, but they are all html-convertible and will be readable as txt files forever. Obsidian gives me all of the advantages of mediawiki without the disadvantages. It is fast and flexible. It integrates with Zotero. I can link and tag notes and files. I can edit files on my PC and on my mobile devices, using iCloud. The only thing that is missing is being able to publish to a website (other than via the paid version). But that will come, I am sure.
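    For the curious: “plain text plus” really is all there is to it. A toy converter (my own five-line sketch – real markdown has far many more rules) already shows the idea of turning readable text markers into html:

```python
import re

def md_to_html(text: str) -> str:
    # Two toy rules: "# heading" becomes <h1>, **bold** becomes <strong>.
    # The source stays perfectly readable as plain text either way.
    text = re.sub(r"^# (.+)$", r"<h1>\1</h1>", text, flags=re.M)
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    return text

note = "# Enactivism\nSee **Varela** for the original formulation."
print(md_to_html(note))
```

The point of the markdown gamble is exactly this: even if every converter on earth vanished, the note itself would still read fine in any text editor.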

    I started out with a work problem – a huge text that needed cutting up, the ISO27002 guidelines. This I needed to do anyway, so it seemed a good place to start. And yes, I was able to deconstruct the document and then put it back together again, although the learning curve was a bit steep – as it always is with these things. I will write up a post on the configuration(s) I arrived at and publish it on the thinking tools page, at some point. See below a snapshot of my folder structure, and a graph based on the word enacted.

    Obsidian has a great online support community and extensive documentation. There are many plugins. It also integrates with Zotero. Cannot wait for the new Zotero release! Anyway, I think Obsidian may be it for me. The hierarchical tagging system is particularly helpful, because of another problem which I will describe next.

    Towards a metamodel or a taxonomy of philosophy

    Actually, there are not that many who have tried. There are a few lone papers. This one is the best I came across: Grenon, P., Smith, B. Foundations of an ontology of philosophy. Synthese 182, 185–204 (2011). https://doi.org/10.1007/s11229-009-9658-x. It did not receive much attention. But I thought I’d try and model the ontology component they recommended in UML (a nice reason to update my Visio license), as a data model.

    The problem is in the centre part. Grenon and Smith assume two things:

    a) concept, proposition, argument, theory and method are disjoint

    b) the philosopher’s workflow is like so: think up a concept, propose whether it is true or not, supply argumentation, and then develop a theory using a method.
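    To see what such a data model amounts to, I sketched the two assumptions in code (Python dataclasses instead of UML, and all class and field names are my own illustration, not Grenon and Smith’s schema):

```python
from dataclasses import dataclass, field

# Assumption (a): the categories are disjoint, so each gets its own type.
# Assumption (b): the workflow runs concept -> proposition -> argument
# -> theory, developed using a method.

@dataclass
class Concept:
    name: str

@dataclass
class Proposition:
    concept: Concept
    claim: str                     # what is proposed to be true (or not)

@dataclass
class Argument:
    supports: Proposition
    premises: list[str] = field(default_factory=list)

@dataclass
class Theory:
    arguments: list[Argument]
    method: str                    # the method used to develop the theory

# The assumed workflow, step by step (example content is mine):
c = Concept("common ground")
p = Proposition(c, "common ground is presupposed in every speech act")
a = Argument(p, premises=["speakers rely on shared context"])
t = Theory([a], method="conceptual analysis")
print(t.method)
```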

    Unfortunately, philosophers are agreed on neither. They are not even agreed on the meaning of terms like concept or theory. And there is the additional problem of nesting – ad infinitum. I was miserable when I saw this. And then I thought: who cares that philosophers don’t agree on their definitions or way of working? This is about my work, and I can define terms and workflows in whatever way I want. And I do want, because I need to store and retrieve.

    So I decided to organise philosophers into single authors, groups and main fields (branches). I also have terms, topics, theories and approaches. See the image below. For my folders, I use the Johnny.decimal numbering system (which I have also started using at home and at the office). Folders and some notes are displayed on the left. I use aliases for my notes (at the top, with the metadata) so I can refer to them in different ways (with or without capitals, etc). I use a hierarchical tagging system, shown on the right.
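    For those who don’t know it: a Johnny.decimal ID looks like 12.34 – the two digits before the dot form the category (their first digit places it in an area like 10–19), and the digits after the dot name an item within that category. A little checker, as I understand the system:

```python
import re

# Johnny.decimal codes: two-digit category, dot, two-digit item.
JD = re.compile(r"^(?P<cat>[1-9]\d)\.(?P<item>\d{2})$")

def parse_jd(code: str):
    """Return the area, category and item of a Johnny.decimal code,
    or None if the string is not a valid code."""
    m = JD.match(code)
    if not m:
        return None
    cat = int(m.group("cat"))
    area_start = cat // 10 * 10          # e.g. category 12 -> area 10-19
    return {
        "area": f"{area_start}-{area_start + 9}",
        "category": cat,
        "item": int(m.group("item")),
    }

print(parse_jd("12.34"))        # {'area': '10-19', 'category': 12, 'item': 34}
print(parse_jd("philosophers")) # None: not a Johnny.decimal code
```

The nice side effect of the numbering is that folders sort themselves, on disk and in Obsidian alike.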

    I have already harvested my best essays into this system and am now in the process of harvesting whatever may be useful from my DIY wikipedia before I close it down forever. Let’s hope Obsidian will support me through the next few years – I am hopeful. I also enjoy watching the videos Tall Guy Jenks makes. He is a self-proclaimed ADHD sufferer, and he says the only way he can live and work is by outsourcing his information management as much as possible. Wow. I suppose the same is true of me, but in my case coz of a not-so-young-anymore memory and too many things to do in a day.

  • Amuses

    An ending

    I started this series of posts in November 2018. Quoting Robert Jordan: there are neither beginnings nor endings to the turning of the Wheel of Time. It was a beginning. This is an ending.

    On September 8th, I officially received my Master’s diploma in Philosophy of Language and Logic. Here it is.

    I was persuaded to come and get my diploma in person because my supervisor would say a few words, which, according to him, would be fun. I was actually quite curious about what he was going to say and reckoned I should be able to manage to listen to those few words without blushing too much. No, I do not like being on stage. I always do these things by post. But this time – well, I went. And it was great.

    Obviously, in Corona time there was no big meeting at the university – the graduation ceremony was arranged in a Nijmegen café. The university had really tried to make things festive – with drinks and snacks afterwards.

    It was a small party – there were only three candidates, including me, and we each got a speech from our own professor. Husband videoed mine. I won’t post the video here, as I have a feeling my supervisor perhaps does not like his pictures all over the internet – I can find very few good photos of him. But it was a great story, which went on for well over 10 minutes. Basically, he had built a story from when I first appeared at his kitchen table, haltingly trying to explain what I wanted, up to my recent research proposal. Apparently – so he said – I am a joy to teach and amazingly hard working. And very determined to achieve my goal – he had imagined that I had got sidetracked during my many essays (I really got into philosophy of mind and evolution of language), but in my research proposal I returned to base. I was touched that he had put so much effort into this speech, and found so many nice things to say about me. Compliments are rare with him. This will probably never happen to me again, so I will treasure the memory. What I will also treasure is the look on Husband’s face. He was so obviously proud of me. Sigh. Such an occasion.

    Afterwards we decided to go and try a new restaurant – very modern: no menu, expensive and beautiful. We ate outside. It was a lovely meal on a lovely evening at the end of a lovely day.

    Since then, I have been formally enrolled at the Radboud as a PhD candidate. I am an external candidate, which means that I am formally employed by the university with an annual leave of zero days and a salary of zero euros. On the bright side, if I ever complete my PhD, there will be a contribution to the printing costs. Plus I don’t need to teach. I have never been so happy with a job that pays nothing 🙂 Because it is not nothing. It means full access to books, papers, my supervisor, the research group, and obviously the PhD program itself, which I will have to get to know.

  • Amuses

    Done and dusted

    The research master is completed. Done. And dusted. But my, what a lot of dust!

    Well, not quite, the official ceremony (without the cap) is in September, but it is all completed now.

    Before I embark on that “dust”, let me express my profound gratitude to the universe, teachers, friends and colleagues and above all, Husband, for guiding, supporting and generally putting up with me during the years. Can’t have been easy ❤️ It also really took it out of me: this was a full-time 2-year degree which I did whilst being employed full-time, and in less than ideal personal and work circumstances. Which I only managed coz I absolutely loved every minute of it, even the first 6 months when I was scared stiff my brains were no longer up for this kind of battering. Or that the real (young) students would laugh at the old bat 🙂

    Now why do I lay this on so thickly? Coz I don’t need to convince you, most of you know this. Well, it is the “dust”. Let me explain. My supervisor had expected the rounding up of my thesis to go smoothly, and hence, so had I. But something entirely different happened. During my thesis “defense” I ran into a breed of academic that I had not encountered before. A well-respected researcher specialising in at least half of the stuff my thesis was about. Seriously, I have read quite a few of his papers, and he is good. He was brought in as my second examiner. During the 50-odd minutes I had to defend my thesis, this second examiner held the floor for well over half an hour, attacking me on points of form and method. He seemed to think that a particular method – a mechanism I employed merely as a source of inspiration to construct a research paradigm – should have been used to “prove” a specific phenomenon. A parallel with the outside world: that is like the difference between devising a business strategy and writing out the technical specifications of an IT system: totally different things. He must have understood some of that, because he branded me a “generalist” and himself a “specialist”. As if one excludes the other. Now I think about it, it did feel a bit like the day job: explaining security policy to an infrastructure whizzkid, just as, I suppose, it was once explained to me. Anyway, this guy did not ask me one genuine question. Not one. Instead he seemed to be arguing a point, saying things like “just as I thought”, “what do you actually think philosophy is” and “just to prove my point”. I was flabbergasted. Was this an examination? I pointed out that I had adopted a problem-solving approach to a difficult issue and that it yielded results. According to him, the reason the problem had not been addressed was that it could not be solved, otherwise it would have been solved – by someone else, was the implication. Wow. A street fighter.

    This is what it felt like.

    It took me a while, at least 15 minutes, to understand what was happening, and even then I could not believe it. I have been an examiner myself at some points in my life. To my mind, what this guy did was unprofessional, nothing to do with a neutral assessment of someone’s knowledge and work. But I was too much taken aback to say so. Thankfully my supervisor interrupted a couple of times with questions of his own, allowing me some time to regroup. He also made remarks about a third examiner, and how my end grade would be a weighted average, and that he himself had hoped for a different outcome, even that my work had given him some new insights. It took me until the next day to deduce that there must have been a disagreement between examiners, which by procedure is the only reason a third examiner is ever called upon.

    So how did this end? Well, the damage is not too bad. My final score drops by .25 point to an 8.2 coz my thesis got a 7.5. So no big deal. I still get cum laude, says my diploma supplement. But I am annoyed. No, not annoyed, upset. Sad. It is not about the grade – if I had been asked why I had used that particular approach in my thesis and they had not agreed with the answer, that would have been fine. But this is not what happened. This little man I do not even know single-handedly spoiled the very last event of my research master by embarking on some kind of personal war, without clear reason, without regard, completely out of the blue. What on earth could have made him so angry? It cannot have been the idea of a mechanism for enactive cognition, coz I once wrote a paper on that which was marked by his co-researcher and got a high grade. I suppose I did argue that, in philosophy of language, a broader view should be taken than is generally taken by individual, specialist researchers. I did propose, from my model, new issues to research, or to research differently. But that is not something to get angry about, is it? I suppose the real reason will have to remain a mystery. I have decided that I will not lodge a complaint, because I can see that university procedure was followed meticulously. I will also not bother my supervisor with this, because I can tell he already did what he could. But I will, in my student evaluation, suggest that the university amend the Research Master thesis-defense procedure to allow for situations like this – to give the student a chance to be informed of the objections of a second examiner in time to defend or amend. It might not do any good, coz this may be a rare case, but at least I will have voiced the issue.

    Uncertainty, doubt and insecurity in the future
    Future stretching out into the beyond.

    What have I learned? Well, as my friend Teja has told me a thousand times, a university is not always a safe place. And as Husband often reminds me, I tend to be too trusting. Right. Wake-up time, and let’s be grateful I learn it now and not years hence. This kind of situation certainly is not a risk I will run for my PhD. I will find some other medium to express interesting stuff, and stick to the well-worn approach for university work. I will also make sure that I connect with every professor judging my PhD research well in time, until I am sure that I have explained myself sufficiently and ironed out any creases. Which also means that I have to find another way and environment to express my more daring thoughts and interesting notions. Not at work, though, coz that runs into the same but different problems 🙂 I will think on this, maybe do a series of short papers or publish informally on some blog or medium or other. Do drop me a line if you have ideas. I do want to voice my newly found insights on the crossroads between language, security and IT – there are so many old, even obsolete, philosophical theories still believed in by the general educated public that I itch to dispel some of them and replace them with something more productive. Such as the idea that using language is no more than stringing along words that derive their meaning from referring to an entity in the world. That is an old idea, introduced by Frege, but even he gave it up towards the end, as I found to my surprise in my little Frege adventure. Yet everyone in IT seems to cling on to this idea as if it were a religion. The reverse is also true: Ashby, a psychiatrist who pioneered cybernetics in the previous century, invented the “homeostat” and with it the idea of a double feedback loop in status tracking – the basis of every quality control and assurance program in existence today. There must be millions.
    In philosophy, this concept is not discussed often, or not in connection with language, which I think is a pity. But my research will change that, I hope ;-), or at least throw a crumb in that direction.

    So what is next? Well, here I post links to my thesis and to my research proposal. I am proud of both. There are summaries to both of them, so please don’t feel obliged to read them in full (I will never question you about them, honestly). But some of you seem to want to read these, so needs must.

    The old bat taking a well deserved rest.

    If things go as planned, I will start my research in September. It should take 4 years full time, but who cares if it takes 7. I want to finish it before I get pensioned off, though. Through the day job, I have managed to secure positions both on the board of the international committee creating norms on information security and on the national governmental board of professionals directing compliance with said norms, so I am exactly where I want to be to run the research I want on both “sides” of the normative coin. Also, coz I am nearly 60 now, I get a day off every Monday. Great employer, eh? The universe smiles, I suppose. Let’s go for it. The old bat is up for it 🙂

  • Blue tit in water yellow background

    Mellow Yellow

    And so, dear diary – things are finally coming to an end. A temporary end, from which things progress further. As Robert Jordan wrote: there are neither beginnings nor endings to the Wheel of Time.

    Yes, I am in contemplative mood. Or just exhausted. Same difference. You know that feeling, when you just keep going, that feeling of plodding on, through imaginary wind and snow and rain, until yesterday and tomorrow blur into each other?

    I think these years I must have reached my very own Olympic peak in “compartmentalising” – the little trick I taught myself when I was very young, to separate out the things that needed living through in time, space and attention. It is a simple process: just fit every task, every emotion, every thought into its own little pigeonhole and set the alarm for when it must be opened. And shut again.

    Occasionally my human mind will seize control, but nothing that cannot be crowded out by listening to a favourite audiobook (yes, Scandinavian noir). Only just before I fall asleep do the pigeons fly out to where I cannot catch them, and I murmur about it to uncomprehending Husband before my power supply goes dark. A more sensible person than myself might think it time for a holiday.

    I went a bit over the top with my pigeonholing. For the past couple of years I have forced myself to do one thing at a time, and one only. Every day and every moment. Well, admittedly there is that bit of working through my mail backlog whilst participating in some digital work conference that is beyond slow but that I am supposed to listen to patiently (I am not allowed to peel potatoes secretly anymore coz it has my colleague in stitches with laughter). But mostly, I stick to one task at a time, for multitasking is not a good idea if you want to do something well and you don’t have the time or the opportunity to re-do it. Alas, some tasks will not wait for the next pigeon to fly out. So I find myself setting my phone alarm for the oddest things. Check the rising of the bread rolls in 10 minutes. Call Son to remind him of something or other at 9:00. Complete the home-delivered shopping by midnight on Wednesdays. Stop studying by 22:00 and have a drink with Husband. I even have a winding-down alarm reminding me I should be preparing to go to bed in 45 minutes.

    So, what has come to an end? Well, a couple of things. Some of them are a bit too personal to share here – you know, the sad stuff that happens to all of us eventually, and seems to happen a lot more frequently as we get older. Let’s say I won’t be sending this update to as many subscribers as I did before.

    But one momentous thing I must share with you. I have officially ended middle age and am now an old crone. Officially, because as a civil servant I can trade in some of my holiday leave plus a tiny portion of my salary in what is called a PAS scheme (no one knows what the letters stand for) to get one full day off every week. Is that not incredible? I suppose they will soon throw it out or delay it, because the pension age used to be 65 but got extended, by nearly 2.5 years in my case. So, I still have another 8 years to go, but on a 4-day work week. Wonderful. That gets me three full days a week for uninterrupted studying. And all it took (because I could have applied for it last year already) was to swallow my pride and admit that I am, well, not so young anymore, perhaps. Maybe.

    What else has happened? Well, yes, finally. I handed in my state-of-the-art paper, my thesis and my research proposal. It was a frightening thing to do, coz once you have handed it in, then what? Wait. Bite nails. Wait some more. Actually, I wrote three state-of-the-art papers. One on the topic that my supervisor (the loveable grump) suggested and I gave up on. One on what I wanted to research. And a final version, added as a glossary to my thesis, coz I decided to review three different theories and one cannot expect the examiners to be familiar with all of them. State-of-the-art papers don’t get graded, just pass or fail, and apparently I passed. Today my supervisor wrote to me and said he really liked my research proposal. Which is great, coz that will be my job description for my unpaid PhD-candidate job for the next few years (just as well I got a day off from work). A whole new road ahead (yes, mellow yellow).

    Yellow way

    Which leaves my thesis. It was already reviewed once, and deemed of sufficient academic quality, but I was advised to make it “easier to read”. And could I not plug in a few examples? I was surprised. This is the kind of comment I might get at the office, but surely not here, with all these super clever academics? My supervising professor laughed outright when I said that. But anyway, I got the point: academics want to be catered to even if they pretend they don’t need earthly comforts like summaries and the like. Also, it turned out I had not used the APA referencing system quite correctly and I also had to flesh out my research question a bit – in short, I have just handed in a second version. Hopefully it will be ok. And then the procedure starts – well, it has already started. The examiners (there are three, my supervisor included coz he happens to be chairman of the examination board) receive both the thesis and the research proposal by June 20th and once they have read it all, I get a chance to defend it semi-publicly. When I received notice of this, I wondered if I would have to wear a gown and cap (I still have mine though they might be moth-eaten), but when I consulted another student, it turned out I did not. Just as well I asked, imagine how silly I would have looked 🙂

    So what did I write about in the end? I will post the documents on this blog once they are approved. The thesis, or publishable article as it is officially called, is about 11300 words excluding glossary and bibliography, so you might have better things to do considering Spring has just arrived and corona measures are finally being lifted. But basically, what I did is try and work out what philosophy of language is about these days, and how to further it in a sensible manner. It turns out that the field was invented some 150 years ago, and developed in roughly four phases which nobody bothered to document very clearly as they were too busy killing each other. My previous degree did not go beyond phase 1, which explains why I had to work so hard to understand what was going on and who was arguing what against whom. Here comes the abstract of the thesis, which is called: “it is in the singing, not in the song” – a multidisciplinary approach to language use.

    Language, as a social practice, involves abilities not specific to language, e.g. agency, attention, interaction, perception, memory and inferencing. Philosophical perspectives on language use can be enriched by integrating research with cognitive psychology and philosophy of biology. To show how this may work, I outline three ‘rebel’ theories: autopoietic enactivism, cultural evolutionary psychology, and normative inferentialism against a general background of the evolution of language. These can be combined into one levelled framework if we assume cognition to be normative and embodied, and to be constructed out of old animal parts. Two central processes impact all levels of the new framework: normative regulation and identity-generation. Suggestions are made for further research based on predictive processing.

    And the abstract for the PhD research proposal, which is called – yes: “the art of misunderstanding”. It is not just armchair philosophy either; I get to enjoy myself by doing a nice bit of empirical research on my colleagues.

    Speech act theory offers a central insight: utterances do not just convey meaning, they are actions that assert, request, warn, promise, invite, predict, offer, direct, etc. In conversation, we generally recognise speech acts automatically and correctly, and almost as soon as the other starts to speak. But in some situations there is a problem. With regulatory texts on specific subjects, even the experts frequently disagree about exactly what responsibility these texts confer onto whom. I propose to show that in these situations, the misidentification of speech acts is a major source of confusion; that the author(s) and audience have different interpretations of what speech acts are contained in these texts, and what the normative dimensions of these speech acts are. These findings will be interpreted in the context of Brandom’s normative inferentialism, and against the background of the cognitive theory of predictive processing; both sharing a notion of common ground and score keeping. Together, these theories may be combined to provide a framework for normative agency and interaction, of which speech acts are an instance. From this combination of philosophical insights and experimental findings, I aim to provide recommendations to improve understanding of regulatory texts on information security.

    Buttercup meadow

    I leave you with a picture of one of my favourite flowers: buttercups. I adore meadow flowers like daisies and poppies and buttercups. Find yourself a field full of them to roll around in – I certainly will.

    Next time I will tell you about the thesis itself, and what fascinated me about it, and what I discovered and how all of that is related to shades of glorious yellow. And after that, I will be talking about the research project – which will take me four years, so plenty of time for that.

  • Amuses

    What a piece of work is Man

    This is a line from Hamlet. Which I did not know about before I moved to England, coz I heard the Hair version first. Remember Hair? I suppose I am showing my age, but this was from a time when musicals were fresh and new and often provocative. I remember being mesmerised by the German version of “Hair”. No idea why. Anyway, the Hair musical also sets “What a piece of work is man” to song. It is here, if you want to listen to it. I now prefer the Hamlet version, which is here. No need to listen to all of it, just the bit after 50 seconds. This is the full quote:

    “What a piece of work is a man! How noble in reason, how infinite in faculty! In form and moving how express and admirable! In action how like an angel, in apprehension how like a god! The beauty of the world. The paragon of animals.”

    This type of thinking has a name: anthropocentric. It is when we place the human on a pedestal at the top of Creation and say: wow-wow-wow. We humans are clearly made in God’s image, more clever, more adept, than any other living creature. It is also when we look at the rest of nature and judge it by our own standards, look at animals through the glasses of our human understanding or try to understand the universe in terms of intentions and rationality and whatever else we thought up to provide a flattering backstage to our performance as masters of the universe. As the sheep in George Orwell’s Animal Farm eventually amend the Seven Commandments of Animalism: “Four legs good, two legs better!”

    I suppose you think I might be laying it on a bit thick. But I am not, really. In philosophy there has long been a sharp divide between creatures that think (us) and creatures that don’t (the rest), and it was assumed that the difference is somehow fundamental and significant. Humans think. Humans have language. Humans make tools. Humans create. Humans plan. These abilities are so fundamental to our self-image, we think they must be hard-wired, in our genes, transmitted through natural selection because these are the very characteristics that have put us in our position as overlords of creation. QED.

    Right? Wrong. Or at the very least not self-evident. This is one of the issues that lies at the root of the so-called analytic-continental divide. You might remember an earlier post about this, when I explained I never knew about the so-called divide until I started this ReMa course. In England they firmly held to the belief that anything continental was, eh, continental. Like they had “intercontinental” phone booths, for calling from one continent (the UK) to another. That was 40 years ago, before anyone ever dreamt of Brexit. But I am digressing.

    The man-as-superbeing story is a nice story, of course, comforting and safe. I was told it as a child by all the grown-ups, so it must be true. Probably, so were you. So why should it suddenly be all wrong? Well, think about it. The earth is about 4500 million years old, with single-cell organisms starting to emerge around 3500 million years ago. The first invertebrates appear around 700 million years ago, fish around 500 million years ago and plants even later. The first mammals arrived around 200 million years ago. From these, we get primates, then great apes and eventually humans. If you put these data on a 24-hour clock, you can see that we humans arrive in the last one-and-a-quarter minutes.
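
    For the curious, the 24-hour-clock arithmetic is easy to check. Here is a small illustrative Python sketch; the milestone ages are the rough figures above, and placing the first humans at around 4 million years ago is my own assumption for early hominins:

```python
# Map evolutionary milestones onto a 24-hour clock on which the Earth's
# roughly 4500-million-year history runs from midnight to midnight.
# Ages are in millions of years; the figure for the first humans (~4 Mya,
# early hominins) is an assumption for illustration.

EARTH_AGE_MYA = 4500
MINUTES_PER_DAY = 24 * 60

def minutes_before_midnight(age_mya: float) -> float:
    """Clock-minutes before midnight at which an event of this age falls."""
    return age_mya / EARTH_AGE_MYA * MINUTES_PER_DAY

milestones = {
    "single-cell organisms": 3500,
    "first invertebrates": 700,
    "first fish": 500,
    "first mammals": 200,
    "first humans (assumed)": 4,
}

for name, age_mya in milestones.items():
    print(f"{name}: {minutes_before_midnight(age_mya):.2f} minutes before midnight")
```

    Run it and the last line comes out at roughly 1.28 minutes before midnight, which is where the "one-and-a-quarter minutes" comes from.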

    So what? you may think. Well, it takes time for natural selection to work on genes. It has taken a very very long time for complex life forms to evolve. It is therefore very unlikely that human abilities appeared out of the blue, by lightning or some other external event. It is much more likely that our special cognitive abilities are built up from old stuff, from building blocks that we share with other forms of life. This is what Cecilia Heyes says. She is a philosopher-psychologist or psychologist-philosopher and she explains it all very well in her book “Cognitive gadgets”. I am a big fan. If you don’t want to read the whole book, read this article in Aeon.

    For my thesis (yes, I am writing it), I am trying to connect up three theories on an evolutionary continuum:

    • Enactivism, based on biological evolutionary theory, Di Paolo-style
    • Cultural evolution, Cecilia Heyes style
    • Pragmatist philosophy of language, Brandom-style

    The idea is to recognise basic patterns to the emergence, use and development of cognitive abilities. Any pattern that exists as an evolutionary ground pattern need not be explained at the human level. So let’s go down the evolutionary path and start at the bottom. With something really simple, like one-cell organisms.

    The first problem that needs solving is how to decide when something is alive. Biological theory (well, the variety invented by Maturana back in the 1980s, which developed into enactivism) says that any living cell will keep itself alive in two opposite ways: by shutting itself away safely behind a membrane and by interacting with its environment (bacteria will move towards a source of sugar in the water). All living organisms switch between those two modes of isolation and interaction. This switching is regulated through active homeostasis.

    Ashby’s double feedback loop

    Homeostasis involves having an extra process on top of the primary process which monitors the situation (compares it against a threshold) and responds when something needs doing. Your home thermostat is a good example. The idea of homeostasis is also hugely popular in the real world. Quality systems are inevitably based on (at least) double loop regulation. Little do they know that the concept was, well, I suppose not discovered, but formalised by Ashby in the 1940s. Now obviously thermostats and quality systems are not alive. Their homeostasis is not active, not self-regulating, not geared toward survival. Active homeostasis causes the organism to take steps to stay alive, i.e. to regulate its isolation and interaction in such a way that not only will it survive now and today, but also tomorrow. Is this scientific? Well, yes, in the sense that it is a very basic and clean formulation of what life is: striving to survive, to be free of the imminent danger of the internal system running down or the environment taking over, taking steps to ensure that survival.
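
    For readers who like things concrete, here is a minimal Python sketch of double-loop regulation in the spirit of Ashby: an inner loop nudges a value back toward a set-point (the thermostat), and an outer loop monitors the inner one and adjusts its threshold when regulation keeps failing. The class name, numbers and adaptation rule are all illustrative, not Ashby’s own formulation:

```python
# A minimal sketch of double-loop regulation: an inner loop keeps a
# variable near a set-point (like a thermostat), and an outer loop
# watches whether that regulation is still coping and, if not, changes
# the inner loop's parameters. All values are illustrative.

class DoubleLoopRegulator:
    def __init__(self, set_point: float, tolerance: float):
        self.set_point = set_point   # inner-loop target
        self.tolerance = tolerance   # inner-loop threshold
        self.failures = 0            # outer-loop bookkeeping

    def inner_loop(self, value: float) -> float:
        """First loop: nudge the value back toward the set-point."""
        error = value - self.set_point
        if abs(error) > self.tolerance:
            return value - 0.5 * error   # corrective action
        return value                     # within threshold: do nothing

    def outer_loop(self, value: float) -> None:
        """Second loop: monitor the first, adapt it when it keeps failing."""
        if abs(value - self.set_point) > self.tolerance:
            self.failures += 1
        if self.failures > 3:        # regulation not coping:
            self.tolerance *= 1.5    # change the regulator itself
            self.failures = 0

    def step(self, value: float) -> float:
        corrected = self.inner_loop(value)
        self.outer_loop(corrected)
        return corrected
```

    The point of the second loop is exactly the one made above: it does not regulate the value, it regulates the regulation.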

    Operational closure, by Di Paolo & Thompson, 2014

    The next question is about how organisms get involved with their environment and each other, and become adapted to that interaction. General systems theory offers a basic pattern for this, with a little tweaking: the idea of operational closure. A terrible term for something rather interesting: the living system will try to stay alive by extending into the environment. This is done by moving its boundaries. Literally. The inside of the living organism consists of linked up processes (the black circles). You may think of it as the inside of a cell or a bacterium. Notice how the internal processes interconnect to form a whole: the total of connections between the linked processes constitutes its border. Also note how operational closure does not require a structural barrier with the environment: it is not organisational closure. Nor is it interactional closure, because the interconnecting processes of the organism may simultaneously interact with the environment. Now the organism may further extend its borders by drawing in another process, by connecting its in- and output to the existing connections. It is not difficult to see how operational closure, in combination with active homeostasis, will allow a group of cells to work together and eventually specialise, and so on, into the evolution of that organism, simply by trial and error, and without recourse to genetic change. What I find interesting is that the identity of the living system is not fixed but fluid, indeed self-generated: by drawing in new processes (or abandoning them), the living system changes not just its structure but also its identity. What a wonderful modelling problem 🙂 I think the business architects in my day job would file for mass retirement if they knew.
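
    A toy model of operational closure, for the programmers among you: the system is just a set of processes with enabling links, it counts as operationally closed when every process both enables and is enabled by some member, and drawing in a new process changes its identity (the current set of processes). Entirely illustrative, of course; the class and its rules are my own rendering, not Di Paolo’s:

```python
# A toy rendering of "operational closure": a living system as a set of
# processes whose enabling relations loop back into the set. Drawing in
# a new process extends the boundary; the "identity" is simply the
# current set of processes.

from typing import Dict, Set

class OperationallyClosedSystem:
    def __init__(self):
        # enables["a"] == {"b"} means process a enables process b
        self.enables: Dict[str, Set[str]] = {}

    def add_process(self, name: str, enables: Set[str], enabled_by: Set[str]):
        """Draw a new process into the network, relinking the boundary."""
        self.enables.setdefault(name, set()).update(enables)
        for other in enabled_by:
            self.enables.setdefault(other, set()).add(name)

    def is_closed(self) -> bool:
        """Every process enables some member AND is enabled by some member."""
        members = set(self.enables)
        enabled = {q for targets in self.enables.values() for q in targets}
        return all(self.enables[p] & members for p in members) and members <= enabled

    def identity(self) -> frozenset:
        """The system's identity is just its current set of processes."""
        return frozenset(self.enables)
```

    Three mutually enabling processes already form a closed loop; add a fourth that plugs into the existing connections and the network stays closed, but its identity is no longer the same set.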

    The ideas of isolation/interaction, active homeostasis, operational closure and identity-generation are fundamental to what is called autopoietic enactivism, the first of three theories I am lining up. The important thing is that if a bacterium actively keeps itself alive through these basic biological patterns, there is no need to explain the why of this behaviour at every level of creation. Nor – and this is important – do we need to assume it requires consciousness or thought or intentions or any other specifically human machinery. We also do not need to ask how the bacterium knows how to respond to the outside world. It does not know. It merely tries to stay alive, i.e. keeps its processes below the robustness-threshold of its homeostasis by whatever means available to it. It is easy to see natural selection operating on the survival of the fittest bacterium.

    With the emergence of more complex life forms – perhaps fitted with a nervous system – the history of successful interactions will start to become important, allowing for the formation of habits. This is the realm of ecological psychology, which says that organisms use their environment as scaffolding. With the recognition of other life-forms as agents, i.e. as organisms striving to survive through interaction with the shared environment, new types of interaction become possible. Several organisms may interact to produce a group interaction. And from that, group habits. Again, it is easy to see natural selection operating on the survival of the group with the most successful habits. No human cleverness needed for that, either.

    Boring? I find this stuff fascinating. It allows for another type of evolution which is not based on genetics alone, but on successful behaviour which is arrived at and extended into the future in some other, non-genetic way. Which sort of fits the data: the difference between our genes and the genes of chimps or bonobos is only 1.2%. Interestingly, there is a lot more genetic variation within chimps and bonobos than within humans: only 0.1% within humans. I would not be surprised if we killed off most of the human genetic variants ourselves. A nasty piece of work, is Man. We probably threw the Neanderthals off the last cliff. Plus our other ‘siblings’. There are a bunch of new discoveries about our human predecessors. In the chart below you can see our brothers and sisters who did not make it.

    Source: https://www.nhm.ac.uk/discover/the-origin-of-our-species.html

    This post is getting a bit long. I will write up the other two parts next time. I just want to tell you about my tiny adventure with Continental philosophy and a French philosopher who causes my regular professors indigestion. I still had to get some ECs (European credits) and I decided on a Philosophy of Evolution course. I thought it might extend my knowledge, and expected it to be a bit like the research seminar for philosophy of language, which was (mainly) on chimpanzees. Well, it was not. It was Continental philosophy, centred around Stiegler, a French philosopher who died very recently. My lecturer is a big authority on him, so the seminar was worthwhile if only for that (he also publishes philosophical papers on shamans and the use of psychedelics, so an old hippie, after my heart). Unfortunately “Stiegler and his Parisian Friends”, as my supervising professor calls them, provoke impatience to the point of aggression in analytically-styled philosophers. Which I find amusing. I suppose that through my long years as a civil servant I have learned to stomach texts which are a lot more cumbersome than Stiegler’s. So I just smiled my way through (a lot of) them, and found some really useful insights. I also noticed that my lecturer was not up to date on the ideas in philosophy of mind and language; just as the other (analytical) side was not. So I wrote my essay on combining ideas from both sides, I suppose as a sort of prelude to my thesis. I used the picture about the evolving chimp (above) in my essay, and had the text checked by both sides. It was well received, so I must have got it right. It is here, if you want to read it. I was allowed to get out of a “group essay” – oh horror, remember my previous adventures with group work? Even if it does work out, I end up doing most or all of the work so I might as well do it on my own. But I was let off (pfft) and allowed to write an essay on my own, on the condition that I would stick to the word-limit. Which I did, with three words to spare 🙂 So the essay is not too long, but perhaps a bit technical. Judge for yourself if you are so inclined.

    P.S. If you have any comments about the biological patterns I wrote up in this post, do let me know. I am wondering if explaining the ideas the way I do is making any sense to normal people like yourselves. I have been trying it out on Husband, but he is finally beginning to object to the timing of my lengthy after-midnight brainwaves, so perhaps you want to help me out 🙂

  • Amuses

    Trust me, I don’t know either

    Water, sea, waves, skies – I adore them. I stand in great awe of artists who manage to capture their light. Like Ivan Konstantinovich Aivazovsky, who painted the translucent waves of the picture at the top of this blog. He did many more (you can use the link to check him out). He painted this particular one towards the end of his life, and it is special because there are no ships, people, or shoreline. Just water and light. So breathtakingly beautiful.

    I am not quite sure why I wanted this particular picture for today’s post. Possibly I am missing the sea. In other years we usually take mini-holidays near the sea so I can walk bare-foot along the sea line. I can do that for hours on end. If Husband did not suggest we’d better turn back, I believe I would not stop. I love wading between the little islands that form between tides. I sing to the waves and talk to the birds, but mostly I breathe and splash water with my feet. Yes, quite like a child. Adulthood is overrated 🙂

    I suppose wild water signifies opposite things to me. Beauty, freedom, light, strength, life. But also force and danger and sudden change. A bit like living, perhaps. And there is my link. I wanted to tell you about trust. Trust is risky business. I have been thinking about it in connection to language and cognition, and I developed my own little theory. Which is probably wrong, but it is my first, so bear with me.

    By John P. Weis. I have a subscription to his newsletter, hence found this delightful image in my mail this weekend.

    This is also the long-promised fourth and last instalment of my mini-series ‘studying in times of Corona’. The first wave (hmm); as it now transpires, we are heading into the third. I think the Netherlands must be the very last densely populated European country to go into lockdown. But we are. From tomorrow. Not before time, either. Husband has gone out, trying to get coffee beans. It appears he is not the only one. Even though coffee is ‘essential’, surely.

    My last “big” seminar was on “folk psychology”. You might think that is people pretending to be psychologists, but it is a bit different. The idea is that we read each other’s minds. All the time. We do that, supposedly, to understand and predict each other. We know, or we think we know, ‘what makes other people tick’. We think in terms of belief-desire: we are rational beings that believe and want things, and that is what makes us act. The idea is from Hume, and draws on Aristotle’s De Anima. So it has been around a long time, long enough for you and me to believe it firmly. Sounds plausible, eh? Flattering too: Homo sapiens really got it all sussed. No wonder we are at the top of the evolutionary ladder.

    Painting by Harry Roseland. Yes, they are also reading tea leaves. Just a little joke.

    You probably saw it coming: perhaps not. This is a big debate in current Philosophy of Mind. I wrote a very, very long paper on it, far exceeding the number of words allowed for a paper, first reviewing the various positions on the issue, and then developing a bit of my own theory. If you want to read the paper, it is here. It got me a very good grade, but I suppose I was lucky the professor wanted to read it at all, as it did not conform to any of the usual requirements. He said it was majestic, but not a paper at all, more like the outline of a book or a dissertation. Well, yes, I suppose it was. I was so excited about the topic. Still, I felt lucky to get detailed feedback. Not used to getting this much attention to my work. At the office no one is in the business of improving my mind, I suppose 🙂

    I will try to tell you what the debate on folk psychology is about, because otherwise I cannot explain my own little theory. Let’s take how we normally talk about each other as a starting point. We talk about our mental states a lot. About what we think, believe, feel, and why and why not. We are also very much aware of other people having thoughts, beliefs, etc. Children learn to do this at an early age, it is thought as early as 15 months. It is a fundamental ability for social interaction because it allows us to cooperate and coordinate. At all levels, in a family, in a shop, at school, at work or in government. You can see this ability at work very easily. We explain ourselves constantly in terms of what we believe and feel. And we call each other out: Why did you do that? What is the point of this? Such behaviour is characteristic of humans, because there is little evidence (or none, as some would have it) of animals making each other justify their behaviour.

    So what is the big debate about? Well, it is not about whether we display this behaviour or whether this is typically human. It is about whether this folk psychology is an innate, genetically inherited ability. The received opinion was, and mostly is, that this innate ability is what makes humans special, sets us apart. Philosophers who think that usually also think that this ability lives in the brain, as some kind of specialised module. That we read our own mental states and those of others because we have special equipment to do so, given to us by Evolution. At great cost, because large brains are expensive in terms of energy. But those with the best mindreading abilities survived, because clearly this provided a competitive advantage. This is called the Machiavellian Intelligence Hypothesis. It also explains our intelligence and our ability to plan ahead.

    There are some big problems with this view. One is that our reactions to other people are much faster than would ever be possible if we consciously evaluated mental states. Another one is that if you look upon other people in the third person, as agents with mental states that you can read, that leaves no room for true interpersonal experience, for experiencing together. Then there is the matter of the horse and the carriage – do our mental states explain our behaviour, do we act in accordance with our intentions? Or is it the other way around? Cecilia Heyes, a philosopher-psychologist, says that folk-psychologising is very clever, but that there is no neural basis for it. At all. It is an ability which we have discovered, fostered, taught to our children and hence transmitted across generations, through cultural learning. We teach our children from birth to respond and to learn, that is what makes us special. There are others who say that cognition is not individual and not brain-bound; that is just a fairly recent idea which came from our own invention of computers. And so on and so forth.

    The main idea, from the non-traditional camp, is that social cognition, including our mind-reading ability, is extended by language. Language is required for cooperation, specialisation and coordination. And as a device for the enculturation of social memory. Not, as classic philosophy of language would have it, to express truths about the world – remember my post about Frege? No special genes, no special modules. Simply something we have learned to do well as a species. Much like our ability to drive or play games.

    Painting by Cassius Coolidge

    You may shrug and think this new approach is not a big deal, but I can assure you it is, in my little Philosophia bubble. It turns human cognition into something that is shared with other primates, which opens up a whole new vista of research. We do have to redefine the word “cognition” though, so that it does not refer just to humans, but that should not be too much of a problem. Philosophers of language have done much worse in the past 🙂

    And now it is curtains up for my little theory. It struck me that in neither “camp” was there a true discussion or inquiry into “why”. Why do we mindread? Or pretend we do? Obviously the survival-of-the-fittest theory won’t wash, as this ability is not genetically inherited. So why? I learned from cases in psychiatry that what therapists do, is to provide consistent feedback when patients cannot do this for themselves. As if they temporarily take over the social mindreading function until the patient can do it for him or herself again. Obviously that requires trust. If you look at this from the patient’s point of view, then what you see is a form of cognitive offloading – the patient outsources, as it were, mental work to the therapist. If you look at cognition in general, this is what we do all the time: we outsource, offload tasks to our environment. To other people, to artefacts like books, and recently to smart devices – anything to free up cognitive resources. Even accepting information from someone may be regarded as a form of cognitive offloading. And all of it requires trust. If you cannot rely on whatever you outsource your cognitive labour to, you are at risk.

    In a nutshell, social cognition requires constant risk management. So there. I will come back to this idea at a later point, because it will be a theme in my PhD. Talked it over with my professor today, and he agreed. I will tell you about the full proposal once I have written it up, but there will be a relation between felicity conditions for speech acts, trust and what we do – in language – to compensate when we are not sure what or who we are dealing with. Maybe I will find out something interesting. And if not, that is also of interest.

    Robots communication

    Next I will tell you about my tiny adventure with Continental philosophy and a French philosopher who causes my regular professors indigestion. And then it will be thesis time – these days called a “publishable article”. I was told today that I have already done all the preparation I need (which means my research log = state-of-the-art paper = 10 EC), so I will be writing the outline in the next few weeks. Exciting. But there is also Xmas, and Husband and Son and Xmas dinner to cook and films to watch.

    I hope your Christmas will be pleasant.