• PhD

    New ideas: a partial answer (part 2)

    In the past year, I have been thinking and reading, and trying to find out what other people have said on the subject I wish to expand on. I think I am now embarking on a different phase: pulling out all the ideas – like weeds 🙂 – and drawing them together into a narrative.

    I have started on my dissertation “proper”. That is, I have set up the outline and am now in the process of filling in the blanks. You can check my progress if you like, but this comes with a warning. Some of the ideas should not be spread to security officers before I interview them, because it would render the interview useless. So, if you think you might be a candidate for interview on the meaning and implication of ISO norms on security, do not read my dissertation-in-creation. I trust you. Really. My dissertation-in-progress is on https://publish.obsidian.md/theartofmisunderstanding. It will ask you for a one-time password. If you want it, write to me, although you might guess it :-). The landing page will refer you to my progress page, and there you can see what I am working on at any particular moment. There is quite a bit of tooling behind the dissertation-writing process. I use Obsidian in combination with Zotero. If you are interested, ask me, or check out a showcase like this one. Also, videos by Brian Jenks are worth checking out. He uses Obsidian as an extended memory to manage his ADHD and has excellent structure and tips.

    Meanwhile, my general progress report: I had a lot of fun finding out that the theory of speech acts, which is at the basis of my research, has somehow escaped into the real world without language philosophers knowing about it. In fact, the ideas that have “got out there” are not just about language but about cognition and interaction in general – again, without philosophers being aware of it. I find this very amusing. On the one hand, you have these philosophers who are very seriously trying to work out what it all means, and then, on the other hand, normal, practical people borrow these ideas and put them into practice, sometimes with astonishing success, but without any theoretical foundation whatsoever. And never shall the two meet…

    What kind of ideas? Well, the idea that feedback is necessary to interaction. That particular idea originally came from biology, from two philosophers called Maturana and Varela, and gave rise to such concepts as the PDCA cycle (plan-do-check-act). The blanket term is “cybernetics”, and the underlying popular theory is known as “system theory”, which derives from autopoiesis. I was first introduced to this theory during my research master, and blogged about it here. The idea is that any system, whether a cell, an animal, or an organisation, will try to keep itself alive whilst engaging with the environment to increase its chances of survival. I am sure you find this idea, well, simple. This is how we commonly think. But we did not always think that way. It is something we learned to do, as a society, in the past 100 years, perhaps even only the past 50.
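    For the IT people in my audience: below is a toy sketch, in Python, of how I picture that feedback idea in PDCA form – plan a small change, do it, check the result against the goal, and act by keeping only what helped. Every name and number in it is made up purely for illustration; it is not anyone’s actual method or library.

    ```python
    # A toy illustration (not anyone's real method) of the plan-do-check-act idea:
    # a system nudging itself towards a goal and only keeping changes that help.

    def plan(current: float, goal: float) -> float:
        """Plan: propose a small adjustment towards the goal."""
        return (goal - current) * 0.3

    def do(current: float, adjustment: float) -> float:
        """Do: apply the proposed adjustment."""
        return current + adjustment

    def check(result: float, goal: float) -> bool:
        """Check: are we close enough to the goal?"""
        return abs(goal - result) < 0.1

    def act(trial: float, previous: float, goal: float) -> float:
        """Act: keep the change only if it brought us closer to the goal."""
        return trial if abs(goal - trial) < abs(goal - previous) else previous

    state, goal = 2.0, 8.0
    for cycle in range(1, 21):
        trial = do(state, plan(state, goal))
        state = act(trial, state, goal)
        if check(state, goal):
            print(f"goal reached after {cycle} cycles: state = {state:.2f}")
            break
    ```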

    The point is, this feedback notion is an important philosophical idea which is rooted in a specific theory. Would you believe – this is for the IT people in my audience – that the agile and lean ways of thinking are derived from this, without any further theoretical grounding? I checked this extensively, and also talked about it with the Dutch cybernetics society. So, it appears that the philosophical ideas from the first half of the 20th century just went viral, and people tried them for practical purposes, adopting them if they seemed to work, and then of course went on from there to create more practices. Without there being any proper feedback loop to improve the initial theories. Incredible. How does anyone, philosopher or practical person, imagine we learn from this?

    Did I intuit this before? Well, I did notice there was something going on, but I was not sure what exactly. I had previously noticed that frameworks like ArchiMate – a framework for describing organisational and IT structure – are, in spite of their claims, not derived from philosophy of language, just from practical application. Which is ok, but I do find the pretense of “scientific evidence” quite amazing in the absence of a proper theory and proper validation. And yet, the authors of ArchiMate tried – the same cannot be said of business concepts like “lean” and “agile”, which are totally devoid of any theoretical basis whatsoever, and derive their strength from our natural tendency to work in small tribes, which we share with our chimp and bonobo cousins. Which – for those of you that are agile-believers – does not necessarily mean the agile theory is wrong. Just that it caters to our cognitive limitations as animals and apes rather than to any grand business theory…

    I had also noticed, in the work of Jan Dietz and his DEMO method for analysing business IT, that Dietz incorporated a notion of speech acts and commitment which went way beyond what was understood at the time and certainly had not found its way into other business IT or architecture thinking. That fascinated me. How did he manage this? So I interviewed Jan Dietz (a pleasant occasion at a café in Leiden) and found that indeed he had understood something that philosophers had not understood at the time. He told me wonderful stories about how his method saw the light of day, and what happened on his first projects. With respect to my search into the notion of “commitment” in his work, he referred me to the work of Winograd and Flores (1986). Both men are now nearing 80, but still very much part of the world, and, importantly for me, turned out to be willing to answer my questions.

    Terry Winograd is a pioneer of artificial intelligence research and was the PhD supervisor of Google co-founder Larry Page, whose work gave rise to the Google search engine. I wrote to him, but as I expected, his particular expertise was not language and normativity but IT. That language bit, he said in a private message to me, had come from his co-author, Fernando Flores, whom he was still seeing regularly.

    Fernando Flores is a person in his own right: a former minister of Finance in Chile, a philosopher, and a successful businessman; his communication advice costs about a million per piece. However, there was nothing in Flores’ writings that explained where he got his ideas from, so I wrote to him and managed to obtain an interview via Zoom. Amazing! We spoke for almost two hours. The idea of “speaking = acting = incurring commitment” was at the basis of the new Chilean planned economy in the early 1970s, at the instigation of Flores himself. His communication-with-commitment ideas were shaped by Stafford Beer, the father of management cybernetics, who took Maturana’s original theory and applied it – without his consent – not to biological cells but to organisations, i.e., treating businesses as living organisms. Flores himself spent years in jail after the right-wing coup – being visited by, yes, Maturana, amongst others. Cybernetics became associated with communism, which did not help its academic stature much. I recall being at school, as a teenager, at the time all of this happened, and not understanding much. My parents were very right-wing, and I was told that anything to do with communism and socialism was bad. I remember getting a geography essay on Chile back with my teacher remarking that “I might have invested more effort”. Quite! Well, here is my karma, after all. There is a great book on this; see Medina, E. (2011). Cybernetic revolutionaries: Technology and politics in Allende’s Chile. Cambridge, MA: MIT Press.

    I also found that psychologist Watzlawick’s ideas about communication, which I blogged on before and liked very much, stem from the very same cybernetic tradition: the idea that our social connections form a system within which we interact – much like a living system that strives for survival, expansion and interaction. The idea is: do not just look at the message, look at the whole context. By the time I had found all of these cybernetic applications of a single under-developed philosophical idea about the connection between speech acts and commitment, I was getting quite excited. I wondered what had gone wrong with cybernetics, and why the connection to philosophy had been lost. So I searched high and low and eventually came across a book by Novikov, which shows the entire area of application of cybernetic theory.

    If, from the picture above, you get the idea that artificial intelligence is also not rooted in philosophical theory, you would be quite right. AI is all practice and practical application without any theoretical basis in language or philosophy whatsoever. But don’t tell anyone, because no one wants to know. Everyone wants funding and an audience.

    Right. So now the only thing I have to do is connect cybernetics back to its philosophical roots 🙂 and then, for language understanding, reconnect it to philosophical moral theory and cognition. Not exactly simple to do, but needs must, as they say. The problem is that there is no system or consistency to the application of cybernetics to its operational fields. It literally escaped from the philosophical laboratory. So I would have to either start from the original theory of cybernetics, from its inception by mathematician and philosopher Norbert Wiener (1948), and work my way forward, or posit my own theory and try to fit practical results from real-life applications of the theory into it. It will have to be the latter, as I am not a historian, but it is daunting nevertheless.

    I am leaving you to ponder over my new insights into cybernetics, but also with a book suggestion for which I thank both Husband and my friend Teja. Two female philosophers have written a book about four courageous women philosophers at Oxford at the time of the Second World War: Elizabeth Anscombe, Iris Murdoch, Philippa Foot and Mary Midgley. See the review in the Guardian. (Dutch people: do not read the NRC review because it is pretentious clap-trap, sorry to say; just order it from here.) Anyway, such a great book! Finally, I understand why my many questions were not answered when I was there (at the time I blamed myself). The main pleasure comes from knowing why and under what conditions the important analytical theories were produced, as well as how the great philosophers interacted and how other ideas were excluded. Lots of insightful gossip about horrible men and passionate quests for understanding. I love it. I’m halfway through the audiobook now. I am going to write to the authors when I finish, to thank them for putting analytical philosophy and my own Oxford experience into perspective for me. It is a delightful read, highly recommended.

    My next journey is into agency, the connection between roles and norms and understanding. I will keep you posted.

    PS. Did you see my post on LinkedIn about my son graduating in Law and moving on to philosophy? For those of you that know us personally: you know we went to hell and back. So grateful it all ended well. I thank all of you that liked and/or commented. The young need our encouragement, as we have left them so many problems that we could not solve ourselves.

  • Amuses

    Done and dusted

    The research master is completed. Done. And dusted. But my, what a lot of dust!

    Graduation
    Well, not quite, the official ceremony (without the cap) is in September, but it is all completed now.

    Before I embark on that “dust”, let me express my profound gratitude to the universe, teachers, friends and colleagues and, above all, Husband, for guiding, supporting and generally putting up with me during these years. Can’t have been easy ❤️ It also really took it out of me: this was a full-time 2-year degree which I did whilst being employed full-time, and in less than ideal personal and work circumstances. Which I only managed coz I absolutely loved every minute of it, even the first 6 months when I was scared stiff my brains were no longer up for this kind of battering. Or that the real (young) students would laugh at the old bat 🙂

    Now why do I lay this on so thickly? Coz I don’t need to convince you; most of you know this. Well, it is the “dust”. Let me explain. My supervisor had expected the rounding off of my thesis to go smoothly, and hence, so had I. But something entirely different happened. During my thesis “defence” I ran into a breed of academic that I had not encountered before. A well-respected researcher specialising in at least half of the stuff my thesis was about. Seriously, I read quite a few of his papers, and he is good. He was brought in as my second examiner. During the 50-odd minutes I had to defend my thesis, this second examiner held the floor for well over half an hour, attacking me on points of form and method. He seemed to think that a particular method – a mechanism I employed merely as a source of inspiration to construct a research paradigm – should have been used to “prove” a specific phenomenon. A parallel with the outside world: that is like the difference between devising a business strategy versus writing out the technical specifications of an IT system: totally different things. He must have understood some of that, because he branded me a “generalist”, and himself a “specialist”. As if one excludes the other. Now I think about it, it did feel a bit like the day job: explaining security policy to an infrastructure whizzkid, just as, I suppose, it was once explained to me. Anyway, this guy did not ask me one genuine question. Not one. Instead he seemed to be arguing a point, saying things like “just as I thought”, “what do you actually think philosophy is” and “just to prove my point”. I was flabbergasted. Was this an examination? I pointed out that I had adopted a problem-solving approach to a difficult issue and that it yielded results. According to him, the reason the problem had not been addressed was that it could not be solved, otherwise it would have been solved – by someone else, was the implication. Wow. A street fighter.

    This is what it felt like.

    It took me a while, at least 15 minutes, to understand what was happening, and even then I could not believe it. I have been an examiner myself at some points in my life. To my mind, what this guy did was unprofessional, nothing to do with a neutral assessment of someone’s knowledge and work. But I was too much taken aback to say so. Thankfully my supervisor interrupted a couple of times with questions of his own, allowing me some time to regroup. He also made remarks about a third examiner, and how my end grade would be a weighted average, and that he himself had hoped for a different outcome, even that my work had given him some new insights. It took me until the next day to deduce that there must have been a disagreement between examiners, which, by procedure, is the only reason a third examiner is ever called upon.

    So how did this end? Well, the damage is not too bad. My final score drops by .25 point to an 8.2 coz my thesis got a 7.5. So no big deal. I still get cum laude, says my diploma supplement. But I am annoyed. No, not annoyed, upset. Sad. It is not about the grade – if I had been asked why I had used that particular approach in my thesis, and they had not agreed with the answer, that would have been fine. But this is not what happened. This little man I do not even know single-handedly spoiled the very last event of my research master by embarking on some kind of personal war, without clear reason, without regard, completely out of the blue. What on earth could have made him so angry? It cannot have been the idea of a mechanism for enactive cognition, coz I once wrote a paper on that, which was marked by his co-researcher and got a high grade. I suppose I did argue that, in philosophy of language, a broader view should be taken than is generally taken by individual, specialist researchers. I did propose, from my model, new issues to research, or to research differently. But that is not something to get angry about, is it? But I suppose the real reason has to remain a mystery. I have decided that I will not lodge a complaint, because I can see that university procedure was followed meticulously. I will also not bother my supervisor with this, because I can tell he already did what he could. But I will, in my student evaluation, suggest that the university amend the research master thesis defence procedure to allow for situations like this – to give the student a chance to be informed of the objections of a second examiner in time to defend or amend. Might not do any good, coz this may be a rare case, but at least I will have voiced the issue.

    Uncertainty, doubt and insecurity in the future
    Future stretching out into the beyond.

    What have I learned? Well, as my friend Teja has told me a thousand times, a university is not always a safe place. And as Husband often reminds me, I tend to be too trusting. Right. Wake-up time, and let’s be grateful I learn it now and not years hence. This kind of situation certainly is not a risk I will run for my PhD. I will find some other medium to express interesting stuff, and stick to the well-worn approach for university work. I will also make sure that I connect with every professor judging my PhD research well in time, until I am sure that I have explained myself sufficiently and ironed out any creases. Which also means that I have to find another way and environment to express my more daring thoughts and interesting notions. Not at work, though, coz that runs into the same but different problems 🙂 I will think on this, maybe do a series of short papers or publish informally on some blog or medium or other. Do drop me a line if you have ideas. I do want to voice my newly found insights on the crossroads between language, security and IT – there are so many old, even obsolete, philosophical theories still believed in by the general educated public that I itch to dispel some of them and replace them with something more productive. Such as the idea that using language is no more than stringing together words that derive their meaning from referring to entities in the world. That is an old idea, introduced by Frege, but even he gave it up towards the end, as I found to my surprise in my little Frege adventure. Yet everyone in IT seems to cling on to this idea as if it were a religion. The reverse also happens: take Ashby, a psychiatrist who pioneered cybernetics in the previous century and invented the “homeostat”, and with it the idea of a double feedback loop in status tracking – the basis of every quality control and assurance programme in existence today. There must be millions of them. In philosophy, this concept is rarely discussed, and hardly ever in connection with language, which I think is a pity. But my research will change that, I hope ;-), or at least throw a crumb in that direction.
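    For the curious, here is a little toy sketch of what I mean by a double feedback loop: an inner loop that keeps correcting a value towards a target, and an outer loop that watches whether those corrections keep falling short and, if so, adjusts the regulator itself. It is only my own illustration of the idea – not Ashby’s actual homeostat – and every name and number in it is invented.

    ```python
    # Toy illustration of a double feedback loop (not Ashby's actual homeostat):
    # an inner loop regulates a value, an outer loop adjusts the regulator.

    def inner_loop(value: float, target: float, gain: float) -> float:
        """First loop: nudge the measured value towards the target."""
        return value + gain * (target - value)

    def outer_loop(gain: float, recent_errors: list, tolerance: float) -> float:
        """Second loop: if errors stay large, change the regulator's own setting."""
        if recent_errors and all(abs(e) > tolerance for e in recent_errors):
            return gain * 1.5  # the system adapts itself, not just its output
        return gain

    value, target, gain, tolerance = 0.0, 10.0, 0.02, 0.5
    errors = []

    for step in range(1, 101):
        value = inner_loop(value, target, gain)
        errors.append(target - value)
        gain = outer_loop(gain, errors[-5:], tolerance)
        if abs(target - value) <= tolerance:
            print(f"settled after {step} steps: value = {value:.2f}, gain = {gain:.3f}")
            break
    ```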

    So what is next? Well, here I post links to my thesis and to my research proposal. I am proud of both. There are summaries of both of them, so please don’t feel obliged to read them (I will never question you about them, honestly). But some of you seem to want to read these, so needs must.

    The old bat taking a well deserved rest.

    If things go as planned, I will start my research in September. It should take 4 years full time, but who cares if it takes 7. I want to finish it before I get pensioned off, though. Through the day job, I have managed to secure positions both on the board of the international committee creating norms on information security and on the national governmental board of professionals directing compliance with said norms, so I am exactly where I want to be to run the research I want on both “sides” of the normative coin. Also, coz I am nearly 60, I now get a day off every Monday. Great employer, eh? The universe smiles, I suppose. Let’s go for it. The old bat is up for it 🙂

  • Amuses

    Mellow Yellow

    And so, dear diary – things are finally coming to an end. A temporary end, from which things progress further. As Robert Jordan wrote: there are neither beginnings nor endings to the Wheel of Time.

    Yes, I am in a contemplative mood. Or just exhausted. Same difference. You know that feeling, when you just keep going, that feeling of plodding on, through imaginary wind and snow and rain, until yesterday and tomorrow blur into each other?

    I think in these years I must have reached my very own Olympic peak in “compartmentalising” – the little trick I taught myself when I was very young, to separate out the things that needed living through in time, space and attention. It is a simple process: just fit every task, every emotion, every thought into its own little pigeon hole and set the alarm for when it must be opened. And shut again.

    Occasionally my human mind will seize control, but nothing that cannot be crowded out by listening to a favourite audio book (yes, Scandinavian noir). Only just before I fall asleep will the pigeons fly out to where I cannot catch them, and I murmur about it to uncomprehending Husband before my power supply goes dark. A more sensible person than myself might think it time for a holiday.

    I went a bit over the top with my pigeon-holing. For the past couple of years I have forced myself to do one thing at a time, and one only. Every day and every moment. Well, admittedly there is that bit of working through my mail backlog whilst participating in some digital work conference that is beyond slow but that I am supposed to listen to patiently (I am not allowed to peel potatoes secretly anymore coz it has my colleague in stitches with laughter). But mostly, I stick to one task at a time, for multitasking is not a good idea if you want to do something well and you don’t have the time or the opportunity to re-do it. Alas, some tasks will not wait for the next pigeon to fly out. So I find myself setting my phone alarm for the oddest things. Check the rising of the bread rolls in 10 minutes. Call Son to remind him of something or other at 9:00. Complete the home-delivered shopping by midnight on Wednesdays. Stop studying by 22:00 and have a drink with Husband. I even have a winding-down alarm reminding me I should be preparing to go to bed in 45 minutes.

    So, what has come to an end? Well, a couple of things. Some of them are a bit too personal to share here – you know, the sad stuff that happens to all of us eventually, and seems to happen a lot more frequently as we get older. Let’s say I won’t be sending this update to as many subscribers as I did before.

    But one momentous thing I must share with you. I have officially ended middle age and am now an old crone. Officially, because as a civil servant I can trade in some of my holiday leave plus a tiny portion of my salary in what is called a PAS scheme (no one knows what the letters stand for) to get one full day off every week. Is that not incredible? I suppose they will soon throw it out or delay it, because the pension age used to be 65 but got extended, by nearly 2.5 years in my case. So, I still have another 8 years to go, but on a 4-day work week. Wonderful. That gets me three full days a week for uninterrupted studying. And all it took (because I could have applied for it last year already) was to swallow my pride and admit that I am, well, not so young anymore, perhaps. Maybe.

    What else has happened? Well, yes, finally. I handed in my state-of-the-art paper, my thesis and my research proposal. It was a frightening thing to do, coz once you have handed it in, then what? Wait. Bite nails. Wait some more. Actually, I wrote three state-of-the-art papers. One on the topic that my supervisor (the loveable grump) suggested and I gave up on. One on what I wanted to research. And a final version added as a glossary to my thesis, coz I decided to review three different theories and one cannot expect the examiners to be familiar with all of them. State-of-the-art papers don’t get graded, just pass or fail, and apparently I passed. Today my supervisor wrote to me and said he really liked my research proposal. Which is great, coz that will be my job description for my unpaid PhD candidate job for the next few years (just as well I got a day off from work). A whole new road ahead (yes, mellow yellow).

    Yellow way

    Which leaves my thesis. It was already reviewed once, and deemed of sufficient academic quality, but I was advised to make it “easier to read”. And could I not plug in a few examples? I was surprised. This is the kind of comment I might get at the office, but surely not here, with all these super clever academics? My supervising professor laughed outright when I said that. But anyway, I got the point: academics want to be catered to, even if they pretend they don’t need earthly comforts like summaries and the like. Also, it turned out I had not used the APA referencing system quite correctly and I also had to flesh out my research question a bit – in short, I have just handed in a second version. Hopefully it will be ok. And then the procedure starts – well, it has already started. The examiners (there are three, my supervisor included coz he happens to be chairman of the examination board) receive both the thesis and the research proposal by June 20th, and once they have read it all, I get a chance to defend it semi-publicly. When I received notice of this, I wondered if I would have to wear a gown and cap (I still have mine, though they might be moth-eaten), but when I consulted another student, it turned out I did not. Just as well I asked, imagine how silly I would have looked 🙂

    So what did I write about in the end? I will post the documents on this blog once they are approved. The thesis, or publishable article as it is officially called, is about 11,300 words excluding glossary and bibliography, so you might have better things to do considering spring has just arrived and corona measures are finally being lifted. But basically, what I did is try and work out what philosophy of language is about these days, and how to further it in a sensible manner. It turns out that the field was invented some 150 years ago, and developed in roughly four phases which nobody bothered to document very clearly as they were too busy killing each other. Here comes the abstract of the thesis, which is called “It is in the singing, not in the song” – a multidisciplinary approach to language use.

    Language, as a social practice, involves abilities not specific to language, e.g. agency, attention, interaction, perception, memory and inferencing. Philosophical perspectives on language use can be enriched by integrating research from cognitive psychology and philosophy of biology. To show how this may work, I outline three ‘rebel’ theories – autopoietic enactivism, cultural evolutionary psychology, and normative inferentialism – against a general background of the evolution of language. These can be combined into one levelled framework if we assume cognition to be normative and embodied, and to be constructed out of old animal parts. Two central processes impact all levels of the new framework: normative regulation and identity-generation. Suggestions are made for further research based on predictive processing.

    And the abstract for the PhD research proposal, which is called – yes – “the art of misunderstanding”. It is not just armchair philosophy either: I get to enjoy myself by doing a nice bit of empirical research on my colleagues.

    Speech act theory offers a central insight: utterances do not just convey meaning, they are actions that assert, request, warn, promise, invite, predict, offer, direct, etc. In conversation, we generally recognise speech acts automatically and correctly, and almost as soon as the other starts to speak. But in some situations there is a problem. With regulatory texts on specific subjects, even the experts frequently disagree about exactly what responsibility these texts confer on whom. I propose to show that in these situations the misidentification of speech acts is a major source of confusion; that the author(s) and audience have different interpretations of which speech acts are contained in these texts, and what the normative dimensions of these speech acts are. These findings will be interpreted in the context of Brandom’s normative inferentialism, and against the background of the cognitive theory of predictive processing, both of which share a notion of common ground and scorekeeping. These theories may be combined to provide a framework for normative agency and interaction, of which speech acts are an instance. From this combination of philosophical insights and experimental findings, I aim to provide recommendations to improve understanding of regulatory texts on information security.

    Buttercup meadow

    I leave you with a picture of one of my favourite flowers: buttercups. I adore meadow flowers like daisies and poppies and buttercups. Find yourself a field full of them to roll around in – I certainly will.

    Next time I will tell you about the thesis itself, and what fascinated me about it, and what I discovered and how all of that is related to shades of glorious yellow. And after that, I will be talking about the research project – which will take me four years, so plenty of time for that.

  • Amuses

    Trust me, I don’t know either

    Water, sea, waves, skies – I adore them. I stand in great awe of artists who manage to capture their light. Like Ivan Konstantinovich Aivazovsky, who painted the translucent waves of the picture at the top of this blog. He did many more (you can use the link to check him out). This particular one he painted towards the end of his life, and it is special because there are no ships, people, or shoreline. Just water and light. So breathtakingly beautiful.

    I am not quite sure why I wanted this particular picture for today’s post. Possibly I am missing the sea. In other years we usually take mini-holidays near the sea so I can walk barefoot along the sea line. I can do that for hours on end. If Husband did not suggest we’d better turn back, I believe I would not stop. I love wading between the little islands that form between tides. I sing to the waves and talk to the birds, but mostly I breathe and splash water with my feet. Yes, quite like a child. Adulthood is overrated 🙂

    I suppose wild water signifies opposite things to me. Beauty, freedom, light, strength, life. But also force and danger and sudden change. A bit like living, perhaps. And there is my link. I wanted to tell you about trust. Trust is risky business. I have been thinking about it in connection to language and cognition, and I developed my own little theory. Which is probably wrong, but it is my first, so bear with me.

    By John P. Weis. I have a subscription to his newsletter, hence found this delightful image in my mail this weekend.

    This is also the long-promised fourth and last instalment of my mini-series ‘studying in times of Corona’. The first wave (hmm) – as it now transpires, we are heading into the third. I think the Netherlands must be the very last densely populated European country to go into lockdown. But we are. From tomorrow. Not before time, either. Husband has gone out, trying to get coffee beans. It appears he is not the only one. Even though coffee is ‘essential’, surely.

    My last “big” seminar was on “folk psychology”. You might think that is people pretending to be psychologists, but it is a bit different. The idea is that we read each other’s minds. All the time. We do that, supposedly, to understand and predict each other. We know, or we think we know, ‘what makes other people tick’. We think in terms of belief-desire: we are rational beings that believe and want things, and that is what makes us act. The idea is from Hume, and draws on Aristotle’s De Anima. So it has been around a long time, long enough for you and me to believe it firmly. Sounds plausible, eh? Flattering too: Homo sapiens really has got it all sussed. No wonder we are at the top of the evolutionary ladder.

    Painting by Harry Roseland. Yes, they are also reading tea leaves. Just a little joke.

    You probably saw it coming: perhaps we don’t. This is a big debate in current Philosophy of Mind. I wrote a very, very long paper on it, far exceeding the number of words allowed for a paper, first reviewing the various positions on the issue, and then developing a bit of my own theory. If you want to read the paper, it is here. It got me a very good grade, but I suppose I was lucky the professor wanted to read it at all, as it did not conform to any of the usual requirements. He said it was majestic, but not a paper at all, more like the outline of a book or a dissertation. Well, yes, I suppose it was. I was so excited about the topic. Still, I felt lucky to get detailed feedback. Not used to getting this much attention to my work. At the office no one is in the business of improving my mind, I suppose 🙂

    I will try to tell you what the debate on folk psychology is about, because otherwise I cannot explain my own little theory. Let’s take how we normally talk about each other as a starting point. We talk about our mental states a lot. About what we think, believe, feel, and why and why not. We are also very much aware of other people having thoughts, beliefs, etc. Children learn to do this at an early age – it is thought as early as 15 months. It is a fundamental ability for social interaction because it allows us to cooperate and coordinate. At all levels: in a family, in a shop, at school, at work or in government. You can see this ability at work very easily. We explain ourselves constantly in terms of what we believe and feel. And we call each other out: Why did you do that? What is the point of this? Such behaviour is characteristic of humans, because there is little evidence (or none, as some would have it) of animals making each other justify their behaviour.

    So what is the big debate about? Well, it is not about whether we display this behaviour or whether this is typically human. It is about whether this folk psychology is an innate, genetically inherited ability. The received opinion was, and mostly is, that this innate ability is what makes humans special, sets us apart. Philosophers who think that usually also think that this ability lives in the brain, as some kind of specialised module. That we read our own mental states and those of others because we have special equipment to do so, given to us by Evolution. At great cost, because large brains are expensive in terms of energy. But those with the best mindreading abilities survived, because clearly this provided a competitive advantage. This is called the Machiavellian Intelligence Hypothesis. It also explains our intelligence and our ability to plan ahead.

    There are some big problems with this view. One is that our reactions to other people are much faster than would ever be possible if we consciously evaluated mental states. Another one is that if you look upon other people in the third person, as agents with mental states that you can read, that leaves no room for true interpersonal experience, for experiencing together. Then there is the matter of the horse and the carriage – do our mental states explain our behaviour, do we act in accordance with our intentions? Or is it the other way around? Cecilia Heyes, a philosopher-psychologist, says that folk-psychologising is very clever, but that there is no neural basis for it. At all. It is an ability which we have discovered, fostered, taught to our children and hence transmitted across generations, through cultural learning. We teach our children from birth to respond and to learn; that is what makes us special. There are others who say that cognition is not individual and not brain-bound; that this is just a fairly recent idea which came from our own invention of computers. And so on and so forth.

    The main idea, from the non-traditional camp, is that social cognition, including our mind-reading ability, is extended by language. Language is required for cooperation, specialisation and coordination. And as a device for the enculturation of social memory. Not, as classic philosophy of language would have it, to express truths about the world – remember my post about Frege? No special genes, no special modules. Simply something we have learned to do well as a species. Much like our ability to drive or play games.

    Painting by Cassius Coolidge

    You may shrug and think this new approach not a big deal, but I can assure you it is, in my little Philosophia bubble. It turns human cognition into something that is shared with other primates, which opens up a whole new vista of research. We do have to redefine the word “cognition”, though, so that it does not refer just to humans, but that should not be too much of a problem. Philosophers of language have done much worse in the past 🙂

    And now it is curtains up for my little theory. It struck me that in neither “camp” was there a true discussion or inquiry into “why”. Why do we mindread? Or pretend we do? Obviously the survival-of-the-fittest theory won’t wash, as this ability is not genetically inherited. So why? I learned from cases in psychiatry that what therapists do is provide consistent feedback when patients cannot do this for themselves. As if they temporarily take over the social mindreading function until the patient can do it for him or herself again. Obviously that requires trust. If you look at this from the patient’s point of view, then what you see is a form of cognitive offloading – the patient outsources, as it were, mental work to the therapist. If you look at cognition in general, this is what we do all the time: outsource, offload tasks to our environment. To the environment, to other people, to artefacts like books, and recently to smart devices – anything to free up cognitive resources. Even accepting information from someone may be regarded as a form of cognitive offloading. And all of it requires trust. If you cannot rely on whatever you outsource your cognitive labour to, you are at risk.

    In a nutshell, social cognition requires constant risk management. So there. I will come back to this idea at a later point, because it will be a theme in my PhD. Talked it over with my professor today, and he agreed. I will tell you about the full proposal once I have written it up, but there will be a relation between felicity conditions for speech acts, trust, and what we do – in language – to compensate when we are not sure what or who we are dealing with. Maybe I will find out something interesting. And if not, that is also of interest.

    Robots communication

    Next I will tell you about my tiny adventure with Continental philosophy and a French philosopher who causes my regular professors indigestion. And then it will be thesis time – these days called a “publishable article”. I was told today that I have already done all the preparation I need (which means my research log = state-of-the-art paper = 10 EC), so I will be writing the outline in the next few weeks. Exciting. But there is also Xmas, and Husband and Son and Xmas dinner to cook and films to watch.

    I hope your Christmas will be pleasant.

  • Amuses

    Conceptual Lego

    This was the week I had to do a presentation for the Philosophy of Mind seminar. I had assumed that it would be ok, because the paper was by an author we had read before. Things were also going well in the other seminars. I had written the survey article for the Skills & Methods class. This time I had asked my professor for recommended reading (remember my fiasco with the fundamentalist book review), and I even plucked up the courage to ask him to review my effort. It turned out I had drawn an overhasty conclusion. Sloppiness, really. I still have to get used to checking wording and phrasing really carefully. Anyway, my professor also gave me feedback on the structure of my article, so by the time I handed it in, I was happy with it.

    A little too relaxed

    So maybe I was relaxing a little too much. I wasn’t even bothered when the article I had to present was changed just a week beforehand. Only 13 pages, that would be a doddle, I thought. Hubris! Then everything happened at once. At work, a situation which had been smouldering for a while suddenly exploded, causing all kinds of havoc. Also, I had taken a fall at the sauna a week before, causing a bad knee scrape. Suddenly this wound got so badly inflamed that I had to go to the first aid post on a Sunday morning. They gave me a shitload of penicillin, which made me feel so sleepy I had to take time off work, plus I had to miss one of my classes. And then there was the normal study workload plus this presentation to do. I already felt sorry for myself before I even started the actual preparation.

    Deconstructing jargon

    So, the article. It was by a guy called Di Paolo, who specialises in the “enactive mind”. The great mystery to be explained is how cognition develops. I made a wordle out of the text for your amusement.

    Now this is not a simple subject, and the way this Di Paolo guy writes about it is a nightmare. He doesn’t really explain much; he refers to other papers, by himself and by other philosophers. Plus it is all jargon, meant for an in-crowd which I certainly don’t belong to. I had to go through his source material and read up on lots of reviews to help me understand what his theory was all about. Because the article did not have a helpful structure, I constructed “conceptual Lego” as the basis for my presentation. See below. Colourful, eh?

    conceptual Lego

    Thanks to my husband, who is still (!) driving me to university, I was well in time to set up my presentation. I really was nervous. Fortunately, the professor-duo teaching this class apologised for the horrendous text as soon as they saw me. That took the edge off my nerves! The conceptual Lego worked even better than I had hoped. I felt I really liked this theory I was presenting. Maybe a good topic for the end-of-term paper I am to write soon.

    Busy bees

    It is all so very interesting, and I am learning so much! None of these theories were around when I first went to university. Back then, there was no joint research between disciplines. Now it is like a beehive: philosophical bees, psychological bees, sociological bees, neurological bees, all working on cognition. And on language, as a special form of cognition. I just wish there were more hours in a day :)