This is the first of a series of blogs about my academic exploits in the past 6 months. It has been quite a diverse experience, because I was more or less free to choose which seminars I was going to attend.
Administrative intermezzo. Where am I in my studies? If you recall, the Research Master is 120 EC (European Credits). One credit is supposed to be between 25 and 30 study hours. The last leg is the Master’s thesis, which consists of a publishable article and the PhD research plan. It is supposed to be between 10,000 and 20,000 words and good for 40 EC. The other 80 ECs I have now obtained – except for 2, which I am going to get through a short (5-week) seminar on the philosophy of evolution. The whole thing has to be accounted for in a “study plan” to be formally approved by the Examination Committee. Apparently mine had been approved, but I had not realised. I had submitted it last year, but never received a reply, so I thought they had mislaid it. Great. I have to resubmit because of some minor changes, but I have been told that will not be a problem. So – time to start preparing my thesis. If I finish it before February, I will have done the whole full-time Research Master next to a full-time job. I am thinking that perhaps I should slow down a bit, take the rest of the academic year. But not yet. First I will write up these blogs. Coz I promised, coz maybe you will like to read them, and anyway, it is reflection time for me now this mountain of seminars and papers is behind me. End of administrative intermezzo!
I took a seminar on Ethics and Artificial Intelligence. This was a regular Master seminar, so quite a lot of students – I think around 70, which was a bit of a change from the usual 10-20. The idea was to learn how to discuss the ethical aspects of robots, algorithms and self-thinking computers, and how things will develop in the future. We can all see that the world is changing rapidly, and this requires new thinking about what kinds of decisions we – humankind – do and do not want to leave to technology. If you are not familiar with this topic, you must watch John Oliver on facial recognition – in fact, you must watch it anyway because it is true and funny.
For the first part we studied a textbook – which was excellent, I annotated my copy endlessly. This one, if you are interested. I have noticed since that many universities use it as a set book. This was still in the pre-Corona era. We had two lecturers, one Dutch, one Italian. I was impressed by their meticulous preparation of lectures – great slides! There was an exam at the end, online, and I ran out of time, so I missed out on a few points. Never mind. It was useful to learn the stock arguments from Virtue Ethics (religious virtues), Utilitarianism (democratic opportunism) and Kantian Ethics (rules) in their application to AI, because whoever you talk to, they all have a preference, so you always need to know your stuff from each of those three ethical viewpoints.
Next we moved on to discussing a book by Stuart Russell, called “Human Compatible”. It is a bestseller, which I suppose is why they wanted us to read it. The idea of the course was that we learned not only to identify and tackle ethical problems about artificial intelligence, but also how to explain them to a board of directors. Unfortunately, I hated the book. Sloppy arguing, cherry-picked facts and fear-mongering – exactly the kind of cocktail that irritates me beyond measure. Don’t read it, it is dross. We discussed this book in online workshops. I think by the end the lecturer agreed with me – or maybe he had another reason for abandoning the book early 🙂
The third part was the pièce de résistance: we were to write a research proposal to tackle an ethical problem in Artificial Intelligence. The prescribed format was quite a challenge: a two-page spread, something an executive director can read in 6 minutes during breakfast. This is mine and it looks like this:
Pretty eh? Thank you husband, and this time also Son, for proofreading! If you read it (it is only supposed to take you 6 minutes), you will see it is about “profiling”. I wrote it just as the storm about supposed discriminatory profiling by the Dutch Tax Office broke, both in politics and in the news. So, when my professors suggested I send it to a newspaper because they thought it was very good, I thought I’d better check with my team and boss whether I should publish – and I was sort of asked not to attract attention. So I did not. Which I found difficult, because I firmly believe that the whole issue was horribly misrepresented, and worse, detracts from the real problem. Write to me if you want to hear more about this.
I did decide to follow up on my own recommendation and contacted The Open Group about translating the EU guidelines for the creation of responsible EA into TOGAF (which is the methodology bible for digital architects). They are interested in a white paper, so I will try to find the time to do this, because I do think it is important. We need to be thinking about totally different stakeholders – not just the people in power, but also the impact on society and future generations. Also, there is a big problem in assigning responsibility for things that robots do – because you cannot hold them accountable for working as designed.
So all in all I am happy I took the course. At times I thought it was a bit slow, presumably because I already had a good working knowledge of both AI and Ethics. But the seminar also taught me how to put up a decent argument and write it down so that others might understand it – not a bad thing considering how much confusion there is around this topic.
I am leaving you now, to indulge in your – and my – favourite pastime.