Question 1 - On Ethics
Dear Yuval Noah Harari,
As humans, we already outsource much of our moral decision-making to others (e.g. friends, politicians, religious figures). We let them make moral decisions for us, or at least ask them to help us decide.
Is there a fundamental difference if we were to outsource our ability to make moral decisions to machines (like AI)? If so, does this fundamental difference have normative implications?
In other words, should we, as humans, rely on machines to make moral decisions for us or not? (Wouldn’t this undermine our autonomy, and be, in a way, ‘dehumanizing’?)
Question 2 - On Meaning & Purpose
In your book Homo Deus, you argue against the notion of “disenchantment” that is inherent to modernity. You accuse modernity of reducing Homo sapiens (us) to “useless algorithms”. In a way, this echoes the essay “Resisting Reduction: A Manifesto (Designing our Complex Future with Machines)” by Joichi Ito of MIT.
But what alternatives do you suggest to “resist this reductionism”?
In your other book, 21 Lessons for the 21st Century, you end up suggesting meditation as a search for meaning and purpose. So while you are trying to figure out what we should do in this “deeply puzzling world”, and what is going on (why are we here in the first place?), is your suggestion akin to a “leap of faith”?
Does it echo Derrida:
“Je ne sais pas, il faut croire. (…)” (“I do not know; one has to believe.”)
Question 3 - On Social Media
Why don’t you follow anyone on Twitter and Instagram? ;o)
If interested, here is my review of Homo Deus: A Brief History of Tomorrow