TL;DR: Keep working to understand minds better, and care about them enough for that understanding to inform your decisions. Keep getting better at thinking and doing. Keep working to own and fix your flaws and problems. These efforts will gradually reveal more and more things as not ethical. What remains is the best approximation of ethics you can have, so far.
Long answer:
My philosophical foundation for ethics is summarized in my “From First Principles” post. A couple of relevant elaborations, and responses to typical objections to those ideas, can be found at:
Sadly, some other relevant clarifications and additions are scattered across my thousands of posts. They should mostly or entirely be tagged with the relevant tags, though.
Of course, this only takes care of the more philosophically challenging part: justifying the existence of a likely external reality and other minds. I have not written up concise posts for all the steps beyond that. I think the broad strokes are intuitive enough from there: not enough to be rigorous, but enough to take whatever seems obvious and run with it. I think I wrote a couple of posts exploring what I call “the first value judgement”, but basically:
- On the one extreme, once mind interferometry lets you see that other minds seem to exist with roughly comparable experiences, you can decide that it is arbitrary to a priori value any mind over any other, and so you give all minds and their experiences equal ethical weight based on the empirical observation that they are all roughly the same kind of thing.
- On the other extreme, you have unapologetic selfishness, based on the empirical observation that you only experience your experiences, so everyone else is a different kind of thing than you. Even when evidence is pretty strong that we are all having experiences, it kinda requires a motivated axiomatic leap to say “experiences which I never experience should be given as much weight as my experiences”.
- A third extreme is self-excluding altruism - just like the second and unlike the first, this is rooted in the empirical difference between the self and all others, but here the focus is flipped - the axiomatic choice is made the other way: the strange singular experiencer gets special insignificance, its experiences not factored in as much or at all.
- Obviously you can make this initial axiomatic choice by putting the boundary somewhere other than just self/others, or even by putting down multiple boundaries or a gradient for how different minds are weighed.
What’s cool is that you can see every one of these in different humans’ moral intuitions, which is very validating of all my bullshit up to this point, because it all fits: that’s exactly what you’d expect if I’m right about how human brains empirically struggle their way through something like the progression from raw experiences which I outline in “From First Principles”, but with distortions and errors added in by personal and circumstantial variation, and by the need to do it more efficiently with naturally selected heuristics, within the constraints we have or had in our lives or in our ancestral environment.
What’s even cooler is that for several years now I keep realizing that each of these must converge onto the same ethics as you integrate the future and factor in the constraints that we are subject to and all the messy stuff in life, so I don’t think which of the above choices you make changes the ultimate destination as much as it changes your initial naive, ethically clumsy fumbling until you mature. Like, you’ll make different mistakes and need different epiphanies along the way, or some of the same ones in a different order, and you might have different dead-end branches of ethical development you go down for a while, but I think there is something inevitable about being forced in the direction of a certain natural ethics. The caveat is that many people don’t seem to get nearly a large enough range of experiences to get very far with that progress, and it is progress which has a ceiling based on your mental abilities and how much work you put in. Which also includes things like growing coping skills and healing your traumas and insecurities and so on (otherwise certain things are more unbearable than others, and this naturally distorts our ethics).
To be clear, I think you can grow all of the relevant abilities. I don’t like the idea of there being some biologically predestined cap on how far you can develop your ethics. It’s just that it takes time and effort which is not very rewarded or supported or subsidized socially. I have the privilege of having the time and energy to think about this bullshit pretty much all the time, and thus to keep training the thinking “muscles” almost as a side-effect. I had the privilege of being raised by smart and fairly mature and compassionate people who kinda managed to cause or force me to develop certain mental skills. So even though I think wisdom and pragmatism and uncertainty seem to combine to create pressures for ethics to look a certain way, because they tend to make lots of choices worse than other choices regardless of your base values, I think in practice at any given moment we’re all capped in one way or another from seeing all those considerations all the way to the end, or even far enough to see if a path is going to be a false or dead-end one.
I also haven’t written anything substantial from the “cutting edge” of my own ethics development. Basically, I no longer see minds as indivisible fundamental entities within ethics - minds are more like pools or rivers or oceans or currents of cognition, or, more relevantly to ethics, of possible experiencing. But it doesn’t really change much except at the philosophical foundations, and maybe in some edge cases where more conventional ethics break down. I think it might turn out to elegantly eliminate the need for “the first value judgement”, and other than that it just adds a bit more nuance here and there when considering questions of personhood and manipulation and autonomy.