I am in the middle of reading a commentary on Frank Jackson's moral functionalism. That will give me a better understanding of the subject. I must say, though, that I am wary of the term "deconstruction" that you use. The reason is that, although it is a powerful analytical tool, it tends to work as a universal acid and cannot be contained. Read the transcript of the Foucault vs Chomsky debate on power and justice (1971). I think moral relativism may be a revelation for an unquestioning mind, but it leads to the same one-dimensional approach that is the fatal flaw of Marxism. Reducing everything to a single function (power, economics, ideology) produces perverse outcomes. I will readily admit that I have yet to assemble all the complex tools required for the job, but one thing I know for sure is that a single approach such as deconstruction will not suffice.
About genetic makeup: Jonathan Haidt calls the "blank slate" theory the greatest mistake in psychology. It seems to me that all of this flows from Kant's claim that experience is structured by the "necessary features" of our mind. More specifically, recent research (2011, PLoS ONE) suggests that a single gene regulating serotonin levels in the brain seems to decisively alter our perception of the moral acceptability of foreseeable harm.
Natural selection and our environment also bear upon us to produce our moral landscape. Let me point out here that this issue of values, morals and ethics is vastly more important than it seems at first glance. I am referring to Artificial Intelligence. Taking apart and reassembling all that we know about this subject has become critical in view of what AI now demands of us.
Everyone is familiar with Asimov's laws of robotics. However, they are rather simplistic assertions about things that few have thought through. It may be possible to code ethics into narrow AI, but what we are faced with is the quest for general intelligence. So to begin with, we would need access to the entire schema of our moral intuitions and processes in order to understand how to code normative values into AI.
Understanding the part played by all determinants (genes, natural selection and environment) becomes even more important as we realize that such an AI may be so invested in rapid self-improvement that it outgrows the values and controls we put in place in the blink of an eye. So the only way we can know what this AI is up to is if we know how we got to this stage, and by exactly what process.
Since the questions of post-scarcity economics, the singularity and technological disruption are so intertwined, they will inevitably be addressed together. That part of our assessment will be heuristic at best, because it rests on future probabilities.
Apart from the pressing problem of AI, there are two other areas that will concern everyone in the coming years. First, the inevitability of mass unemployment as a result of automation, and its effect on humans. Second, the economic model needed to tackle this issue, including the moral framework that needs to be brought to bear.
This would include questions such as: What does a human really want and find meaning in? Is productive work so essential to human well-being that simply being paid for leisure (universal basic income) is a recipe for human misery? Is it moral to let a tiny elite control all emerging technology in the name of intellectual property, and how do we handle the resultant concentration of wealth and power?
P.S.: Since I type on my phone, there are a lot of typos. Kindly disregard these.