In many situations, a group of people needs to make a joint decision. For example, in business scenarios, an executive committee may need to agree on a strategic plan for the company, a hiring committee may need to converge on a candidate to whom an offer will be made, and an acquisition team may need to agree on which company to acquire; in resource allocation, multiple organizations may need to decide how to divide limited resources. In AI, this problem is known as group decision-making (a.k.a. collective decision-making or social choice).
There is a large literature on AI-aided group decision-making. Existing frameworks support people in aggregating and reconciling their preferences, which form the basis on which choices are made. However, little has been done to embed ethical guidelines into group decisions and to combine them with the group members' preferences.
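To make the notion of preference aggregation concrete, the sketch below implements the Borda count, one classic social-choice rule in which each alternative earns points according to its position in every voter's ranking. This is an illustrative example, not a rule proposed in this work; the function and variable names are hypothetical.

```python
def borda_count(rankings):
    """Aggregate ranked preferences with the Borda count.

    Each alternative receives (n - 1 - position) points from every
    voter's ranking, where n is the number of alternatives and
    position is its 0-based rank in that voter's list.
    Returns the winning alternative and the full score table.
    """
    alternatives = rankings[0]
    n = len(alternatives)
    scores = {a: 0 for a in alternatives}
    for ranking in rankings:
        for position, alt in enumerate(ranking):
            scores[alt] += n - 1 - position
    # The winner is the alternative with the highest total score.
    winner = max(scores, key=scores.get)
    return winner, scores

# Three committee members rank candidate strategies A, B, and C.
rankings = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
winner, scores = borda_count(rankings)  # winner is "A"
```

An ethics-aware extension of such a rule might, for instance, filter out alternatives that violate a learned guideline before aggregation, which is the kind of combination this proposal targets.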
Leveraging the team's expertise in group decision-making, AI ethics, explainable AI, and machine learning, we aim to define and study a framework to support preference-based ethical group decision-making, where learned ethical guidelines can be used to infer the appropriate properties of the decision support system. We propose to achieve this goal by pursuing three research thrusts toward establishing mathematical and learning foundations for embedding ethical guidelines in AI for group decision-making.