MIT & Harvard's Ethics & Governance of AI Initiative (2017) emphasizes the following three areas:
A. AI & Justice: What legal & institutional structures should govern the adoption & maintenance of autonomy in public administration? How might approaches such as causal modeling rethink the role that autonomy has to play in areas such as criminal justice? (A toy causal-modeling sketch follows this list.)
B. Information Quality: Can we measure the influence that machine learning & autonomous systems have on the public sphere? What do effective structures of governance & collaborative development look like between platforms and the public? Can we better ground discussions around policy responses to disinformation in empirical research? (A sketch of one exposure-shift metric follows this list.)
C. Autonomy & Interaction: What are the moral & ethical intuitions that the public brings to bear in their interactions with autonomous systems? How might those intuitions be better integrated into these systems at a technical level? What role does design & interface (say, in autonomous vehicles) play in defining debates around interpretability & control? (A sketch of preference aggregation from dilemma surveys follows this list.)
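To make area A's causal-modeling question concrete, here is a minimal Python sketch. Every variable, coefficient, and distribution is an invented assumption, not data from any real justice system; the point is only the distinction a causal model draws between observing a risk score and intervening on one of its upstream causes.

```python
"""Toy structural causal model: observational vs. interventional queries
in a criminal-justice-style risk setting. All quantities are illustrative
assumptions, not estimates from any real system."""
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def simulate(do_policing=None):
    # Exogenous "true behaviour" is independent of policing in this toy model.
    behaviour = rng.normal(size=N)
    # Policing intensity: natural distribution, or pinned by an intervention do(P=c).
    policing = rng.normal(size=N) if do_policing is None else np.full(N, do_policing)
    # Recorded arrests depend on behaviour AND on how heavily an area is policed.
    arrests = 0.5 * behaviour + 0.8 * policing + rng.normal(scale=0.5, size=N)
    # A naive risk score that is just the arrest record.
    score = arrests
    # Reoffending depends on behaviour alone here.
    reoffend = behaviour + rng.normal(scale=0.5, size=N) > 1.0
    return score, reoffend

# Observational query: how predictive is a high score in the natural regime?
score, reoffend = simulate()
p_obs = reoffend[score > 1.0].mean()

# Interventional query: the same threshold after do(policing = +1).
score_do, reoffend_do = simulate(do_policing=1.0)
p_do = reoffend_do[score_do > 1.0].mean()

print(f"P(reoffend | score > 1)             = {p_obs:.3f}")
print(f"P(reoffend | score > 1, do(P = +1)) = {p_do:.3f}")
# The second number is lower: under the intervention, high scores reflect
# policing intensity rather than behaviour, so the score loses validity.
```

Questions of this shape (what happens to a score's meaning when an upstream policy changes) are exactly what purely correlational models cannot answer, which is why the initiative singles out causal modeling.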
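For area B's measurement question, one deliberately simplified angle is to compare what an engagement-based ranker surfaces against a neutral baseline feed. In the sketch below, the feed model, the topic labels, and the assumption that "outrage" content attracts extra engagement are all invented for illustration; the KL divergence at the end is just one possible influence metric.

```python
"""Minimal sketch: quantify a ranker's influence on topic exposure by
comparing it with a random-order baseline. Feed model and topic mix are
assumptions made for illustration."""
import numpy as np

rng = np.random.default_rng(1)
TOPICS = ["civic", "entertainment", "outrage", "science"]

# Simulated candidate posts: a topic label and an engagement score.
n_posts = 5_000
topic = rng.integers(len(TOPICS), size=n_posts)
# Assumption: outrage content (index 2) draws systematically higher engagement.
engagement = rng.normal(size=n_posts) + np.where(topic == 2, 1.5, 0.0)

def exposure(order, k=500):
    """Topic distribution over the top-k posts a user actually sees."""
    counts = np.bincount(topic[order[:k]], minlength=len(TOPICS))
    return counts / counts.sum()

p_base = exposure(rng.permutation(n_posts))  # random order stands in for a chronological feed
p_rank = exposure(np.argsort(-engagement))   # engagement-ranked feed

def kl(p, q, eps=1e-12):
    """KL divergence in nats, smoothed to avoid log(0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

print(dict(zip(TOPICS, np.round(p_rank, 3))))
print(f"KL(ranked || baseline) = {kl(p_rank, p_base):.3f} nats")
```

Real measurement would need platform data, counterfactual feeds, and far more careful baselines; the point is only that "influence on the public sphere" can be posed as a measurable distributional shift.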
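Area C's question about integrating public intuitions at a technical level already has one well-known template: pairwise dilemma surveys in the style of MIT's Moral Machine. The sketch below simulates such a survey and fits a simple Bradley-Terry (logistic) preference model; the outcome features, the generating weights, and the survey itself are hypothetical.

```python
"""Sketch: recover preference weights from simulated pairwise moral
judgments with a Bradley-Terry (logistic) model. All features, weights,
and responses are synthetic."""
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 20_000

def sample_outcomes(n):
    # Columns: [lives saved, 1 if those saved are pedestrians else 0].
    return np.column_stack([
        rng.integers(1, 5, size=n),
        rng.integers(0, 2, size=n),
    ]).astype(float)

# Assumed "true" public weights, used only to generate synthetic responses.
TRUE_W = np.array([1.0, 0.6])

A, B = sample_outcomes(n_pairs), sample_outcomes(n_pairs)
# Respondents noisily prefer the outcome with the higher weighted score.
p_choose_A = 1.0 / (1.0 + np.exp(-(A - B) @ TRUE_W))
y = (rng.random(n_pairs) < p_choose_A).astype(float)

# Fit the weights by gradient ascent on the Bradley-Terry log-likelihood.
X = A - B
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n_pairs

print(f"recovered weights: {np.round(w, 2)} (generating weights: {TRUE_W})")
# A deployed system could use such aggregated weights to rank candidate
# actions, though whose intuitions count, and how, is itself a governance question.
```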