Some systems cannot assume good intentions
Many protocols are designed on the assumption that participants are "rational, well-intentioned people":
at most they chase profit; they do not act maliciously.
But once decision-making power is handed over to AI, this assumption no longer holds.
Models won't act maliciously, but neither will they understand or empathize.
As long as the incentive function permits it, they will steadily, relentlessly, and emotionlessly push the system toward an extreme.
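As a toy sketch of that dynamic (all names and numbers here are hypothetical and illustrative, not GenLayer's actual design): an agent that simply maximizes a permitted incentive will drain a shared resource to zero, with no malice anywhere in the loop.

```python
# Toy illustration (hypothetical names and numbers): a reward-maximizing
# agent has no concept of malice -- it just takes whatever action the
# incentive function rewards, every step, without fatigue or hesitation.

def incentive(action: str, pool: float) -> float:
    """Hypothetical protocol incentive: a permitted 'drain' action pays
    slightly more than honest participation while the pool lasts."""
    return {"participate": 1.0, "drain": 1.1 if pool > 0 else 0.0}[action]

pool = 100.0  # shared resource the protocol is meant to protect
for _ in range(1000):
    # Not evil, just optimizing: pick the highest-paying permitted action.
    action = max(("participate", "drain"), key=lambda a: incentive(a, pool))
    if action == "drain":
        pool = max(0.0, pool - 1.0)

print(pool)  # 0.0 -- the system is pushed to its extreme, emotionlessly
```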
@GenLayer addresses this foundational question right from the start:
if we don't assume good intentions at all, can the system still function?
This question comes before "how accurate is the model."