"Large language models should never be allowed to access ___."
@miranetwork really knows how to ask the right questions.
My answer would be: its training data. If an LLM can directly access or leak its training data, all privacy is lost!
Acting as an AI "quality inspector" that keeps models in check through a decentralized, verifiable network built on a consensus mechanism is pretty cool! @MiraNetworkCN what do you think?