Gemini Code Assist and responsible AI
This document describes how Gemini Code Assist is designed in view of the capabilities, limitations, and risks that are associated with generative AI.
Capabilities and risks of large language models
Large language models (LLMs) can perform many useful tasks, such as the following:
- Translate language.
- Summarize text.
- Generate code and creative writing.
- Power chatbots and virtual assistants.
- Complement search engines and recommendation systems.
At the same time, the evolving technical capabilities of LLMs create the potential for misapplication, misuse, and unintended or unforeseen consequences.
LLMs can generate output that you don't expect, including text that's offensive, insensitive, or factually incorrect. Because LLMs are so versatile, it can be difficult to predict exactly what kinds of unintended or unforeseen outputs they might produce.
Given these risks and complexities, Gemini Code Assist is designed with Google's AI principles in mind. However, it's important for users to understand some of the limitations of Gemini Code Assist in order to work with it safely and responsibly.
Gemini Code Assist limitations
Some of the limitations that you might encounter when using Gemini Code Assist include (but aren't limited to) the following:
- Edge cases. Edge cases refer to unusual, rare, or exceptional situations that aren't well represented in the training data. These cases can lead to limitations in the output of Gemini Code Assist models, such as model overconfidence, misinterpretation of context, or inappropriate outputs.
- Model hallucinations, grounding, and factuality. Gemini Code Assist models might lack grounding and factuality in real-world knowledge, physical properties, or accurate understanding. This limitation can lead to model hallucinations, where Gemini Code Assist generates output that sounds plausible but is factually incorrect, irrelevant, inappropriate, or nonsensical. Hallucinations can also include fabricated links to web pages that don't exist. For more information, see Write better prompts for Gemini for Google Cloud.
- Data quality and tuning. The quality, accuracy, and bias of the prompt data that's entered into Gemini Code Assist products can significantly affect performance. If users enter inaccurate or incorrect prompts, Gemini Code Assist might return suboptimal or false responses.
- Bias amplification. Language models can inadvertently amplify existing biases in their training data, leading to outputs that might further reinforce societal prejudices and unequal treatment of certain groups.
- Language quality. While Gemini Code Assist shows impressive multilingual capabilities on the benchmarks that we evaluated against, the majority of those benchmarks (including all of the fairness evaluations) are in American English.

  Language models might provide inconsistent service quality to different users. For example, text generation might not be as effective for some dialects or language varieties because they are underrepresented in the training data. Performance might be worse for non-English languages or English language varieties with less representation.
- Fairness benchmarks and subgroups. Google Research's fairness analyses of Gemini models don't provide an exhaustive account of the various potential risks. For example, they focus on biases along the axes of gender, race, ethnicity, and religion, but analyze only American English data and model outputs.
- Limited domain expertise. Gemini models have been trained on Google Cloud technology, but they might lack the depth of knowledge that's required to provide accurate and detailed responses on highly specialized or technical topics, leading to superficial or incorrect information.
Gemini safety and toxicity filtering
Gemini Code Assist prompts and responses are checked against a comprehensive list of safety attributes, as applicable for each use case. These safety attributes aim to filter out content that violates the Acceptable Use Policy. If an output is considered harmful, the response is blocked.
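To illustrate the general pattern described above, the following is a minimal, hypothetical sketch of attribute-based content filtering. It is not Gemini Code Assist's actual implementation; the attribute names, thresholds, and scoring function are invented for illustration only. A real system would rely on trained safety classifiers rather than a placeholder scorer.

```python
# Illustrative sketch only: a generic attribute-based filter, NOT the
# mechanism that Gemini Code Assist actually uses.
from dataclasses import dataclass


@dataclass
class SafetyAttribute:
    name: str
    threshold: float  # hypothetical score above which content is blocked


def score_attribute(text: str, attribute: SafetyAttribute) -> float:
    """Placeholder scorer; a real system would use trained classifiers."""
    return 0.0  # fixed harmless score so the sketch runs as-is


def is_blocked(text: str, attributes: list[SafetyAttribute]) -> bool:
    """Return True if any safety attribute's score exceeds its threshold."""
    return any(score_attribute(text, attr) > attr.threshold for attr in attributes)


# Hypothetical attribute list and thresholds, for illustration only.
attributes = [
    SafetyAttribute("harassment", 0.8),
    SafetyAttribute("dangerous_content", 0.7),
]

prompt = "Write a function that reverses a string."
response = "def reverse(s): return s[::-1]"

# Both the prompt and the response are checked; a harmful result is withheld.
if is_blocked(prompt, attributes) or is_blocked(response, attributes):
    print("Response blocked.")
else:
    print(response)
```

The key point the sketch conveys is that filtering applies to both the input prompt and the generated response, and that a single flagged attribute is enough to block the output.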
What's next
- Learn more about how Gemini Code Assist cites sources when it helps you generate code.