Responsible use of GitHub Copilot CLI

Learn how to use GitHub Copilot in the CLI responsibly by understanding its purposes, capabilities, and limitations.

Who can use this feature

GitHub Copilot in the CLI is available with the GitHub Copilot Pro, GitHub Copilot Pro+, GitHub Copilot Business, and GitHub Copilot Enterprise plans.

If you receive Copilot from an organization, the Copilot in the CLI policy must be enabled in the organization's settings.

About GitHub Copilot in the CLI

GitHub Copilot in the CLI provides a chat-like interface in the terminal that can autonomously create and modify files on your computer and execute commands. You can ask Copilot to perform any action on the files in the active directory.

GitHub Copilot in the CLI can generate tailored changes based on your description and configurations, supporting tasks such as bug fixes, incremental feature development, prototyping, documentation, and codebase maintenance.

While working on your task, the Copilot agent has access to your local terminal environment where it can make changes to your code, execute automated tests, run linters, and execute commands available in your environment.
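
For example, a typical session begins by launching the CLI from your project directory and describing the task in plain language. The sketch below assumes the CLI is installed and invoked with the copilot command, and the project path is hypothetical; the exact installation and invocation details may differ for your setup.

    # Start an interactive session from the root of the project you want to work on.
    # Copilot's default file access is scoped to this directory and its subdirectories.
    cd ~/projects/my-app
    copilot

    # At the interactive prompt, describe the task in plain language, for example:
    #   "Fix the flaky date-parsing test and run the test suite to confirm."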

The agent has been evaluated across a variety of programming languages, with English as the primary supported language.

The agent works by using a combination of natural language processing and machine learning to understand your task and make changes in a codebase to complete the task. This process can be broken down into a number of steps.

Input processing

Your input is combined with other relevant contextual information to form a prompt, which is sent to a large language model for processing. Inputs can take the form of plain natural language, code snippets, or references to files in your terminal.
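
To illustrate, each of the following is the kind of input you might type at the interactive prompt. The wording, code snippet, and file path are purely illustrative, not prescribed syntax.

    # Plain natural language:
    #   "Summarize what the build script in this repository does."

    # A code snippet to discuss:
    #   "Why might sorted(items, key=len) raise a TypeError in this codebase?"

    # A reference to a file in your working tree (hypothetical path):
    #   "Add input validation to src/api/handlers.py and update its tests."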

Language model analysis

The prompt is then passed through a large language model, a neural network that has been trained on a large body of data. The language model analyzes the input prompt to help the agent reason about the task and invoke the necessary tools.

Response generation

The language model generates a response based on its analysis of the prompt. This response can take the form of natural language suggestions, code suggestions, file modifications, and command executions.

Output formatting

The response generated by the agent is formatted and presented to you. GitHub Copilot in the CLI uses syntax highlighting, indentation, and other formatting features to add clarity to the generated response.

The agent might also want to execute commands in your local environment and create, edit, or delete files in your file system in order to complete your task.

You may provide feedback to the agent after it returns a response in the interactive chat window. The agent then submits that feedback to the language model for further analysis, and once it has made changes based on your feedback, it returns an additional response.

Copilot is intended to provide you with the most relevant solution for task resolution. However, it may not always provide the answer you are looking for. You are responsible for reviewing and validating responses generated by Copilot to ensure they are accurate and appropriate. For more information, see the section Improving the results from GitHub Copilot in the CLI, later in this article.

Use cases for GitHub Copilot in the CLI

You can delegate a task to Copilot in a variety of scenarios, including, but not limited to:

  • Codebase maintenance: Tackling security-related fixes, dependency upgrades, and targeted refactoring.
  • Documentation: Updating and creating new documentation.
  • Feature development: Implementing incremental feature requests.
  • Improving test coverage: Developing additional test suites for quality management.
  • Prototyping new projects: Greenfielding new concepts.
  • Setting up your environment: Running commands in your terminal to set up your local environment to work on existing projects.
  • Finding the right command: Copilot can suggest commands for tasks you're trying to complete.
  • Explaining an unfamiliar command: Copilot can provide a natural language description of a command's functionality and purpose. Prompt sketches for both appear after this list.
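
To make the last two use cases concrete, here is a sketch of the kinds of prompts involved. The wording and the example command are illustrative only.

    # Finding the right command: describe the goal and let Copilot propose a command.
    #   "What command lists the 10 largest files under the current directory?"

    # Explaining an unfamiliar command: paste the command and ask what it does.
    #   "Explain what this does: tar -xzvf release.tar.gz -C /opt/app"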

Improving the results from GitHub Copilot in the CLI

GitHub Copilot in the CLI can support a wide range of tasks. To enhance the responses you receive, and address some of the limitations of the agent, there are various measures that you can adopt.

For more information about limitations, see the section Limitations of GitHub Copilot in the CLI, later in this article.

Ensure your tasks are well-scoped

GitHub Copilot in the CLI leverages your prompt as key context when generating changes. The clearer and better scoped the prompt you give the agent, the better the results you will get. An ideal prompt (see the example after this list) includes:

  • A clear description of the problem to be solved or the work required.
  • Complete acceptance criteria on what a good solution looks like (for example, should there be unit tests?).
  • Hints or pointers on what files need to be changed.
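
For example, a prompt that meets these criteria might look like the following. The endpoint, file paths, and requirements are purely illustrative.

    # A well-scoped prompt: problem, acceptance criteria, and pointers to files.
    #   "The pagination on /api/users returns duplicate rows when page_size
    #    exceeds 50. Fix the bug in src/api/pagination.py, add a regression
    #    test in tests/test_pagination.py, and make sure the existing test
    #    suite still passes."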

Customize your experience with additional context

GitHub Copilot in the CLI leverages your prompt, comments, and the repository's code as context when generating suggested changes. To enhance Copilot's performance, consider implementing custom Copilot instructions to help the agent better understand your project and how to build, test, and validate its changes. For more information, see "Add custom instructions to your repository" in "Best practices for using GitHub Copilot to work on tasks."
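
For instance, you might add a copilot-instructions.md file under .github at the root of your repository. The contents below are a minimal sketch assuming a hypothetical TypeScript project; tailor them to your own build and test setup.

    # Create a repository-level custom instructions file (contents are illustrative).
    mkdir -p .github
    printf '%s\n' \
      'This is a TypeScript monorepo managed with npm workspaces.' \
      '- Build with `npm run build` and run tests with `npm test`.' \
      '- Follow the existing ESLint configuration; do not disable rules inline.' \
      '- Validate changes by running the affected test suites.' \
      > .github/copilot-instructions.md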

Use GitHub Copilot in the CLI as a tool, not a replacement

While GitHub Copilot in the CLI can be a powerful tool for generating code and documentation, it is important to use it as a tool rather than as a replacement for human programming. You should always review and verify commands generated by GitHub Copilot in the CLI to ensure that they meet your requirements and are free of errors or security concerns.

Use secure coding and code review practices

Although GitHub Copilot in the CLI can generate syntactically correct code, it may not always be secure. You should always follow best practices for secure coding, such as avoiding hard-coded passwords or SQL injection vulnerabilities, as well as following code review best practices, to address the agent’s limitations. You should always take the same precautions as you would with any code you write that uses material you did not independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.
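
As a simple illustration of one such review point, watch for hard-coded credentials in generated commands or scripts. The URL, username, and variable name below are hypothetical.

    # Risky pattern: a secret embedded directly in a generated command.
    curl -u "deploy:s3cr3t-p4ss" https://api.example.com/releases

    # Safer: read the secret from the environment (or a secret manager) at run time.
    curl -u "deploy:${DEPLOY_TOKEN}" https://api.example.com/releases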

Provide feedback

If you encounter any issues or limitations with GitHub Copilot in the CLI, we recommend that you provide feedback using the /feedback command.

Security measures for GitHub Copilot in the CLI

Constraining Copilot’s permissions

By default, Copilot only has access to files and folders in, and below, the directory from which GitHub Copilot in the CLI was invoked. Ensure you trust the files in this directory. If Copilot wishes to access files outside the current directory, it will ask for permission. Only grant it permission if you trust the contents of that directory.
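
In practice, the directory you launch the CLI from defines its default file-access boundary, so start it from the narrowest directory that contains your work. The path below is illustrative, and the copilot command name is an assumption about your installation.

    # Scopes Copilot's default file access to my-service and its subdirectories.
    cd ~/projects/my-service
    copilot

    # Launching from a broad location such as your home directory would place
    # far more files within Copilot's default reach.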

Copilot will ask for permission before modifying files. Ensure that it is modifying the correct files before granting permission.

Copilot will also ask for permission before executing commands that may be dangerous. Review these commands carefully before giving it permission to run.

For more information about security practices while using GitHub Copilot in the CLI, see "Security considerations" in About GitHub Copilot CLI.

Limitations of GitHub Copilot in the CLI

Depending on factors such as your codebase and input data, you may experience different levels of performance when using GitHub Copilot in the CLI. The following information is designed to help you understand system limitations and key concepts about performance as they apply to GitHub Copilot in the CLI.

Limited scope

The language model used by GitHub Copilot in the CLI has been trained on a large body of code but still has a limited scope and may not be able to handle certain code structures or obscure programming languages. For each language, the quality of suggestions you receive may depend on the volume and diversity of training data for that language.

Potential biases

The training data and gathered context used by GitHub Copilot in the CLI's language model may contain biases and errors that can be perpetuated by the tool. Additionally, GitHub Copilot in the CLI may be biased towards certain programming languages or coding styles, which can lead to suboptimal or incomplete suggestions.

Security risks

GitHub Copilot in the CLI generates code and natural language based on the context of your prompt and the contents of your repository, which can potentially expose sensitive information or vulnerabilities if not used carefully. You should review all outputs generated by GitHub Copilot in the CLI thoroughly before applying or running them.

Inaccurate code

GitHub Copilot in the CLI may generate code that appears to be valid but is not actually semantically or syntactically correct, or that does not accurately reflect the intent of the developer.

To mitigate the risk of inaccurate code, you should carefully review and test the generated code, particularly when dealing with critical or sensitive applications. You should also ensure that the generated code adheres to best practices and design patterns and fits within the overall architecture and style of the codebase.

Public code

GitHub Copilot in the CLI may generate code that is a match or near match of publicly available code, even if the "Suggestions matching public code" policy is set to "Block." See "Managing GitHub Copilot policies as an individual subscriber."

Users need to evaluate potential legal and regulatory obligations when using any AI services and solutions, which may not be appropriate for use in every industry or scenario. Additionally, AI services and solutions are not designed for, and may not be used in, ways prohibited by applicable terms of service and relevant codes of conduct.

Risk management and user accountability in command execution

Additional caution is required when asking or allowing GitHub Copilot in the CLI to execute a command, particularly regarding the potential destructiveness of some suggested commands. You may encounter commands for file deletion or hard drive formatting, which can cause problems if used incorrectly. While such commands may be necessary in certain scenarios, you need to be careful when accepting and running these commands.
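
A few illustrative examples of command patterns that deserve a careful second look before you approve them:

    # Recursive deletion: verify the target path is exactly what you expect.
    rm -rf ./build

    # Disk formatting: destroys all data on the target device.
    mkfs.ext4 /dev/sdX

    # Discards uncommitted local changes irreversibly.
    git reset --hard HEAD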

Additionally, you are ultimately responsible for the commands executed by GitHub Copilot in the CLI. It is entirely your decision whether to use commands generated by GitHub Copilot in the CLI. Despite the presence of fail-safes and safety mechanisms, you must understand that executing commands carries inherent risks. GitHub Copilot in the CLI provides a powerful tool set, but you should approach its recommendations with caution and ensure that commands align with your intentions and requirements.
