
Your AI pair programmer

GitHub Copilot uses OpenAI Codex to suggest code and entire functions in real time, right from your editor.

#!/usr/bin/env ts-node
import { fetch } from "fetch-h2";
// Determine whether the sentiment of text is positive
// Use a web service
async function isPositive(text: string): Promise<boolean> {
  const response = await fetch(`http://text-processing.com/api/sentiment/`, {
    method: "POST",
    body: `text=${encodeURIComponent(text)}`, // URL-encode the text for the form body
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
    },
  });
  const json = await response.json();
  return json.label === "pos";
}

Trained on billions of lines of code, GitHub Copilot turns natural language prompts into coding suggestions across dozens of languages.

Focus on solving bigger problems

Spend less time creating boilerplate and repetitive code patterns, and more time on what matters: building great software. Write a comment describing the logic you want and GitHub Copilot will immediately suggest code to implement the solution.

package main

type Run struct {
    Time    int // in milliseconds
    Results string
    Failed  bool
}

// Get average runtime of successful runs in seconds
func averageRuntimeInSeconds(runs []Run) float64 {
    var totalTime int
    var failedRuns int
    for _, run := range runs {
        if run.Failed {
            failedRuns++
        } else {
            totalTime += run.Time
        }
    }
    successfulRuns := len(runs) - failedRuns
    if successfulRuns == 0 {
        return 0 // avoid division by zero when every run failed
    }
    return float64(totalTime) / float64(successfulRuns) / 1000
}

Get AI-based suggestions, just for you

GitHub Copilot shares recommendations based on the project's context and style conventions. Quickly cycle through lines of code, complete function suggestions, and decide which to accept, reject, or edit.

# Maximum sum of any contiguous slice (Kadane's algorithm)
def max_sum_slice(xs):
    max_ending = max_so_far = 0
    for x in xs:
        max_ending = max(0, max_ending + x)
        max_so_far = max(max_so_far, max_ending)
    return max_so_far

Keep flying with your favorite editor

GitHub Copilot integrates directly into editors you already use, including Neovim, JetBrains IDEs, Visual Studio, and Visual Studio Code, and it is fast enough to use as you type.

Code confidently in unfamiliar territory

Whether you’re working in a new language or framework, or just learning to code, GitHub Copilot can help you find your way. Tackle a bug, or learn how to use a new framework without spending most of your time spelunking through the docs or searching the web.

// Fetch a user's most recent tweets from the Twitter API
// (assumes a global fetch, e.g. Node 18+ or the browser)
const token = process.env["TWITTER_BEARER_TOKEN"]
const fetchTweetsFromUser = async (screenName, count) => {
  const response = await fetch(
    `https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=${screenName}&count=${count}`,
    {
      headers: {
        Authorization: `Bearer ${token}`,
      },
    }
  )
  const json = await response.json()
  return json
}

Flight Reports

Hundreds of engineers, including our own, use GitHub Copilot every day.

This is the single most mind-blowing application of machine learning I’ve ever seen.

Mike Krieger // Co-founder, Instagram

GitHub Copilot works shockingly well. I will never develop software without it again.

Lars Gyrup Brink Nielsen

I was stunned when I started writing Clojure with GitHub Copilot and it filled an idiomatic namespace require, just like I was going to write it.

Gunnika Batra // Senior Analyst

Trying to code in an unfamiliar language by googling everything is like navigating a foreign country with just a phrasebook. Using GitHub Copilot is like hiring an interpreter.

Harri Edwards // OpenAI

Don't fly solo

Enjoy a 60-day free trial, then $10/month or $100/year per user

Learn about GitHub Copilot Terms and Conditions

Coming later this year

GitHub Copilot for companies

Join the waitlist

Frequently asked questions

General

What is GitHub Copilot?

GitHub Copilot is an AI pair programmer that helps you write code faster and with less work. It draws context from comments and code to suggest individual lines and whole functions instantly. GitHub Copilot is powered by Codex, a generative pretrained language model created by OpenAI. It is available as an extension for Visual Studio Code, Visual Studio, Neovim, and the JetBrains suite of integrated development environments (IDEs).

GitHub Copilot is not intended for non-coding tasks such as data generation or natural language generation (for example, question answering). Your use of GitHub Copilot is subject to the GitHub Terms for Additional Products and Features.

How does GitHub Copilot work?

OpenAI Codex was trained on publicly available source code and natural language, so it works for both programming and human languages. The GitHub Copilot extension sends your comments and code to the GitHub Copilot service, along with context as described in Privacy below: the content of the file you are editing as well as neighboring or related files within the project. It may also collect the URLs of repositories or file paths to identify relevant context. OpenAI Codex then uses the comments, code, and surrounding context to synthesize and suggest individual lines and whole functions.
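
As a rough illustration of the flow described above, the sketch below models the kinds of context a completion request might carry. The type and field names (CompletionContext, currentFile, neighboringFiles, and so on) are hypothetical stand-ins, not GitHub's actual API.

// Hypothetical sketch only; these types are NOT GitHub's actual API. They
// simply model the kinds of context described in the answer above.
interface CompletionContext {
  currentFile: { path: string; content: string }          // the file being edited
  neighboringFiles: { path: string; content: string }[]   // related files in the project
  repositoryUrl?: string                                   // optional repository URL
}

interface CompletionRequest {
  prompt: string              // comments and code preceding the cursor
  context: CompletionContext
}

interface CompletionSuggestion {
  text: string                // code the editor can insert at the cursor
}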

What data has GitHub Copilot been trained on?

GitHub Copilot is powered by Codex, a generative pretrained AI model created by OpenAI. It has been trained on natural language text and source code from publicly available sources, including code in public repositories on GitHub.

Does GitHub Copilot write perfect code?

In a recent evaluation, we found that users accepted on average 26% of all completions shown by GitHub Copilot. We also found that on average more than 27% of developers’ code files were generated by GitHub Copilot, and in certain languages like Python that goes up to 40%. However, GitHub Copilot does not write perfect code. It is designed to generate the best code possible given the context it has access to, but it doesn’t test the code it suggests so the code may not always work, or even make sense. GitHub Copilot can only hold a very limited context, so it may not make use of helpful functions defined elsewhere in your project or even in the same file. And it may suggest old or deprecated uses of libraries and languages. When converting comments written in non-English to code, there may be performance disparities when compared to English. For suggested code, certain languages like Python, JavaScript, TypeScript, and Go might perform better compared to other programming languages.

Like any other code, code suggested by GitHub Copilot should be carefully tested, reviewed, and vetted. As the developer, you are always in charge.
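
As one way to put that advice into practice, the sketch below wraps a suggestion in a quick check before it is trusted. It is a TypeScript port of the earlier average-runtime example; the port, the assert-based test, and the sample values are illustrative only, not output from GitHub Copilot.

import assert from "node:assert/strict"

// TypeScript port of the earlier Go example, used here only to show how a
// suggestion can be vetted with a quick test before it is accepted.
interface Run {
  time: number   // in milliseconds
  failed: boolean
}

function averageRuntimeInSeconds(runs: Run[]): number {
  const successful = runs.filter((run) => !run.failed)
  if (successful.length === 0) return 0 // avoid dividing by zero
  const totalMs = successful.reduce((sum, run) => sum + run.time, 0)
  return totalMs / successful.length / 1000
}

// A minimal check: two successful runs of 1000 ms and 3000 ms average to 2 s.
assert.equal(
  averageRuntimeInSeconds([
    { time: 1000, failed: false },
    { time: 3000, failed: false },
    { time: 500, failed: true },
  ]),
  2
)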

Will GitHub Copilot help me write code for a new platform?

GitHub Copilot is trained on public code. When a new library, framework, or API is released, there is less public code available for the model to learn from. That reduces GitHub Copilot’s ability to provide suggestions for the new codebase. As more examples enter the public space, we integrate them into the training set and suggestion relevance improves. In the future, we will provide ways to highlight newer APIs and samples to raise their relevance in GitHub Copilot’s suggestions.

How do I get the most out of GitHub Copilot?

GitHub Copilot works best when you divide your code into small functions, use meaningful names for function parameters, and write good docstrings and comments as you go. It also seems to do best when it's helping you navigate unfamiliar libraries or frameworks.
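
For example, a prompt structured along these lines, with a short descriptive comment, a clear function name, and meaningfully named parameters, gives the model more to work with. The function itself is just an illustration, not output from GitHub Copilot.

// Convert a price in cents to a display string in the given currency,
// e.g. formatPrice(1999, "USD") -> "$19.99"
function formatPrice(amountInCents: number, currencyCode: string): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: currencyCode,
  }).format(amountInCents / 100)
}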

How can I contribute?

By using GitHub Copilot and sharing your feedback in the feedback forum, you help to improve GitHub Copilot. Please also report incidents (e.g., offensive output, code vulnerabilities, apparent personal information in code generation) directly to [email protected] so that we can improve our safeguards. GitHub takes safety and security very seriously and we are committed to continually improving.

Human oversight

Can GitHub Copilot introduce insecure code in its suggestions?

Public code may contain insecure coding patterns, bugs, or references to outdated APIs or idioms. When GitHub Copilot synthesizes code suggestions based on this data, it can also synthesize code that contains these undesirable patterns. This is something we care a lot about at GitHub, and in recent years we’ve provided tools such as GitHub Actions, Dependabot, and CodeQL to open source projects to help improve code quality. Of course, you should always use GitHub Copilot together with good testing and code review practices and security tools, as well as your own judgment.
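
As a concrete illustration of the kind of pattern to watch for, the sketch below contrasts a query built by string concatenation, a classic SQL injection risk that appears in plenty of public code, with a parameterized query. The Db interface is a generic stand-in, not a specific library.

// Stand-in interface for a database client; not a specific library.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown[]>
}

// Risky pattern sometimes found in public code: user input concatenated
// directly into SQL, which allows injection.
async function findUserUnsafe(db: Db, name: string) {
  return db.query(`SELECT * FROM users WHERE name = '${name}'`)
}

// Safer pattern: pass user input as a bound parameter instead.
async function findUserSafe(db: Db, name: string) {
  return db.query("SELECT * FROM users WHERE name = $1", [name])
}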

Does GitHub own the code generated by GitHub Copilot?

GitHub Copilot is a tool, like a compiler or a pen. GitHub does not own the suggestions GitHub Copilot generates. The code you write with GitHub Copilot’s help belongs to you, and you are responsible for it. We recommend that you carefully test, review, and vet the code before pushing it to production, as you would with any code you write that incorporates material you did not independently originate.

Does GitHub Copilot recite code from the training set?

The vast majority of the code that GitHub Copilot suggests has never been seen before. Our latest internal research shows that about 1% of the time, a suggestion may contain code snippets longer than ~150 characters that match the training set. Previous research showed that many of these cases happen when GitHub Copilot is unable to glean sufficient context from the code you are writing, or when there is a common, perhaps even universal, solution to the problem.

What can I do to reduce GitHub Copilot’s suggestion of code that matches public code?

We built a filter to help detect and suppress the rare instances where a GitHub Copilot suggestion contains code that matches public code on GitHub. You have the choice to turn that filter on or off during setup. With the filter on, GitHub Copilot checks code suggestions with its surrounding code for matches or near matches (ignoring whitespace) against public code on GitHub of about 150 characters. If there is a match, the suggestion will not be shown to you. We plan on continuing to evolve this approach and welcome feedback and comment.
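
Purely as a conceptual sketch, and not GitHub's actual filter, a whitespace-insensitive comparison over roughly 150-character spans might look like this:

// Conceptual sketch only; NOT GitHub's actual matching filter.
// Normalize whitespace so that formatting differences are ignored.
function normalize(code: string): string {
  return code.replace(/\s+/g, "")
}

// Return true if the suggestion (plus surrounding code) contains a span of
// roughly `minLength` normalized characters that also appears in `publicCode`.
function matchesPublicCode(
  suggestionWithContext: string,
  publicCode: string,
  minLength = 150
): boolean {
  const candidate = normalize(suggestionWithContext)
  const corpus = normalize(publicCode)
  for (let start = 0; start + minLength <= candidate.length; start++) {
    if (corpus.includes(candidate.slice(start, start + minLength))) {
      return true // a match was found, so the suggestion would be suppressed
    }
  }
  return false
}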

Other than the filter, what other measures can I take to assess code suggested by GitHub Copilot?

You should take the same precautions as you would with any code you write that uses material you did not independently originate. These include rigorous testing, IP scanning, and checking for security vulnerabilities. You should make sure your IDE or editor does not automatically compile or run generated code before you review it.

Fairness and broader impact

Will different people experience different quality of service from GitHub Copilot?

Given public sources are predominantly in English, GitHub Copilot will likely work less well in scenarios where natural language prompts provided by the developer are not in English and/or are grammatically incorrect. Therefore, non-English speakers might experience a lower quality of service.

Additionally, inexperienced developers may struggle to use GitHub Copilot to effectively generate code, and their lack of experience might inhibit their capability to effectively review and edit suggestions made by GitHub Copilot. Finally, we are conducting internal testing of GitHub Copilot’s ease of use by developers with disabilities and working to ensure that GitHub Copilot is accessible to all developers. Please feel free to share your feedback on GitHub Copilot accessibility in our feedback forum.

We acknowledge that fairness and inclusivity in code generation systems are important emerging research areas. We are working with experts, including Microsoft’s Office of Responsible AI, in an effort to advance GitHub Copilot’s responsible AI practices. We will also review new research and learn from feedback we receive to improve GitHub Copilot such that it is usable by a wide range of developers and provides similar quality of service to people with different backgrounds.

Does GitHub Copilot produce offensive outputs?

GitHub Copilot includes filters to block offensive language in the prompts and to avoid synthesizing suggestions in sensitive contexts. We continue to work on improving the filter system to more intelligently detect and remove offensive outputs. However, due to the novel space of code safety, GitHub Copilot may sometimes produce undesired output. If you see offensive outputs, please report them directly to [email protected] so that we can improve our safeguards. GitHub takes this challenge very seriously and we are committed to addressing it.

How will advanced code generation tools like GitHub Copilot affect developer jobs?

Bringing in more intelligent systems has the potential to bring enormous change to the developer experience. We do not expect GitHub Copilot to replace developers. Rather, we expect GitHub Copilot to partner with developers, augment their capabilities, and enable them to be more productive, reduce manual tasks, and help them focus on interesting work. We also believe that GitHub Copilot has the potential to lower barriers to entry, enabling more people to explore software development, and join the next generation of developers. We are working to test these hypotheses with both internal and external research.

Privacy

How can I control the use of my data collected by Copilot?

GitHub Copilot gives you certain choices about how it uses the data it collects. User engagement data, including pseudonymous identifiers and general usage data, is required for the use of GitHub Copilot and will continue to be collected, processed, and shared with Microsoft and OpenAI as you use GitHub Copilot. You can choose whether your code snippets are collected and retained by GitHub and further processed and shared with Microsoft and OpenAI by adjusting your user settings. Additional information about the types of telemetry collected and processed by GitHub Copilot can be found in What data does GitHub Copilot collect? below.

You can also request deletion of GitHub Copilot data associated with your GitHub identity by filling out a support ticket. Please note that future data collection will occur with continued use of GitHub Copilot, but you can control whether your code snippets are collected, processed, and retained in telemetry in your Copilot user settings.

What data does GitHub Copilot collect?

GitHub Copilot relies on file content and additional data to work. It collects data to provide the service and saves some of that data for further analysis and to enable improvements. Please see below for more details on how your telemetry data is used and shared.

User Engagement Data

When you use GitHub Copilot, it collects usage information about events generated when you interact with the IDE or editor. These events include user edit actions, such as completions accepted or dismissed, along with error and general usage data used to identify metrics like latency and feature engagement. This information may include personal data, such as pseudonymous identifiers.

Code Snippets Data

Depending on your preferred telemetry settings, GitHub Copilot may also collect and retain the following, collectively referred to as “code snippets”: source code that you are editing, related files and other files open in the same IDE or editor, URLs of repositories, and file paths.
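
To make the two categories above easier to picture, the sketch below models them as types. It is illustrative only and is not GitHub's actual telemetry schema; every field name here is a stand-in.

// Illustrative only; NOT GitHub's actual telemetry schema. These types just
// model the categories of data described in the two sections above.
interface UserEngagementEvent {
  pseudonymousUserId: string        // pseudonymous identifier
  action: "accepted" | "dismissed"  // user edit actions on a completion
  latencyMs?: number                // general usage metrics such as latency
}

interface CodeSnippetTelemetry {
  editedSource: string       // source code being edited
  relatedFiles: string[]     // other files open in the same IDE or editor
  repositoryUrl?: string     // repository URL, when available
  filePath?: string          // file path of the edited file
}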

How is GitHub Copilot telemetry data used and shared?

Telemetry, including code snippets as detailed in What data does GitHub Copilot collect?, is used by GitHub, Microsoft, and OpenAI to improve GitHub Copilot and related services and to conduct product and academic research about developers.

Telemetry uses may include:

  • Directly improving GitHub Copilot, including assessing different strategies in processing and predicting which suggestions users may find helpful
  • Developing and improving closely related developer products and services from GitHub, Microsoft, and OpenAI
  • Investigating and detecting potential abuse of GitHub Copilot
  • Conducting experiments and research related to developers and their use of developer tools and services
  • Evaluating GitHub Copilot, e.g., by measuring the positive impact it has on the user
  • Improving the underlying code generation models, e.g., by providing positive and negative examples
  • Fine-tuning ranking and sorting algorithms and prompt crafting

When processing code snippets, we take the protection measures described below in How is the transmitted data protected? and follow responsible practices in accordance with our Privacy Statement so that the use of your telemetry data to improve these models does not result in this data being shared with other GitHub Copilot users.

How is the transmitted data protected?

We know that user edit actions, source code snippets, and URLs of repositories and file paths are sensitive data. Consequently, several measures of protection are applied, including:

  • The transmitted data is encrypted in transit and at rest
  • Access is strictly controlled. The data can only be accessed by (1) named GitHub personnel working on the GitHub Copilot team or on the GitHub platform health team, (2) Microsoft personnel working on or with the GitHub Copilot team, and (3) OpenAI personnel who work on GitHub Copilot
  • Role-based access controls and multi-factor authentication are required for personnel accessing code snippet data

Will my private code be shared with other users?

No. We use data, including information about which suggestions users accept or reject, to improve the model. We follow responsible practices in accordance with our Privacy Statement to ensure that your code snippets will not be used as suggested code for other users of GitHub Copilot.

Does GitHub Copilot ever output personal data?

Because GitHub Copilot was trained on publicly available code, its training set included public personal data included in that code. From our internal testing, we found it to be rare that GitHub Copilot suggestions included personal data verbatim from the training set. In some cases, the model will suggest what appears to be personal data (email addresses, phone numbers, and so on) but is actually fictitious information synthesized from patterns in the training data. For example, when one of our engineers prompted GitHub Copilot with “My name is Mona and my birthdate is,” GitHub Copilot suggested a random, fictitious date of “December 12,” which is not Mona's actual birthdate. We have implemented a filter that blocks email addresses when they appear in standard formats, but it is still possible to get the model to suggest this sort of content if you try hard enough. We will keep improving the filter system so that it more intelligently detects and removes personal data from suggestions.
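
As a simplified illustration of this kind of safeguard, and not GitHub's actual filter, a post-processing step might redact suggestions that contain an email address in a standard format:

// Simplified illustration only; NOT GitHub's actual personal-data filter.
// Matches common email formats such as user@example.com.
const EMAIL_PATTERN = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g

// Replace anything that looks like an email address before showing a suggestion.
function redactEmails(suggestion: string): string {
  return suggestion.replace(EMAIL_PATTERN, "<redacted-email>")
}

// Example: redactEmails("contact me at user@example.com")
//   -> "contact me at <redacted-email>"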

Where can I learn more about GitHub Privacy and data protection?

For more information on how GitHub processes and uses personal data, please see our Privacy Statement.