GSoC 2026

Acceptable and Ethical AI Use Policy

Due to the overwhelming amount of AI use in our GSoC-related conversations and contributions, we have introduced an Acceptable and Ethical AI Use Policy, detailed below. We will keep improving this working document as we navigate this new territory. It clarifies our stance on the unethical and undisclosed use of AI.

It is best to avoid using ChatGPT entirely in your proposal and discussions. It is okay to use ChatGPT to resolve issues and bugs, much as we used web search in the pre-LLM era. However, avoid producing large amounts of code that is not human-generated. Specifically, do not create code or documentation with ChatGPT when you do not understand what it produces. Computer Science education is not dead yet: we are still educating and training young software engineers. Google Summer of Code (GSoC) is not an opportunity to test AI-generated slop in a trial-and-error approach. We want to use GSoC to foster high-quality, well-trained future software engineers rather than "prompt engineers."

The so-called "vibe coding" and its impact on code quality

The growing access to AI/ML technologies such as LLM tools (ChatGPT, etc.) has given rise to a phenomenon now known as Vibe Coding. In simple words, it is the practice of writing code by simply using AI without understanding what the code actually does. While vibe coding might help in certain toy projects, it does not fit in a collaborative open-source organization or a competitive landscape such as the Google Summer of Code (GSoC). This is not to ban any and all use of AI, but to encourage using it sparingly. Even before the LLM era, we all copy-pasted Stack Overflow fixes, so this is not entirely new. However, the barrier to entry is now much lower, allowing non-technical people to produce bloated code and documentation for simple matters.

LLM-generated lengthy conversations

A worse form of vibe coding is the ChatGPT-generated conversation. Instead of short, specific questions ("My concore build is failing with this error. See attached. Can you help with what went wrong?"), we end up receiving bloated essays full of irrelevant text and questions.

For example, we receive emails such as the one below (despite our repeated requests to communicate through GitHub rather than private email; email is reserved only for proposal drafts):

"My Concore build is failing with this error (see attached). Can you help identify what went wrong? Have you encountered this specific Concore build error before, and are there any common causes or solutions? Based on the error message, do you think this could be related to a misconfiguration in the pipeline, a missing dependency, or perhaps a network issue? Could it indicate a problem with the Concore worker or the build environment itself? Are there any known issues with the Concore version I’m using that might be causing this failure? I’m also unsure how to interpret the logs—could you help me identify which part of the output reveals the root cause? Is there a way to rerun the build with more detailed debugging information to diagnose the problem more effectively? Do you know if Concore has any specific caching issues that could trigger this type of failure? Could this be related to resource versions or external services the pipeline depends on? Are there any permissions, environment variables, or authentication issues that might be interfering with the build process? Could this be caused by conflicting or outdated dependencies in the build environment? Are there any recent changes to the infrastructure, such as server updates or network configurations, that could be impacting the build? Have you seen this error occur in similar CI/CD pipelines or other build systems? Lastly, what steps should I take to isolate whether this is a Concore-related problem or an issue with the underlying infrastructure?"

The above question was generated with a single ChatGPT prompt: 'Expand on this question. Make more questions. "My concore build is failing with this error. See attached. Can you help with what went wrong?"'

Such questions waste mentoring resources. While some of us are experienced enough with ChatGPT to spot these fake questions, many are not. Our mentors often end up typing detailed answers to AI slop like the paragraph above. AI-slop questions and discussions waste the readers' time and energy.

AI Slop in code submissions, discussions, and GSoC proposals

We have decided not to accept anyone who sends obvious AI slop in GitHub discussions, emails (private emails are discouraged, except for sharing the proposal draft; even then, add all the mentors to a single email!), or code.

It is totally okay to code with ChatGPT/LLM help, but make sure you acknowledge the use. Do not simply "vibe code" pull requests or code-challenge solutions you have not tested yourself. GSoC is more about open-source contributions than merely getting accepted as a contributor. If your contributions during this application period deplete our mentor resources and degrade code quality, that is not a good use of your time or ours.

Feel free to make mistakes and learn. It is totally okay to send a message with typos and grammar mistakes; no one is judging you for those. Language/grammar checkers such as Grammarly (excluding their GenAI capabilities) are recommended if you are uncomfortable with your English writing skills.

On a related note, while GSoC is a learning opportunity, do not choose a project idea if you are unfamiliar with its programming language. For example, if you are uncomfortable coding in Python, the Coastline Extraction project may not fit you well. GSoC is not an opportunity to learn a programming language from scratch or to try vibe coding without knowing the internals of the languages and frameworks used. Oftentimes, badly contributed code costs us even more time to debug and fix later.

We try our best to make our projects easy for newbie developers. Feel free to let us know when our code or documentation is incomplete, ideally through the respective issue tracker or GitHub discussions. Even better, submit a pull request to address the shortcoming. At the same time, it is unrealistic to expect the README of a Python project to explain how to install Python. That level of documentation is great for non-technical users, but we are not looking to accept non-technical users as GSoC contributors. A certain level of experience in the programming languages and frameworks used by the respective projects is required.

We have also identified the difficulty level for each project idea. For example, the L4SBOA and [XLNS](https://github.com/xlnsresearch/) project ideas are labeled "hard;" Beehive ideas are labeled "medium;" and the Diomede album idea is labeled "easy." This could help you decide whether to apply for an easier but (likely) more competitive project idea or a harder idea with (likely) fewer applicants.

Please AVOID using AI/LLM/ChatGPT in any research reports you produce as part of this GSoC. Those should be in your own words.

Acceptable vs Unacceptable Use of AI (ChatGPT, etc.)

Here is a simple table that shows what kind of AI usage is okay and what should be avoided.

| Acceptable Use of AI | Unacceptable Use of AI |
| --- | --- |
| Using an LLM to fix specific bugs or errors in your code (e.g., "Why is this Python error happening?") | Generating full code or documentation with an LLM without understanding what it does |
| Using AI to help understand a difficult concept (e.g., "What is a Git rebase?") | Submitting AI-written code as your own without testing or modifying it |
| Using an LLM to improve your writing tone or grammar (e.g., checking an email) | Writing entire GitHub discussions, proposals, or PR descriptions with AI, making them sound overly formal or bloated |
| Asking AI for help with troubleshooting (e.g., how to install a tool or debug a config file) | Copy-pasting long, AI-generated explanations/questions that waste mentor time |
| Briefly summarizing or organizing your thoughts with AI before writing in your own words | Submitting GSoC proposals or research reports heavily written by ChatGPT |
| Getting ideas or suggestions for solving a problem, then implementing it yourself | Submitting "vibe code" that looks good but that you don't understand or cannot explain |

Please feel free to discuss this Acceptable and Ethical AI Use Policy in the associated GitHub Discussion forum with your comments and questions.
