April 2026 · 14 min read

How to Spot Fake GitHub Activity: 8 Red Flags Every Recruiter Should Know

GitHub is one of the most honest signals in recruiting because it's behavioral, not self-reported. But contributions can be gamed. Automated commits, forked repos with trivial edits, star-farming rings, and AI-generated padding all create the appearance of engineering quality where none exists. This guide walks through the 8 red flags to check before you reach out to any candidate.

GitHub is the largest publicly visible dataset of developer activity in the world. More than 30 million contribution events hit public repositories every month. Commits, pull requests, code reviews, issue discussions. Each one is a real data point about what a developer is actually building. That is what makes GitHub so valuable for recruiting. It shows what people do, not what they say they do.

But that value only holds if the contributions are genuine.

GitHub activity can be faked, though. And as more recruiters start using GitHub for sourcing, more candidates have an incentive to make their profiles look better than they are. If you are evaluating developers based on their commit history and contribution patterns, you need to know what genuine activity looks like, and how to recognize what isn't.

This guide covers the 8 most common red flags, how to distinguish AI-assisted work from AI-generated padding, and a 3-minute verification workflow you can run on any candidate before reaching out.

Why GitHub activity manipulation is increasing

GitHub profile gaming is more common than it was even two years ago, and there are three reasons why.

The green squares culture. GitHub's contribution graph, the grid of green squares on every profile, has become a status symbol in developer communities. Hiring managers glance at it. Recruiters use it as a quick proxy for activity. Developers know this, and some feel pressure to keep their graph green even when they don't have meaningful work to commit. Open source tools now exist specifically for generating backdated commits to fill in contribution graphs. A developer can create a year of perfectly green squares in about five minutes.

AI coding tools lower the effort floor. GitHub Copilot, Claude Code, Cursor, and similar tools make it trivial to generate large volumes of code quickly. A developer can scaffold an entire repository in an afternoon. That is genuinely useful for real engineering work. But it also means that someone can create a portfolio of 20 repos with working code without having deep understanding of any of it. The barrier between "I built this" and "AI built this while I watched" has never been thinner.

GitHub profiles are being evaluated more often. As companies move toward contribution-based sourcing, candidates have a direct incentive to inflate their profiles. This is the same dynamic that made LinkedIn endorsements meaningless. When a signal is used for evaluation, people optimize for the signal rather than the underlying quality. The good news is that GitHub's signals are much harder to fake convincingly than LinkedIn's. The bad news is that some candidates try anyway, and a recruiter who doesn't know what to look for can be fooled.

8 red flags to check before you reach out

None of these red flags is disqualifying on its own. A legitimate developer might trigger one or two for innocent reasons. But if a profile shows three or more of these patterns, treat it with skepticism.

1. A perfectly uniform contribution graph

Real developers have messy, human contribution patterns. They have bursts of activity during sprints and quieter periods between projects. They take vacations. They have weekends where they don't write code. A contribution graph that shows exactly 1 to 3 commits every single day, including weekends and holidays, with no variation in intensity, is almost certainly automated.

What to look for: natural variation. Clusters of darker green during what look like workweeks. Lighter patches during what might be vacations or between-project gaps. The occasional completely blank day. Human activity is irregular. Machine activity is uniform.
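If you pull a candidate's daily commit counts (GitHub's GraphQL contributionsCollection exposes the contribution calendar), this check can be reduced to a rough heuristic. A minimal sketch in Python; the thresholds are illustrative guesses, not calibrated values:

```python
from statistics import pstdev

def looks_mechanical(daily_counts, min_active_ratio=0.95, max_stdev=1.0):
    """Flag a contribution graph as machine-like.

    daily_counts: commits per day over the period (e.g. 365 ints).
    A human graph has blank days and bursts; an automated one is
    nearly all-green with almost no variation in intensity.
    """
    active_ratio = sum(1 for c in daily_counts if c > 0) / len(daily_counts)
    # Suspicious only when almost every day is active AND intensity barely varies.
    return active_ratio >= min_active_ratio and pstdev(daily_counts) <= max_stdev

bot = [1, 2] * 182 + [1]                    # 365 days, never blank, 1-2 commits
human = ([4, 6, 3, 5, 2, 0, 0] * 50)[:365]  # weekday bursts, quiet weekends
human[200:214] = [0] * 14                   # a two-week vacation gap
print(looks_mechanical(bot))    # True
print(looks_mechanical(human))  # False
```

The point is not the exact numbers but the shape of the test: high coverage plus low variance is what no real work schedule produces.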

2. Repositories that are all forks with trivial changes

Forking a popular repository is a one-click action on GitHub. Some developers fork dozens of well-known projects, make a minor change (editing the README, fixing a typo, adding a comment), and never submit a pull request back to the original. This creates the visual impression of contributing to important projects without actually doing any engineering work.

Check the candidate's repositories tab. If every repo is a fork, look at the actual changes they made. A single commit that edits a README file is not a contribution. Compare this to a candidate who has forked a repo, made substantive code changes, and submitted a pull request that was reviewed and merged by the project maintainers. The difference is obvious once you look.
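For screening many profiles, this check reduces to one ratio: what fraction of the repos are forks the candidate barely touched? A sketch, assuming you have already fetched repo metadata; the `fork` flag is real REST API data, while `own_commits` is a hypothetical field you would derive by comparing each fork against its upstream parent:

```python
def fork_padding_score(repos):
    """Fraction of a profile's repos that are forks with at most one
    commit of the candidate's own on top.

    repos: dicts loosely modeled on GitHub API fields. 'fork' mirrors
    the REST API; 'own_commits' is an assumption for this sketch.
    """
    if not repos:
        return 0.0
    trivial = sum(1 for r in repos
                  if r["fork"] and r.get("own_commits", 0) <= 1)
    return trivial / len(repos)

repos = [
    {"name": "tensorflow", "fork": True,  "own_commits": 1},   # README typo fix
    {"name": "react",      "fork": True,  "own_commits": 0},   # untouched fork
    {"name": "linux",      "fork": True,  "own_commits": 1},   # one comment added
    {"name": "my-app",     "fork": False, "own_commits": 212}, # original work
]
print(fork_padding_score(repos))  # 0.75
```

A score near 1.0 is the profile described above: a wall of famous project names with no engineering underneath.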

3. High commit counts but no merged pull requests from others

A developer with 5,000 commits sounds impressive until you realize every commit is to repositories they own, and no other human has ever reviewed or merged their work. Solo project commits show initiative but not collaboration. And collaboration is what you are hiring for.

The strongest GitHub signal for engineering quality is external contribution: pull requests submitted to repositories the candidate does not own, reviewed by engineers the candidate does not know, and merged by maintainers who judged the work worthy of inclusion. If a profile has thousands of commits but zero merged PRs in external projects, the candidate's ability to work with a team is unverified.

4. Star-farming patterns

Stars on GitHub are like likes on social media. They are a vanity metric that is easy to inflate. Some developers participate in mutual starring rings: communities where members agree to star each other's repositories. Others use bot services that add stars automatically for a fee.

A star-farmed repo has a recognizable pattern: a high star count (50, 100, 200+) relative to its actual quality, zero or very few forks, no open issues or pull requests from outside contributors, and a stargazer list full of accounts that have minimal activity or were all created around the same time. If a candidate has a repo with 150 stars but the README is two sentences and the last commit was six months ago, be skeptical.

Stars are not a useful recruiting signal in general. A well-maintained project with 15 stars and active contributors is a far better indicator of engineering quality than a neglected project with 500 stars and no community.
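The stargazer pattern is scriptable. A hedged sketch: the 14-day window and 60% cluster threshold are arbitrary choices, and `stargazer_created` assumes you have already sampled the stargazers' account creation dates (`created_at` on the user object):

```python
from datetime import date

def star_farm_suspect(stars, forks, stargazer_created,
                      window_days=14, cluster_ratio=0.6):
    """Heuristic for bought or swapped stars.

    Bot rings tend to be batches of accounts registered within days of
    each other, and farmed repos attract stars without forks.
    """
    if stars < 50 or not stargazer_created:
        return False
    ordinals = sorted(d.toordinal() for d in stargazer_created)
    # Largest group of accounts created within one window_days span.
    best = max(sum(1 for o in ordinals if start <= o <= start + window_days)
               for start in ordinals)
    clustered = best / len(ordinals)
    few_forks = forks < stars / 25  # e.g. 150 stars but fewer than 6 forks
    return clustered >= cluster_ratio and few_forks

farm = [date(2025, 1, d) for d in range(1, 11)]     # ten accounts, same week
print(star_farm_suspect(150, 1, farm))              # True
organic = [date(2015 + i, 3, 1) for i in range(10)] # accounts spread over years
print(star_farm_suspect(60, 12, organic))           # False
```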

5. Commits only to personal repositories

Personal projects show initiative and curiosity, and there is nothing wrong with building things on your own. But a profile that consists entirely of solo repositories and has zero interaction with any other developer or project is a limited signal for how someone will perform on a team.

Look for external contributions: PRs to open source projects, code reviews on other people's work, issue discussions in community repos. These are evidence that the developer can read other people's code, communicate technical ideas, receive and incorporate feedback, and ship work that meets someone else's standards. None of that is visible in solo repos.

6. Massive commit counts with tiny diffs

Some developers inflate their commit count by splitting trivial work across many commits. Changing one variable name becomes three commits. Updating a configuration file becomes five. The contribution graph turns green, and the commit count looks impressive, but the actual engineering output is minimal.

You can check this quickly. Click into a few commits and look at the diff. Does each commit represent a meaningful unit of work, or is it a one-line change that could have been part of a larger commit? A developer who makes 50 thoughtful, well-structured commits in a month is doing more real work than one who makes 500 one-line commits.

The inverse is also a useful signal. Developers who write clean, atomic commits with descriptive messages that explain the reasoning behind the change show seniority-level engineering discipline.
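This check scales as well. Given per-commit diff sizes (the GitHub commits API reports additions and deletions for each commit), a rough padding detector looks like the following; the "tiny" cutoff of 3 lines and the 80% ratio are illustrative assumptions:

```python
from statistics import median

def commit_padding_suspect(diff_sizes, min_commits=100,
                           tiny_lines=3, tiny_ratio=0.8):
    """Flag profiles whose volume comes from splitting trivial changes.

    diff_sizes: lines changed (additions + deletions) per commit.
    High commit counts are only suspicious when most commits are
    one-liners AND the typical commit is tiny.
    """
    if len(diff_sizes) < min_commits:
        return False
    tiny = sum(1 for d in diff_sizes if d <= tiny_lines)
    return tiny / len(diff_sizes) >= tiny_ratio and median(diff_sizes) <= tiny_lines

padded = [1] * 450 + [120] * 50          # 500 commits, mostly one-liners
healthy = [8, 40, 15, 220, 3, 60] * 20   # 120 commits, real units of work
print(commit_padding_suspect(padded))    # True
print(commit_padding_suspect(healthy))   # False
```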

7. No code review activity

This might be the single most telling red flag on this list. Code review, both giving and receiving, is how engineers collaborate in real-world engineering teams. If a candidate has hundreds of commits but has never left a code review comment on anyone else's pull request and has never received a review on their own work, they are either working entirely alone or deliberately avoiding the collaborative part of software development.

You can check this easily. Search GitHub for "is:pr reviewed-by:username" to see PRs they have reviewed. Search "is:pr author:username" and click into a few to see whether their PRs received review comments. The presence of substantive code review (detailed feedback, questions about edge cases, suggestions for improvement) is one of the strongest signals available on GitHub. Its absence is a meaningful gap.

8. Activity only during job-hunting season

Developer job searches tend to cluster around January (new year, new job) and September (post-summer, new budget cycles). Some candidates have GitHub profiles that are intensely active during these months and completely silent the rest of the year. The implication is clear: they are not using GitHub as a work tool. They are using it as a portfolio prop during active job searches.

This is not necessarily dishonest. Some developers genuinely do personal projects when they are between jobs. But a candidate with consistent year-round contribution activity is a very different profile than one who has one intense week in January and 51 weeks of silence. The consistent contributor is demonstrating work habits. The seasonal contributor is demonstrating interview prep.
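Bucketing a year of contributions by month makes the seasonal pattern easy to flag. A sketch; the 60% top-two-months threshold is my own guess at "concentrated":

```python
def seasonal_contributor(monthly_counts, top_share=0.6):
    """monthly_counts: 12 ints, Jan..Dec. Flags profiles whose activity
    piles into one or two months -- the classic January/September
    job-hunt spike -- rather than spreading across the year."""
    total = sum(monthly_counts)
    if total == 0:
        return False
    top_two = sum(sorted(monthly_counts, reverse=True)[:2])
    return top_two / total >= top_share

job_hunter = [90, 5, 0, 0, 0, 0, 0, 0, 40, 0, 0, 0]   # Jan and Sep spikes
steady = [12, 8, 15, 10, 9, 4, 2, 11, 14, 9, 7, 10]   # year-round work
print(seasonal_contributor(job_hunter))  # True
print(seasonal_contributor(steady))      # False
```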

AI-assisted vs. AI-generated: where to draw the line

More than 70% of developers now use AI coding tools daily. Copilot, Claude Code, Cursor, Aider, Cody. This is the new normal, not a red flag. The question for recruiters is not "did this developer use AI?" but "how did they use it?"

AI-assisted work is legitimate. A developer who uses Copilot to generate boilerplate, then applies their own judgment to architecture, error handling, edge cases, and testing strategy, is doing exactly what a senior engineer should do: using tools to move faster. This is no different from using an IDE's autocomplete or copying a code snippet from documentation. The human judgment is the value. The tool is an accelerator.

AI-generated padding is different. This is when entire repositories are created by AI with minimal human involvement. The developer types a prompt, accepts the output, commits it, and moves on. The code works, but the developer may not fully understand it, could not modify it under pressure, and did not make the architectural decisions that produced it. These repos look polished on the surface but lack the signs of human iteration that indicate real engineering work.

How to tell the difference. Look for signs of human decision-making that AI tools cannot fake.

Debugging commits are a strong signal. When something breaks, a developer has to understand the system well enough to diagnose and fix it. A repo with a clean initial implementation followed by several debugging and refinement commits shows human engagement. A repo with a single perfect commit and no follow-up suggests the developer accepted generated output without testing it in real conditions.

Iterative refinement is another signal. Real development is messy. A PR with an initial commit, followed by revisions based on code review feedback, followed by a final cleanup, shows a developer who is engaged in the process. A single massive commit with perfectly formatted code and comprehensive tests suggests generation, not engineering.

Code review discussions remain almost entirely human. Reading someone else's code, identifying issues, explaining why something should change, and defending a design decision require understanding that AI tools do not provide. If a candidate has extensive code review activity, their engineering judgment is verified regardless of how much AI assisted their commit authorship.

The meta-signal: how someone talks about AI tools tells you a lot. A developer who says "I used Claude Code to scaffold the project, then spent two days refactoring the data layer and adding error handling" is describing a healthy workflow. A developer who cannot explain the reasoning behind their own code is not.

A 3-minute verification workflow

You do not need to spend 30 minutes evaluating every candidate. The following workflow takes about three minutes and catches most inflated profiles.

Step 1: Check the contribution graph for natural variation (30 seconds). Open the candidate's GitHub profile. Look at the contribution graph. Is the pattern human (irregular, with bursts and gaps) or mechanical (perfectly uniform)? If it looks mechanical, that is your first warning sign.

Step 2: Scan the repositories tab (30 seconds). Are the repos original work or forks? Do any of them have README files with real documentation? Are there repos with multiple contributors, or is everything solo? You are looking for evidence of original work and collaboration, not just volume.

Step 3: Read one or two pull request descriptions (60 seconds). Search is:pr author:username in GitHub search. Click into a PR. Does the description explain the reasoning behind the change, not just what changed? Is there a linked issue? Are there testing notes? One well-written PR description tells you more about a developer's engineering quality than a thousand green squares.

Step 4: Check for code review activity (30 seconds). Search is:pr reviewed-by:username. Has this person reviewed anyone else's code? If yes, click into one review. Is the feedback substantive or just "LGTM"? Code review quality is the single best predictor of how someone will work on your engineering team.

Step 5: Verify recent activity (30 seconds). Back on the profile, click into the contribution graph to see recent repos. Is the candidate active in the last three to six months? Is the activity in real, maintained projects or abandoned experiments? Recency plus quality equals a candidate worth reaching out to.

If a profile passes all five checks, it is very likely genuine. Reach out. If it fails two or more, either skip the candidate or investigate further before spending time on outreach.
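The five steps collapse naturally into a small triage function. A sketch; the single-failure branch is my own middle ground, since the text above only specifies the zero-failure and two-plus cases:

```python
def triage(checks):
    """checks maps each of the five workflow checks to True (passed)
    or False (failed) and returns a next action."""
    failed = sum(1 for passed in checks.values() if not passed)
    if failed == 0:
        return "reach out"
    if failed >= 2:
        return "skip or investigate deeper"
    return "investigate before outreach"

checks = {"graph_variation": True, "original_repos": True,
          "pr_descriptions": True, "review_activity": False,
          "recent_activity": True}
print(triage(checks))  # investigate before outreach
```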

What legitimate profiles look like

It helps to know what you are looking for, not just what you are looking to avoid. Strong GitHub profiles share a few characteristics that are difficult to fake.

Consistency over time. The best developers have contribution activity that spans months or years, not just a few intense weeks. It does not need to be daily. A developer who contributes steadily across the year, with natural breaks, is someone who writes code regularly, not just when they need a portfolio.

External contributions. Pull requests submitted to repositories the candidate does not own, reviewed and merged by maintainers, are the most reliable proof of skill. Someone else looked at their code and judged it good enough to include in a real project. You cannot fake that.

Substantive code review. Comments that explain trade-offs, catch edge cases, suggest alternatives, or teach a concept to a less experienced contributor are evidence of senior-level engineering judgment. This is visible in the PR comment threads and cannot be generated by filling in a contribution graph.

Depth in a few areas rather than breadth across many. A developer with deep contributions to two or three projects they clearly understand is a stronger signal than one with shallow commits across 50 repos. Depth indicates expertise. Breadth without depth indicates browsing.

Well-written PR descriptions and commit messages. Engineers who explain the "why" behind their changes, not just the "what," are demonstrating the communication skills you need on your team. This is one of the 9 signals that predict engineering quality and one of the hardest to manufacture.

Volume metrics (commit count, contribution graph density, number of repos, star count) are easy to inflate. Quality metrics (PR descriptions, code review comments, external contributions, iterative development patterns) are not. When evaluating a GitHub profile, weight quality over quantity every time.

Frequently asked questions

How can you tell if GitHub contributions are fake?

Look for patterns that indicate automation or padding rather than real engineering work. The clearest signals are a perfectly uniform contribution graph (real developers have natural variation), repositories that are all forks with trivial changes, high commit counts with tiny diffs (one-line changes inflated across hundreds of commits), and zero code review activity. Genuine contributors have messy, human patterns: clusters of activity during work hours, quiet weekends, occasional gaps, and evidence of collaboration through pull request discussions and code reviews.

Do employers actually check GitHub profiles when hiring?

Yes, and increasingly so. A growing number of engineering teams review candidate GitHub activity as part of their evaluation process. Technical recruiters use GitHub to verify skills that candidates claim on their resumes, assess code quality and collaboration style, and identify passive candidates who aren't actively job hunting. Tools like riem.ai automate this process by scoring developers across multiple contribution quality dimensions.

Can you fake a GitHub contribution graph?

Yes, and it is surprisingly easy. Open source tools exist that let anyone generate backdated commits to fill in their contribution graph with green squares. Some developers use automated scripts to commit trivial changes daily, creating the appearance of consistent activity. This is why experienced recruiters look beyond the contribution graph to pull request quality, code review participation, and the substance of actual commits rather than just their frequency.

Should you penalize developers who use AI coding tools like Copilot?

No. Using AI coding tools is the norm in 2026, not a red flag. The distinction that matters is between AI-assisted work, where a developer uses tools like Copilot or Claude Code to accelerate their workflow while still making architectural decisions and reviewing output, and AI-generated padding, where entire repositories are created by AI with no human iteration or judgment. Look for signs of human decision-making: debugging commits, iterative refinement, thoughtful code review, and architectural choices that require understanding of the broader system.

What is star farming on GitHub?

Star farming is the practice of artificially inflating the star count on GitHub repositories. Developers participate in mutual starring groups where members agree to star each other's repos, or they use bot services to add stars automatically. A star-farmed repo typically has a high star count relative to its quality, zero or very few forks, no open issues or pull requests from outside contributors, and a stargazer list full of accounts with minimal activity. Stars are a vanity metric. A well-maintained project with 15 stars and active contributors is a far better indicator of engineering quality than a neglected project with 500 stars. For a fuller picture of what actually indicates quality, see our guide to the 9 GitHub signals that predict engineering quality.