March 2026

Software Engineer Hiring Is Back. The Old Playbook Isn't.

Job postings are up 11% year over year. But the candidates worth hiring have never been harder to find using traditional methods, and the noise competing with them has never been louder.

Software engineer job postings are up 11% year over year — a number that should feel like relief after two brutal years of budget freezes, headcount reductions, and the phrase "we're pausing hiring." But the headline hides something that's going to catch a lot of recruiting teams off guard: the candidates actually worth hiring have never been harder to find using traditional methods, and the noise competing with them has never been louder.

This isn't a return to the 2021 market. It's a harder game with familiar-looking pieces.

What the 11% rebound actually means

The data comes from analysis by Citadel Securities based on Indeed job posting trends, covered in Benzinga earlier this month, and it's real. Software engineer demand is recovering. Technology roles broadly saw a 10.6% monthly increase in postings after two years of contraction. OpenAI and Anthropic are hiring junior software engineers for the first time. Shopify is onboarding 1,000 interns a year. The freeze is thawing.

But look at what's being hired for. More than 53% of U.S. tech job postings now require AI/ML skills. The proportion of new hires in AI and ML roles grew 88% in 2025 compared to the year before. The jobs are back, but they've shifted hard. Companies that spent two years eliminating generalist roles are now hiring for specialists who can build and ship AI-native features. The median full-stack engineer who's spent the last three years maintaining a Rails monolith is in a very different position than the engineer who's been contributing to open source ML tooling on weekends.

There's also an entry-level collapse happening simultaneously with this rebound. Entry-level software engineering positions saw a 73% decrease in hiring rates over the past year. Companies have found that AI coding tools let experienced engineers do work that previously required junior hires. So the market is heating up at the top and collapsing at the bottom at the same time. For recruiters, this means the candidates sitting at the intersection of "experienced" and "AI-fluent" are the most competed-for people in tech right now, and they are not sitting on LinkedIn waiting for your InMail.

The resume problem nobody wants to say out loud

Resumes are nearly useless for technical hiring now, and most recruiting workflows haven't caught up to that fact.

Hiring managers at companies like Converge Resources have reported receiving 400+ applications for a single mid-level engineering role, with at least half generated or heavily polished by ChatGPT. The one-page PDF has always been a crude proxy for competence. Now it's actively misleading. When a candidate can describe their experience at any level of sophistication with a few well-crafted prompts, resume-based screening filters for "knows how to use AI to write a convincing summary" rather than "can actually write production code."

Seventy-three percent of hiring managers report increased difficulty identifying genuine technical skills from application materials, according to Converge Resources. That's not surprising. The signal is degraded. If you're still running a sourcing workflow that starts with keyword-matching resumes, you're operating on bad data.

The recruiters filling engineering roles efficiently in 2026 share one thing in common: they've stopped treating the resume as the primary filter and started looking at what candidates have actually built. I know that sounds obvious. Somehow most teams still aren't doing it.

The GitHub shift isn't hype, but most sourcers are doing it wrong

Eighty-seven percent of tech recruiters say they review GitHub profiles during hiring. That's a meaningful number. The follow-up, though, is that most of them are doing it wrong.

The typical recruiter approach to GitHub is to open a profile, look at the contribution graph (green squares), note the number of repos, and close the tab. This tells you almost nothing useful. A profile with 500 commits and 40 repos might belong to a developer who commits "asdfgh" three times a day, or it might belong to someone who's been the primary maintainer of a widely-used open source library for two years. The numbers look the same. The engineers are completely different.

Consistency over spikes. Contribution patterns that show regular, sustained activity across months are more indicative of a working engineer than massive commit clusters followed by silence. Anyone can push a project during a three-week sprint. Showing up consistently is harder.
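As a rough illustration, the spike-versus-silence distinction can be reduced to two numbers. This is a minimal sketch with made-up sample data; in practice the weekly counts would come from something like GitHub's contribution data, and a real scorer would weight recency and repo importance too.

```python
def contribution_consistency(weekly_commits: list[int]) -> dict:
    """Summarize a year of weekly commit counts into two simple signals.

    Assumes `weekly_commits` is one integer per week, derived from an
    external source such as GitHub contribution data (hypothetical here).
    """
    total = sum(weekly_commits)
    if total == 0:
        return {"active_weeks": 0.0, "burstiness": 0.0}
    # Fraction of weeks with any activity at all: steady work scores near 1.
    active = sum(1 for c in weekly_commits if c > 0) / len(weekly_commits)
    # Share of all commits landing in the single busiest week: a
    # sprint-then-silence profile scores high, sustained work scores low.
    burstiness = max(weekly_commits) / total
    return {"active_weeks": round(active, 2), "burstiness": round(burstiness, 2)}


steady = [3, 2, 4, 1, 3, 2] * 8      # 48 weeks of regular, modest activity
spiky = [0] * 44 + [40, 35, 0, 0]    # one three-week sprint, then nothing
```

Run on these two fabricated profiles, the steady contributor scores 1.0 on active weeks with low burstiness, while the sprinter shows the inverse, even though their commit totals are similar.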

Code review activity is the most underused signal in developer sourcing. Pull request comments, review feedback, and issue discussions reveal how an engineer thinks, how they communicate with teammates, and how they handle disagreement. You cannot fake this. A senior engineer who writes thorough, constructive code reviews is showing you something a resume never could.

Commit quality matters. Messages like "fix: resolve hydration mismatch in MDX image component" versus "fixed stuff" reveal engineering discipline. One person is thinking about their teammates and future maintainers. The other isn't. I've used this as a filter more than once and it's rarely wrong.
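A crude version of that filter can be sketched in a few lines. The patterns below are illustrative assumptions, not a vetted classifier: it flags obvious low-effort subjects, treats conventional-commit-style prefixes with a concrete description as a positive sign, and leaves everything else for a human to read.

```python
import re

# Subjects that suggest no thought for teammates or future maintainers
# (heuristic list, assembled for illustration only).
LOW_EFFORT = re.compile(
    r"^(wip|fix(ed)?( stuff| bug)?|update[sd]?|asdf\w*|misc|minor|\.+)$",
    re.IGNORECASE,
)

def message_quality(msg: str) -> str:
    """Rough triage of a single commit message: 'low', 'high', or 'unclear'.

    A sketch only; real review means reading the diffs and the PR
    discussion, not just grading subject lines.
    """
    subject = (msg.strip().splitlines() or [""])[0].strip()
    if LOW_EFFORT.match(subject):
        return "low"
    # A conventional-commit prefix plus a non-trivial description is a
    # decent proxy for discipline.
    if re.match(r"^(feat|fix|refactor|docs|test|chore)(\(.+\))?: \S", subject) \
            and len(subject) > 15:
        return "high"
    return "unclear"
```

Applied to the two examples from the paragraph above, "fix: resolve hydration mismatch in MDX image component" grades high and "fixed stuff" grades low; most real messages land in "unclear", which is the point: the heuristic narrows where to spend reading time, it doesn't replace it.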

Beyond that, the repos someone contributes to matter more than how many repos they have. Contributing meaningfully to an active, real-world project used by other people is qualitatively different from maintaining a collection of tutorial clones. And breadth across languages and tooling can signal curiosity and adaptability, not because polyglots are inherently better engineers, but because it suggests someone picks up what the problem actually requires rather than reaching for what they already know.

The challenge is that reading this signal manually across dozens or hundreds of candidates doesn't scale. Most sourcing tools weren't built to surface it. This is the gap riem.ai was built to fill: instead of clicking through profiles, you search for "engineers who've contributed to real-time data pipelines" or "developers with WebSocket systems experience" and get candidates surfaced by their actual commit history.

Why "AI/ML skills required" is a sourcing trap

The 53% of job postings requiring AI/ML skills are competing with each other for the same 5-10% of candidates who genuinely have depth in those areas. The math doesn't work.

The smarter move, and the one most hiring teams aren't making, is to look for engineers who haven't labeled themselves as AI engineers but whose work shows they can learn and apply new techniques fast. These candidates are often engineers who've contributed to projects that adopted new frameworks or languages mid-stream. They show up in the commit history of multiple adjacent projects, suggesting they move fluidly between domains. Some of them have been doing "boring" infrastructure work in databases, networking, or build systems and have started incorporating ML-adjacent tooling in recent work. Others show strong fundamentals plus clear evidence of recent learning, like someone who started contributing to a Python ML library after years of backend Go work.

This is the hiring intelligence that Boolean searches can't surface. You're not looking for "Python + TensorFlow" on a resume. You're looking for a pattern of behavior over time that predicts adaptability. That's a fundamentally different kind of search, and it requires fundamentally different tools.

The senior engineer attention problem

Stack Overflow's 2024 Developer Survey found that 67% of senior engineers receive multiple offers before they ever post a resume publicly. This is the other side of the market split: the candidates who are most valuable aren't job hunting in any conventional sense.

They're employed. They're busy. If they're contributing to open source, it's because they find it genuinely interesting, not because they're broadcasting availability. They are, in the most literal sense, invisible to any sourcing workflow that depends on candidates taking action by posting a resume, updating LinkedIn, or applying to a job board.

This is why passive sourcing from public contribution data is where competitive recruiting teams are finding leverage right now. The engineer who's been a consistent contributor to a major open source project for three years is a known quantity. You can see exactly how they write code, how they collaborate, and how they handle problems. That's more than you'd learn in the first 20 minutes of any screening call. And if your outreach is based on what they've actually worked on rather than a generic "exciting opportunity" template, you have a real shot at a response.

Most sourcers lose the advantage they've built through good research right at this step. Even when you've found a strong candidate through their GitHub activity, the email that lands in their inbox often reads like it was sent to 500 people. Because it was.

Engineers, especially senior ones who get a lot of recruiter contact, have extremely well-calibrated radar for templated outreach. They can smell it instantly. The ones who actually respond to cold contact are almost always responding to messages that demonstrate the recruiter did real homework: referencing a specific project, a specific contribution, something that couldn't have come from a keyword match. Platforms like riem.ai generate outreach drafts directly from a candidate's actual commits and PRs for this reason. Whether you use a tool or write it yourself, the principle is the same. Show them you know what they built.

What a modern engineering sourcing workflow looks like in 2026

I've watched enough recruiting teams operate over the past few years to have a clear picture of what separates the ones filling roles in 30-45 days from the ones stuck in 90-day searches.

Start from behavior, not credentials. The first question isn't "does this person have X years of experience?" It's "has this person actually done work that resembles what we need?" Those questions often produce very different candidate lists. Experience years on a resume and observable skill demonstrated through real work are two different things, and they frequently don't correlate the way we assume they do.

Do the math on seniority versus availability. Senior engineers with high social visibility (big social media following, popular blog, lots of GitHub stars) are the most competed-for candidates in any given search. They receive a lot of recruiter contact and they're selective. The better move is often to look one level down: candidates who are excellent but not famous. These are people who show up consistently in important repos without making noise about it. They often have faster response rates and are more genuinely open to conversation because they're not fielding fifteen outreach messages a week.

Make your outreach have a specific point of view. The best recruiting messages I've seen read less like solicitations and more like: "I noticed you did X, and we're working on something related — here's why I think you'd find it interesting." It's specific enough to be credible and interesting enough to prompt a response.

Treat enrichment as an investment. Spending $5 to get a candidate's actual contact information and a deep coding analysis before reaching out is only wasteful if your outreach is going to be generic anyway. If you're doing real research, getting to someone's actual email rather than hoping they respond to a LinkedIn message pays back fast.

Measure what you can actually control. Not "how long did it take to fill this role" — that has too many variables. More like: "what percentage of sourced candidates converted to a first conversation?" and "which sourcing signals predicted which candidates moved forward?" This kind of feedback loop is how sourcing instincts become repeatable systems.
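Those two metrics are simple enough to compute from a spreadsheet export. The sketch below assumes each sourced candidate is tagged with the signal that surfaced them and whether outreach led to a first conversation; the field names and sample records are illustrative, not from any particular ATS.

```python
from collections import defaultdict

def sourcing_report(candidates: list[dict]) -> dict:
    """Conversion to first conversation, broken out by sourcing signal.

    Assumes each record has a 'signal' label and a boolean
    'first_conversation' (hypothetical schema).
    """
    stats = defaultdict(lambda: {"sourced": 0, "converted": 0})
    for c in candidates:
        s = stats[c["signal"]]
        s["sourced"] += 1
        s["converted"] += c["first_conversation"]  # bool counts as 0 or 1
    # Conversion rate per signal, rounded for readability.
    return {
        signal: round(s["converted"] / s["sourced"], 2)
        for signal, s in stats.items()
    }


pipeline = [
    {"signal": "oss_contribution", "first_conversation": True},
    {"signal": "oss_contribution", "first_conversation": True},
    {"signal": "oss_contribution", "first_conversation": False},
    {"signal": "keyword_match", "first_conversation": False},
    {"signal": "keyword_match", "first_conversation": False},
]
```

Even at this toy scale the report answers the question in the paragraph above: which signals predicted conversations. Run over a real quarter of outreach, it turns sourcing instinct into a number you can compare month over month.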

The next six months

The 11% rebound in software engineering job postings is real and will continue. Companies that froze hiring in 2023 and 2024 didn't make their underlying need for engineers go away. They deferred it. Those deferred needs are coming due now, especially at companies trying to build AI-native products and discovering they need people who can actually ship code, not just prototype with API wrappers.

The talent market split isn't going to resolve quickly. At the top, for experienced engineers with real AI/ML depth and for strong full-stack engineers who've shown adaptability, demand is outpacing supply and will stay that way for a while. At the entry level, hiring will stay depressed as long as AI coding tools keep improving. The engineers in the middle, experienced but not specialized, will face real competition from both directions.

For recruiting teams, the implication is simple: the tools and tactics from 2020 and 2021 won't work in this market. The candidates who matter aren't looking. The resumes that exist are increasingly unreliable. The outreach templates that got 20% response rates four years ago are getting deleted on arrival.

The teams winning right now are sourcing from contribution data, writing personalized outreach, and treating candidate research as a real competitive advantage rather than a checkbox. That's the playbook. Everything else is just volume.

Data sourced from Citadel Securities/Indeed job posting analysis (via Benzinga, March 2026), Stack Overflow Developer Survey 2024, LinkedIn Workforce Report, and Converge Resources 2026 hiring analysis.

Find the engineers who've already built it

Search 30M+ monthly GitHub events. Match on real code, not resumes.
