April 2026 · 18 min read

How to hire Julia developers in 2026: A sourcing guide

Julia was built to solve the two-language problem in scientific computing. It now powers COVID vaccine dosing at Moderna, macroeconomic modeling at the Federal Reserve Bank of New York, and climate simulations at Caltech. The developer pool is small and concentrated in academia and research labs, but these engineers are building systems that directly affect drug approvals, financial policy, and climate science. Here's how to find them.

Julia was created at MIT in 2012 by four researchers who wanted one language that was as fast as C, as general-purpose as Python, as easy for statistics as R, and as strong at linear algebra as MATLAB. That sounds like a sales pitch, but the results back it up. NASA achieved a 15,000x speedup over MATLAB for satellite conjunction analysis. The Federal Reserve Bank of New York ported its DSGE macroeconomic models to Julia in 2015 and has used it since. Moderna used the Pumas framework (built in Julia) to model COVID-19 vaccine dosing regimens. Twenty of the world's largest pharmaceutical companies now use JuliaHub's platform for drug development.

Julia's ecosystem lives almost entirely on GitHub. The core language, DifferentialEquations.jl, Flux.jl, JuMP.jl, Makie.jl, CUDA.jl, Turing.jl — every major package is developed in the open. That makes GitHub the best sourcing channel for Julia talent. We touched on scientific computing languages in our niche language sourcing guide, but Julia deserves its own treatment because of how different the candidate profile is from typical software engineering hires.

This guide covers where Julia developers work on GitHub, what separates a domain scientist who writes scripts from an engineer who builds production Julia systems, why the SciML ecosystem matters for pharma and climate recruiting, and how to build a sourcing workflow that reaches Julia engineers before they start looking.

The Julia developer market in 2026

Julia sits at 1.1% usage in the Stack Overflow 2024 Developer Survey. The language has been downloaded more than 100 million times. The package registry lists over 12,000 packages. More than 10,000 companies and 1,500 universities use Julia. By mainstream standards, Julia is small. But in scientific computing, quantitative finance, and pharmaceutical modeling, it dominates.

The candidate pool is unusual. Most Julia developers are not career software engineers. They are computational scientists, applied mathematicians, physicists, quantitative analysts, pharmacometricians, and climate researchers who need to write fast numerical code. Many hold PhDs. Many came from MATLAB, R, Fortran, or Python and switched because they were tired of prototyping in one language and rewriting for performance in another. This background shapes how you source and evaluate them. You are often hiring a domain expert who programs, not a programmer who knows a domain.

Compensation reflects this split. The average US salary for Julia developers is roughly $107,448, but that average hides wide variation. A Julia developer at a quant fund or pharma company doing pharmacometric modeling might earn $180,000 or more, because the role demands both Julia proficiency and domain knowledge that takes years to build. A Julia developer building internal tooling at a research lab might earn less. The premium lives at the intersection of Julia skill and specialized domain expertise: climate science, drug development, financial modeling, or computational biology.

Where do Julia developers work? Mostly in a few specific sectors. Pharmaceutical companies use Julia for pharmacokinetic and pharmacodynamic modeling: Moderna, Pfizer, and AstraZeneca all use Julia-based tools for drug development. Pumas-AI, the company behind the Pumas pharmacometrics framework, has supported 26 drugs submitted to the FDA. Quant firms use Julia for portfolio optimization, risk modeling, and derivatives pricing (BlackRock uses it for financial optimization). CliMA at Caltech is building the next generation of climate models entirely in Julia. And the Federal Reserve Bank of New York has run DSGE macroeconomic models in Julia since 2015.

The academic footprint matters for sourcing. Julia has deep roots at MIT (where it was created), Stanford, Caltech, and research institutions across Europe. JuliaCon, the annual conference, drew 43,000 unique viewers for its 2021 virtual event. Julia Discourse and Zulip are where contributors discuss technical problems in the open. Academic researchers who contribute to Julia packages often have public profiles, published papers, and conference talks that make them easy to find. Many are open to industry roles if the problem is interesting enough.

Hiring timelines run 60 to 120 days for Julia roles, and specialized positions (pharmacometrics, climate modeling, scientific ML) can take longer. The bottleneck is not that Julia developers are hard to reach. It's that the overlap between "strong Julia programmer" and "deep expertise in your specific domain" is small. A Julia developer who builds GPU kernels for computational fluid dynamics is not interchangeable with one who builds probabilistic models for drug dosing, even though both write Julia.

Where Julia developers contribute on GitHub

The strongest Julia engineers build in public. Because the community is smaller than Python or R, individual contributions carry more weight and are easier to trace. GitHub is one of the most effective sourcing channels for Julia engineers because nearly every tool in the ecosystem is developed openly.

The language itself. JuliaLang/julia has 48,500 stars and is one of the most-starred language repositories on GitHub. Contributors here work on the compiler, type system, standard library, and LLVM code generation. This is a small group. Even meaningful issue discussions and triage on this repo indicate someone who understands Julia at its foundations: the type inference engine, method dispatch, and compilation pipeline. Core language contributors are rare and strong candidates for any Julia role.

Scientific machine learning. The SciML organization is the heart of Julia's scientific computing ecosystem. SciML/DifferentialEquations.jl (3,100 stars) is the standard library for solving differential equations: ordinary, stochastic, delay, differential-algebraic, and more. The broader SciML organization maintains over 200 packages covering neural ODEs, sensitivity analysis, surrogate modeling, and physics-informed neural networks. Pharmaceutical companies, climate scientists, and aerospace engineers all depend on these tools. A contributor to DifferentialEquations.jl or NeuralPDE.jl understands both the math and the engineering needed to make solvers fast and reliable.

Machine learning. FluxML/Flux.jl (4,700 stars) is Julia's primary deep learning framework. Unlike PyTorch or TensorFlow, Flux is written entirely in Julia with no C++ backend. Contributors can read, modify, and optimize every layer of the stack in one language. Flux handles both traditional deep learning and more experimental architectures that benefit from Julia's composability. Contributors to Flux tend to understand automatic differentiation, GPU computation, and compiler optimization on top of machine learning.

Mathematical optimization. JuMP-dev/JuMP.jl (2,400 stars) is a domain-specific modeling language for mathematical optimization embedded in Julia. It supports linear, mixed-integer, conic, semidefinite, and nonlinear programming. Operations research teams, supply chain companies, energy planners, and portfolio managers all use JuMP. Contributors understand optimization theory, solver interfaces, and how to design expressive APIs for mathematical modeling. BlackRock's use of Julia for financial optimization runs through tools in this ecosystem.

Visualization. MakieOrg/Makie.jl (2,700 stars) is Julia's GPU-powered visualization library, capable of interactive 2D and 3D plots, animations, and publication-quality figures. It was built from scratch rather than extending the older Plots.jl ecosystem, with a design optimized for performance and interactivity. Contributors here understand graphics programming, GPU rendering pipelines, and scientific visualization, a combination that is hard to find outside the Julia community.

Interactive notebooks. fonsp/Pluto.jl (5,300 stars) is a reactive notebook environment for Julia. Unlike Jupyter, Pluto notebooks are reactive: changing one cell automatically updates all dependent cells, and notebooks are stored as plain Julia files (not JSON). Pluto is widely used in university courses and research. Contributors to Pluto understand reactive programming, UI design, and how developer tools work in educational settings.

GPU computing. JuliaGPU/CUDA.jl lets developers write GPU kernels directly in Julia — no CUDA C required. This is one of Julia's most distinctive capabilities. A developer who writes custom CUDA.jl kernels understands GPU architecture, memory hierarchies, and parallel computation at a level well beyond calling pre-built GPU functions from Python. The JuliaGPU organization also includes packages for AMD GPUs, Intel GPUs, and Apple Metal.

Probabilistic programming. TuringLang/Turing.jl (2,100 stars, 450+ citations in academic papers) is a probabilistic programming framework developed at the Alan Turing Institute in the UK. It handles Bayesian inference, MCMC sampling, and variational inference. Contributors understand statistical modeling, sampling algorithms, and computational statistics. Turing.jl shows up in academic research, pharmaceutical modeling, and Bayesian analysis of real-world data.

Ecosystem packages. Beyond the marquee projects, Julia's General registry contains over 12,000 packages. Package authors who maintain well-documented, well-tested libraries with active users are strong candidates. Beacon Biosignals, a neuroscience company, maintains 29 Julia packages. Companies like this are both talent pools and proof that Julia runs in production at scale. Cross-referencing the General registry with GitHub profiles is manual but effective.

Quality signals in Julia code

Evaluating Julia code requires understanding what makes it different from Python, R, or MATLAB. A recruiter who looks at Julia through the lens of general-purpose programming will miss the signals that matter. Seniority signals on GitHub apply broadly, but Julia has its own markers of expertise tied to the type system, compilation model, and scientific computing roots.

Multiple dispatch mastery. Multiple dispatch is Julia's central design pattern. Instead of methods belonging to classes (like Python or Java), Julia functions have multiple methods selected based on the types of all arguments. An experienced Julia developer designs packages around multiple dispatch: defining abstract types, writing methods that specialize on type combinations, using dispatch to achieve polymorphism without inheritance. A developer who writes Julia like Python (one big function with if isinstance(x, ...) checks translated to Julia) is showing beginner habits. Clean, composable multiple dispatch is the strongest single signal of Julia fluency.
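To make the contrast concrete, here is a small sketch (the `Shape`/`area` names are invented for this example) showing idiomatic dispatch next to the isinstance-style translation the paragraph warns about:

```julia
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Rect <: Shape
    w::Float64
    h::Float64
end

# Idiomatic Julia: one generic function, one small method per concrete type.
# The compiler picks the method from the argument types.
area(c::Circle) = pi * c.r^2
area(r::Rect)   = r.w * r.h

# The beginner habit described above: a Python isinstance chain, transliterated.
# It works, but it defeats dispatch, specialization, and extensibility.
function area_pythonic(s)
    if s isa Circle
        return pi * s.r^2
    elseif s isa Rect
        return s.w * s.h
    end
end
```

The dispatch version is also open for extension: anyone can add a new `Shape` subtype and an `area` method for it without touching existing code, which is why well-designed Julia packages compose so freely.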

Type system and type stability. Julia's type system is not like TypeScript's or Java's. Types in Julia exist for performance, not safety. A function is "type stable" when the compiler can infer the return type from the input types at compile time. Type-stable code runs orders of magnitude faster because the compiler generates specialized machine code. An experienced Julia developer writes type-stable functions, uses abstract types to define interfaces, creates concrete type hierarchies for data structures, and knows when to reach for parametric types. If a developer's code triggers frequent runtime dispatch (visible via @code_warntype), they are writing slow Julia.
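A minimal sketch of the distinction (function names invented for the example):

```julia
# Type-unstable: the return type depends on a runtime *value*, not on the
# input types, so the compiler must handle Union{Float64, String}.
clamp_unstable(x) = x > 0 ? x : "negative"

# Type-stable: Float64 in, Float64 out, fully inferable at compile time,
# so the compiler emits specialized machine code.
clamp_stable(x::Float64) = x > 0 ? x : 0.0

# In the REPL, `@code_warntype clamp_unstable(1.0)` highlights the Union
# return type in red; `@code_warntype clamp_stable(1.0)` shows a clean Float64.
```

Reviewing whether a candidate's hot loops look like the second function rather than the first is one of the fastest code-reading checks for Julia fluency.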

Metaprogramming. Julia has a Lisp-like macro system that operates on the abstract syntax tree. Experienced Julia developers use macros to eliminate boilerplate, create domain-specific languages, and generate specialized code at compile time. JuMP.jl is built on macros: the @variable, @constraint, and @objective macros transform mathematical expressions into solver-compatible data structures. But macros are a power tool with sharp edges. Overuse is a negative signal. An experienced developer reaches for macros only when functions and multiple dispatch fall short. Well-designed macros that generate clean, debuggable code are a strong quality marker.
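A small illustration of the pattern (a toy macro in the spirit of the built-in `@time`; the name is ours):

```julia
# The macro receives the *expression* unevaluated and rewrites it at parse
# time to wrap it with clock calls, then returns the expression's value.
macro elapsed_print(ex)
    quote
        local t0 = time_ns()
        local val = $(esc(ex))   # esc keeps the user's variables in their scope
        println("elapsed: ", (time_ns() - t0) / 1e9, " s")
        val
    end
end

result = @elapsed_print sum(1:1_000)   # prints a timing line, returns the sum
```

A reviewer can apply the same test suggested above: is the macro generating code a plain function could not, and is the expanded code (visible via `@macroexpand`) still readable?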

GPU kernel development. Writing custom GPU kernels with CUDA.jl is one of the most technically demanding things you can do in Julia. It requires understanding GPU memory hierarchies (global, shared, local), thread and block organization, memory coalescing, and synchronization primitives. A developer who writes custom CUDA.jl kernels rather than calling pre-built GPU functions has systems-level thinking and scientific computing knowledge. This is a small group, and they are in high demand for scientific ML, computational physics, and high-performance computing roles.
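As a sketch only (this requires an NVIDIA GPU and the third-party CUDA.jl package, so it will not run on a typical laptop; the kernel name is ours), a hand-written SAXPY kernel looks like this:

```julia
using CUDA  # assumes CUDA.jl is installed and an NVIDIA GPU is present

# Each GPU thread computes its own global index and updates one element.
# This is the thread/block arithmetic the paragraph refers to.
function saxpy_kernel!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)                    # guard: the grid may overshoot
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

x = CUDA.fill(1.0f0, 1024)
y = CUDA.fill(2.0f0, 1024)
threads = 256
blocks = cld(length(y), threads)         # enough blocks to cover all elements
@cuda threads=threads blocks=blocks saxpy_kernel!(y, 3.0f0, x)
```

Note that the kernel body is ordinary Julia: the same dispatch and inference machinery compiles it to PTX, which is exactly why the skill transfers between CPU and GPU code in this community.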

Package development and ecosystem contribution. Julia has a well-defined package development workflow: the Pkg module handles environments, dependencies, and version resolution. An experienced Julia developer structures packages properly: src/ and test/ directories, Project.toml with correct dependency bounds, docstrings that render in Julia's help system, and CI configured for multiple Julia versions. Packages registered in the General registry go through a review process. A developer whose packages are registered, maintained, and depended on by others is building the ecosystem's infrastructure.
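For reference, a minimal `Project.toml` of the kind registry review expects looks roughly like this (the package name, UUIDs, and bounds below are placeholders for illustration; `pkg> add` writes the real UUIDs):

```toml
name = "ExamplePkg"                             # hypothetical package name
uuid = "00000000-0000-0000-0000-000000000000"   # placeholder; Pkg generates this
version = "0.1.0"

[deps]
DataFrames = "00000000-0000-0000-0000-000000000001"  # placeholder dep UUID

[compat]
# The dependency bounds that General registry review checks for.
DataFrames = "1"
julia = "1.10"
```

A candidate whose repositories carry a `[compat]` section with sensible bounds, a `test/` directory, and CI across Julia versions is following the ecosystem's conventions rather than publishing scripts.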

Performance profiling and optimization. Julia promises C-like speed, but getting there requires understanding the compilation model. An experienced developer uses @time, @benchmark (from BenchmarkTools.jl), @profile, and @code_warntype to find allocations, type instabilities, and performance bottlenecks. They know the difference between heap and stack allocations, know when to use StaticArrays over standard arrays, and can read LLVM IR output to verify that the compiler is generating the code they expect. Performance-aware Julia code separates engineers who use Julia for its speed from those who just like the syntax.
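A small sketch of the allocation-hunting habit, using only built-in tools (the function names are invented; `@allocated` ships with Julia, while `@benchmark` needs the third-party BenchmarkTools.jl):

```julia
# Two ways to sum 1..n: growing a Vector vs. an allocation-free lazy range.
function sum_push(n)
    v = Int[]
    for i in 1:n
        push!(v, i)          # repeated push! reallocates as the vector grows
    end
    return sum(v)
end

sum_range(n) = sum(1:n)      # sums a lazy range; no heap allocation needed

# Warm up first so compilation doesn't count, then measure heap bytes.
sum_push(10); sum_range(10)
@show @allocated sum_push(100_000)    # substantial: vector growth
@show @allocated sum_range(100_000)   # typically 0
```

A developer who habitually reaches for this kind of measurement before optimizing, rather than guessing, is showing exactly the performance discipline described above.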

Testing and documentation. Julia's test framework is built into the standard library. Strong Julia developers write tests with @testset blocks, test type stability explicitly, and include doctests (examples in docstrings that run automatically). Documentation generated by Documenter.jl with clear examples, mathematical notation where appropriate, and cross-references to related functions shows a developer who builds for others, not just themselves.
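Here is what those habits look like in miniature (the function under test is hypothetical; `Test` and its `@inferred` macro are part of the standard library):

```julia
using Test

# Hypothetical function under test.
halve(x::Real) = x / 2

@testset "halve" begin
    @test halve(4) == 2.0
    @test halve(1//2) == 1//4
    # Strong Julia test suites check type stability explicitly:
    # @inferred fails the test if the return type can't be inferred.
    @test @inferred(halve(3.0)) == 1.5
end
```

Seeing `@inferred` (or `JET.jl` checks) in a candidate's test suite is a quick signal that they test performance properties, not just correctness.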

Scientific ML and the SciML ecosystem

The SciML ecosystem deserves its own section because it is where Julia has the strongest advantage over any other language, and where the highest-value hiring opportunities are. SciML (Scientific Machine Learning) sits at the intersection of traditional scientific computing (differential equations, optimization, simulation) and modern machine learning. Julia is the dominant language in this space, and the SciML organization on GitHub is the center of gravity.

The SciML organization maintains over 200 packages. DifferentialEquations.jl is the flagship: it solves ordinary, stochastic, delay, and differential-algebraic equations with a unified interface and automatic algorithm selection. This package alone made Julia the default choice for computational scientists who need to solve differential equations at scale. It runs 10-100x faster than SciPy or MATLAB equivalents and composes cleanly with automatic differentiation, making it the best tool in any language for this class of problems.

Neural ODEs and physics-informed neural networks are where SciML gets interesting for hiring. NeuralPDE.jl implements physics-informed neural networks (PINNs) that embed physical laws directly into neural network training. Instead of learning everything from data, PINNs enforce that solutions satisfy known differential equations, which cuts data requirements and improves generalization. Pharmaceutical companies modeling drug absorption, climate scientists modeling atmospheric dynamics, and aerospace companies modeling fluid flow all need this. Contributors to NeuralPDE.jl understand both deep learning and applied mathematics at a level that few engineers in any language reach.

Pharma is the clearest example of SciML's real-world impact. Pumas-AI built the Pumas framework on top of Julia's SciML ecosystem for pharmacokinetic and pharmacodynamic modeling. Twenty of the world's largest pharma companies use JuliaHub's platform, which runs Pumas. Twenty-six drugs have been submitted to the FDA with Pumas-supported analyses. Moderna used Pumas for COVID-19 vaccine dosing optimization. Pfizer and AstraZeneca are users. If you are hiring for pharma and biotech Julia roles, SciML and Pumas contributors are the primary talent pool. These developers understand both the computational methods and the regulatory context their code operates in.

Climate science is another area where SciML has concentrated impact. CliMA (Climate Modeling Alliance) at Caltech is building the next generation of Earth system models entirely in Julia. This is not a small academic project. It is a multi-institution effort to replace the Fortran climate models used for decades with modern, GPU-accelerated, differentiable simulations. Contributors to CliMA and related packages (Oceananigans.jl for ocean modeling, ClimaAtmos.jl for atmospheric modeling) are doing work that will directly inform climate policy. These are some of the most capable scientific programmers in any language.

Sensitivity analysis and uncertainty quantification round out the SciML ecosystem. Packages like SciMLSensitivity.jl compute how model outputs change with respect to parameters, which is essential for optimization, inverse problems, and understanding model reliability. In pharma, sensitivity analysis tells you how drug dosing affects patient outcomes. In climate science, it tells you which model parameters most affect temperature projections. In finance, it tells you which market factors most affect portfolio risk. Contributors who work on these tools tend to have strong backgrounds in numerical methods, automatic differentiation, and applied statistics.

For hiring, the SciML ecosystem is where many recruiters first realize how different Julia sourcing is. These are not hobby projects or tutorial exercises. They are production tools used by NASA, the Federal Reserve, Moderna, and Caltech. The developers who build and maintain them are solving real problems in the open on GitHub. If your company works in any domain that involves differential equations, simulation, optimization, or scientific modeling, this ecosystem is your primary sourcing target.

How to search for Julia developers on GitHub

GitHub search works for Julia, but the candidate profile is different enough from mainstream languages that you need to adjust your approach. The niche language sourcing strategies we covered previously apply, with modifications for Julia's academic-leaning community.

Language filter. Filtering by language:julia in GitHub repository or code search surfaces developers who actively write Julia. At 1.1% Stack Overflow usage, the signal-to-noise ratio is high. Almost everyone who shows up chose Julia deliberately rather than stumbling into it. But many Julia developers have Python, R, or MATLAB as their primary language on GitHub, with Julia as a secondary or growing presence. Filtering only by primary language will miss developers who are transitioning.
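For instance, using GitHub's own search qualifiers (thresholds and dates here are arbitrary placeholders, and the `#` lines are annotations, not part of the query):

```text
# Repository search: active Julia projects with some traction
language:Julia stars:>50 pushed:>2025-06-01

# Code search: people writing hand-rolled GPU kernels
language:Julia "@cuda threads="

# User search: profiles with substantial Julia activity
language:Julia type:user
```

Combining a qualifier search with a scan of each hit's pinned repositories catches the transitioning Python/MATLAB developers the paragraph describes, whose primary-language stat alone would filter them out.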

Julia General registry. Julia's package registry is the equivalent of npm, PyPI, or hex.pm. Every registered package has a public GitHub repository, a listed author, and version history. Packages with many dependents indicate developers who build infrastructure the ecosystem relies on. The registry itself is a GitHub repository (JuliaRegistries/General), and the PR history shows who is actively publishing and maintaining packages.

Organization-based search. Julia's GitHub presence is organized around topical organizations: JuliaLang (core language), SciML (scientific ML), FluxML (machine learning), JuMP-dev (optimization), JuliaGPU (GPU computing), JuliaStats (statistics), JuliaData (data manipulation), JuliaDiff (automatic differentiation), TuringLang (probabilistic programming). Each organization has its own contributor base. Searching within these organizations rather than across all of GitHub gives you a pre-filtered list of developers who work in specific domains.

Academic cross-referencing. Many Julia contributors have academic profiles (Google Scholar pages, ORCID IDs, university affiliations) linked from their GitHub profiles. A developer whose Julia packages are cited in peer-reviewed papers has demonstrated both technical quality and domain relevance. For pharma and climate roles, checking whether a candidate's work has been cited in the relevant literature is a signal no other sourcing method provides. This is unusual for software engineering hiring but normal for Julia hiring, where research and engineering blur together.

Community sources. JuliaCon talks are recorded and publicly available (the 2021 virtual event had 43,000 unique viewers). Speaker lists identify active community members who can present their work clearly. Julia Discourse (discourse.julialang.org) is the primary discussion forum, and frequent posters with helpful technical answers are easy to spot. Julia Zulip is more real-time. Slack is another hub, though less publicly searchable. Conference speakers and forum contributors are underused sourcing channels because they are publicly available lists of engaged practitioners.

Adjacent language signals. Julia developers frequently come from Python (NumPy/SciPy users who hit performance walls), MATLAB (researchers who needed open-source tooling or better performance), R (statisticians who needed speed), and Fortran (scientists who wanted a modern alternative to legacy codebases). Search for developers whose primary language is Python, MATLAB, or R but who have Julia repositories. They may be actively transitioning or bilingual. A computational physicist with 50 Python repos and 5 Julia repos is probably further along the Julia learning curve than their profile suggests.

A practical Julia sourcing workflow

Here is what works, from discovery to first outreach.

Step 1: Define the domain, not just the language. "Julia developer" is almost never the right search. Julia is a means to an end, and the end is always domain-specific. Are you hiring for pharmacometrics and drug modeling (SciML, Pumas, DifferentialEquations.jl)? Quantitative finance (JuMP.jl, portfolio optimization)? Climate science (CliMA, Oceananigans.jl)? Machine learning research (Flux.jl, Turing.jl)? High-performance computing and GPU programming (CUDA.jl, KernelAbstractions.jl)? Each maps to different GitHub organizations, package ecosystems, and candidate backgrounds. A pharmacometrician who models drug absorption has almost nothing in common with a GPU kernel developer who optimizes fluid dynamics simulations, even though both write Julia.

Step 2: Identify target repositories and organizations. Based on the domain, list the GitHub organizations and specific repositories where your ideal candidate would contribute. For pharma: SciML, PumasAI, DifferentialEquations.jl, and related sensitivity analysis packages. For quant finance: JuMP-dev, Optim.jl, and related optimization packages. For climate: CliMA repositories, Oceananigans.jl, ClimaAtmos.jl. For ML research: FluxML, TuringLang, Zygote.jl (automatic differentiation). For general infrastructure: JuliaLang, DataFrames.jl, Makie.jl. Map the role to the repos.

Step 3: Extract contributors. Use GitHub's contributor graphs or tools like riem.ai that index GitHub event data at scale. Focus on recent contributors (last 3 to 6 months) with meaningful contributions: code changes, reviews, and technical discussions, not documentation typos. Because Julia's community is small, your initial list may be 15 to 40 people for a specific domain. That is normal. The quality-to-volume ratio at this community size is much higher than for Python or JavaScript.
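The extraction step can be scripted against GitHub's public REST API. A minimal sketch using only the standard library (unauthenticated requests are limited to 60/hour, so real use needs a token header; the regex-based field extraction is deliberately crude to avoid a JSON dependency):

```julia
using Downloads

# Pull the public contributor list for one target repository.
url = "https://api.github.com/repos/SciML/DifferentialEquations.jl/contributors?per_page=30"
body = sprint(io -> Downloads.download(url, io))

# Crude extraction of the "login" fields from the JSON response.
logins = [m.captures[1] for m in eachmatch(r"\"login\":\s*\"([^\"]+)\"", body)]

println(length(logins), " contributors found")
```

Running this per target repo and de-duplicating across the list from Step 2 yields the 15-to-40-person shortlist described above; each login can then be fetched from `/users/{login}` for profile review.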

Step 4: Evaluate profiles with domain awareness. Review each candidate's GitHub profile for the quality signals described above: multiple dispatch design, type stability, GPU kernels, package development. General seniority signals like code review participation and cross-project contributions apply, but add domain-specific evaluation. For pharma roles, check whether the candidate's work has been cited in pharmacometrics journals. For climate roles, check for connections to CliMA or IPCC-related projects. For quant roles, look for optimization modeling experience with real constraints. Julia hiring is domain hiring. The language is the tool, not the point.

Step 5: Look at academia. This step does not exist in most software engineering sourcing workflows, but it matters for Julia. Many of the strongest Julia developers are postdocs, research scientists, or professors who might consider industry roles. University lab pages, Google Scholar profiles, and JuliaCon speaker bios often include GitHub links. A postdoc who maintains a well-used Julia package and has published papers using it brings technical depth and domain credibility that is hard to replicate. Do not skip academic candidates because they lack "industry experience." In Julia's world, the research lab is often more technically demanding than industry.

Step 6: Expand to adjacent backgrounds. Do not limit sourcing to pure Julia developers. Python developers with strong NumPy/SciPy backgrounds who have Julia side projects are viable candidates. The transition from Python's scientific stack to Julia is well-trodden. MATLAB users in engineering and physics often adopt Julia as a faster, open-source alternative. Fortran developers in climate science and computational physics bring decades of numerical computing experience. R users in statistics and biostatistics can transition to Julia for performance-sensitive work. The key signal is whether they have started writing Julia at all. If a Python developer has even two or three Julia repositories, they have already crossed the activation energy barrier.

Step 7: Craft domain-specific outreach. Generic messages get ignored. Effective developer outreach references specific contributions, but for Julia developers you also need to reference the domain. "I noticed your contributions to DifferentialEquations.jl, particularly the stiff ODE solver improvements" is good. "We're building pharmacokinetic models for a novel oncology drug and need someone who understands both the biology and the numerical methods" is better. Julia developers respond to interesting problems, not job titles or salary numbers. Many chose Julia because they care about the science, not the software engineering career track. Lead with the problem.

Step 8: Scale with tooling. The manual workflow above works but runs out of runway quickly in a small community. Tools like riem.ai automate the discovery and evaluation steps by analyzing 30 million-plus GitHub events per month and surfacing Julia developers based on actual contribution patterns. Instead of manually browsing SciML organizations and cross-referencing package registries, you describe the technical and domain profile in natural language ("Julia developers who contribute to pharmacometric modeling packages or differential equation solvers with experience in drug development") and get a ranked list with contribution summaries and quality scores.

Frequently asked questions

How many Julia developers are there?

Julia has been downloaded over 100 million times and the package registry lists more than 12,000 packages. The Stack Overflow 2024 Developer Survey puts Julia usage at 1.1% of respondents. The active community is concentrated in scientific computing, quantitative finance, and pharmaceutical modeling, with over 10,000 companies and 1,500 universities using the language. The developer pool skews heavily toward PhDs and domain scientists who program rather than traditional software engineers, which changes how you source them.

What salary should I expect to pay Julia developers?

The average US salary for Julia developers is roughly $107,448, but specialized roles in scientific computing, quantitative finance, and pharmaceutical modeling pay more. Julia developers at quant firms or pharma companies doing pharmacometric modeling often earn well above $150,000. The salary reflects both technical skill and domain expertise. Most Julia developers bring deep knowledge of mathematics, physics, biology, or finance alongside their programming ability.

What makes Julia different from Python for scientific computing?

Julia was designed to solve the "two-language problem," where scientists prototype in Python, then rewrite performance-critical code in C or Fortran. Julia gives you Python-like syntax with C-like speed through just-in-time compilation via LLVM. It runs 10-100x faster than Python for numerical computing without a rewrite in a second language. NASA reported a 15,000x speedup over MATLAB for certain simulations. For hiring, this means Julia developers write the prototype and the production code in the same language, so there are fewer handoffs and faster iteration.

Should I hire Julia developers or train Python developers to use Julia?

It depends on what you need. For roles requiring deep use of Julia's type system, multiple dispatch, metaprogramming, or GPU kernel development with CUDA.jl, you want someone with real Julia experience. These features have no Python equivalent and take time to learn. For roles that are mainly about domain expertise (climate modeling, pharmacometrics, optimization) where Julia is the implementation language, a strong Python scientist with Julia side projects can ramp effectively. Check GitHub for Julia packages or SciML contributions as evidence of self-directed learning. The question is whether the role demands Julia-specific engineering skill or domain expertise implemented in Julia.

What Julia projects and repositories should I look for on GitHub?

For the language itself: JuliaLang/julia (48,500 stars). For scientific ML: SciML/DifferentialEquations.jl and the broader SciML organization (200+ packages). For machine learning: FluxML/Flux.jl. For mathematical optimization: JuMP-dev/JuMP.jl. For visualization: MakieOrg/Makie.jl. For interactive notebooks: fonsp/Pluto.jl. For GPU computing: JuliaGPU/CUDA.jl. For probabilistic programming: TuringLang/Turing.jl (2,100 stars, 450+ citations). Contributors to any of these repositories are working on core infrastructure that the Julia ecosystem depends on. Package authors listed in the Julia General registry with well-maintained, documented libraries are strong candidates.

How long does it take to hire a Julia developer?

Expect 60 to 120 days for specialized Julia roles, especially those requiring both language expertise and domain knowledge (pharmacometrics, climate science, quantitative finance). The bottleneck is the small overlap of people who are strong Julia programmers and also have the relevant scientific or industry background. Sourcing from GitHub contribution data, JuliaCon speaker lists, Julia Discourse participants, and SciML ecosystem contributors can shorten the timeline by finding candidates who are actively building but not on job boards. For less specialized roles where Julia is one of several acceptable languages, timelines are shorter.

Find the engineers who've already built it

Search 30M+ monthly GitHub events. Match on real code, not resumes.

Get started