m1) * run_segment(m1, e)
return path_a + path_b
end
The result is **median 24.167 μs (4 allocs: 10.094 KiB)**
So, by using a vector of vectors, storing results in one block of Int128, and making sure no allocation is needed by calling the functions without external arguments, the whole thing went from ~580 to 24(!) microseconds.
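The Int128 choice is worth spelling out: unlike BigInt, Int128 is a plain bits type, so a preallocated Vector{Int128} stores results inline with no per-element heap allocation. A minimal check:

```julia
# Int128 is an immutable bits type: a Vector{Int128} stores values inline,
# so writing results into a preallocated vector allocates nothing per element.
# BigInt, by contrast, is a separately heap-allocated arbitrary-precision object.
@assert isbitstype(Int128)
@assert !isbitstype(BigInt)

# Plenty of headroom for this puzzle's answer (371113003846800):
@assert Int128(371113003846800) < typemax(Int128)
```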
I learned a lot! Hope you enjoyed this trip down the performance rabbit hole! Is there something else I could have done?
https://redd.it/1pls1tj
@r_Julia
Beginner Julia: Installing on Windows
Hi,
I'm trying to set up Julia on Windows 11, and the recommended way seems to be Juliaup. But when it's installed this way (either via the download or the MS Store), App Installer runs automatically to check for updates whenever I invoke Julia; surely this can't be intentional? Firstly, couldn't it just break dependencies? And the huge lag every time I open the terminal is really annoying. I tried disabling "Auto Updates" for Julia through the Windows Settings app, to no avail.
I also tried the standalone installer, which doesn't have this problem, so I'm thinking of rolling with that. I just wanted to double-check that it's a good idea: is there something I should be aware of?
https://redd.it/1pmd06p
@r_Julia
Help with work
I’ve got a project in the Julia language, due in a few weeks, that I need help with. If someone can help me with it and guide me, I can pay. Thanks
https://redd.it/1pmenxg
@r_Julia
Some tricks for --trim
### Problems when using --trim
I've just found out that a lot of stuff does not work with juliac --trim. But after digging down the rabbit hole, I've found that most of the "not working" comes down to one of two things:
1. Somewhere in the code, the compiler just cannot figure out what the return type is.
2. Some pretty-printing in exception handling.
### The trick
For the 1st problem, it is quite easy for the code that you write yourself: just add type assertions. For example:
```julia
# Although `MyConfig.width` has a concrete type,
# the compiler won't assume that the key "width"
# exists and has the correct type
MyConfig(width = config_file["width"] :: Int)
```
But adding this makes your code crash when exceptions occur (e.g. config_file does not have the key width). So you add @assert. But then, @assert does not work with --trim, because exception handling does not work. The solution is to just... monkey-patch it:
```julia
macro Base.assert(condition, message)
    return quote
        if $(esc(condition))
        else
            println(Core.stderr, $(esc(message)))
            exit(1)
        end
    end
end
```
Now, this piece of code will compile nicely. The only caveat is that you have to add a throwaway haskey so that the compiler knows that you are using it:
```julia
_ = haskey(cfg, "display_width")
@assert haskey(cfg, "display_width") "Missing config key 'display_width'"
@assert haskey(cfg, "display_height") "Missing config key 'display_height'"
```
The same principle applies in some other cases. Examples:
```julia
# Make printing work without having to specify Core.stdout
Base.print(x) = print(Core.stdout, x)
Base.println(x) = println(Core.stdout, x)
```
Patching FixedPointNumbers:
```julia
@noinline function FixedPointNumbers.throw_converterror(
    ::Type{X}, x
) where {X <: FixedPoint}
    print(Core.stderr, "ConversionError: Cannot convert $x to $X\n")
    exit(2)
    return nothing
end
```
Using these tricks, I built a toy predator-prey-grass simulation (similar to the one from Agents.jl) mostly from scratch with Raylib.jl, and it compiles nicely to a tiny 5 MB file.
### My suggestions
- I think it is necessary to have a public API to tell when the code is being compiled. So instead of users monkey-patching packages, the authors can write --trim-compatible code.
- Having tools to check for --trim compatibility (most if not all of the function calls in the package can be compiled with --trim).
I think that if a wide range of packages support --trim, it is safe to say the static compilation problem is solved.
https://redd.it/1pqd58z
@r_Julia
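One alternative worth noting for the config-lookup case specifically (my sketch, not from the post): `get` with a sentinel default never throws a KeyError in the first place, so no exception machinery is needed under --trim:

```julia
config_file = Dict{String, Any}("width" => 80)  # stand-in config for illustration

# `get` with a sentinel default never throws; branch explicitly instead of
# relying on a KeyError or @assert, which --trim cannot handle.
raw = get(config_file, "width", nothing)
if raw === nothing
    println(Core.stderr, "Missing config key 'width'")
    exit(1)
end
width = raw::Int   # the type assertion still helps the compiler infer Int
```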
Am I doing something wrong?
Context: I'm a data scientist who works mostly in R library development, so don't judge me here.
I've always wanted to give Julia a real shot, so I tried this weekend and used it for EDA on a for-fun project that I do every year around this time of year.
I don't want to develop a library, so, for a normal DS or EDA project, after mkdir and cd I did:
> $ julia
> julia> using Pkg; Pkg.activate(".")
So now, for library importing, still in the Julia REPL, I do:
> julia> Pkg.add("DataFrames")
And then, after this runs, should I use "import DataFrames" or "using DataFrames" in my /projectroot/main.jl file?
And how am I supposed to run the project? Just inside Helix with
> :sh julia main.jl
?
I got some errors with dependencies, like "cannot read from file", IIRC. I'm on Fedora.
Am I missing something? Is this the supposed way of doing this?
Edit: formatting of MD blocks
https://redd.it/1psy78d
@r_Julia
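On the `import` vs `using` question the post raises: both load the package; they differ only in what lands in your namespace. A minimal sketch with a stdlib module:

```julia
import Statistics            # loads the module; only the name `Statistics` is in scope
Statistics.mean([1, 2, 3])   # calls must be qualified

using Statistics             # also brings exported names (mean, std, ...) into scope
mean([1, 2, 3])              # now works unqualified
```

Either is fine for an EDA script; `using DataFrames` is the common choice. For running the script, `julia --project=. main.jl` ensures the project environment (and its dependencies) is actually used; a plain `julia main.jl` falls back to the global environment, which is a likely source of the dependency errors described.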
ANN Ark.jl v0.3.0: Archetype-based ECS, now with entity relationships and batch operations
Ark.jl v0.3 is our biggest feature release yet. It introduces first‑class entity relationships, expands batch operations far beyond entity creation, and delivers substantial performance improvements.
## Why ECS?
Skip this if you know it already!
Entity Component Systems (ECS) offer a clean, scalable way to build individual- and agent-based models by separating agent data from behavioral logic. Agents are simply collections of components, while systems define how those components interact, making simulations modular, extensible, and efficient even with millions of heterogeneous individuals.
Ark.jl brings this architecture to Julia with a lightweight, performance-focused implementation that empowers scientific modellers to design complex and performant simulations without the need for deep software engineering expertise.
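The separation of data from behavior can be seen in a minimal sketch of the ECS idea (this illustrates the pattern only; it is not Ark.jl's actual API): entities are just integer ids, components live in per-type stores, and a "system" is a function over the entities that carry a given component set.

```julia
# Components are plain data, with no behavior attached.
struct Position; x::Float64; y::Float64; end
struct Velocity; dx::Float64; dy::Float64; end

# One store per component type, keyed by entity id.
positions  = Dict{Int, Position}()
velocities = Dict{Int, Velocity}()

positions[1] = Position(0.0, 0.0); velocities[1] = Velocity(1.0, 2.0)
positions[2] = Position(5.0, 5.0)   # entity 2 has no Velocity component

# Movement "system": acts on every entity that has both components.
function move!(positions, velocities, dt)
    for (id, v) in velocities
        haskey(positions, id) || continue
        p = positions[id]
        positions[id] = Position(p.x + v.dx * dt, p.y + v.dy * dt)
    end
end

move!(positions, velocities, 1.0)
```

Archetype-based implementations like Ark.jl store such component sets contiguously rather than in per-type Dicts, which is where the query speed comes from.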
## Release highlights
### Entity relationships
This release adds first‑class support for entity relationships, allowing you to express connections between entities directly using ECS primitives. While it is possible to express relations by storing entities inside components, the tight integration into the ECS provides several benefits. Most importantly, relationships can now be queried as efficiently as component queries. In addition, relationships become more ergonomic, more consistent, and safer to use.
For details, see the user manual's chapter on Entity relationships.
### Batch operations
Previous versions of Ark.jl already offered blazing‑fast batch entity creation. This release generalizes the concept to all operations that modify entities or their components. You can now remove all entities matching a filter, add components to all matching entities, and more, using a single batched call. These operations are typically at least an order of magnitude faster than performing the same changes individually.
For details, see the user manual's chapter on Batch operations.
### Cached queries
Queries in archetype‑based ECS are already highly efficient, but this release introduces cached queries for even greater performance, especially in worlds with many archetypes. Instead of checking the components of all archetypes in the pre-selection (which is based on the most "rare" component in a query), cached queries maintain a list of all matching archetypes. This means matching checks are only needed when a new archetype is created, eliminating overhead during query iteration.
### Performance improvements
Numerous optimizations to component operations and the archetype graph yield significant speedups. Component operations are now 1.5–2× faster, and entity creation is up to 3× faster than before.
### More
For a full list of all changes, see the CHANGELOG.
See the release announcement in the Julia Discourse for discussions.
As always, your feedback and contributions are highly appreciated!
https://redd.it/1pyl8b1
@r_Julia
GitHub: ark-ecs/Ark.jl -- Archetype-based Entity Component System (ECS) for Julia.
A review of trimming in Julia
https://viralinstruction.com/posts/aoc2025/
https://redd.it/1pmay55
@r_Julia
Going down the performance rabbit hole - AOC 2025 day 11
This is my first post here, but I just wanted to show how avoiding allocations and using some clever optimizations can take Julia to MONSTER speed. Please feel free to comment and criticize. Day 11 of AoC is a clear example of dynamic programming with a potentially monstrous result (quintillions?).
Naively, one could do a life-of-the-universe-time enumeration:
function find_length(input,start_node,end_node)
d=Dict()
for line in input
ss=split(line," ")
push!(d, ss[1][1:end-1] => ss[2:end] )
end
queue=[]
paths=[[start_node]]
while !isempty(paths)
path=popfirst!(paths)
last_visited=path[end]
if last_visited==end_node
push!(queue,path)
else
for v in d[last_visited]
new_path=copy(path)
push!(new_path,v)
push!(paths,new_path)
end
end
end
return length(queue)
end
But then (adding milestones as per part 2)
function part2(input,start_node,end_node,milestone1, milestone2)
d=Dict{String,Vector{String}}()
for line in input
ss=split(line," ")
push!(d, String(ss[1][1:end-1]) => String.(ss[2:end]))
end
memo=Dict{Tuple{String,String},BigInt}()
function get_segment_count(s_node,e_node)
if haskey(memo,(s_node,e_node))
return memo[(s_node,e_node)]
end
if s_node==e_node
return 1
end
if !haskey(d,s_node)
return 0
end
total=BigInt(0)
for v in d[s_node]
total+=get_segment_count(v,e_node)
end
memo[(s_node,e_node)]=total
return total
end
s_to_m1=get_segment_count(start_node,milestone1)
s_to_m2=get_segment_count(start_node,milestone2)
m1_to_m2=get_segment_count(milestone1,milestone2)
m2_to_m1=get_segment_count(milestone2,milestone1)
m2_to_end=get_segment_count(milestone2,end_node)
m1_to_end=get_segment_count(milestone1,end_node)
return s_to_m1*m1_to_m2*m2_to_end+s_to_m2*m2_to_m1*m1_to_end
end
This is quick code: it parses a file, creates a Dict, and calculates everything in 847.000 μs (20105 allocs: 845.758 KiB). The result, by the way, is 371113003846800.
Now... I am storing the Dict as String => Vector{String}, so I am incurring a penalty by hashing strings all the time. First improvement: map to Ints.
After this improvement (a Dict that keeps the Ids, and a memo that takes tuples of Ints), the benchmark is
median 796.792 μs (20792 allocs: 960.773 KiB)
So it seems that the overhead of keeping Ids outweighs the benefits. Also, more allocs.
Building the graph takes around 217.709 μs, and solving is the rest, 580ish.
Now, reading from a Dict might be slow? What if I return a Vector{Vector{Int}}(undef, num_nodes), preallocating the length and then reading in O(1) time?
function build_graph_v2(input)
id_map = Dict{String, Int}()
next_id = 1
# Helper to ensure IDs start at 1 and increment correctly
function get_id(s)
if !haskey(id_map, s)
id_map[s] = next_id
next_id += 1
end
return id_map[s]
end
# Temporary Dict for building (easier than resizing vectors dynamically)
adj_temp = Dict{Int, Vector{Int}}()
for line in input
parts = split(line, " ")
# key 1 is the source
u = get_id(string(parts[1][1:end-1]))
if !haskey(adj_temp, u)
adj_temp[u] = Int[]
end
# keys 2..end are the neighbors
for p in parts[2:end]
v = get_id(string(p))
push!(adj_temp[u], v)
end
end
# Convert to flat Vector{Vector{Int}} for speed
# length(id_map) is the exact number of unique nodes
num_nodes = length(id_map)
    adj = Vector{Vector{Int}}(undef, num_nodes)
    # Fill the flat adjacency list; nodes without outgoing edges get an empty vector
    for i in 1:num_nodes
        adj[i] = get(adj_temp, i, Int[])
    end
    return adj, id_map
end
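With the graph flattened to integer ids, the (s_node, e_node) Dict memo can also collapse to a flat Vector{Int128}, one slot per node, valid for a single fixed end node. A sketch of that shape (my reconstruction, not the post's final code; -1 marks "not computed yet"):

```julia
# Count distinct paths from node s to node e over an integer adjacency list,
# memoised per start node in a preallocated Vector{Int128}.
function count_paths!(memo::Vector{Int128}, adj::Vector{Vector{Int}}, s::Int, e::Int)
    s == e && return Int128(1)
    memo[s] >= 0 && return memo[s]      # already computed for this end node
    total = Int128(0)
    for v in adj[s]
        total += count_paths!(memo, adj, v, e)
    end
    memo[s] = total
    return total
end

# Tiny DAG: 1 -> {2, 3}, 2 -> {4}, 3 -> {4}; two distinct paths from 1 to 4.
adj = [[2, 3], [4], [4], Int[]]
memo = fill(Int128(-1), length(adj))
count_paths!(memo, adj, 1, 4)  # == Int128(2)
```

Because Int128 is a bits type, the memo vector involves no per-entry allocation, unlike a Dict of BigInts.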
if !haskey(adj_temp, u)
adj_temp[u] = Int[]
end
# keys 2..end are the neighbors
for p in parts[2:end]
v = get_id(string(p))
push!(adj_temp[u], v)
end
end
# Convert to flat Vector{Vector{Int}} for speed
# length(id_map) is the exact number of unique nodes
num_nodes = length(id_map)
adj =
Going down the performance rabbit hole - AOC 2025 day 11
This is my first post here, but just wanted to show how avoiding allocations and using some clever optimizations can take Julia to MONSTER speed. Please feel free to comment and criticize. Day 11 of AOC is a clear example of dynamic programming with a potentially monstrous result (quintillions?)
Naively, one could write a brute-force search that enumerates every single path, which would take a lifetime-of-the-universe amount of time:
function find_length(input,start_node,end_node)
d=Dict()
for line in input
ss=split(line," ")
push!(d, ss[1][1:end-1] => ss[2:end] )
end
queue=[]
paths=[[start_node]]
while !isempty(paths)
path=popfirst!(paths)
last_visited=path[end]
if last_visited==end_node
push!(queue,path)
else
for v in d[last_visited]
new_path=copy(path)
push!(new_path,v)
push!(paths,new_path)
end
end
end
return length(queue)
end
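To see why full enumeration is hopeless, here is a small self-contained toy (my own illustration, not the AoC input): a chain of k "diamonds" has 2^k distinct start-to-end paths, yet a memoized count visits each node only once.

```julia
# Count start-to-end paths with memoization: O(nodes + edges), not O(paths).
function count_paths(adj, u, target, memo = Dict{Int,Int}())
    u == target && return 1
    haskey(memo, u) && return memo[u]
    total = sum(count_paths(adj, v, target, memo) for v in get(adj, u, Int[]); init = 0)
    return memo[u] = total
end

# Chain of k diamonds: each step branches into two nodes that rejoin,
# doubling the number of distinct paths, so 2^k in total.
function diamond_chain(k)
    adj = Dict{Int, Vector{Int}}()
    node = 1
    for _ in 1:k
        a, b, c, d = node, node + 1, node + 2, node + 3
        adj[a] = [b, c]              # branch
        adj[b] = [d]; adj[c] = [d]   # rejoin
        node = d
    end
    return adj, 1, node              # graph, start node, end node
end

adj, s, e = diamond_chain(20)
count_paths(adj, s, e)  # 2^20 = 1048576 paths, counted without listing any
```

This is the same memoized-DP idea the post arrives at, in miniature.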
But part 2 adds milestones, so instead of enumerating paths we count them with memoization:
function part2(input,start_node,end_node,milestone1, milestone2)
d=Dict{String,Vector{String}}()
for line in input
ss=split(line," ")
push!(d, String(ss[1][1:end-1]) => String.(ss[2:end]))
end
memo=Dict{Tuple{String,String},BigInt}()
function get_segment_count(s_node,e_node)
if haskey(memo,(s_node,e_node))
return memo[(s_node,e_node)]
end
if s_node==e_node
return 1
end
if !haskey(d,s_node)
return 0
end
total=BigInt(0)
for v in d[s_node]
total+=get_segment_count(v,e_node)
end
memo[(s_node,e_node)]=total
return total
end
s_to_m1=get_segment_count(start_node,milestone1)
s_to_m2=get_segment_count(start_node,milestone2)
m1_to_m2=get_segment_count(milestone1,milestone2)
m2_to_m1=get_segment_count(milestone2,milestone1)
m2_to_end=get_segment_count(milestone2,end_node)
m1_to_end=get_segment_count(milestone1,end_node)
return s_to_m1*m1_to_m2*m2_to_end+s_to_m2*m2_to_m1*m1_to_end
end
This is quick code: it parses the file, builds a Dict, and computes everything in 847.000 μs (20105 allocs: 845.758 KiB). The result, by the way, is 371113003846800.
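A note on the measurement itself: the post never names its harness, but output in the "median X μs (N allocs: Y KiB)" style comes from a benchmarking package such as Chairmarks or BenchmarkTools, which run the code many times. Without adding a dependency, Base's @timed gives a coarse single-shot reading (the workload below is a stand-in):

```julia
# Single-shot timing with Base's @timed: coarser than a benchmark package
# (no warm-up, no repetitions), but it reports time and allocations.
stats = @timed sum(abs2, 1:10^6)   # stand-in workload
stats.value   # the expression's result
stats.time    # elapsed seconds
stats.bytes   # bytes allocated during the call
```

For publishable numbers, prefer a real benchmark harness so compilation time and noise are excluded.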
Now... I am storing the Dict as String => Vector{String}, so I pay the cost of hashing strings on every lookup. First improvement: map the names to Ints.
After this change (a Dict that maps names to integer Ids, and a memo keyed by tuples of Ints) the benchmark is
median 796.792 μs (20792 allocs: 960.773 KiB)
So it seems that the overhead of maintaining the Ids outweighs the benefits. Also, more allocs.
Building the graph takes around 217.709 μs, and solving takes the remaining ~580 μs.
Now, reading from a Dict might be slow. What if I return a Vector{Vector{Int}}(undef, num_nodes), preallocating the length, so every read is O(1)?
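The intuition in miniature: both containers give identical answers, but a Vector read is an index computation plus one load, while a Dict read must hash the key and probe buckets.

```julia
# The same mapping i -> i^2 stored two ways; same answers, different cost.
vec = [i * i for i in 1:1_000]              # O(1) indexed read
dic = Dict(i => i * i for i in 1:1_000)     # hash + probe per read
@assert all(vec[i] == dic[i] for i in 1:1_000)
```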
function build_graph_v2(input)
id_map = Dict{String, Int}()
next_id = 1
# Helper to ensure IDs start at 1 and increment correctly
function get_id(s)
if !haskey(id_map, s)
id_map[s] = next_id
next_id += 1
end
return id_map[s]
end
# Temporary Dict for building (easier than resizing vectors dynamically)
adj_temp = Dict{Int, Vector{Int}}()
for line in input
parts = split(line, " ")
# key 1 is the source
u = get_id(string(parts[1][1:end-1]))
if !haskey(adj_temp, u)
adj_temp[u] = Int[]
end
# keys 2..end are the neighbors
for p in parts[2:end]
v = get_id(string(p))
push!(adj_temp[u], v)
end
end
# Convert to flat Vector{Vector{Int}} for speed
# length(id_map) is the exact number of unique nodes
num_nodes = length(id_map)
adj =
Vector{Vector{Int}}(undef, num_nodes)
for i in 1:num_nodes
# Some nodes might be leaves (no outgoing edges), so we give them empty vectors
adj[i] = get(adj_temp, i, Int[])
end
return adj, id_map, num_nodes
end
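One step further than Vector{Vector{Int}}, as a suggestion of my own rather than something from the post: a compressed sparse row (CSR) layout packs every neighbor list into one flat array plus an offsets array, trading the per-node vectors (each a separate heap object) for two contiguous arrays.

```julia
# CSR adjacency: neighbors of u live at neighbors[offsets[u] : offsets[u+1]-1].
struct CSRGraph
    offsets::Vector{Int}
    neighbors::Vector{Int}
end

function to_csr(adj::Vector{Vector{Int}})
    n = length(adj)
    offsets = Vector{Int}(undef, n + 1)
    offsets[1] = 1
    for u in 1:n
        offsets[u + 1] = offsets[u] + length(adj[u])
    end
    neighbors = Vector{Int}(undef, offsets[n + 1] - 1)
    for u in 1:n, (k, v) in enumerate(adj[u])
        neighbors[offsets[u] + k - 1] = v
    end
    return CSRGraph(offsets, neighbors)
end

# A view avoids copying the neighbor slice on every lookup.
outneighbors(g::CSRGraph, u) = @view g.neighbors[g.offsets[u]:g.offsets[u + 1] - 1]
```

Whether this beats the vector-of-vectors here would need measuring; on a graph this small the difference may be in the noise.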
function solve_vectorized_memo(adj, id_map, num_nodes, start_s, end_s, m1_s, m2_s)
s, e = id_map[start_s], id_map[end_s]
m1, m2 = id_map[m1_s], id_map[m2_s]
# Pre-allocate one cache vector to reuse
# We use -1 to represent "unvisited"
memo = Vector{BigInt}(undef, num_nodes)
function get_segment(u, target)
# Reset the cache before each segment: -1 marks "unvisited"
fill!(memo, -1)
return count_recursive(u, target)
end
function count_recursive(u, target)
if u == target
return BigInt(1)
end
# O(1) Array Lookup
if memo[u] != -1
return memo[u]
end
# If node has no children (empty vector in adj)
if isempty(adj[u])
return BigInt(0)
end
total = BigInt(0)
# @inbounds skips bounds checking for extra speed
@inbounds for v in adj[u]
total += count_recursive(v, target)
end
memo[u] = total
return total
end
# Path A
s_m1 = get_segment(s, m1)
if s_m1 == 0
path_a = BigInt(0)
else
path_a = s_m1 * get_segment(m1, m2) * get_segment(m2, e)
end
# Path B
s_m2 = get_segment(s, m2)
if s_m2 == 0
path_b = BigInt(0)
else
path_b = s_m2 * get_segment(m2, m1) * get_segment(m1, e)
end
return path_a + path_b
end
Building the graph now takes median 268.959 μs (7038 allocs: 505.672 KiB) and the path solving takes median 522.583 μs (18086 allocs: 424.039 KiB). Basically no gain... :(
What if BigInt is the culprit? I know the result fits in an Int128... After making that change: median 240.333 μs (10885 allocs: 340.453 KiB) (!), far fewer allocations and twice as fast! Graph building is unchanged.
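Why the switch helps, in two checks: Int128 is an immutable bits type (values live inline in the memo vector, and arithmetic allocates nothing), while BigInt wraps a heap-allocated GMP integer, so every intermediate sum is an allocation.

```julia
@assert isbitstype(Int128)    # stored inline; no heap traffic per value
@assert !isbitstype(BigInt)   # heap-allocated; every sum allocates
# The post's answer fits with room to spare; even Int64 would have done:
@assert 371113003846800 <= typemax(Int64)
```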
So one thing remains: allocs. My path solver reaches for the "external" memo and adjacency graph at every step, and since they are captured by a closure, the compiler probably cannot prove their types stable. So let's pass both of them in as explicit, typed arguments.
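The type-stability problem in isolation, with a hypothetical make_counter rather than the post's code: when a closure reassigns a variable it captures, Julia stores that variable in a Core.Box and inference can no longer see its type. A recursive inner function is itself such a capture, which is why hoisting the recursion to a top-level function with explicit typed arguments pays off.

```julia
# A closure that reassigns a captured variable: n gets boxed.
function make_counter()
    n = 0
    inc() = (n += 1; n)   # reassignment forces n into a Core.Box
    return inc
end

c = make_counter()
c(); c()                  # the counter works, but...
fieldtype(typeof(c), :n)  # ...the field holding n is a Core.Box,
                          # so the compiler cannot infer its type
```

The "performance of captured variables" section of the Julia performance tips covers the standard workarounds: let-blocks, type annotations, or explicit arguments as used here.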
function count_recursive_inner(u::Int, target::Int, memo::Vector{Int128}, adj::Vector{Vector{Int}})
if u == target
return Int128(1)
end
# @inbounds is safe here because u is guaranteed to be a valid ID
@inbounds val = memo[u]
if val != -1
return val
end
# If no children, dead end
if isempty(adj[u])
return Int128(0)
end
total = Int128(0)
for v in adj[u]
total += count_recursive_inner(v, target, memo, adj)
end
@inbounds memo[u] = total
return total
end
# 2. The Solver Wrapper
function solve_zero_alloc(adj::Vector{Vector{Int}}, id_map, num_nodes, start_s, end_s, m1_s, m2_s)
s, e = id_map[start_s], id_map[end_s]
m1, m2 = id_map[m1_s], id_map[m2_s]
# ONE allocation for the whole run
memo = Vector{Int128}(undef, num_nodes)
# Helper to clean up the logic (this closure is fine as it's not recursive)
function run_segment(u, v)
fill!(memo, -1)
return count_recursive_inner(u, v, memo, adj)
end
# Path A
path_a = run_segment(s, m1) * run_segment(m1, m2) * run_segment(m2, e)
path_b = run_segment(s, m2) * run_segment(m2,
m1) * run_segment(m1, e)
return path_a + path_b
end
The result is **median 24.167 μs (4 allocs: 10.094 KiB)**
So, by using a vector of vectors, storing results in one flat block of Int128, and making sure no allocation is needed by passing everything as explicit arguments, the whole solve went from roughly 580 down to 24(!) microseconds.
I learned a lot! Hope you enjoyed this trip down the performance rabbit hole! Is there something else I could have done?
https://redd.it/1pls1tj
@r_Julia
Reddit
From the Julia community on Reddit
Explore this post and more from the Julia community
Help with work
I’ve got a project I need help with, in the Julia language, due in a few weeks. If someone can help me with it I can pay, and you could also guide me through it. Thanks
https://redd.it/1pmenxg
@r_Julia
so, WTH is wrong with Julia?
Hi. Sorry, but this is a rant-y post.
So, new, fresh install of Julia using the installer from official website. Fine.
First thing I do is ] -> add DifferentialEquations -> a century of downloading and precompiling -> dozens of warning messages -> I read them, can't figure everything out, so I ask AI, which tells me it's fine, just warnings, the package should work -> try to use the package (using DifferentialEquations) -> another century of precompiling -> again, dozens of warning messages -> I'm done.
Why does Julia do that so much? It feels like the time it takes to precompile and stuff largely exceeds the actual calculation time of other languages (like Python or Octave)... so what's the point? I thought Julia was fast, but this (supposed) quickness is completely wiped out by the precompiling steps. Am I using it wrong? What can I do to open Julia and actually start to work, not precompile stuff?
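For what it's worth, precompilation is a one-time cost per package version and environment, not per session: once the cache exists, using DifferentialEquations loads from it. A sketch of paying that cost up front in a dedicated project (the project name here is hypothetical):

```julia
using Pkg
Pkg.activate("diffeq-project")        # hypothetical project directory
Pkg.add("DifferentialEquations")      # downloads and precompiles once
Pkg.precompile()                      # finish any pending precompilation now
# Subsequent sessions that activate this project reuse the cache.
```

The warnings themselves are a separate issue: a method-overwriting clash between package versions, usually resolved by Pkg.update() once the affected packages publish fixes.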
Every time DifferentialEquations is used, dozens of messages like this appear during precompilation:
┌ OrdinaryDiffEqNonlinearSolve
│ WARNING: Method definition init_cacheval(LinearSolve.QRFactorization{P} where P, SciMLOperators.AbstractSciMLO
perator{T} where T, Any, Any, Any, Any, Int64, Any, Any, Union{Bool, LinearSolve.LinearVerbosity{__T_default_lu_f
allback, __T_no_right_preconditioning, __T_using_IterativeSolvers, __T_IterativeSolvers_iterations, __T_KrylovKit
_verbosity, __T_KrylovJL_verbosity, __T_HYPRE_verbosity, __T_pardiso_verbosity, __T_blas_errors, __T_blas_invalid
_args, __T_blas_info, __T_blas_success, __T_condition_number, __T_convergence_failure, __T_solver_failure, __T_ma
x_iters} where __T_max_iters where __T_solver_failure where __T_convergence_failure where __T_condition_number wh
ere __T_blas_success where __T_blas_info where __T_blas_invalid_args where __T_blas_errors where __T_pardiso_verb
osity where __T_HYPRE_verbosity where __T_KrylovJL_verbosity where __T_KrylovKit_verbosity where __T_IterativeSol
vers_iterations where __T_using_IterativeSolvers where __T_no_right_preconditioning where __T_default_lu_fallback
}, LinearSolve.OperatorAssumptions{T} where T) in module LinearSolve at /home/jrao/.julia/packages/LinearSolve/WR
utJ/src/factorization.jl:338 overwritten in module LinearSolveSparseArraysExt at /home/jrao/.julia/packages/Linea
rSolve/WRutJ/ext/LinearSolveSparseArraysExt.jl:315.
│ ERROR: Method overwriting is not permitted during Module precompilation. Use `__precompile__(false)` to opt-ou
t of precompilation.
WTH does that even mean?
https://redd.it/1pka4gw
@r_Julia
What do you think about Tongyuan Softcontrol’s MWorks software from China?
https://redd.it/1pixsa2
@r_Julia
Where Should I Use Julia ?
Hi, I'm a backend developer and I usually work with Python. Lately I've been using Julia, and I'd like to know where it fits in a real project and what the major benefits are when combining it with Python
https://redd.it/1pf26uf
@r_Julia